Writing floating point numbers to disk

mathieu

hi there,

I do not understand the syntax for ios_base::precision. Let's say I
have a floating point number:

const float f = 0.313244462;

How do I write it to a stream?

const float f = 0.313244462;
std::cout << f << std::endl;

This truncates my number to an arbitrary number of digits after the
decimal point. All I am trying to do is write all meaningful digits so
that when rereading this number from disk, I get back *exactly* the
same floating point number (I am using the 'float' type and not the
'double' type).

Thanks
-Mathieu
 
moongeegee

const float f = 0.313244462;
std::cout << f << std::endl;

The output is 0.313244

const float f = 0.313244462;
std::cout.precision(10);
std::cout << f << std::endl;

The output is 0.313244462
 
mathieu

I do not want to specify a hardcoded '10' value in my program; this
should be deduced from the size of float and the number of significant
digits.
 
Juha Nieminen

mathieu said:
I do not want to specify a hardcoded '10' value in my program

Then use '100' or whatever. AFAIK ostream only outputs at most that
many decimals. It won't output trailing zeros.
 
Michael DOUBEZ

mathieu wrote:

I do not want to specify a hardcoded '10' value in my program; this
should be deduced from the size of float and the number of significant
digits.
[please don't top-post]

On a 32 bit computer, float precision is +/- 5e-8 (i.e. 7 digits of
precision) and a double (64 bits) has 53 bits of mantissa (about 15
digits).

cout.precision(7);
should be enough for float, but if in doubt, you can set it to a
larger value if you want.

If you have trailing zeros, then you have a 'fixed' modifier somewhere.
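
To illustrate that last point, here is a minimal sketch (the value is
just an example):

#include <iostream>

int main() {
    const float f = 0.25f;

    std::cout.precision(7);
    std::cout << f << '\n';               // general format: 0.25

    std::cout << std::fixed;
    std::cout << f << '\n';               // fixed format: 0.2500000
}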
 
James Kanze

mathieu wrote:
I do not want to specify a hardcoded '10' value in my
program; this should be deduced from the size of float and
the number of significant digits.
On a 32 bit computer, float precision is +/- 5e-8 (i.e. 7 digits of
precision) and a double (64 bits) has 53 bits of mantissa (about 15
digits).

Really? On some 32 bit computers, maybe, but certainly not on
all. In C++0x, he can use std::numeric_limits<float>::max_digits10
for the necessary precision, but this value wasn't available in
earlier versions of the standard (and thus likely isn't available
with his current compiler). There is a
std::numeric_limits<float>::digits10, but this gives the
opposite value: the maximum number of digits you can read and
then output with no change. (For example, for an IEEE
float, the most common format on small and medium sized
machines, max_digits10 is 9, but digits10 is only 6.)
cout.precision(7);
should be enough for float, but if in doubt, you can set it to a
larger value if you want.

It's not enough on an Intel or AMD based PC, nor on a Sparc (the
machines I regularly use). It's probably not enough on a number
of other machines.
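
For a pre-C++0x compiler, a minimal sketch of deducing the precision
by hand (the formula is the usual worst-case bound, which for an IEEE
float with its 24 bit mantissa gives 9):

#include <cmath>
#include <iostream>
#include <limits>

int main() {
    // max_digits10 computed by hand: ceil(mantissa_bits * log10(2)) + 1.
    const int max_digits10 = static_cast<int>(
        std::ceil(std::numeric_limits<float>::digits * std::log10(2.0)) + 1);

    const float f = 0.313244462f;
    std::cout.precision(max_digits10);
    std::cout << f << '\n';   // enough digits to round-trip the value
}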
 
James Kanze

Then use '100' or whatever. AFAIK ostream only outputs at most
that many decimals. It won't output trailing zeros.

But he doesn't care about trailing zeros. His problem, as
originally stated, was to ensure round trip accuracy starting
from the internal representation: from an internal float to text
and back, ending up with exactly the same value. (This is, of
course, only possible if both the machine writing and the
machine reading have the same floating point format.) Ten is
sufficient for IEEE float; in fact, so is 9. For other formats,
you might need more or less.

And I would certainly not use 100. All you'll get is a lot of
extra useless and insignificant digits for most values. And it
will output 100 digits if you're using either fixed or scientific
format.
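
A quick way to verify that round trip (a minimal sketch; the 9 comes
from the discussion above and assumes IEEE float with correctly
rounded conversions):

#include <cassert>
#include <iomanip>
#include <sstream>

int main() {
    const float original = 0.313244462f;

    // Write with 9 significant digits (enough for IEEE float)...
    std::ostringstream out;
    out << std::setprecision(9) << original;

    // ...then read it back and check we recover exactly the same value.
    std::istringstream in(out.str());
    float restored = 0.0f;
    in >> restored;
    assert(restored == original);
}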
 
Kai-Uwe Bux

mathieu said:
hi there,

I do not understand the syntax for ios_base::precision. Let's say I
have a floating point number:

const float f = 0.313244462;

How do I write it to a stream?

const float f = 0.313244462;
std::cout << f << std::endl;

This truncates my number to an arbitrary number of digits after the
decimal point. All I am trying to do is write all meaningful digits so
that when rereading this number from disk, I get back *exactly* the
same floating point number (I am using the 'float' type and not the
'double' type).

Maybe something like the following will help:

#include <cmath>
#include <iomanip>
#include <limits>
#include <sstream>
#include <stdexcept>
#include <string>

template < typename Float >
std::string to_string ( Float data ) {
    std::stringstream in;
    // Digits a Float can hold, derived from the machine epsilon.
    unsigned long const digits =
        static_cast< unsigned long >(
            - std::log( std::numeric_limits<Float>::epsilon() )
            / std::log( 10.0 ) );
    if ( in << std::dec << std::setprecision( 2 + digits ) << data ) {
        return in.str();
    } else {
        throw std::runtime_error( "to_string failed" );
    }
}
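
For example, one would call it like this (a small usage sketch, with
the value from the original post):

#include <iostream>

int main() {
    std::cout << to_string( 0.313244462f ) << std::endl;
}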


Best

Kai-Uwe Bux
 
mathieu

Maybe something like the following will help:

template < typename Float >
std::string to_string ( Float data ) {
    std::stringstream in;
    unsigned long const digits =
        static_cast< unsigned long >(
            - std::log( std::numeric_limits<Float>::epsilon() )
            / std::log( 10.0 ) );
    if ( in << std::dec << std::setprecision( 2 + digits ) << data ) {
        return in.str();
    } else {
        throw std::runtime_error( "to_string failed" );
    }
}

Best

Kai-Uwe Bux

Where is the "2+" coming from in setprecision?

I found one of your earlier posts:
http://www.archivum.info/comp.lang.c++/2005-10/msg03220.html

where this offset does not appear.

Thanks
 
Michael DOUBEZ

James Kanze wrote:
mathieu wrote:
I do not want to specify a hardcoded '10' value in my
program; this should be deduced from the size of float and
the number of significant digits.
On a 32 bit computer, float precision is +/- 5e-8 (i.e. 7 digits of
precision) and a double (64 bits) has 53 bits of mantissa (about 15
digits).

Really?

Those are the values I use when computing the precision of a
calculation.
On some 32 bit computers, maybe, but certainly not on
all.

Provided they use the IEEE representation, they have 24 bits to encode
the mantissa, and thus 7 digits of precision. I thought it was the
most widespread format in use.
In C++0x, he can use std::numeric_limits<float>::max_digits10
for the necessary precision, but this value wasn't available in
earlier versions of the standard (and thus likely isn't available
with his current compiler). There is a
std::numeric_limits<float>::digits10, but this gives the
opposite value: the maximum number of digits you can read and
then output with no change. (For example, for an IEEE
float, the most common format on small and medium sized
machines, max_digits10 is 9, but digits10 is only 6.)


It's not enough on an Intel or AMD based PC, nor on a Sparc (the
machines I regularly use). It's probably not enough on a number
of other machines.

I understand 8 digits, since that could absorb an increased error on
the last digit ( ...88 written, ...9 read, ...91 ), but how did they
come up with 9 or more? (Not that it is critical.)
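
One quick way to see why 8 is not always enough (a brute-force sketch,
not from the thread; it assumes IEEE float and a C++11 compiler for
std::nextafter, and scans consecutive floats just above 100):

#include <cmath>
#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    float f = 100.0f;
    for ( int i = 0; i < 1000; ++i ) {
        // float -> text at 8 significant digits -> float
        std::ostringstream out;
        out << std::setprecision(8) << f;
        std::istringstream in( out.str() );
        float back = 0.0f;
        in >> back;
        if ( back != f ) {
            std::cout << "8 digits lose " << std::setprecision(9) << f
                      << " (read back as " << back << ")\n";
            return 0;
        }
        f = std::nextafter( f, 200.0f );  // next representable float
    }
}

Near 100, adjacent floats are about 7.6e-6 apart, while 8 significant
digits can only distinguish steps of 1e-5, so two adjacent floats must
collapse to the same 8-digit string, and one of them fails the round
trip.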
 
Juha Nieminen

Michael said:
On a 32 bit computer, float precision is +/- 5e-8 (i.e. 7 digits of
precision) and a double (64 bits) has 53 bits of mantissa (about 15
digits).

You mean that's not the case with 64-bit computers?

(Or, in other words: why mention "32 bit computer" explicitly, given
that the exact same floating point precision is used in most 64-bit
computers as well? In fact, it was used in 16-bit Intel-based
computers too.)
 
Michael DOUBEZ

Juha Nieminen wrote:
You mean that's not the case with 64-bit computers?

Nothing prevents a 64 bit computer from having a 64 bit float and an
80 bit double (if they keep to the IEEE standard). I don't know if
there is a 128 bit float, but it is bound to appear sooner or later.

The C++ standard only guarantees that all float values can be
represented as a double.
(Or, in other words: why mention "32 bit computer" explicitly, given
that the exact same floating point precision is used in most 64-bit
computers as well?

I don't know if it is the case. If you tell me it is so, I am ready to
accept it :)
In fact, it was used in 16-bit Intel-based
computers too.)

You can also have floats on an FPU-less processor, in which case the
precision is likewise unrelated to the processor architecture. But I
don't see why a vendor would throw away power when he has a 128 bit
FPU.
 
James Kanze

James Kanze wrote:
mathieu wrote:
I do not want to specify a hardcoded '10' value in my
program; this should be deduced from the size of float and
the number of significant digits.
On a 32 bit computer, float precision is +/- 5e-8 (i.e. 7 digits of
precision) and a double (64 bits) has 53 bits of mantissa (about 15
digits).
Really?
Those are the values I use when computing the precision of a
calculation.
Provided they use the IEEE representation, they have 24 bits
to encode the mantissa, and thus 7 digits of precision. I
thought it was the most widespread format in use.

The most widespread, but far from universal. Most mainframes
use something different.
I understand 8 digits, since that could absorb an increased error
on the last digit ( ...88 written, ...9 read, ...91 ), but how
did they come up with 9 or more? (Not that it is critical.)

To tell the truth, I don't know. I just copied the value out of
the draft standard :). (I'd have guessed 8 myself. But I know
that I'm no specialist in this domain. I only know enough to
know how much I don't know.)
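
For the record, the 9 falls out of a simple worst-case bound on the
digit count: ceil(24 * log10(2)) + 1 = ceil(7.22...) + 1 = 9 for an
IEEE float, and the same formula gives ceil(53 * log10(2)) + 1 = 17
for an IEEE double.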
 
James Kanze

You mean that's not the case with 64-bit computers?
(Or, in other words: why mention "32 bit computer" explicitly,
given that the exact same floating point precision is used in
most 64-bit computers as well? In fact, it was used in
16-bit Intel-based computers too.)

Exactly. The problem isn't the number of bits, but the floating
point format (although I can't quite see an IEEE format on a
machine which isn't 8/16/32/64 bits). Thus, I'm aware of at
least four floating point formats in common use today, two of
which are used on 32 bit machines.
 
Juha Nieminen

James said:
Thus, I'm aware of at
least four floating point formats in common use today, two of
which are used on 32 bit machines.

Three, actually. Intel FPUs support 80-bit floating point numbers, and
usually you can create one with "long double".
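
A small check of that (a minimal sketch; the exact values depend on
the platform, but on x86 with the x87 FPU this commonly prints 24, 53
and 64):

#include <iostream>
#include <limits>

int main() {
    // Mantissa bits of each floating point type on this platform.
    std::cout << std::numeric_limits<float>::digits << '\n';        // 24
    std::cout << std::numeric_limits<double>::digits << '\n';       // 53
    std::cout << std::numeric_limits<long double>::digits << '\n';  // 64 on x87
}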
 
Rune Allnor

  You mean that's not the case with 64-bit computers?

  (Or, in other words: why mention "32 bit computer" explicitly, given
that the exact same floating point precision is used in most 64-bit
computers as well? In fact, it was used in 16-bit Intel-based
computers too.)

Floats need not be 32 bits on 64-bit computers, in just the same
way that ints need not be 16 bits on a 32-bit computer.

Rune
 
James Kanze

Three, actually. Intel FPUs support 80-bit floating point
numbers, and usually you can create one with "long double".

That's not what I meant. I was thinking of four different
formats for float, or for double. Offhand, for float, there's
IEEE (PCs and most Unix boxes), IBM floating point (also 32
bits, but base 16, for starters), and the floating point formats
for the Unisys mainframes (36 bits on one architecture, 48 on
the other, both base 8). There are certainly others as well,
but I don't know if you'll find them on any modern machines.
 
