Machine precision

Philipp

Hello (not sure this is the right forum for that question so please redirect
me if necessary)

How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? I.e., the same number of values in the range [0.0 ; 0.1[ as in
the range [0.9 ; 1.0[?

How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine)?

Any good website on the subject to recommend?

Thanks Phil
 
Thomas Matthews

Philipp said:
Hello (not sure this is the right forum for that question so please redirect
me if necessary)

How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

The precision of a type double is left up to the compiler. The C++
specification states a minimum precision, but your compiler is allowed
to exceed that precision, regardless of whether the processor has
the capability or not. The compiler is allowed to use software for
floating-point calculations. Summary: see your compiler
documentation or ask in a newsgroup about your compiler.
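
For what it's worth, std::numeric_limits is the portable way to ask the
compiler what it actually provides (a minimal sketch; the printed values
are implementation-specific):

    #include <iostream>
    #include <limits>

    int main()
    {
        std::cout << "significand bits: "
                  << std::numeric_limits<double>::digits << '\n';
        std::cout << "epsilon: "
                  << std::numeric_limits<double>::epsilon() << '\n';
        std::cout << "min/max exponent: "
                  << std::numeric_limits<double>::min_exponent << " / "
                  << std::numeric_limits<double>::max_exponent << '\n';
        // True when double conforms to IEC 559 / IEEE 754:
        std::cout << "is_iec559: "
                  << std::numeric_limits<double>::is_iec559 << '\n';
    }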

Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? I.e., the same number of values in the range [0.0 ; 0.1[ as in
the range [0.9 ; 1.0[?

My guess is that the distribution is uniform, and depends on the
limits set by the compiler.


How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine)?

My understanding is that DBL_EPSILON is the finest resolution of a
double, although you may want to check the C++ specification on that.


Any good website on the subject to recommend?

Thanks Phil

Probably the site for the ANSI electronic documents.

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book
 
Patrick Frankenberger

Philipp said:
How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? I.e., the same number of values in the range [0.0 ; 0.1[ as in
the range [0.9 ; 1.0[?

How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine)?

Any good website on the subject to recommend?

C++ doubles are based on the IEEE 754 standard, which most CPUs implement.

There is a nice paper titled "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" by David Goldberg. It answers all of your
questions, except the DBL_EPSILON one.

DBL_EPSILON should be the smallest double d so that (1+d)!=1 IIRC.

HTH,
Patrick
 
P.J. Plauger

The precision of a type double is left up to the compiler. The C++
specification states a minimum precision, but your compiler is allowed
to exceed that precision, regardless of whether the processor has
the capability or not. The compiler is allowed to use software for
floating-point calculations. Summary: see your compiler
documentation or ask in a newsgroup about your compiler.

All true, but that doesn't answer the OP's question. There's only a loose
relation between the number of bytes occupied by a floating-point value
and the number of values it can represent between 0 and 1. To first order,
the representation typically uses one bit to represent the sign of the
value and one bit to represent the sign of the exponent. That's a slight
simplification, and the range of exponents is often asymmetric around 1.0.
But this is enough to tell you that, for eight-bit bytes, you can expect
about 2^62 values between 0.0 and 1.0. FWIW.
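
A sketch of that count, assuming 64-bit IEEE 754 doubles (not required by
C++, but the common case): the bit patterns of finite non-negative doubles
increase monotonically with the values, so counting is just subtraction.

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    int main()
    {
        double one = 1.0;
        std::uint64_t bits;
        std::memcpy(&bits, &one, sizeof bits);  // 0x3FF0000000000000
        // Count of doubles in [0.0, 1.0] = bits(1.0) - bits(0.0) + 1:
        std::cout << bits + 1 << " doubles in [0.0, 1.0]\n";
        // Prints 4607182418800017409, a little under 2^62.
    }
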
Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? I.e., the same number of values in the range [0.0 ; 0.1[ as in
the range [0.9 ; 1.0[?

My guess is that the distribution is uniform, and depends on the
limits set by the compiler.

No, the distribution is extremely *non* uniform, with values much more densely
packed close to zero.
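
One quick way to see the non-uniformity (assuming IEEE 754) is to print the
gap between x and the next representable double at a few magnitudes:

    #include <cmath>
    #include <iostream>

    int main()
    {
        for (double x : {0.0001, 0.001, 0.01, 0.1, 0.5, 0.9}) {
            std::cout << "spacing near " << x << ": "
                      << std::nextafter(x, 1.0) - x << '\n';
        }
        // The gap roughly doubles each time x crosses a power of two,
        // so values are far more densely packed near zero.
    }
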
My understanding is that DBL_EPSILON is the finest resolution of a
double, although you may want to check the C++ specification on that.

DBL_EPSILON is the smallest value you can add to 1.0 and get a representable
answer greater than 1.0. It's a measure of the granularity of values in the
uniform range just above 1.0. (If the floating-point base is 2, the values are
twice as dense in the uniform range just below 1.0.)
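
All of which is easy to check directly (assuming IEEE 754 doubles with
round-to-nearest):

    #include <cfloat>
    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << (1.0 + DBL_EPSILON > 1.0) << '\n';       // 1: representable
        std::cout << (1.0 + DBL_EPSILON / 2 == 1.0) << '\n';  // 1: rounds back to 1.0
        // The spacing just below 1.0 is half the spacing just above it:
        std::cout << (1.0 - std::nextafter(1.0, 0.0) == DBL_EPSILON / 2)
                  << '\n';                                    // 1
    }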

The most readable intro to this stuff I've ever read is an ancient book
by Pat Sterbenz, called Floating Point Computation. Wish I could think of
a modern version that's as good. You can try reading the preambles to the
various modern floating-point formats, particularly those based on IEEE 754,
but they seldom discuss the implications of the representation.

HTH,

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
P.J. Plauger

C++ doubles are based on the IEEE 754 standard, which most CPUs implement.

Sadly, there is no such requirement. It is often the case, however, because
most modern processors do indeed implement IEEE 754 floating-point arithmetic.
There is a nice paper titled "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" by David Goldberg. It answers all of your
questions, except the DBL_EPSILON one.

Generally good reading, if a bit alarmist.
DBL_EPSILON should be the smallest double d so that (1+d)!=1 IIRC.

You RC.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Pete Becker

Patrick said:
C++ doubles are based on the IEEE 754 standard, which most CPUs implement.

It's the other way around: most CPUs implement IEEE 754, so that's what
most C++ implementations do. The C++ standard does not require IEEE 754.
 
Keith S.

P.J. Plauger said:
No, the distribution is extremely *non* uniform, with values much more densely
packed close to zero.

This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

- Keith
 
Guest

This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

- Keith

Only up to the point where the most significant bit becomes one.
Then the spacing becomes twice as big.
You forgot about the exponent.
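
For example (assuming IEEE 754), the gap doubles as the value crosses the
power of two at 2.0:

    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << 2.0 - std::nextafter(2.0, 0.0) << '\n';  // 2^-52, below 2.0
        std::cout << std::nextafter(2.0, 4.0) - 2.0 << '\n';  // 2^-51, above 2.0
    }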
 
Patrick Frankenberger

Keith S. said:
This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

A floating-point number is: a*2^(b-offset)
a is a signed integer and b is an unsigned integer.
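
std::frexp shows the same split in the equivalent fractional form x = m*2^e
with 1/2 <= |m| < 1 (a sketch, assuming a binary floating-point format):

    #include <cmath>
    #include <iostream>

    int main()
    {
        int e;
        double m = std::frexp(0.1, &e);  // 0.1 == m * 2^e, 0.5 <= m < 1
        std::cout << "0.1 = " << m << " * 2^" << e << '\n';  // 0.8 * 2^-3
    }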

HTH,
Patrick
 
Ron Natalie

Keith S. said:
This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

FLOATING POINT. Do you understand mantissa and exponent?
 
Ron Natalie

Keith S. said:
I understand politeness, shame that you do not.

I wasn't trying to be impolite, just a bit terser than usual. Doubles aren't
just fractions; they shift. I thought the above would be enough of a hint
if you thought about it.
 
E. Robert Tisdale

Philipp said:
How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

Is the distribution of the double values in a fixed range
(e.g. here between 0 and 1) uniform?
I.e., the same number of values in the range [0.0 ; 0.1[
as in the range [0.9 ; 1.0[?

How can I interpret the DBL_EPSILON value
(which is 2.22045e-16 on my machine)?

Any good website on the subject to recommend?

Read
What Every Computer Scientist Should Know About Floating-Point Arithmetic

http://docs.sun.com/db?p=/doc/800-7895


On your machine, a floating-point number is

x = (1 - 2*s)*m*2^e

where s in {0, 1} is the sign bit,
1/2 <= m < 1 is the *normalized* mantissa, and
e is the exponent.
There are DBL_MANT_DIG = 53 binary digits
in the mantissa m but, since the most significant bit
is always 1, it isn't stored (it is known as the hidden bit),
so there are just 2^52 possible values for m.
For normalized double precision floating-point,
DBL_MIN_EXP = -1021 <= e <= 1024 = DBL_MAX_EXP.
For a *denormalized* double precision
floating-point number, x = (1 - 2*s)*m*2^(-1021)
where 0 <= m < 1/2.
When e = +1025, x is Not a Number (NaN)
or a positive or negative infinity.
The IEEE representation is

SEM

where S is the sign bit,
E is an eleven-bit [excess 1023] exponent,
and M is the 52-bit stored mantissa with a hidden 1 bit, so 1 <= M < 2.

s = S
m = M/2
e = (E - 1023) + 1

Note that E = 0 when e = -1022 so that
the representation of +0 is all zeros.
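
Those S, E and M fields can be pulled out with a little bit-twiddling
(a sketch, assuming 64-bit IEEE 754 doubles):

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    int main()
    {
        double x = -0.1;
        std::uint64_t b;
        std::memcpy(&b, &x, sizeof b);
        std::uint64_t S = b >> 63;                 // sign bit
        std::uint64_t E = (b >> 52) & 0x7FF;       // 11-bit biased exponent
        std::uint64_t F = b & 0xFFFFFFFFFFFFFull;  // 52 stored fraction bits
        std::cout << "S=" << S << " E=" << E
                  << " F=0x" << std::hex << F << '\n';
        // For -0.1: S=1, E=1019 (i.e. 2^-4), F=0x999999999999a.
    }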
 
osmium

Ron said:
FLOATING POINT. Do you understand mantissa and exponent?

In the dark ages that thing was called, mistakenly, mantissa. It has
nothing to do with the mantissa as in logarithms. Many (most?) people are
now using a much less tortured term "significand". AFAIK that word was
coined specifically for the use at hand. Much better to invent a word than
to use one wrongly, which is what has been done in this field. So if the
poster knew about mantissas (which he probably did) he would be doubly
confused.
 
Jack Klein

A floating-point number is: a*2^(b-offset)
a is a signed integer and b is an unsigned integer.

HTH,
Patrick

...on some platforms, perhaps all of those that you are familiar with.
The C++ language standard deliberately does not specify the
implementation details of the floating point types, and some are quite
different from the model you describe.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq
 
Keith S.

Ron said:
I wasn't trying to be impolite, just a bit terser than usual. Doubles aren't
just fractions; they shift. I thought the above would be enough of a hint
if you thought about it.

OK, fair enough. I obviously had not thought about it enough ;)

- Keith
 
Frank Schmitt

osmium said:
In the dark ages that thing was called, mistakenly, mantissa. It has
nothing to do with the mantissa as in logarithms. Many (most?) people are
now using a much less tortured term "significand".

Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and
exponent.
Just because you or anybody else doesn't like mantissa doesn't make it wrong.

regards
frank
 
Keith S.

Frank said:
Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and
exponent.
Just because you or anybody else doesn't like mantissa doesn't make it wrong.

According to Knuth, "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...

- Keith
 
Gary Labowitz

Keith S. said:
According to Knuth, "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...

Hmm... sounds good ... but is it?

According to good ol' pedantic Webster:

Main Entry: pe·dan·tic
Pronunciation: pi-'dan-tik
Function: adjective
Date: circa 1600
1 : of, relating to, or being a pedant
2 : narrowly, stodgily, and often ostentatiously learned

Main Entry: man·tis·sa
Pronunciation: man-'ti-s&
Function: noun
Etymology: Latin mantisa, mantissa makeweight, from Etruscan
Date: circa 1847
: the part of a logarithm to the right of the decimal point

I'd say that Knuth, rather than being pedantic, was correct.

Main Entry: correct
Function: adjective
Etymology: Middle English, corrected, from Latin correctus, from past
participle of corrigere
Date: 1676
1 : conforming to an approved or conventional standard
2 : conforming to or agreeing with fact, logic, or known truth
3 : conforming to a set figure <enclosed the correct return postage>

Nevertheless, we can still USE the word mantissa for the numeric value of
floating-point encodings.
(The IEEE standard calls it "the fraction.")

"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means
just what I choose it to mean - neither more nor less."
Lewis Carroll
 
