I have an amazing problem with a double in Java. It is amazing! Does
someone understand it?

Roedy Green said: It is one of the most commonly asked questions. See
http://mindprod.com/jgloss/floatingpoint.html

Patricia said: I don't think anyone really understands the claimed output, 0.399999999,
as the result of adding ((0 + 0.2) + 0.2).
The relative error is a million times too large to be explained by
double precision rounding on adding two numbers of similar magnitude and
equal sign. In any case, in IEEE 754 binary floating point, 0.4 has the
same representation as the result of adding the representation of 0.2 to
itself, so the String answer should have been "0.4".
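For illustration, a minimal sketch of that point, assuming any standard IEEE 754
JVM (the class name is only illustrative):

    public class AddPointTwo {
        public static void main(String[] args) {
            double sum = (0.0 + 0.2) + 0.2;   // the computation the OP described
            System.out.println(sum);          // prints 0.4
            System.out.println(sum == 0.4);   // true: same value as the 0.4 literal
            System.out.println(Double.doubleToLongBits(sum) ==
                               Double.doubleToLongBits(0.4));  // true: identical bit pattern
        }
    }

Doubling the double nearest 0.2 only changes the exponent, so the addition is
exact and lands on the double nearest 0.4.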
I am concerned about the fuzz-inserting demon model, because it causes
people to accept as rounding error results that have to have a
different, as yet unknown, cause.
Patricia
Roedy said: I have added a point to the entry at
http://mindprod.com/jgloss/floatingpoint.html. Is this what you were
referring to?
"When numbers are converted from float or double to String for display
they may be truncated. Further the process of converting from binary
to decimal introduces even more errors that were not in the original
computation result, possibly because of repeaters -- fractions that
cannot be represented exactly in binary or decimal."
Mark said: No, I think that is an example of what Patricia does NOT like. The
conversion of floating point values to String is very precisely defined
in Java. Your statement gives the impression of unbounded error, which is
very much not the case.
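For reference, a small sketch of the defined behaviour Mark is pointing to:
Double.toString emits just enough decimal digits to uniquely identify the
double, so the string always parses back to the same value (class name only
illustrative):

    public class RoundTrip {
        public static void main(String[] args) {
            double d = 0.1 + 0.2;                   // nearest double lies slightly above 0.3
            String s = Double.toString(d);
            System.out.println(s);                            // 0.30000000000000004
            System.out.println(Double.parseDouble(s) == d);   // true: the conversion round-trips
            System.out.println(Double.toString(0.2 + 0.2));   // 0.4
        }
    }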
George Neuner wrote:
...
I get 0.40000000000000002220446049250313080847263336181640625
Patricia
George said: IEEE double precision has only 17 significant decimal figures. Are
you calculating that on a VAX?
...
We certainly agree on the bottom line, that the final String answer
should be "0.4". I am confused about whether you are agreeing or
disagreeing with my analysis leading to that conclusion. Could you clarify?
Patricia
George said: It wasn't clear from your post (and still isn't) whether you were
referring to binary rounding in the arithmetic or decimal rounding in
the print formatting. The "correct" answer of 0.4 should be due to
decimal rounding because the default is to include 6 significant
figures (which should be 0.40000) and to truncate trailing zeros.
I agree that either rounding doesn't explain the OP's strange results.
However, I must point out that the OP did not mention what platform
(CPU + JVM) he was using, and even though the Java spec requires
IEEE-754 arithmetic, in fact no FPU hardware is completely 754
compliant - to achieve compliance some amount of software emulation is
always necessary.
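For comparison, a small sketch (assuming a standard JVM) of the two output
paths under discussion; printf's "%f" defaults to six digits after the decimal
point and keeps trailing zeros, while println uses Double.toString:

    public class PrintCompare {
        public static void main(String[] args) {
            double sum = 0.2 + 0.2;
            System.out.println(sum);            // 0.4
            System.out.printf("%f%n", sum);     // 0.400000
            System.out.printf("%.17f%n", sum);  // 0.40000000000000002
        }
    }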
Patricia said: I think possibly you are confusing println with printf?
The JLS has detailed rules for floating point arithmetic.
new BigDecimal(0.4).toString()
new BigDecimal(someDouble) returns a BigDecimal whose value is exactly
the value of the double. BigDecimal's toString() is also exact.
Although double has about the same precision as 16 significant decimal
digit arithmetic, the doubles do not align with the short decimal
numbers. Any numeric double can be exactly represented as a decimal
fraction, but it often takes a lot more than 17 digits.
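The long number quoted earlier can be reproduced with that constructor; a
minimal sketch (class name only illustrative):

    import java.math.BigDecimal;

    public class ExactValue {
        public static void main(String[] args) {
            System.out.println(new BigDecimal(0.4));
            // 0.40000000000000002220446049250313080847263336181640625
            System.out.println(new BigDecimal(0.2 + 0.2));
            // identical digits: 0.2 + 0.2 evaluates to the same double as the 0.4 literal
            System.out.println(new BigDecimal("0.4"));
            // 0.4 -- the String constructor never goes through a double
        }
    }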
George said: That's cheating.
BigDecimals don't have the same semantics as hardware FP. If you have
enough bits you can represent any real number - not just the subset
also representable by IEEE double precision numbers. In any event, we
were discussing fractions representable in 52 bits.
Patricia said: The JLS has detailed rules for floating point arithmetic. Absent
"strictfp", the rules allow for extra precision in some cases. I have
yet to see Java produce lower precision than is specified. In this
particular case, the hardware would have to be *very* weird to avoid
getting the right answer.
Mark said: Actually I don't think extra precision is allowed. What is permitted,
without strictfp, is a larger exponent.
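A small sketch of that distinction, under the value-set rules that applied
before Java 17 made all floating point strict (class name only illustrative):

    public class StrictDemo {
        // strictfp: every intermediate result must be a true IEEE 754 double value.
        static strictfp double sumStrict(double a, double b) {
            return (0.0 + a) + b;
        }

        // Without strictfp, intermediates may use a wider exponent range (same
        // 53-bit precision), which only matters near overflow or underflow.
        static double sumDefault(double a, double b) {
            return (0.0 + a) + b;
        }

        public static void main(String[] args) {
            System.out.println(sumStrict(0.2, 0.2));   // 0.4
            System.out.println(sumDefault(0.2, 0.2));  // 0.4
        }
    }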
Patricia said: It is not true that "if you have enough bits you can represent any real
number". There are more real numbers than there are finite length bit
sequences.
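In symbols, the finite bit strings form a countable set while the reals do not:

    |union over n >= 1 of {0,1}^n| = aleph-0  <  2^(aleph-0) = |R|   (Cantor's diagonal argument)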