Major Addition Bug?

Michael Geary

Phil said:
This seems to be the 100,000th post to Ruby-talk (at least
the English version).

Cool! And it was my *first* post to the group.

This Must Mean Something.
What has our player won?

I know! I know! Can I get a free copy of Ruby?

(I was up till 4AM this morning reading the pickaxe book. Had a bad cold,
couldn't sleep and couldn't think well enough to write any code, and I'd
been meaning to investigate Ruby a bit. Very interesting! I like dynamic
languages such as Python and JavaScript, and I especially like languages
with expressive power. Ruby looks like it has a lot going for it.)

-Mike
 
Dan Tapp

That's a very cool idea. If you do decide to do this:
http://www.imf.org/external/np/tre/sdr/db/rms_fpt.cfm

Current currency exchange rates, from the IMF.

Interesting...I just had a look at this link, as well as its
HTML-formatted counterpart at

http://www.imf.org/external/np/tre/sdr/db/rms_five.cfm

The first dataset (SDRs per Currency Unit) is reported to nine decimal
places, while the reciprocal dataset is reported to six.

So: should such a class round in accordance with the IMF policy, or
would it be acceptable to store the first dataset and invert it as
necessary, giving slightly different results? In other words, which
would give the "expected" performance to potential clients of the class?
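To make the question concrete, here is a sketch in Ruby with a made-up rate (the numbers are hypothetical, not actual IMF data):

```ruby
# Made-up rate to illustrate the question: store only the 9-decimal
# "SDRs per Currency Unit" figure and derive the reciprocal on demand.
sdr_per_unit = 0.672720000                    # hypothetical 9-decimal value
unit_per_sdr = (1.0 / sdr_per_unit).round(6)  # derived, rounded to 6 places
# The derived figure can differ in its last digit from a separately
# rounded value published in the reciprocal table.
```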

- dan
 
Xavier

Offhand I'd say this bug dates back to the 1940s, and isn't
getting fixed any time soon.

Seriously, I think it's a matter of trying to store an infinite
number of digits in a finite number of bits.

For a daily life example: How many digits does it take to store
"1/3" exactly? 0.333333...



Hal

Exactly. Has anyone ever tried to write 1.1 in binary? Tell me when you've
finished writing it ;-)
Use BigNums, which are probably an implementation of BCD or similar (I
haven't looked into Ruby's). BCD (Binary Coded Decimal) has been in use for
ages: it uses one byte per decimal digit. A less wasteful form, called
packed BCD, stores a decimal digit in a nibble (4 bits).
BCD is older than punch cards, btw.
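As an illustration of the packed idea, here is a toy packed-BCD encoder/decoder in Ruby (a sketch only, unrelated to how Ruby's Bignum is actually implemented):

```ruby
# Toy packed BCD: two decimal digits per byte, one per nibble.
def bcd_pack(digits)
  digits = digits.rjust(digits.length + digits.length % 2, "0")
  digits.chars.map(&:to_i).each_slice(2).map { |hi, lo| (hi << 4) | lo }
end

def bcd_unpack(bytes)
  bytes.map { |b| format("%d%d", b >> 4, b & 0x0F) }.join.sub(/\A0+(?=\d)/, "")
end

bcd_pack("1547")                 # => [0x15, 0x47]
bcd_unpack(bcd_pack("90210"))    # => "90210"
```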

Another way out using floats is to redefine Float::==

$ irb
irb(main):001:0> class Float
irb(main):002:1> def ==(other)
irb(main):003:2> (self-other).abs<1E-7
irb(main):004:2> end
irb(main):005:1> end
=> nil
irb(main):006:0> (625.91 + 900.00 + 22.00) == 1547.91
=> true


Hth
 
Dan Tapp

Dan said:
So: should such a class round in accordance with the IMF policy, or
would it be acceptable to store the first dataset and invert it as
necessary, giving slightly different results? In other words, which
would give the "expected" performance to potential clients of the class?

- dan

Never mind; I just read the notes..."official" rounding is 6 significant
digits in both tables. The fact that extraneous digits appeared in some
of the second table's entries threw me...

- dan
 
Lloyd Zusman

Sean O'Dell said:
Sort of makes me wonder if someone somewhere doesn't make a replacement FPU
that does all of its calculations as real decimal values, or at least
automatically corrects rounding errors.

Well, the IBM mainframes from the 1960's had decimal arithmetic hardware
built in. They could do fixed point (integer), floating point, and
decimal arithmetic via CPU instructions.

In fact, they even had several different internal formats for the
numbers used in decimal arithmetic.

COBOL and PL/I compilers from back then could use this CPU capability
directly for calculations.

Do more modern chunks of Big Iron still have built-in packed and zoned
decimal arithmetic hardware?

In any case, the machines that most of us are likely to use these days
have to do this kind of math in software only.
 
Joel VanderWerf

Lloyd said:
Well, the IBM mainframes from the 1960's had decimal arithmetic hardware
built in. They could do fixed point (integer), floating point, and
decimal arithmetic via CPU instructions.

In fact, they even had several different internal formats for the
numbers used in decimal arithmetic.

COBOL and PL/I compilers from back then could use this CPU capability
directly for calculations.

Do more modern chunks of Big Iron still have built-in packed and zoned
decimal arithmetic hardware?

In any case, the machines that most of us are likely to use these days
have to do this kind of math in software only.

IIRC, 68K arch has ABCD ("Add Binary-Coded Decimal") and friends.
 
Tim Hunter

Lloyd said:
Well, the IBM mainframes from the 1960's had decimal arithmetic hardware
built in. They could do fixed point (integer), floating point, and
decimal arithmetic via CPU instructions.

In fact, they even had several different internal formats for the numbers
used in decimal arithmetic.

COBOL and PL/I compilers from back then could use this CPU capability
directly for calculations.

Do more modern chunks of Big Iron still have built-in packed and zoned
decimal arithmetic hardware?

You bet. In Big Iron world nothing ever dies.
 
John W. Kennedy

Sean said:
Sort of makes me wonder if someone somewhere doesn't make a replacement FPU
that does all of its calculations as real decimal values, or at least
automatically corrects rounding errors.

First, because it takes more memory (or gives less precision), and is
slower. Second, because they're not "errors"; floating point arithmetic
is intended for use in scientific computing, and reality does not come
in neatly calibrated decimal fractions of meters, kilograms, and
seconds; and artificially constraining calculations to decimal
quantization actually increases error.

Now, /money/ is decimally quantized. That's why IBM mainframes include
decimal arithmetic, and why languages intended to be capable of
accounting tasks (COBOL, PL/I, Ada '95, RPG, and some others) do decimal
arithmetic, either by decimal hardware or by decimally scaled integer
arithmetic (counting in pennies). Java and Ruby have classes named
BigDecimal for the same reason, though I am uncertain of whether Ruby's
BigDecimal actually does what it seems to be advertising.
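For instance, either decimal approach handles the thread's original sum exactly (this assumes Ruby's standard bigdecimal library):

```ruby
require "bigdecimal"

# Decimal arithmetic: the sum is exact, no binary rounding involved.
sum = BigDecimal("625.91") + BigDecimal("900.00") + BigDecimal("22.00")
sum == BigDecimal("1547.91")   # => true

# Scaled integer arithmetic ("counting in pennies"): also exact.
cents = 62591 + 90000 + 2200
cents == 154791                # => true
```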
 
John W. Kennedy

Lloyd said:
Well, the IBM mainframes from the 1960's had decimal arithmetic hardware
built in. They could do fixed point (integer), floating point, and
decimal arithmetic via CPU instructions.

In fact, they even had several different internal formats for the
numbers used in decimal arithmetic.

COBOL and PL/I compilers from back then could use this CPU capability
directly for calculations.

Do more modern chunks of Big Iron still have built-in packed and zoned
decimal arithmetic hardware?

Yes. The zSeries, following the S/390, following the S/370, continues
to be compatible with the S/360.

However, these are (and always have been) fixed-point, not
floating-point. Decimal arithmetic is only important for accounting,
and accounting tasks don't want floating-point, but scaled fixed-point.

Some pre-360 machines used exclusively decimal arithmetic, such as the
702/705/7080 series, the 650/7070/7072/7074 series, the 1401/1440/1460
series, the 1410/7010 series, the 350, and the 1620. Some of these
machines were designed for accounting. Others were designed to be cheap
and easy to program in assembler language or direct machine language.
None were designed for speed.
 
John W. Kennedy

Joel said:
IIRC, 68K arch has ABCD ("Add Binary-Coded Decimal") and friends.

Intel has some decimal-assist instructions, too, but I understand they
are slower than decimally scaling binary integers, and have fallen by
the wayside.

--
John W. Kennedy
"Give up vows and dogmas, and fixed things, and you may grow like That.
....you may come to think a blow bad, because it hurts, and not because
it humiliates. You may come to think murder wrong, because it is
violent, and not because it is unjust."
-- G. K. Chesterton. "The Ball and the Cross"
 
Jeff Mitchell

Mark Hubbart said:
On May 12, 2004, at 10:27 AM, Sean O'Dell wrote:

rather than rounding, you might consider:

(625.91 + 900.00 + 22.00) - 1547.91 > 0.0000000001
==>false

or even, for the scope of your app:

class Float
def ==(other)
self - other < 0.000000000001
end
end
==>nil
(625.91 + 900.00 + 22.00) == 1547.91
==>true

or, if you aren't into redefining #==, you might just make a method, say,
#approx_eql?(other) for Float.

Just a nitpick: this should be

class Float
  def ==(other)
    (self - other).abs < 0.000000000001
  end
end

Coming from a background in mathematics, I have personal
affection for:

class Float
  def within?(epsilon, other)
    (self - other).abs < epsilon
  end
end

EPSILON = 1e-8
# ...
if a.within?(EPSILON, b)
# ...

I read this as, "if a is within epsilon of b" -- a common phrase
in analysis.
 
Dick Davies

* Michael Geary said:
Cool! And it was my *first* post to the group.

This Must Mean Something.

You're The One.
Don't be surprised if the world appears to be made up of
streaming green continuations when you next go out.
 
Zsban Ambrus

Xavier said:
Offhand I'd say this bug dates back to the 1940s, and isn't
getting fixed any time soon.

Seriously, I think it's a matter of trying to store an infinite
number of digits in a finite number of bits.

No, I'd say that's not really the problem. The problem is that you
should not compare floating-point numbers with == or != unless you
have a good reason to do it.

In this case, you should just do this:

raise "false" if ((625.91 + 900.00 + 22.00) - 1547.91).abs>1e-12

which just succeeds fine. You don't usually need infinite precision, and
in more complicated cases you can't even achieve it. (You could use
rationals here, because these are only simple arithmetic operations, and
you'd get an exact result.)
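For the record, the rational version does come out exact (using Ruby's Rational; in older Rubies you may need require 'rational'):

```ruby
# Every operand is a terminating decimal and only +/- are involved,
# so the sum is exact.
exact = Rational(62591, 100) + Rational(900, 1) + Rational(22, 1)
exact == Rational(154791, 100)   # => true
```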
 
Mark Hubbart

Just a nitpick: this should be

class Float
  def ==(other)
    (self - other).abs < 0.000000000001
  end
end

yeah, yeah... :) I realized when I read someone else's version of it
that I had forgotten to take the absolute value. :/

Also, I hadn't thought of something else. Since it's a floating point
number, you can't have a fixed epsilon for all comparisons. Very large
numbers would always fail unless exactly equal, and very small numbers
would always pass, even if very different.
So, epsilon should be generated from one of the floats you are
comparing. That way, if you are comparing floats that are off in the
1e200 or 1e-200 range, you will still get things right. Comparing
(num1-num2).abs to num1.abs*1e-8 will check that they are equal to
within 8 significant digits.

class Float
  def approx_eql?(other, epsilon=1e-8)
    epsilon *= self.abs  # scale epsilon to the number's magnitude
    (self - other).abs < epsilon
  end
end

This could be extended to allow either an epsilon, or an integer number
of digits to check.
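A quick demonstration of the point about magnitude (a self-contained sketch; this version folds the scaling into the comparison and uses self.abs so negative numbers work too):

```ruby
class Float
  # Relative-tolerance comparison: equal to within ~8 significant digits.
  def approx_eql?(other, epsilon = 1e-8)
    (self - other).abs < epsilon * self.abs
  end
end

big = 1.0e200
big.approx_eql?(big + 1.0e190)   # => true:  1e190 is tiny relative to 1e200
tiny = 1.0e-200
tiny.approx_eql?(2.0e-200)       # => false: off by a factor of two
```

A fixed absolute epsilon would get both of these wrong.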

cheers,
--Mark
 
Joel VanderWerf

Zsban said:
No. I'd rather think this is not the problem. The problem is
that you should not compare floating-point numbers with == or != unless you
have a good reason to do it.

Should 'ruby -W2' warn on use of Float#== ?
 
Zsban Ambrus

Joel said:
Should 'ruby -W2' warn on use of Float#== ?

No. Certainly not.

There are valid uses for ==; you just have to know its limits on
floating-point numbers when you are programming.
Also, no programming language warns on it.
 