> Phillip Gawlowski wrote in post #963602:
> But that basically is my point. In order to make your program
> comprehensible, you have to add extra incantations so that strings are
> tagged as UTF-8 everywhere (e.g. when opening files).
> However this in turn adds *nothing* to your program or its logic, apart
> from preventing Ruby from raising exceptions.
s/apart from preventing Ruby from raising exceptions/but ensures
correctness of data across different systems/;
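The incantation in question is small enough, at least. A minimal sketch (the scratch file and its contents are made up for illustration) of reading text back with an explicit external encoding, so the string comes back tagged as UTF-8 rather than as raw bytes:

```ruby
require "tempfile"

# Write UTF-8 text to a scratch file, then read it back with an
# explicit external encoding so the resulting string is tagged
# as UTF-8 instead of ASCII-8BIT.
content = nil
Tempfile.create("utf8-demo") do |f|
  f.write("héllo wörld")
  f.flush
  content = File.read(f.path, mode: "r:UTF-8")
end

puts content          # héllo wörld
puts content.encoding # UTF-8
```

One line of mode string per `File.open`/`File.read`, and the data survives the round trip intact.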
> Maths and computation are not the same thing. Is there anything in the
> above which applies only to Ruby and not to floating point computation
> in any other mainstream programming language?
You conveniently left out that Ruby thinks dividing by 0.0 results in
infinity. That's not just wrong, but absurd in the extreme. So, we have
to safeguard against this, just like we have to safeguard against, say,
improper string encoding. If *anyone* is to blame, it's ANSI and the IT
industry for having a) an extremely US-centric view of the world, and
b) being too damn shortsighted to create an international, capable
standard 30 years ago.

Further, you can't do any computations without proper maths. In Ruby,
you can't do computations, since it cannot divide by zero properly, or
at least *consistently*.
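The inconsistency is easy to demonstrate: integer division by zero raises an exception, while float division by zero silently yields an IEEE 754 infinity (or NaN):

```ruby
# Integer division by zero raises...
begin
  1 / 0
rescue ZeroDivisionError => e
  puts "raises #{e.class}"   # raises ZeroDivisionError
end

# ...while IEEE 754 float division by zero quietly produces
# signed infinities, and 0.0/0.0 produces NaN.
puts 1.0 / 0.0          # Infinity
puts -1.0 / 0.0         # -Infinity
puts (0.0 / 0.0).nan?   # true
```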
> Yes, there are gotchas in floating point computation, as explained at
> http://docs.sun.com/source/806-3568/ncg_goldberg.html
> These are (or should be) well understood by programmers who feel they
> need to use floating point numbers.
> If you don't like IEEE floating point, Ruby also offers BigDecimal and
> Rational.
Works really well with irrational numbers, which are neither finite
decimals nor expressible as a fraction p/q.
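For decimal and fractional values those classes do help; it is only the irrationals that stay approximate in every one of these representations. A quick sketch:

```ruby
require "bigdecimal"

# Binary floats accumulate representation error...
puts 0.1 + 0.2          # 0.30000000000000004
puts (0.1 + 0.2) == 0.3 # false

# ...while BigDecimal and Rational stay exact for decimal and
# fractional values.
puts BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3") # true
puts Rational(1, 3) + Rational(1, 6)                            # 1/2

# An irrational like the square root of 2, however, can only ever
# be approximated, whichever class is used.
puts Math.sqrt(2)       # 1.4142135623730951
```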
In a nutshell, Ruby cannot deal with floating points at all, and the
IEEE standard is a means to *represent* floating point numbers in bits.
It does *not* supersede natural laws, much less rules that have been in
effect for hundreds of years.
And once the accuracy an IEEE float offers isn't good enough anymore
(which happens once you have to simulate a particle system), you move
away from scalar CPUs to vector units (like the MMX and SSE instruction
sets on the desktop, or a GPGPU via CUDA).
> If Ruby were to implement floating point following some different set of
> rules other than IEEE, that would be (IMO) horrendous. The point of a
> standard is that you only have to learn the gotchas once.
Um, no. A standard is a means to avoid misunderstandings and to have a
well-defined system for dealing with what the standard defines. You
know, like exchanging text data in a standard that can cover as many of
the world's glyphs as possible.

And there is always room for improvement; otherwise I wonder why
engineers would need Maple, and mathematicians Mathematica.
--
Phillip Gawlowski
Though the folk I have met,
(Ah, how soon!) they forget
When I've moved on to some other place,
There may be one or two,
When I've played and passed through,
Who'll remember my song or my face.