> I assume that any programming language older than a few months with
> widespread usage is suitable for numeric computation
Um, you might be over-optimistic.
There's the famous feature of PL/I where 25 + 1/3 either raises a
fixed-point overflow error or evaluates to 5.33333... I've seen both
claims and I have no way to test it;
<http://publib.boulder.ibm.com/infocenter/comphelp/v7v91/topic/com.ibm.aix.pli.doc/ibml2d41004828.htm>
says it's an error but also mentions left truncation.
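If I'm reading that page right, the mechanics go something like this
(my reconstruction of the fixed-decimal precision rules, so take it
with a grain of salt): the constants 1 and 3 are FIXED DECIMAL(1,0),
and the division rule gives the quotient the maximum 15 digits, 14 of
them after the decimal point:

    1 / 3                  ->  0.33333333333333     precision (15,14)
    25 + 0.33333333333333  ->  25.33333333333333    needs 16 digits,
                                                    only 15 available

so the leading 2 has nowhere to go: you either get the FIXEDOVERFLOW
condition or, if that condition is disabled, the left-truncated
5.33333333333333.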
The dc program was designed to be nothing but a Reverse Polish
Notation calculator, but by default it does pure integer arithmetic
(you can enter numbers with a decimal point, but the fractional part
is silently discarded). You can change that, but only to a fixed
number of decimal places.
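For example, with GNU dc the "k" command pops a value and uses it as
the number of decimal places for later results, so a session looks
roughly like this (the precision starts at zero):

    $ dc
    1 3 / p
    0
    10 k
    1 3 / p
    .3333333333

Unless you remember to set the precision yourself, every division
quietly drops everything after the decimal point.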
For that matter, new programmers are routinely surprised to learn
that in many languages 1/3 equals 0, or that 1.0/3.0*3.0 does not
necessarily equal 1.0.
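Here is a quick C illustration, assuming IEEE-754 doubles (on such a
machine the 1/3 example happens to come back to exactly 1.0, which is
why I say "not necessarily", while the nearly identical
0.1 + 0.2 == 0.3 already fails):

    #include <stdio.h>

    int main(void)
    {
        /* Both operands are integers, so the division truncates. */
        printf("1/3 = %d\n", 1 / 3);                        /* 0 */

        /* With IEEE-754 doubles this particular expression rounds
           back to exactly 1.0 ... */
        printf("1.0/3.0*3.0 = %.17g\n", 1.0 / 3.0 * 3.0);

        /* ... but nearly identical ones do not. */
        printf("0.1+0.2 == 0.3? %s\n",
               0.1 + 0.2 == 0.3 ? "yes" : "no");            /* no */

        return 0;
    }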
(I have a dim memory that older Visual Basics were loosey-goosey
about mixing booleans and integers: the "not" operator is a bitwise
"not" rather than a logical one (it just flips every bit), and FALSE
is 0, so for any integer I other than 0 and -1, both I and NOT I
were considered true. But I'm not 100% certain of that, and it's not
really numeric anyway.)
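C will cheerfully do the analogous thing to you if you type the
bitwise ~ where you meant the logical !: since ~I just flips every
bit, it is zero only when I is -1. A toy demonstration (plain C,
nothing Visual Basic about it):

    #include <stdio.h>

    int main(void)
    {
        for (int i = -2; i <= 1; i++) {
            /* ~i == -(i + 1), so it is zero only for i == -1 */
            printf("i = %2d:  i is %s, ~i is %s, !i is %s\n",
                   i,
                   i  ? "true" : "false",
                   ~i ? "true" : "false",
                   !i ? "true" : "false");
        }
        return 0;
    }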
But that's really quibbling. Your general point is quite true.
It's really REALLY rare for someone to find a new fundamental flaw
in a long-standing language. By far the likeliest explanation is
that they're misunderstanding something; there's a much smaller
chance that they've hit a known, long-standing problem.