P.J. Plauger wrote:
(I wrote)
Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.
Consider this program fragment, as no real examples have
been presented so far:
for (x = 0; x + 60 != x; x += 60)
    printf("%20.15g\n", sin(x*A));
A should have the appropriate value (pi/180) so that x can
be expressed in degrees. Now consider the effect of ordinary
argument reduction versus unusually accurate argument
reduction.
If you do the argument reduction using an approximation to
pi/4 that is good to machine precision, you get the answer
the programmer expected. If you use a pi/4 much more accurate
than machine precision, the result slowly diverges from the
expected value as x grows.
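For concreteness, a self-contained version of that fragment
might look like the sketch below. It assumes double x and
A = pi/180, and instead of stepping by 60 the roughly 10**16
times the original loop would need, it samples a few exactly
representable multiples of 60; it also prints the value you
get by reducing x modulo 360 degrees before converting to
radians.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* degrees-to-radians factor: pi/180 as a double */
        const double A = 3.14159265358979323846 / 180.0;
        int k;

        /* Sample exactly representable multiples of 60 of
           increasing size rather than stepping by 60 until
           x + 60 == x, which takes roughly 10**16 iterations. */
        for (k = 0; k <= 50; k += 10) {
            double x = ldexp(60.0, k);      /* 60 * 2**k, exact */
            printf("x = %.0f\n", x);
            printf("  sin(x*A)           = %20.15g\n", sin(x * A));
            printf("  sin(fmod(x,360)*A) = %20.15g\n",
                   sin(fmod(x, 360.0) * A));
        }
        return 0;
    }

To the extent that the library's sin() does very accurate
argument reduction, the two printed values drift apart as x
grows; the fmod() line is what a programmer thinking in
degrees presumably had in mind.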
But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
(snip)
Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?
Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?
The IBM 360/91 was documented as not following the S/360
architecture because its floating-point divide gave rounded
results (0.5 ulp) instead of the truncated results (1 ulp)
that the architecture specified.
Consider someone computing a single-precision sine. Most
likely they use single precision instead of double because
they don't need so much accuracy and hope that the result
will be computed faster.
As there are plenty of multiple-precision libraries around,
users who want more than machine precision know where to
find them.
The OS/360 math library writers had to work extra hard
to get accurate results given the base-16 floating point
of S/360. Sometimes it took a little extra work to get
good answers, which is fine. (The last iteration of the
square root algorithm is done slightly differently, to
avoid precision loss in a halve operation; a sketch of
that trick follows below.) A lot of extra work to get
useless answers is not. (The sine routine gives a fatal
error when the argument is greater than pi*2**18 (single),
pi*2**50 (double), or pi*2**100 (extended precision).)
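Roughly the flavor of that square root trick, sketched in C
(this is a reconstruction of the general idea, not the OS/360
routine):

    #include <stdio.h>
    #include <math.h>

    /* Newton's method for sqrt(x), with the final iteration
       written as a correction term. */
    static double newton_sqrt(double x)
    {
        int e, i;
        double y;

        if (x <= 0.0)
            return 0.0;              /* sketch: ignore negatives and NaN */

        (void)frexp(x, &e);          /* x = m * 2**e, 0.5 <= m < 1 */
        y = ldexp(1.0, (e + 1) / 2); /* rough power-of-two initial guess */

        for (i = 0; i < 8; i++)      /* ordinary Newton steps */
            y = 0.5 * (y + x / y);   /* halve a full-magnitude sum */

        /* Last step rewritten so the halve acts only on the small
           correction x/y - y, not on the full-size average; this is
           the flavor of the fix described above for the base-16
           halve. */
        return y + 0.5 * (x / y - y);
    }

    int main(void)
    {
        printf("%.17g\n%.17g\n", newton_sqrt(2.0), sqrt(2.0));
        return 0;
    }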
Giving a fatal error tells the user that something is
wrong and should be fixed. Supplying answers using bits
that don't really exist lets the user believe that useful
answers are being generated, possibly wasting a great deal
of computation and money.
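A rough C analogue of that policy might look like the sketch
below. The cutoff is the pi*2**50 double-precision limit
quoted above; returning a NaN with errno set is my own
stand-in for the fatal diagnostic the OS/360 library issued.

    #include <stdio.h>
    #include <errno.h>
    #include <math.h>

    /* Refuse arguments so large that essentially no bits of the
       reduced angle remain, instead of returning digits that
       merely look meaningful. */
    static double checked_sin(double x)
    {
        const double limit = ldexp(3.14159265358979323846, 50);

        if (fabs(x) > limit) {
            errno = EDOM;
            return nan("");
        }
        return sin(x);
    }

    int main(void)
    {
        errno = 0;
        printf("sin(1.0)    = %g\n", checked_sin(1.0));
        printf("sin(1.0e18) = %g (errno=%d)\n",
               checked_sin(1.0e18), errno);
        return 0;
    }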
Actually, the one time this happened to me, probably in
ninth grade (due to an undefined variable, if I remember
right), someone carefully explained to me why the library
refused such a calculation. It made sense at the time, and
it still does.
-- glen