> That example is more realistic than any you have posted
> so far.
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and what the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
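
To put a number on it, here's a quick sketch; 1.0e22 is just an
illustration, not some magic threshold, and what it prints depends
entirely on how honest the library's reduction is:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 1.0e22;                    /* large, but a perfectly finite double */
    const double pi = 3.141592653589793;  /* pi rounded to double precision */

    /* A quality library reduces x modulo 2*pi carrying many more bits
       of pi than a double holds, so sin(x) stays accurate even here. */
    double careful = sin(x);

    /* Reducing with only a double-precision pi loses the argument: the
       remainder is wrong in essentially every bit, and the sine
       computed from it is junk. */
    double sloppy = sin(fmod(x, 2.0 * pi));

    printf("sin(1e22), library reduction: %.17g\n", careful);
    printf("sin(1e22), naive reduction:   %.17g\n", sloppy);
    return 0;
}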
> When they use a single precision function they expect
> less accurate answers than a double precision function.
No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.
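
A toy sketch of the distinction, if it helps: the float result carries
fewer bits, which is a matter of precision; whether the bits it does
carry are nearly all correct is a matter of accuracy.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Same argument, two precisions.  The float result carries about
       7 decimal digits, the double result about 16 -- that is
       precision.  Whether each result is correct in nearly all the
       bits it *does* carry is accuracy, and a quality library
       delivers both. */
    printf("float  sine: %.9g\n",  (double)sinf(1.0f));
    printf("double sine: %.17g\n", sin(1.0));
    return 0;
}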
> It says that when the operations require exceptional
> ability to perform, far above and beyond the call of
> duty, it is time to give up. When thousands of
> digits of pi are required to compute the result on
> a 64 bit operand, something is wrong.
"Far above and beyond" is once again qualitative arm waving.
You may hear a weaker call to duty than others, but there's
nowhere in the C Standard that justifies an arbitrary
point beyond which it's okay to give up on sine, or any other
function. If you think sine is hard to get right for some
arguments, try lgamma, tgamma, and erfc (all in C99).
> The requirements of the C standard have already been
> discussed, and that didn't seem to bother you any.
What, you mean that a conforming implementation has no explicit
precision requirements? I understand that; in fact I was one
of the principal proponents of that (lack of) requirement in
the C Standard. Dan Pop already responded to that issue better
than I can. It is well understood that every implementation
has certain "quality of implementation" features. (I also
pioneered use of that term.) If getc takes two days per character,
nobody will buy your library. Similarly, if math functions
trap out on certain unspecified values, word will get out and
your customers will stay away in droves.
Math libraries are tougher to quantify, because they require so
many measurements. But once you do the measurements and publish
them, you'd better be prepared to justify the results. You can't
just say, "I decided not to implement pow for negative exponents."
Or more to the point, "at some point the results of sine begin
to really suck, but I'd rather not say where that is." And if
you *do* choose to publish your threshold of suckiness, you'd
better have a rationale for choosing it. Otherwise, some third-party
tester will report that your sine function is occasionally off by
40 billion ulp, without providing any rationale for why that might be.
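
For what it's worth, here's roughly the sort of harness such a tester
runs. It's only a sketch: it leans on sinl as the reference, which
assumes long double actually carries more bits than double on the
machine at hand.

#include <stdio.h>
#include <math.h>

/* Rough error in ulps: compare double sin() against long double sinl()
   used as a reference.  Only meaningful where long double is wider
   than double, which is true on x86 but not guaranteed anywhere. */
static double ulp_error(double x)
{
    double      got = sin(x);
    long double ref = sinl((long double)x);
    double      one_ulp = nextafter(got, INFINITY) - got;  /* size of one ulp at got */

    return (double)(fabsl((long double)got - ref) / one_ulp);
}

int main(void)
{
    double args[] = { 0.5, 3.0, 1.0e6, 1.0e15, 1.0e300 };
    size_t i;

    for (i = 0; i < sizeof args / sizeof args[0]; ++i)
        printf("sin(%g) is about %.2f ulp from the reference\n",
               args[i], ulp_error(args[i]));
    return 0;
}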
> The numbers above are not for IEEE, as it hadn't been
> invented yet. Extended precision has 112 bit fraction.
Oh, I see. You were talking about criteria accompanying a
40-year-old architecture, with software technology to match.
> It seems that on most implementations sqrt() has the
> ability to generate a fatal error when given a negative
> argument.
Really? The C Standard says it should set errno to EDOM and
return some failure value. If your implementation also obeys
IEC 60559 (aka IEEE 754) then you should return a NaN. I
find very few customers who want a program to terminate rather
than produce a NaN. I know even fewer who would *ever* expect
sine to return a NaN for any finite argument.
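
A sketch you can try on your own implementation; in C99, whether errno
gets set at all is governed by math_errhandling, so take the errno line
with a grain of salt:

#include <stdio.h>
#include <math.h>
#include <errno.h>

int main(void)
{
    errno = 0;
    double r = sqrt(-1.0);

    /* On an IEC 60559 implementation the usual result is a quiet NaN;
       errno may also be set to EDOM.  Neither outcome is a fatal
       error that terminates the program. */
    printf("sqrt(-1.0) = %g\n", r);
    if (isnan(r))
        printf("the result is a NaN\n");
    if (errno == EDOM)
        printf("errno was set to EDOM\n");
    return 0;
}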
> There are no problems of any use to anyone where they
> are useful.
Is that a Zen koan or just the nonsense it appears to be?
> It wasn't in a class. They didn't teach it in ninth grade,
> and as far as I know they don't now. It was explained to
> me by a professional physicist.
As a one-time professional physicist myself, I now begin to
understand the nature of your education.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com