> Uh, no. There are *lots* of representable values between DBL_MIN and zero.
You are right; I had forgotten some numbers.
>
> It is extremely *unlikely* that a representable argument x will be
> close enough to an actual multiple of pi/2 that the nearest internal
> representation of cos(x) will be zero, but it is *possible*.
Anyhow, here is a proof that it cannot be close enough to zero, and so
that it is not possible (in the remainder ^ denotes exponentiation):
Set d_n = (n + 1/2) * pi rounded to 53-bit double precision. d_n is a
rational number; we have to determine its denominator. If d_n has k_n
binary digits after the binary point (removing trailing zero digits),
the denominator is 2^k_n. Rewrite (in exact mathematics) as
2*d_n/(2n + 1) ~ pi; the rational on the left has a slightly larger
denominator. You can verify that this denominator is always smaller
than 2^54. When n = 0, the value is about 1.5, so there are at most
52 bits after the binary point. Increasing n decreases the number of
bits after the binary point; multiplying by 2n + 1 increases it again,
but not enough to get more than 54 bits (possibly not even more than
53, but that does not matter).
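A quick sanity check, for anyone with Python at hand (a sketch, not
part of the proof): Fraction(x) gives the exact rational value of a
double, and (n + 0.5) * math.pi is, up to the rounding of the product,
the d_n above.

    from fractions import Fraction
    import math

    for n in range(20):
        # (n + 0.5) * math.pi is, up to the rounding of the product,
        # the d_n above; Fraction() converts the double exactly
        d_n = (n + 0.5) * math.pi
        r = 2 * Fraction(d_n) / (2 * n + 1)
        assert r.denominator < 2**54, (n, r.denominator)
    print("denominator of 2*d_n/(2n+1) < 2^54 for all n checked")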
Now go to the continued fraction expansion of pi. The 30-th
convergent is 66627445592888887/21208174623389167, where the
denominator is larger than 2^54. Its absolute error is larger than
2e-34. A property of convergents is that there are *no* rational
numbers with smaller denominator that are closer to the number being
approximated. As the denominator of 2*d_n/(2n+1) is less than
21208174623389167, we have |2*d_n/(2n+1) - pi| > 2e-34, or
|d_n - (n + 1/2) * pi| > (2n + 1) * 1e-34. And so, since the cosine
has slope of magnitude 1 at its zeros:
|cos(d_n)| > about (2n + 1) * 1e-34, which is, eh, much larger
than DBL_MIN.
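This is also checkable by machine. The sketch below tests the quoted
convergent against pi truncated to 40 decimals; the pi constant is the
only thing typed in by hand, so verify it against a table if you
distrust me.

    from fractions import Fraction
    from decimal import Decimal, getcontext

    getcontext().prec = 50
    PI = Decimal('3.1415926535897932384626433832795028841971')  # 40 decimals
    conv = Fraction(66627445592888887, 21208174623389167)       # quoted above
    err = abs(Decimal(conv.numerator) / Decimal(conv.denominator) - PI)
    print(conv.denominator > 2**54)    # True
    print(err > Decimal('2e-34'))      # True, per the bound above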
The bound is not sharp, but it is sufficient to show that the cosine
should never deliver 0.0. Similar reasoning shows the same for long
double, and indeed for almost all finite-precision floating-point
systems. It can only fail to hold if there is a coefficient in the
continued fraction expansion that is extremely large compared to the
denominator of the corresponding convergent. So it might be true on
some 3-digit decimal f-p system, but on none other (the first
20,000,000 coefficients contain only one relatively large
coefficient; it gives the well-known approximation 355/113).
Another possibility is when the range of exponents is "too small"
compared to the size of the mantissa.
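The coefficients themselves are easy to reproduce; this sketch
recovers the leading ones from a 50-digit decimal pi, enough to show
the single large coefficient, 292, which is what gives 355/113.

    from fractions import Fraction
    from decimal import Decimal

    # 50-digit pi; only the leading coefficients are trustworthy when
    # computed from a truncation, so stop early
    x = Fraction(Decimal('3.14159265358979323846264338327950288419716939937510'))
    coeffs = []
    for _ in range(12):
        a = int(x)            # next coefficient of the expansion
        coeffs.append(a)
        x = 1 / (x - a)
    print(coeffs)             # [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1]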
> And,
> as I showed in an earlier posting, there are several cases where
> cos(n*pi/2) is smaller in magnitude than DBL_MIN.
You showed examples with DBL_EPSILON. But I said that the computed
cos((n + 1/2) * pi), i.e. cos(d_n), is of the order of magnitude of
DBL_EPSILON; not that it is larger.
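This is easy to see numerically; a sketch in Python, where
sys.float_info carries the C <float.h> values:

    import math, sys

    eps, tiny = sys.float_info.epsilon, sys.float_info.min  # DBL_EPSILON, DBL_MIN
    for n in range(5):
        c = math.cos((n + 0.5) * math.pi)
        print(n, c, abs(c) < eps, abs(c) > tiny)
    # for n = 0 the cosine comes out near 6.1e-17: below DBL_EPSILON
    # (2.2e-16), but enormously larger than DBL_MIN (2.2e-308)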
With respect to a comparison with DBL_EPSILON, I would say that the
likelihood that it is larger than the cosine is reasonable, but that
depends on the distribution of the 1 and 0 bits in the binary
expansion of (n + 1/2) * pi, which is essentially "random". And it
is only possible for small values of n. If (for instance) IEEE had
chosen a 50-bit mantissa for double, the value of cos(pi/2) would
have been the same as you got with standard double, but that is
because there is a run of 4 zero bits in the binary expansion of
pi/2 from bit 51 to bit 54. (Runs of identical bits make the
approximation smaller than normal.) In general, when printed in
decimal, the exponent of cos(d_n) should be *at most* two less
than the exponent of the associated _EPSILON. The first occurrence
of a difference of two might be with a floating-point system with a
mantissa of 107 bits (there is a run of 7 zero bits for pi/2 at
that place).
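The runs of zero bits can be inspected directly; this sketch prints
the first 60 fractional bits of pi/2 from a 69-digit decimal value.
If the claim above is right, the run of zeros shows up around
position 51 (the exact offset depends on whether you count the
leading bit).

    from decimal import Decimal, getcontext

    getcontext().prec = 70
    PI = Decimal('3.14159265358979323846264338327950288419716939937510'
                 '5820974944592307816')
    frac = PI / 2 - 1          # fractional part of pi/2 (pi/2 is 1.57...)
    bits = []
    for _ in range(60):
        frac *= 2
        b = int(frac)          # next binary digit after the point
        bits.append(str(b))
        frac -= b
    print(''.join(bits))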