I have tried to run this program with both Eclipse (CDT)+MinGW and Cygwin+GCC:
    #include <stdio.h>
    #include <stdlib.h>
    #include <float.h>

    int main(void)
    {
        puts("The range of ");
        printf("\tlong double is [%Le, %Le]∪[%Le, %Le]\n",
               -LDBL_MAX, -LDBL_MIN, LDBL_MIN, LDBL_MAX);
        return EXIT_SUCCESS;
    }
but got different results:
* In Eclipse (CDT)+MinGW:
The range of
        long double is [-1.#QNAN0e+000, 3.237810e-319]∪[6.953674e-310, 0.000000e+000]
* In Cygwin+GCC:
The range of
        long double is [-1.189731e+4932, -3.362103e-4932]∪[3.362103e-4932, 1.189731e+4932]
This is weird. I googled it and found only this thread:
http://www.thescripts.com/forum/thread498535.html
I know LDBL_MAX for long double is machine-dependent, but why does it differ like this on the same machine? I guess it's a problem with MinGW. Does anyone have any idea?
There is absolutely nothing wrong with MinGW, honest. There might be something wrong with their documentation if they don't explain this, or with your situation if they do document this behavior and you didn't read the documentation.
Probably the most significant difference between the MinGW and Cygwin packages of gcc, regardless of what IDE or other process you use to drive them, is not in the compiler. If you get both packages based on the same gcc version, the compilers are probably identical, or very close to it.
Your real problem is that MinGW is like a typical gcc distribution: it is supplied essentially without its own C library, and it links against the host system's C library, which on Windows means Microsoft's runtime.
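A quick way to check which toolchain, and therefore which C library, your build is using is to test the compilers' predefined macros. This is only a minimal sketch; __MINGW32__ and __CYGWIN__ are the macros these toolchains predefine:

    #include <stdio.h>

    int main(void)
    {
        /* Identify the toolchain via its predefined macros. */
    #if defined(__CYGWIN__)
        puts("Cygwin: gcc with Cygwin's own C library (newlib)");
    #elif defined(__MINGW32__)
        puts("MinGW: gcc linked against Microsoft's C runtime");
    #else
        puts("some other toolchain");
    #endif
        return 0;
    }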
gcc happens to support the full extended-precision 80-bit format of the Intel math coprocessor/FPU for long double. But Microsoft made a marketing decision a long, long time ago and decided not to. Specifically:
"With the 16-bit Microsoft C/C++ compilers, long doubles are stored as
80- bit (10-byte) data types. Under Windows NT, in order to be
compatible with other non-Intel floating point implementations, the
80-bit long double format is aliased to the 64-bit (8-byte) double
format."
You can read the entire sad article on Microsoft's site, in their own
words here:
http://support.microsoft.com/kb/129209
The wording is gibberish, by the way; the 80-bit format is not "aliased", whatever that means, to 64 bits. It merely means that when you define an object of type long double, they use the same 64-bit format that they use for ordinary double.
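You can see the difference at compile time by looking at the <float.h> constants, which come from the compiler rather than from the library. A minimal sketch; the expected values are 64/18 for gcc's 80-bit long double and 53/15 for Microsoft's double-sized one:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* These are compile-time constants, so no long double is ever
           passed to the library and the output is trustworthy. */
        printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG); /* 64 with gcc, 53 with MSVC */
        printf("LDBL_DIG      = %d\n", LDBL_DIG);      /* 18 with gcc, 15 with MSVC */
        printf("sizeof(long double) = %u\n", (unsigned)sizeof(long double));
        return 0;
    }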
So when you use gcc, which does not limit the performance of the Intel FPU to help Windows NT take over the world, it passes long double objects and constants to functions as 80-bit Intel FPU format objects. But Microsoft's library implementation of printf() expects to receive a 64-bit Intel format value for either double or long double. The result is undefined behavior.
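If you need to print a long double with MinGW, one workaround is to cast it to double first, so the bits you pass match what Microsoft's printf() expects. A sketch only: the cast discards the extra range and precision, so values outside double's range become infinity or zero:

    #include <stdio.h>

    int main(void)
    {
        long double ld = 1.0L / 3.0L;
        /* Pass a genuine 64-bit double so the argument matches what
           Microsoft's runtime expects; extra precision is lost. */
        printf("approximately %e\n", (double)ld);
        return 0;
    }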
Cygwin, on the other hand, comes with its own library that expects the
same format for long double that the compiler uses.
To this day, that decision, made many years ago, renders Microsoft's compilers crippled for some types of scientific and engineering programming.
I'm not a die-hard Microsoft basher, but this was certainly an example of extreme stupidity on their part. They decided that their goal of world domination was more important than the programmer's need to decide when a program requires the maximum precision the hardware can provide.
--
Jack Klein
Home: http://JK-Technology.Com
FAQs for:
comp.lang.c               http://c-faq.com/
comp.lang.c++             http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html