long double in gcc implementations


Chris Torek

lcw1964 said:
[...]
I have recompiled some old math code that uses long double types
throughout and provides the corresponding range and precision (up to 18
digits, with exponents up to about -/+ 4900) when compiled under an
older version of BC++.

You're making an assumption about the correct range and precision of
long double.

This range is not required by C, but is provided by the target
architecture. (That makes this *particular* detail off-topic in
comp.lang.c, of course.) The on-topic part is whether implementations
are required to distinguish between "double" and "long double",
both in terms of compile-time type (where the answer is "yes") and
in terms of sizeof(), precision, and so on (where the answer is
"no").

A related question: if you compile and run:

#include <stdio.h>

int main(void)
{
    printf("sizeof(double): %lu\n",
           (unsigned long)sizeof(double));
    printf("sizeof(long double): %lu\n",
           (unsigned long)sizeof(long double));
    return 0;
}

and get two different numbers, does this mean that the values for
DBL_* and LDBL_* in <float.h> must be different? (I would say
"no": for instance, you can have a compiler in which sizeof(long
double) is bigger but the extra bytes are simply wasted. Not very
useful, but then, the Standard rarely imposes any requirement that
a compiler be any good. :) )
MinGW is correct about atof(); according to the standard, it's
declared in <stdlib.h>, not in <math.h>. _atold() is non-standard, so
you can't use it in portable code.

And the (C99) standard routine for extracting a long double is strtold(), declared in <stdlib.h>.
There's another potential problem. An implementation consists of two
parts ...

More precisely, most *real* implementations consist of multiple
parts (at least two, usually quite a few more). A few (usually
toy) implementations actually package everything up into one seamless
-- and hence inflexible and unextendable -- whole. (I prefer
systems with "beautiful seams", as Mark Weiser once called them.)
the compiler and the runtime library (plus the linker and
perhaps a few other things).

(those being some of the "more")
Very often the runtime library is provided as part of the operating
system, and the compiler is provided by a third party, so they
might not be in sync. If your compiler assumes that long double
has one representation, and your runtime library's implementation
of printf() assumes a different representation, you're going to
have problems. In that case, you have a non-conforming (broken)
implementation -- and there might not be much you can do about it.

I believe this is in fact the problem here. The easiest thing to
do about it is usually to find a different, less- or non-broken
implementation.

Also (though neither you nor the OP appear to need this), I have some
related comments in <http://web.torek.net/c/numbers.html>.
 

Keith Thompson

jacob navia said:
The other problem with mingw is the run time library. Last time
I checked they do not provide a C99 compliant printf, so it is very
difficult to print long double data.

As far as I know, C99 didn't add anything new to printf related to
long double (except "%La" and "%LA" for hexadecimal output, but I
don't think that's what we're talking about).

This:

#include <stdio.h>

int main(void)
{
    long double x = 42.0;
    printf("x = %Lg\n", x);
    return 0;
}

is valid in both C90 and C99.

(I think some versions of gcc have printed misleading warnings about
something like this; I don't remember the exact details.)

So if mingw's runtime library doesn't support printing long doubles,
it's not a C99-specific problem. (It may be a mismatch between the
compiler and the library.)
 

Keith Thompson

Richard Heathfield said:
Keith Thompson said:
[...]
All these extensions can be disabled by invoking the compiler with
the -ansic flag.

That's good, seriously.

But it's not true (at least for the qfloat thing), for any users that
obtained their copy of lcc-win32 prior to this discussion (see elsethread,
where Mr Navia admits that he's had to fix the compiler as a result of this
discussion).

In fairness, that particular case strikes me as a relatively minor
bug, something common to almost all software -- and he did fix it
quickly.

Since I've never really used lcc-win32, I have no real basis for
judging how buggy it is in general. (I decline, at least for now, to
judge jacob's programming skills on the basis of his debating skills.)
 

Keith Thompson

jacob navia said:
The mingw documentation says:
http://www.mingw.org/MinGWiki/index.php/long double

< quote >
Minimalist GNU for Windows
Printing and formatting long double values

mingw uses the Microsoft C run-time libraries and their implementation
of printf does not support the 'long double' type. As a work-around,
you could cast to 'double' and pass that to printf instead. For
example:

printf("value = %g\n", (double) my_long_double_value);

Note that a similar problem exists for 'long long' type. Use the 'I64'
(eye sixty-four) length modifier instead of gcc's 'll' (ell ell). For
example:

printf("value = %I64d\n", my_long_long_value);

See also long long

< end quote >

I suspect that this may be, if not incorrect, at least just a little
bit misleading. It seems likely that the Microsoft C run-time library
does support "long double" -- just not the same representation of
"long double" used by MinGW's compiler. But in any case, casting to
double is a decent workaround, unless you really need to display more
precision than double provides.

It would also be nice if it mentioned that "%lld" is the form
specified by the standard, not just a gcc vs. Microsoft thing. (And
it's not gcc that implements printf anyway, it's the runtime library,
which is *not* part of gcc -- though gcc does recognize printf formats
for the purpose of issuing warnings.)
 

P.J. Plauger

This is unclear.

FE_DFL_ENV is a macro that "designates the default environment",
as the standard says (7.6.1).

Specifically, in this case: do you set the FPU to full precision
(80 bits), or do you stay in 64-bit mode?

lcc-win32 sets full precision at 80 bits.

IIRC, the mode is described in terms of the number of precision bits,
not the number of bits in the full floating-point representation.
Thus the choice is between 53-bit mode, good for 64-bit IEEE "double"
representation, or 64-bit, good for extended IEEE 80-bit "long double"
representation. Our FE_DFL_ENV ensures that the 80-bit representation
gives sensible results with Mingw. The startup code that Mingw
normally relies on does not.
The other problem with mingw is the run time library. Last time
I checked they do not provide a C99 compliant printf, so it is very
difficult to print long double data.

That may be true, but the problem goes deeper than that. Mingw, left
to its own devices, doesn't even *compute* good long double results
internally.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

lcw1964

Wow! My original query stimulated a lot of discussion, even when one
disregards the heated segue into the lcc-win32 business.

I thank everyone for their time and thoughtful comments. This is
obviously a more complex issue than I at first naively imagined, and I
have been directed to several helpful resources and have learned a
great deal. The question was obviously worth posting--if not, there
would not have been such involved helpful discussion.

Many thanks again,

Les
 

lcw1964

jacob said:
The lcc-win32 C compiler offers 100-digit floats, 128-bit integers,
bignums, whatever.

After spending a few hours trying to painstakingly port some simple
code to compile under lcc-win32, converting long double to qfloat and
changing a few math functions to the idiosyncratic qfloat versions, I
get plenty of digits in my output, but everything is wrong after about
the 16th digit. Similar code using the Pascal extended type compiles
under the ancient Delphi 2.0 and offers 18 digits, within 1ULP, without
resorting to any floating type extensions that are idiosyncratic to the
compiler.

What is the point of offering a 100-digit floating-point type if 85
of those digits are meaningless?

Someone told me in this thread you get what you pay for. I would add to
that bit of wisdom that if something seems too good to be true, it
probably is.

Les
 

Tom St Denis

lcw1964 said:
What is the point of offering a 100-digit floating-point type if 85
of those digits are meaningless?

Someone told me in this thread you get what you pay for. I would add to
that bit of wisdom that if something seems too good to be true, it
probably is.

Well, don't discount the possibility that your code is incorrect and just
happened to work with an older, non-conforming compiler.

In general though if you need huge precision [e.g. > 80 bits] you
should just look into using a bignum library and wrap your own floating
point logic around it. For instance, my LibTomFloat is a very quick
[and hardly tested] attempt at this around my free LibTomMath. Both
libs are public domain and written in portable C. Obviously if you
have some commercial need for this you may want to invest some time in
thoroughly testing/improving the implementation.

If you know the range of your values you could more easily just use a
fixed-point representation. E.g. if you know your values are in the
range -7 ... 7, you only need 4 bits for the signed integer part, so
with a 128-bit bignum you'd have 124 bits of fraction. Etc...

Tom
 

lcw1964

Tom said:
Well, don't discount the possibility that your code is incorrect and just
happened to work with an older, non-conforming compiler.

I am duly humbled and I may have spoken prematurely. I have tested out
some of Mr. Navia's built-in math functions, compared the results to
Maple, and they do seem to render very impressive results. (As a matter
of fact, I am so impressed I would love to see the source code to try
to learn where I have gone astray.)

I will go back to the drawing board and keep an open mind to see if
any of the constants or functions I am using are not producing interim
results at full qfloat precision. Garbage in, garbage out, eh?

I must admit that it is hard to get used to nonstandard libraries and
functions and I can appreciate the criticisms here around portability.

many thanks,

Les
 

lcw1964

lcw1964 said:
I am duly humbled and I may have spoken prematurely. I have tested out
some of Mr. Navia's built-in math functions, compared the results to
Maple, and they do seem to render very impressive results. (As a matter
of fact, I am so impressed I would love to see the source code to try
to learn where I have gone astray.)

I may need to rescind my contrition here.

I am interested in something called the error function. This following
bit of code uses the built-in versions of lcc-win32 to compute this:

#include <stdio.h>
#include <math.h>
#include <qfloat.h>
#include <stdlib.h>

char Pause(void);

int main(void)
{
    char txt[80];
    qfloat calc, x;

    printf("Enter the x argument: ");
    gets(txt);
    x = atof(txt);

    calc = erfcq(x);
    printf("erfc(x): %.80qg\n", calc);
    calc = erfq(x);
    printf("erf(x): %.80qg\n", calc);
    puts("");

    Pause();

    return 0;
}

char Pause(void)
{
    char c;
    printf("\nPress Enter to continue...");
    while ((c = getchar()) != '\n') { }
    return c;
}

To be modest in my expectations I only output 80 digits, as opposed to
the full 100 digit precision claimed by qfloat.

For nice "round" arguments (1.0, 2.0, 3.0), the resulting 80 digits
agree totally with the output of Maple (which I trust). It is to weep,
and I am duly amazed.

However, if I try something like 1.73 or 2.52, the result offers at
best long double accuracy, with things breaking down after the 17th
digit, or 18th if I am lucky.

I genuinely hope this has something to do with my typing of the input
via atof(). For some reason, Mr. Navia's erfcq and erfq seem to see
input like 1.73 or 2.52 as double precision at best and produce only
the double precision version of the desired result, whereas nice round
input seems to get typed appropriately as qfloat and the high precision
is reflected in the output.

For my limited personal purposes the much maligned lcc-win32 could be
very satisfactory to me, so I am interested in whether this observation
is a product of my own limited programming ability or whether it is a
genuine problem related to the lcc-w32's non-standard extended function
library. FWIW, my own code to calculate erfc() (an adaptation of some
stuff in NR in C) suffers the same problem--full high precision for
"round" input, double precision at best for input with a little
business going on after the decimal point.

Grateful for feedback--polite if possible ;)

Les
 

Richard Heathfield

lcw1964 said:

I may need to rescind my contrition here.

Maybe we should establish the facts before we start either attacking or
defending lcc-win32. :)
I am interested in something called the error function. This following
bit of code uses the built-in versions of lcc-win32 to compute this:

#include <stdio.h>
#include <math.h>
#include <qfloat.h>
#include <stdlib.h>

char Pause(void);

int main(void)
{
    char txt[80];
    qfloat calc, x;

    printf("Enter the x argument: ");
    gets(txt);

This is a buffer overflow waiting to happen. Use fgets(txt, sizeof txt,
stdin) instead. We can't tell, of course, whether a buffer overflow has
caused your problem, although it is probably unlikely in this case, since
you have a nice big buffer, nice short inputs, and a (presumably) careful
and non-malicious user.

To be modest in my expectations I only output 80 digits, as opposed to
the full 100 digit precision claimed by qfloat.

For nice "round" arguments (1.0, 2.0, 3.0), the resulting 80 digits
agree totally with the output of Maple (which I trust). It is to weep,
and I am duly amazed.

However, if I try something like 1.73 or 2.52, the result offers at
best long double accuracy, with things breaking down after the 17th
digit, or 18th if I am lucky.

Perhaps if you could express your requirements in more detail, we could
establish whether there is a problem with qfloat.

For my limited personal purposes the much maligned lcc-win32

Nobody has maligned the compiler. What concerns quite a few of us in
comp.lang.c is the way in which Mr Navia abuses the newsgroup for
commercial ends, pointing out this or that feature of his product without
bothering to mention that said features are non-portable. There's nothing
wrong with non-portable features, but in a newsgroup devoted to
portability, if he must mention them at all he ought to mention that their
use will render the user's code non-portable. If he wants to trumpet about
his features, he can do so in comp.compilers.lcc, surely? I mean, this guy
has his very own newsgroup, for heaven's sake!
could be
very satisfactory to me, so I am interested in whether this observation
is a product of my own limited programming ability or whether it is a
genuine problem related to the lcc-w32's non-standard extended function
library.

Have you considered comparing against gcc?
 

Gordon Burditt

To be modest in my expectations I only output 80 digits, as opposed to
the full 100 digit precision claimed by qfloat.

For nice "round" arguments (1.0, 2.0, 3.0), the resulting 80 digits
agree totally with the output of Maple (which I trust). It is to weep,
and I am duly amazed.

However, if I try something like 1.73 or 2.52, the result offers at
best long double accuracy, with things breaking down after the 17th
digit, or 18th if I am lucky.

There is no exact representation of most decimal numbers in binary
floating point with only a finite number of bits, and atof() is
going to give you only the precision of a double. You need an atoqf(),
if there is such a thing, or perhaps fgets() and sscanf() with
%qf. It's non-standard, but so will be everything involved with a
non-standard high-precision floating-point type. If the input is
only good to 15 digits (IEEE double, a not uncommon implementation),
the output is not likely to be much better. If you expect 80-digit
precision, you must not chop the value to 15 digits at any point in
the calculation.

1.73 as long double:
Before: 1.72999999999999999990892701751121762754337396472692489624023437500000000000000000
Value: 1.73000000000000000001734723475976807094411924481391906738281250000000000000000000
After: 1.73000000000000000012576745200831851434486452490091323852539062500000000000000000

1.73 as double:
Before: 1.729999999999999760191826680966187268495559692382812500000000
Value: 1.729999999999999982236431605997495353221893310546875000000000
After: 1.730000000000000204281036531028803437948226928710937500000000

1.73 as float:
Before: 1.729999899864196777343750000000000000000000000000000000000000
Value: 1.730000019073486328125000000000000000000000000000000000000000
After: 1.730000138282775878906250000000000000000000000000000000000000
I genuinely hope this has something to do with my typing of the input
via atof().

Well, I can't prove it, but it's extremely likely. Most decimal
fractions are infinitely repeating binary fractions. Chop them and
you'll lose precision in the output.
For some reason, Mr. Navia's erfcq and erfq seem to see
input like 1.73 or 2.52 as double precision at best and produce only
the double precision version of the desired result, whereas nice round
input seems to get typed appropriately as qfloat and the high precision
is reflected in the output.
For my limited personal purposes the much maligned lcc-win32 could be
very satisfactory to me, so I am interested in whether this observation
is a product of my own limited programming ability or whether it is a
genuine problem related to the lcc-w32's non-standard extended function
library. FWIW, my own code to calculate erfc() (an adaptation of some
stuff in NR in C) suffers the same problem--full high precision for
"round" input, double precision at best for input with a little
business going on after the decimal point.

Gordon L. Burditt
 

Richard Heathfield

[Attributions restored, after Mr Burditt was careless enough to forget them
or remove them]

Gordon Burditt said:
lcw1964 wrote:

There is no exact representation of most decimal numbers in binary
floating point with only a finite number of bits, and atof() is
going to give you only the precision of a double.

Good spot. I should have seen that myself. The atof function does indeed
return a double, so any precision the OP may have typed in beyond double's
capacity is lost at this point.

<snip>
 

jacob navia

lcw1964 wrote:
gets(txt);
x = atof(txt);

Here you should use atoq(). For some stupid reason
I was missing this function. It is fixed now, and will
be in the next release.

atof() returns a double and will spoil the precision.

Another "gotcha" is that all numbers meant to have qfloat precision
should be suffixed with q, like

qfloat s = 1.23q;

If not, they will be read with only double precision.

Please email me if you see any problems, since it is better not
to discuss this compiler-specific stuff in this group. You can
post to comp.compilers.lcc too, if you want.

jacob
 

jacob navia

lcw1964 wrote:
#include <stdio.h>
#include <math.h>
#include <qfloat.h>
#include <stdlib.h>

char Pause(void);

int main(void)
{
    char txt[80];
    qfloat calc, x;

    printf("Enter the x argument: ");
    gets(txt);
    x = atof(txt);

    /* changing here atof to atoq!!!! */
    x = atoq(txt);
    calc = erfcq(x);
    printf("erfc(x): %.80qg\n", calc);
    calc = erfq(x);
    printf("erf(x): %.80qg\n", calc);
    puts("");

    Pause();

    return 0;
}

char Pause(void)
{
    char c;
    printf("\nPress Enter to continue...");
    while ((c = getchar()) != '\n') { }
    return c;
}
This produces:
erfc(x):
0.0144215001718195025688119286002446062133963068162465467077964669091485918397397063
erf(x):
0.98557849982818049743118807139975539378660369318375345329220353309085140816026029

Stephen Wolfram's Mathematica yields:
0.01442150017181950256881192860024460621339630681624654670779646690914859183973970631507594187479276740
 

lcw1964

Thanks, M. Navia:

I need to apologize to the group for naively breaching etiquette by
straying outside the expected parameters of discussion. I should take
up further discussion of the issue with you or in the .lcc group.

Les
 

Herbert Rosenau

lcw1964 wrote:

Yes. There is a political rat's nest here.

Some people in this group think that lcc-win32 is a bad compiler
since it has good features in it.

Stop spamming for your incompatible-with-the-whole-world thing, which,
as you have declared yourself, is neither a C++ nor a C compiler but
something to confuse people.

Stop spamming, because there is nothing that makes your proprietary
product more usable than any other compiler that can claim in some
way to be a C compiler.

Stop spamming immediately, as this group is not designed to promote
software. Stop spamming now, because Windows is not the only system
that needs a C compiler, and your product is usable only on some
flavours of one single proprietary OS -- and not even on all of those.

lcc-win32 is a really bad compiler because it needs to be spammed for
by its own developer.

Spammer, piss off!

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2 Deutsch ist da!
 

Herbert Rosenau

Richard Heathfield wrote:
[lots of repetitions of the phrase "unutterably stupid man" addressed to
Jacob Navia]

Please try making your points without calling people names. It's childish
and distracting. I don't frequent comp.lang.c for the arguments, but when
there *are* arguments I'd at least expect the participants to remain civil.

Oh, a person who is misusing this group to spam for his proprietary
product, and who has already proven multiple times that his
understanding of the topic of this group is, to put it courteously,
highly incomplete, needs this from time to time.

It seems you are relatively new to this group, so you do not know
Mr. Navia's history; ask Google what it has to say about the person
you say nobody should insult.
Calling Mr. Navia "unutterably stupid" as many times as possible serves
no one, as nobody expects such epithets to convince anyone of anything,
except perhaps of the immaturity of the speaker. That he was being
disingenuous by deliberately misrepresenting your (and other people's)
views doesn't change that.

No, it is simply fact, as proven by the man himself.
In short, if you want to insult someone on a personal level, please use
e-mail. I'm pretty sure your post would have conveyed its non-personal
points equally well without the insults.

No, there is really no insult, but there are enough comments out there
about the person in question. Mr. Google will help you get informed.

 

Herbert Rosenau

True, I speak about my compiler system in this group,

s/speak/spam/

and I think that I
have the right to do so. Specifically, when a user has precision
problems, I think I can point out that after years of effort I have a
compiler system that offers 100 digits in the standard version.

This group is about standard C - it is NOT about incompatible
extensions. So jacob navia is spamming for a product that is NOT
related to this group.


 
