If so, how do you cope with the other dependencies? Not just technical,
but documentation. Do you provide replacement man pages?
Not so much replacement pages as additional ones. For instance, it
might be in a different section or be called "gcc printf" so it
doesn't interfere with the system "printf" man page.
On a somewhat related note, anybody know of freely available source
for a test program that exercises all the differences between c99
and c89? How about linking to libraries built with the system libc?
Well, let's say that the underlying system is c89 and the compiler
wants to (really) support c99. That means any function whose behavior
differs needs a separate version; in some cases replacement header
files are needed as well, in others [like for log()] they are not.
Imagine then a "c99_features.h" which consists of something like:
#if __STDC_VERSION__ >= 199901L /* the standard c99 feature-test macro */
#define log(a) c99_log(a)
#define fprintf c99_fprintf /* object-like, so any argument list passes through */
etc.
/* no difference between func() in c89 and c99? Then
no entry in here is needed. */
#endif /* __STDC_VERSION__ */
and the compiler provides a library, libc99.a (or whatever). I don't
see what the problem would be, unless the underlying system simply
doesn't support some bit or test needed for c99 compliance (if such
bits or tests exist). When compiled with -std=c99, the generated code
would call the c99_ routines rather than the native ones. If the
program is linked to a library which is in turn linked to the system
libc, that library would use the native log() and fprintf(), and
presumably would not contain any code that exercises c99 features.
Even if a pointer to a c99 routine is passed into a native library
function for use as a callback, the defines ensure it is the c99_
version that gets called, and those are presumably all upwardly
compatible.
I don't know, it just seems strange that user-level code should be
forever tied to whatever C standard the system's libc provides.
Other languages do not have this constraint: newer standards can be
supported on older platforms. Why should C be hobbled just because
the underlying OS happened to be written in the same language?
Anyway, for reference purposes, this is what I added to my program to
get it to compile on Solaris 8. With this it passes all but 5 of 172
tests, and of those 5 failures, 4 have to do with minor c99
incompatibilities (log(-1) returns -inf instead of NaN, case
differences for inf/nan when output by fprintf, and one instance
where the result is -0 instead of 0). The 5th failure seems to be an
algorithm difference in tan(), where tan(pi/2) comes out slightly
different (not inf in either case, but 16331239353195370.000000 vs.
16331778728383844.000000).
No c99 additions to printf were used, so that wasn't a problem:
#ifdef SOL8
#include <sys/inttypes.h> /* no stdint.h on Solaris 8; needed for uintptr_t */
#include "sol8.h"
#else
#include <stdint.h> /* for uintptr_t */
#endif
where sol8.h is:
#if !defined(SOL8_GUARD)
#define SOL8_GUARD 1
#include <ieeefp.h> /* get Solaris fpclass() and FP_SNAN etc. */
#include <math.h>   /* for floor(), used by round() below */
/* these are the C99 flags */
# define FP_NAN 0
# define FP_INFINITE 1
# define FP_ZERO 2
# define FP_SUBNORMAL 3
# define FP_NORMAL 4
# define NAN ( ((double) 0.0) / ((double) 0.0) )
/* C99 wants plain nonzero/zero results from these; in particular
   isnan() must not return FP_NAN, which is 0 */
# define isnormal(x) (fpclass (x) == FP_NNORM || fpclass (x) == FP_PNORM)
# define isinf(x) (fpclass (x) == FP_NINF || fpclass (x) == FP_PINF)
# define isnan(x) (fpclass (x) == FP_SNAN || fpclass (x) == FP_QNAN)
# define iszero(x) (fpclass (x) == FP_NZERO || fpclass (x) == FP_PZERO)
/* beware: these evaluate their arguments more than once */
#define fmax(A,B) ( (A) >= (B) ? (A) : (B) )
#define fmin(A,B) ( (A) <= (B) ? (A) : (B) )
#define round(A) ( (A) >= 0 ? floor((A)+0.5) : -floor(-(A)+0.5) )
/* static, so the header can be included from more than one file */
static int fpclassify(double x){
    int status = FP_NORMAL; /* default if fpclass() returns something new */
    switch(fpclass(x)){
    case FP_SNAN:
    case FP_QNAN:    status = FP_NAN;       break;
    case FP_NINF:
    case FP_PINF:    status = FP_INFINITE;  break;
    case FP_NDENORM:
    case FP_PDENORM: status = FP_SUBNORMAL; break;
    case FP_NZERO:
    case FP_PZERO:   status = FP_ZERO;      break;
    case FP_NNORM:
    case FP_PNORM:   status = FP_NORMAL;    break;
    }
    return status;
}
#endif /* SOL8_GUARD */
Regards,
David Mathog