Bart C
I've always had a problem knowing exactly how wide my integer variables were
in C, and the little program below has increased my confusion.
Run on 3 compilers on the same CPU (32-bit Pentium): sometimes int and long
int are the same width and long long int is twice that; sometimes both
long int and long long int are twice the width of int.
This is apparently all quite normal according to my C99 draft and c-faq.com.
However, that doesn't alter the fact that it's all very 'woolly' and
ambiguous.
Integer widths that obeyed the rule short < int < long int < long long int
(instead of short <= int <= long int, or whatever) would be far more
intuitive and much more useful. As it is now, changing int x to long int x
is not guaranteed to change anything, so is pointless.
Given that I know my target hardware has an 8-bit byte size and natural word
size of 32-bits, what int prefixes do I use to span the range 16, 32 and
64-bits? And perhaps stay the same when compiled for 64-bit target?
Is there any danger of long long int ending up as 128-bits? Sometimes I
might need 64-bits but don't want the overheads of 128.
Or should I just give up and use int32_t and so on, and hope these are
implemented in the compiler?
(I haven't seen anything similar for floating point in my C99 draft, but it
seems better behaved, except for long double, which gives results of 96 or
128 bits below (neither of which matches the 80 bits of my CPU).)
Thanks,
Bart
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
int main(void)
{
    char c;
    short int si;
    short s;
    int i;
    long l;
    long int li;
    long long int lli;
    float f;
    double d;
    long double ld;
    /* sizeof yields size_t, so cast the result before printing with %d */
    printf("C   = %3d bits\n", (int)(sizeof(c)*CHAR_BIT));
    printf("SI  = %3d bits\n", (int)(sizeof(si)*CHAR_BIT));
    printf("S   = %3d bits\n", (int)(sizeof(s)*CHAR_BIT));
    printf("I   = %3d bits\n", (int)(sizeof(i)*CHAR_BIT));
    printf("L   = %3d bits\n", (int)(sizeof(l)*CHAR_BIT));
    printf("LI  = %3d bits\n", (int)(sizeof(li)*CHAR_BIT));
    printf("LLI = %3d bits\n", (int)(sizeof(lli)*CHAR_BIT));
    printf("\n");
    printf("F   = %3d bits\n", (int)(sizeof(f)*CHAR_BIT));
    printf("D   = %3d bits\n", (int)(sizeof(d)*CHAR_BIT));
    printf("LD  = %3d bits\n", (int)(sizeof(ld)*CHAR_BIT));
    return 0;
}