64-bit integers where the implementation supports max 32-bit ints

James Harris

On a 16-bit C compiler which supports up to 32-bit integers but no larger
how feasible is it to support 64-bit integer values and is there a good way
to do so? Best I can think of is to pass structs around principally because,
AIUI, they can be returned from functions. But is there a better way?

Such a struct would be along the lines of

struct custom64 {
    uint32_t low;
    uint32_t high;
};
typedef struct custom64 custom64_t;

(For the 64-bit compiler this would instead be "typedef uint64_t
custom64_t;".)

I know I could pass around pointers-to-64-bit-integer but that would
probably result in some cases of needing malloc to store the values and lead
to more complexity. So structs seem better if they would work. I can live
with some inconvenience like not being able to specify literals and not
having printf formats for them.

This is for writing some C where it would be a big help if some of the
source code modules could be compiled to 16-bit object code and to 64-bit
object code and yet still work on 64-bit integers where necessary. So if it
were possible to write the following and just have the above different
definitions of the custom64_t type that would be especially useful.

custom64_t val1, val2;
val1 = func(val2);

I think I'll only need such explicitly long integers in rare cases and won't
need to do much arithmetic on them so it probably wouldn't be too burdensome
to manipulate the few references by callable routines if necessary. In other
words I can live without

val1 - val2

and replace it with

custom64_subtract(val1, val2);

It's early days but I think the main issues will be declaring them and
passing them to and from other functions. Arithmetic would be nice-to-have
but I cannot see any way to do that.

BTW, I should say I'm sure there are extensive libraries for wide number
manipulation but they are not what I want. Something short and simple that
will fit in a few lines would be much preferable to something pre-written
and extensive.

Am I on the right lines? Is there a 'standard' way to do stuff like this in
C?

James
 
Shao Miller

[...]

Am I on the right lines? Is there a 'standard' way to do stuff like this in
C?

Windows has 'LARGE_INTEGER', along these lines:


http://msdn.microsoft.com/en-us/library/windows/desktop/aa383713(v=vs.85).aspx
 
glen herrmannsfeldt

James Harris said:
On a 16-bit C compiler which supports up to 32-bit integers but no larger
how feasible is it to support 64-bit integer values and is there a good way
to do so? Best I can think of is to pass structs around principally because,
AIUI, they can be returned from functions. But is there a better way?

Struct is a fine way. Many 16 bit compilers do 32 bit operations
by subroutine call, as it is too much work to do inline.
(Some do + and - inline, * and / by call.)

All are easy to write except divide. The divide algorithm is
in Knuth's "The Art of Computer Programming", I believe volume 2,
but you should check.

-- glen
 
Keith Thompson

James Harris said:
On a 16-bit C compiler which supports up to 32-bit integers but no larger
how feasible is it to support 64-bit integer values and is there a good way
to do so? Best I can think of is to pass structs around principally because,
AIUI, they can be returned from functions. But is there a better way?

Since 1999, the ISO C standard has required the type "long long" to be
at least 64 bits wide. Strictly speaking, if you're using a compiler
that doesn't support "long long", then you're not using a conforming C
compiler. To know how to work around that, you'd have to know what
other language features it doesn't support (such as, say, passing
structs by value).

If you're working with a compiler that conforms reasonably well to C90,
which didn't require long long or any 64-bit integer type, then you do
have a lot more [...]

Such a struct would be along the lines of

struct custom64 {
    uint32_t low;
    uint32_t high;
};
typedef struct custom64 custom64_t;

(For the 64-bit compiler this would instead be "typedef uint64_t
custom64_t;".)

If you use "typedef uint64_t custom64_t;" for compilers that support
64-bit integers, it will be easy to write code that assumes custom64_t
is an integer type, which will break on your non-64-bit platform. You
might be better off defining it as a struct with a single uint64_t
member, and writing alternative functions/macros to perform operations
on them.

The name "uint32_t" was added by the same standard (C99) that mandated
the existence of "long long". Do you have a C90 compiler that supports
uint32_t as an extension? If not, you might need to define uint32_t
yourself.

If you care about the representation of your custom64_t mapping onto a
64-bit integer, endianness is going to be an issue. You might want to
use an #ifdef to control the order of the "low" and "high" members. If
the representation is never stored in a file or transmitted, that
probably doesn't matter.

[...]
BTW, I should say I'm sure there are extensive libraries for wide number
manipulation but they are not what I want. Something short and simple that
will fit in a few lines would be much preferable to something pre-written
and extensive.

Am I on the right lines? Is there a 'standard' way to do stuff like this in
C?

Yes, I think you're on the right track. If you have 32-bit unsigned
integers but not 64-bit unsigned integers, a struct like you've defined
is a good way to emulate them. Implementing the operations you need is,
of course, left as an exercise. And probably you don't need to
implement *all* the operations; if you never divide 64-bit integers, you
don't have to implement 64-bit division.
 
James Harris

Keith Thompson said:
Since 1999, the ISO C standard has required the type "long long" to be
at least 64 bits wide. Strictly speaking, if you're using a compiler
that doesn't support "long long", then you're not using a conforming C
compiler. To know how to work around that, you'd have to know what
other language features it doesn't support (such as, say, passing
structs by value).

True. The 16-bit compiler could be a problem in this - it already has been
in other ways. :-(

I have checked that it does compile structs as return values ... but not run
the code yet....
If you're working with a compiler that conforms reasonably well to C90,
which didn't require long long or any 64-bit integer type, then you do
have a lot more [...]


If you use "typedef uint64_t custom64_t;" for compilers that support
64-bit integers, it will be easy to write code that assumes custom64_t
is an integer type, which will break on your non-64-bit platform. You
might be better off defining it as a struct with a single uint64_t
member, and writing alternative functions/macros to perform operations
on them.

Good point. I currently compile 16-bit and 32-bit at the same time but don't
yet have a way to compile 64-bit.... Actually, having said that, the 32-bit
compiler represents 64-bit numbers as long longs (and the 16-bit compiler
represents them as structs) so even though I am not compiling to 64-bit yet
the issue you raised should be covered.

FWIW the section of the header which defines the 64-bit types is currently
as follows. It's not fully tested yet.

/*
* 64-bit integers
*/

#if INT_MAX == 0x7fffffffffffffff
typedef signed int s64_t;
#elif LONG_MAX == 0x7fffffffffffffff
typedef signed long s64_t;
#elif LLONG_MAX == 0x7fffffffffffffff
typedef signed long long s64_t;
#else
typedef struct {u32_t low; s32_t high;} s64_t; /* Limited use */
#endif

#if UINT_MAX == 0xffffffffffffffff
typedef unsigned int u64_t;
#elif ULONG_MAX == 0xffffffffffffffff
typedef unsigned long u64_t;
#elif ULLONG_MAX == 0xffffffffffffffff
typedef unsigned long long u64_t;
#else
typedef struct {u32_t low; u32_t high;} u64_t; /* Limited use */
#endif

The name "uint32_t" was added by the same standard (C99) that mandated
the existence of "long long". Do you have a C90 compiler that supports
uint32_t as an extension? If not, you might need to define uint32_t
yourself.

The 16-bit compiler doesn't have those definitions but the 32-bit compiler
does. So I have chosen different names and come up with a header which
includes the following. There are similar sections for other integer widths.
The 32-bit stuff is simpler than that for 64-bit, above.

/*
* 32-bit integers
*/

#if INT_MAX == 0x7fffffff
typedef signed int s32_t;
#elif LONG_MAX == 0x7fffffff
typedef signed long s32_t;
#else
# error "No type candidate for s32_t"
#endif

#if UINT_MAX == 0xffffffff
typedef unsigned int u32_t;
#elif ULONG_MAX == 0xffffffff
typedef unsigned long u32_t;
#else
# error "No type candidate for u32_t"
#endif
If you care about the representation of your custom64_t mapping onto a
64-bit integer, endianness is going to be an issue. You might want to
use an #ifdef to control the order of the "low" and "high" members. If
the representation is never stored in a file or transmitted, that
probably doesn't matter.

FWIW I hate #ifs which select different pieces of code. They seem to me to
be all sorts of wrong. What I am trying to do is write the code as small
modules and pick the correct module at link time. That's a topic in itself.

James
 
Keith Thompson

James Harris said:
[...]
FWIW the section of the header which defines the 64-bit types is currently
as follows. It's not fully tested yet.

/*
* 64-bit integers
*/

#if INT_MAX == 0x7fffffffffffffff
typedef signed int s64_t;
#elif LONG_MAX == 0x7fffffffffffffff
typedef signed long s64_t;
#elif LLONG_MAX == 0x7fffffffffffffff
typedef signed long long s64_t;
#else
typedef struct {u32_t low; s32_t high;} s64_t; /* Limited use */
#endif

This might cause problems if the 16-bit compiler's preprocessor can't
handle a 64-bit constant like 0x7fffffffffffffff. I'm not sure there's
a really good solution. But it's likely you'll only get some warnings,
and you can ignore them as long as it selects the right definition.

[...]
The 16-bit compiler doesn't have those definitions but the 32-bit compiler
does. So I have chosen different names and come up with a header which
includes the following. There are similar sections for other integer widths.
The 32-bit stuff is simpler than that for 64-bit, above.

Hmm. Personally, I'd be inclined to use the *same* names defined by
C99.

Some years ago, Doug Gwyn wrote a collection of files under the name
"q8" which provide an implementation of C99-specific headers (including
<stdint.h>) for use with pre-C99 compilers. They're public domain, so
feel free to use them any way you like.

For example:

#if __STDC_VERSION__ >= 199901L
#include <stdint.h>
#else
#include <mystdint.h>
#endif

You can then freely use uint32_t and friends (except, of course, that
your uint64_t won't directly support arithmetic operations).

[...]
 
James Harris

Keith Thompson said:
James Harris said:
[...]
FWIW the section of the header which defines the 64-bit types is
currently
as follows. It's not fully tested yet.

/*
* 64-bit integers
*/

#if INT_MAX == 0x7fffffffffffffff
typedef signed int s64_t;
#elif LONG_MAX == 0x7fffffffffffffff
typedef signed long s64_t;
#elif LLONG_MAX == 0x7fffffffffffffff
typedef signed long long s64_t;
#else
typedef struct {u32_t low; s32_t high;} s64_t; /* Limited use */
#endif

This might cause problems if the 16-bit compiler's preprocessor can't
handle a 64-bit constant like 0x7fffffffffffffff. I'm not sure there's
a really good solution. But it's likely you'll only get some warnings,
and you can ignore them as long as it selects the right definition.

The 16-bit C compilers I've tried are quirky in a number of ways. For the
case in point they don't complain about the big constants which, I agree, is
a bit odd. Hopefully the equality comparisons would help any such compiler
pick the else clause but I know not to rely on that working in all cases.

It's a pity there's no such thing as a preprocessor assert to check the
results - and, yes, I have seen various clever but convoluted attempts to
make one.

[...]
The 16-bit compiler doesn't have those definitions but the 32-bit
compiler
does. So I have chosen different names and come up with a header which
includes the following. There are similar sections for other integer
widths.
The 32-bit stuff is simpler than that for 64-bit, above.

Hmm. Personally, I'd be inclined to use the *same* names defined by
C99.

I could see it that way. However, at present all names are clearly distinct
from those standard names so it's more obvious that "all errors are my own"!
In contrast to your #if selection below I can have just a single include,
i.e.

#include "my header"

I'll keep your recommendation in mind, though. I'm just trying this stuff
out as yet and may well come to a realisation that something else is a
better idea than what seems good at this early stage.
Some years ago, Doug Gwyn wrote a collection of files under the name
"q8" which provide an implementation of C99-specific headers (including
<stdint.h>) for use with pre-C99 compilers. They're public domain, so
feel free to use them any way you like.

I looked at that before but didn't like the way the choices were made. The
Q8 header has lots of sections like

#if compiler x and target y
....
#elif compiler x and target z
....
etc

The Q8 selection process seems pretty fragile. As you suggest, it could be
copied and adapted but at present I prefer the selections to be based on
values in limits.h. That was recommended to me by folks in this newsgroup
and works really well. It depends entirely on real values rather than values
which should be the case for a given compiler and environment. That's got to
be better, right!
For example:

#if __STDC_VERSION__ >= 199901L
#include <stdint.h>
#else
#include <mystdint.h>
#endif

You can then freely use uint32_t and friends (except, of course, that
your uint64_t won't directly support arithmetic operations).

AIUI both <> and "" refer to implementation-defined places but wouldn't the
above normally have mystdint.h in quotes rather than angle brackets?

James
 
James Harris

Keith Thompson said:
James Harris said:
[...]
FWIW the section of the header which defines the 64-bit types is
currently
as follows. It's not fully tested yet.

/*
* 64-bit integers
*/

#if INT_MAX == 0x7fffffffffffffff
typedef signed int s64_t;
#elif LONG_MAX == 0x7fffffffffffffff
typedef signed long s64_t;
#elif LLONG_MAX == 0x7fffffffffffffff
typedef signed long long s64_t;
#else
typedef struct {u32_t low; s32_t high;} s64_t; /* Limited use */
#endif

This might cause problems if the 16-bit compiler's preprocessor can't
handle a 64-bit constant like 0x7fffffffffffffff. I'm not sure there's
a really good solution. But it's likely you'll only get some warnings,
and you can ignore them as long as it selects the right definition.

Interestingly it *has* caused a problem - not on the 0x7f... tests but on
those for 0xff.... From tests it seems likely that the 16-bit preprocessor
keeps only the low 32 bits of the constant (which is understandable given
how it will convert such a number to binary) so a string of 16 Fs matches a
string of 8 Fs.

Fortunately the tests against 0x7f... work properly.

So I wonder if I could assign the unsigned values along with the signed ones
as in the following.

/*
* 64-bit integers
*/

#if INT_MAX == 0x7fffffffffffffff
typedef signed int s64_t;
typedef unsigned int u64_t;
#elif LONG_MAX == 0x7fffffffffffffff
typedef signed long s64_t;
typedef unsigned long u64_t;
#elif LLONG_MAX == 0x7fffffffffffffff
typedef signed long long s64_t;
typedef unsigned long long u64_t;
#else
typedef struct {u32_t low; s32_t high;} s64_t; /* Limited use */
typedef struct {u32_t low; u32_t high;} u64_t; /* Limited use */
#warning "Using structures for s64_t and u64_t"
#endif

In other words, is it safe to assume that <something>_MAX and
U<something>_MAX will be the same size but with the top bit set on the
latter? (If so I should probably select all signed and unsigned values in
the same way, i.e. based on the signed maxima.)

What fun. ;-)

James
 
James Kuyper

On 08/06/2013 06:15 AM, James Harris wrote:
....
It's a pity there's no such thing as a preprocessor assert to check the
results - and, yes, I have seen various clever but convoluted attempts to
make one.

What does "preprocessor assert" mean to you, such that #if combined with
#error fails to qualify?

....
AIUI both <> and "" refer to implementation-defined places but wouldn't the
above normally have mystdint.h in quotes rather than angle brackets?

Yes. While the locations that are searched are implementation-defined
for both forms, the key difference is that when you use "", it first
searches an implementation-defined set of places, and if that search
fails, falls back to searching the same set of places used by <>. In
other words, "" causes a search that is at least as wide as that caused
by <>, and on any reasonable implementation it will be a wider one,
generally including user-specified locations. Standard headers are
guaranteed to be found by either form, but a non-standard header might
be found only by "", so it's generally a good idea to reserve <> for
standard headers, and "" for everything else.
The POSIX headers are guaranteed, by the POSIX standard, to be found by
#include <>, so I count them as standard headers for this purpose. I
presume other operating systems might make similar guarantees.
 
Malcolm McLean

FWIW I hate #ifs which select different pieces of code. They seem to me to
be all sorts of wrong. What I am trying to do is write the code as small
modules and pick the correct module at link time. That's a topic in itself.

Yes, it means you have to test the code on a 16 bit compiler as well as a
32 bit one to know that it's correct, because effectively you've got two
programs sharing the same source file. Also the syntax is rather hard to
read.

stdint.h causes endless problems. ffmpeg uses it because it needs 64 bit
types, and it breaks on Visual Studio, I suspect deliberately. But it is
the right answer, everyone defining their own fixed-size type was worse.
A lot of fixed-size types are used unnecessarily, but they are sometimes
needed.

C should have a mul() function that returns the high portion of a value.
Most chips have a machine instruction which does this. But it doesn't, and
it's not easy to write a portable one.
 
James Harris

James Kuyper said:
On 08/06/2013 06:15 AM, James Harris wrote:
...

What does "preprocessor assert" mean to you, such that #if combined with
#error fails to qualify?

Ah, sorry - was thinking of verifying/asserting the *sizes* of the defined
types.

James
 
James Kuyper

Ah, sorry - was thinking of verifying/asserting the *sizes* of the defined
types.

OK - obviously that can't be done by a preprocessor assert, because
types don't exist yet, so sizeof(type) is meaningless, and is converted
into a syntax error during evaluation of #if conditions.

However, why are you interested in the sizes? I got the impression from
your previous messages that you were mainly interested in the ranges,
*_MIN to *_MAX, which are available in <limits.h> and <stdint.h>, and
are testable in #if.
 
Eric Sosman

James Harris said:
[...]
FWIW the section of the header which defines the 64-bit types is currently
as follows. It's not fully tested yet.

/*
* 64-bit integers
*/

#if INT_MAX == 0x7fffffffffffffff
typedef signed int s64_t;
#elif LONG_MAX == 0x7fffffffffffffff
typedef signed long s64_t;
#elif LLONG_MAX == 0x7fffffffffffffff
typedef signed long long s64_t;
#else
typedef struct {u32_t low; s32_t high;} s64_t; /* Limited use */
#endif

This might cause problems if the 16-bit compiler's preprocessor can't
handle a 64-bit constant like 0x7fffffffffffffff. I'm not sure there's
a really good solution. But it's likely you'll only get some warnings,
and you can ignore them as long as it selects the right definition.
[...]

One approach is to write tests like

#if ((INT_MAX >> 16) >> 16) == 0x7fffffff

Details: We know that preprocessor arithmetic is equivalent to that
of the execution environment's widest integer types, which are no
narrower than `[unsigned] long', which are at least 32 bits wide.
Therefore, the 16-bit shifts above are well-defined (although a
31-bit shift of a signed value might not be), and two of them
eliminate 32 low-order bits. Also, we know that INT_MAX is one
less than a power of two, so if any 1-bits remain after shifting
we can conclude that all the eliminated bits were also 1's.

Similar arrangements like

#if ((INT_MAX >> 30) >> 30) == 0x7

would also work, in pretty much the same way.
 
James Harris

James Kuyper said:
OK - obviously that can't be done by a preprocessor assert, because
types don't exist yet, so sizeof(type) is meaningless, and is converted
into a syntax error during evaluation of #if conditions.

However, why are you interested in the sizes? I got the impression from
your previous messages that you were mainly interested in the ranges,
*_MIN to *_MAX, which are available in <limits.h> and <stdint.h>, and
are testable in #if.

Not quite. I am *using* the ranges in limits.h to define the sizes of
various signed and unsigned integers, namely:

[su]8_t
[su]16_t
[su]32_t
[su]64_t

The idea is that these can be used in the code in their own right (not all
of my compilers have stdint.h but the project requires a lot of
specific-sized integers) and can also be used elsewhere in headers to set
the sizes of other integers needed in the project.

In terms of the 'preprocessor assert' issue I was thinking it would be a
good idea to verify that they were all the expected sizes. ATM I verify them
by running some code as in

$ ./a.out
sizeof s8_t 1
sizeof u8_t 1
sizeof s16_t 2
sizeof u16_t 2
sizeof s32_t 4
sizeof u32_t 4
sizeof s64_t 8
sizeof u64_t 8
sizeof sint_t 8
sizeof uint_t 8
sizeof bptr_t 8

The above set is for the 64-bit target which is why [su]int_t have size 8.
(I believe gcc defaults to 4 byte ints on 64-bit targets.) The last one,
bptr_t is for byte pointers.

James
 
Malcolm McLean

sizeof sint_t 8

sizeof uint_t 8

The above set is for the 64-bit target which is why [su]int_t have size 8.
(I believe gcc defaults to 4 byte ints on 64-bit targets.)

int should be the natural register size, which means 64 bits on a 64 bit
system. That also means that, except for the annoying but practically
unimportant case of a byte array that takes up over half the memory,
int can index any array.
 
Keith Thompson

James Harris said:
AIUI both <> and "" refer to implementation-defined places but wouldn't the
above normally have mystdint.h in quotes rather than angle brackets?

Yes, I should have written #include "mystdint.h".
 
Keith Thompson

Malcolm McLean said:
sizeof sint_t 8

sizeof uint_t 8

The above set is for the 64-bit target which is why [su]int_t have size 8.
(I believe gcc defaults to 4 byte ints on 64-bit targets.)
int should be the natural register size, which means 64 bits on a 64 bit
system. That also means that, except for the annoying but practically
unimportant case of a byte array that takes up over half the memory,
int can index any array.

Perhaps it "should". Nevertheless, int is typically 32 bits on 64-bit
systems, probably because making it 64 bits would mean you can't have
both a 16-bit type and a 32-bit type (unless the implementation resorts
to extended integer types).

Perhaps it would have made sense to add a "short short" type, so you
could have:
char 8 bits
short short 16 bits
short 32 bits
int 64 bits
but I don't see that happening any time soon.
 
James Kuyper

However, why are you interested in the sizes? I got the impression from
your previous messages that you were mainly interested in the ranges,
*_MIN to *_MAX, which are available in <limits.h> and <stdint.h>, and
are testable in #if.

Not quite. I am *using* the ranges in limits.h to define the sizes of
various signed and unsigned integers, namely:

[su]8_t
[su]16_t
[su]32_t
[su]64_t

The idea is that these can be used in the code in their own right (not all
of my compilers have stdint.h but the project requires a lot of
specific-sized integers) and can also be used elsewhere in headers to set
the sizes of other integers needed in the project.

In terms of the 'preprocessor assert' issue I was thinking it would be a
good idea to verify that they were all the expected sizes. ...

Well, pre-processing is translation phase 4; types don't exist, and
therefore don't have a size, until translation phase 7, so you're
looking for something that occurs later than pre-processing, but earlier
than assert() itself. C2011 provides _Static_assert() for that purpose,
but if you can't even count on support for stdint.h, I guess you can't
use _Static_assert().
 
Ben Bacarisse

James Harris said:
James Kuyper said:
OK - obviously that can't be done by a preprocessor assert, because
types don't exist yet, so sizeof(type) is meaningless, and is converted
into a syntax error during evaluation of #if conditions.

However, why are you interested in the sizes? I got the impression from
your previous messages that you were mainly interested in the ranges,
*_MIN to *_MAX, which are available in <limits.h> and <stdint.h>, and
are testable in #if.

Not quite. I am *using* the ranges in limits.h to define the sizes of
various signed and unsigned integers, namely:

[su]8_t
[su]16_t
[su]32_t
[su]64_t

The idea is that these can be used in the code in their own right (not all
of my compilers have stdint.h but the project requires a lot of
specific-sized integers) and can also be used elsewhere in headers to set
the sizes of other integers needed in the project.

In terms of the 'preprocessor assert' issue I was thinking it would be a
good idea to verify that they were all the expected sizes. ATM I verify them
by running some code as in

You can get a compile-time check by writing code that generates a
constraint violation based on the size. There are lots of options and I
am not sure there is any definitive "best practice" way to do it. Here
is just one:

#define SA_JOIN(a, b, c) a##b##c
#define SA_ID(p, l, s) SA_JOIN(p, l, s)
#define STATIC_ASSERT(b) \
    struct SA_ID(assert_on_, __LINE__, _struct) { \
        int SA_ID(assert_on_line_, __LINE__, _failed) : !!(b); \
    }

STATIC_ASSERT(sizeof(int) == 4);
STATIC_ASSERT(sizeof(int) == 5);

You can't guarantee good messages, but you will get something from the
compiler. Reasonable ones will fail to compile (rather than just warn)
when given a zero width bit-field. gcc says:

t.c:9:1: error: zero width for bit-field 'assert_on_line_9_failed'

<snip>
 
Stephen Sprunk

sizeof sint_t 8
sizeof uint_t 8

The above set is for the 64-bit target which is why [su]int_t have
size 8. (I believe gcc defaults to 4 byte ints on 64-bit targets.)

int should be the natural register size, which means 64 bits on a 64
bit system.

Of course, that assumes a useful definition of "natural". For instance,
there are three common models for "64-bit" machines: ILP64, I32LP64 and
IL32LLP64. And proponents of each will explain why their model is more
"natural" than the others.
That also means that, except for the annoying but practically
unimportant case of a byte array that takes up over half the memory,
int can index any array.

That assumes an int is as wide as a void*, which is not true on I32LP64
and IL32LLP64 systems, e.g. Linux and Windows on x86-64.

S
 
