jacob navia said:
lcc-win32 uses this:
void *calloc(size_t n, size_t s)
{
    long long siz = (long long)n * (long long)s;
    void *result;

    if (siz >> 32)
        return 0;
    result = malloc((size_t)siz);
    if (result)
        memset(result, 0, (size_t)siz);
    return result;
}
Which of course is non-portable; there's no guarantee that long long
is bigger than size_t, or that size_t is 32 bits. (That's not a
criticism; code in a C library implementation is not required to be
portable.)
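For what it's worth, the overflow check can be done portably in size_t
arithmetic alone, with no assumptions about the widths of size_t or
long long. Here's a sketch (my_calloc is my own name for illustration,
not the lcc-win32 code):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Portable variant: n*s overflows size_t exactly when n is nonzero
   and s exceeds SIZE_MAX / n, so we can test before multiplying. */
void *my_calloc(size_t n, size_t s)
{
    void *result;

    if (n != 0 && s > SIZE_MAX / n)
        return NULL;            /* n * s would wrap around */
    result = malloc(n * s);
    if (result)
        memset(result, 0, n * s);
    return result;
}
```

The division costs a bit more than a shift or mask, but it works
whatever sizeof(size_t) happens to be.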
Incidentally, the casts in the arguments to malloc() and memset()
aren't necessary; the value of siz will be converted implicitly to
size_t as long as the declarations of malloc() and memset() are
visible. That's a style issue; the generated code should be the same
with or without the casts.
I wonder if "siz & ~0xFFFFFFFFLL" might be marginally more efficient
than "siz >> 32". (Note the LL suffix: with 32-bit int, plain
~0xFFFFFFFF is an unsigned int with value 0, so the test would never
fire.) It tests the upper 32 bits of siz without having to shift
them into the lower 32 bits. I have no particular expectation that it
*is* more efficient, and it's the kind of micro-optimization I
wouldn't recommend in ordinary code, but it might be appropriate in a
runtime library. (I expect it would be faster on some systems, slower
on others, and equivalent on yet others, but if you're only targeting
a single platform it's something to think about.)
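If anyone wants to convince themselves the two tests agree for the
nonnegative products that can arise here, a quick sketch (the helper
names are mine):

```c
/* Both predicates are nonzero exactly when any of the upper 32 bits
   of siz are set; siz is nonnegative in the calloc use case, so the
   right shift is well defined. */
static int high_bits_shift(long long siz)
{
    return (siz >> 32) != 0;
}

static int high_bits_mask(long long siz)
{
    /* The LL suffix matters: ~0xFFFFFFFF would be 0 with 32-bit int. */
    return (siz & ~0xFFFFFFFFLL) != 0;
}
```

Whether the mask form actually saves anything is up to the compiler
and target; on many machines both compile to a single test of the
high word.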
sizeof(long long)=8*8
sizeof(size_t)=4*8
Quibble: I think you mean:
CHAR_BIT=8
sizeof(long long)=8
sizeof(size_t)=4
As you know, sizeof yields the size in bytes, not in bits.