size_t - why?

rayw

I used to believe that size_t was something to do with integral types, and
the std.

Something along the lines of ..

a char is 8 bits,

an int >= a char

a long >= int

etc

Meaning that a compiler might only provide 8 bit longs, and still be
compliant.

So, I thought size_t was something 'extra' ... 'size_t is guaranteed to have
enough bits to be able to hold the size of some array/malloc'ed memory' etc.

However, it seems as though size_t is *usually* an unsigned long - so prop.1
can't be right (can someone correct that - or point me to the right bit of
the stds please).

So, now I'm confused, and, yes, I've googled, and I can't find a rationale
for size_t. I've searched =my great value for money= copy of
INCITS+ISO+IEC+9899-1999.pdf, but Adobe Reader sucks in terms of its
ability to accept search terms like 'type_t NEAR rationale' etc.
 
Richard Bos

rayw said:
I used to believe that size_t was something to do with integral types, and
the std.

Something along the lines of ..

a char is 8 bits,

an int >= a char

a long >= int

etc

Meaning that a compiler might only provide 8 bit longs, and still be
compliant.

No. A long must be larger than or equal to an int, but it must also be
at least 32 bits. This means that it's legal for all of char, int and
long to have the same size, and sizeof (long) == sizeof (int) == sizeof
(char) == 1, but then CHAR_BIT must be at least 32.
However, it seems as though size_t is *usually* an unsigned long - so prop.1
can't be right

That's not the right inference, though. _Usually_ an unsigned long is
larger than a char. It's true that your first property is incorrect, but
it doesn't follow from the definition of size_t on any given platform.
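
Purely as an illustration - none of the numbers this prints are promised by
the standard, only the relations discussed above - something like this
shows what one particular implementation actually chose:

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* Every figure below is an implementation choice; the standard only
       guarantees CHAR_BIT >= 8, the minimum ranges, and that each wider
       type can hold at least the values of the narrower ones. */
    printf("CHAR_BIT       = %d\n", CHAR_BIT);
    printf("sizeof(char)   = %d\n", (int)sizeof(char));   /* always 1 */
    printf("sizeof(int)    = %d\n", (int)sizeof(int));
    printf("sizeof(long)   = %d\n", (int)sizeof(long));
    printf("sizeof(size_t) = %d\n", (int)sizeof(size_t));
    return 0;
}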

Richard
 
Skarmander

rayw said:
I used to believe that size_t was something to do with integral types, and
the std.

Something along the lines of ..

a char is 8 bits,
No, though this is very common. A char is CHAR_BIT bits, where CHAR_BIT is
at least 8.
an int >= a char
Not quite. An int is at least 16 bits (the minimum value it can hold
must be -32767 or smaller, the maximum value 32767 or greater). A char
could be 16 bits too, of course.
a long >= int
Nope. A long is at least 32 bits. An int could be 32 bits as well, but a
long may not be 16 bits, and an int may.
I could rehash the exact rules, but you're better off rereading them
yourself (you state below that you have a copy of the standard). Look up
5.2.4.2.1, "Sizes of integer types <limits.h>".
Meaning that a compiler might only provide 8 bit longs, and still be
compliant.
No. A long must be at least 32 bits. A compiler may, however, provide
chars that are 32 bits wide and still be compliant (that is, with
sizeof(long) == 1).
So, I thought size_t was something 'extra' ... 'size_t is guaranteed to have
enough bits to be able to hold the size of some array/malloc'ed memory' etc.
To be precise, size_t is the type of the result of the sizeof operator.
Informally, size_t is the type we can use to count bytes.
However, it seems as though size_t is *usually* an unsigned long - so prop.1
can't be right (can someone correct that - or point me to the right bit of
the stds please).
size_t is usually an unsigned long because 32-bit platforms (with 32-bit
integers, 32-bit longs and 32-bit addresses) are very common. On such
platforms "unsigned long" is the natural choice for size_t.
So, now I'm confused, and, yes, I've googled, and I can't find a rationale
for size_t. I've searched =my great value for money= copy of
INCITS+ISO+IEC+9899-1999.pdf, but Adobe Reader sucks in terms of its
ability to accept search terms like 'type_t NEAR rationale' etc.
The reason size_t is not simply a long on all platforms is that an
arithmetic type of at least 32 bits is not necessarily a natural choice
for the type used to hold the size of an object.

In 7.17, paragraph 4, the standard recommends:
"The types used for size_t and ptrdiff_t should not have an integer
conversion rank greater than that of signed long int unless the
implementation supports objects large enough to make this necessary."

This explicitly acknowledges the possibility of size_t being greater
than a long (though it is recommended that this not be so unless
actually necessary, because older programs or badly written newer
programs might break if this does not hold). On the flip side, size_t
might be smaller on small platforms where single objects cannot exceed a
certain size, although memory as a whole may be larger.

size_t is left abstract so platforms are not artificially constrained.
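
If it helps to see that concretely, here is a small sketch (it assumes a
C99 compiler and library, since %zu and SIZE_MAX from <stdint.h> are C99
features; the array buf is just a stand-in object):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    double buf[100];

    /* The result of sizeof has type size_t; C99's %zu prints it portably. */
    size_t n = sizeof buf;
    printf("sizeof buf = %zu bytes\n", n);

    /* Where size_t sits relative to unsigned long on this implementation. */
    if (SIZE_MAX > ULONG_MAX)
        puts("size_t is wider than unsigned long here");
    else if (SIZE_MAX < ULONG_MAX)
        puts("size_t is narrower than unsigned long here");
    else
        puts("size_t and unsigned long have the same range here");

    return 0;
}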

S.
 
rayw

Skarmander said:
No, though this is very common. A char is CHAR_BIT bits, where CHAR_BIT is
at least 8.

Not quite. An int is at least 16 bits (the minimum value it can hold must
be -32767 or smaller, the maximum value 32767 or greater). A char could be
16 bits too, of course.

Nope. A long is at least 32 bits. An int could be 32 bits as well, but a
long may not be 16 bits, and an int may.

I could rehash the exact rules, but you're better off rereading them
yourself (you state below that you have a copy of the standard). Look up
5.2.4.2.1, "Sizes of integer types <limits.h>".

No. A long must be at least 32 bits. A compiler may, however, provide
chars that are 32 bits wide and still be compliant (that is, with
sizeof(long) == 1).

To be precise, size_t is the type of the result of the sizeof operator.
Informally, size_t is the type we can use to count bytes.

size_t is usually an unsigned long because 32-bit platforms (with 32-bit
integers, 32-bit longs and 32-bit addresses) are very common. On such
platforms "unsigned long" is the natural choice for size_t.

The reason size_t is not simply a long on all platforms is that an
arithmetic type of at least 32 bits is not necessarily a natural choice
for the type used to hold the size of an object.

In 7.17, paragraph 4, the standard recommends:
"The types used for size_t and ptrdiff_t should not have an integer
conversion rank greater than that of signed long int unless the
implementation supports objects large enough to make this necessary."

This explicitly acknowledges the possibility of size_t being greater than
a long (though it is recommended that this not be so unless actually
necessary, because older programs or badly written newer programs might
break if this does not hold). On the flip side, size_t might be smaller on
small platforms where single objects cannot exceed a certain size,
although memory as a whole may be larger.

size_t is left abstract so platforms are not artificially constrained.

Thanks to both, esp. 'S'. All clear now - and thanks for the stds
reference - although to find it you must *not* be using Adobe Reader -
or maybe you've the patience of a saint.
 
Skarmander

rayw wrote:
Thanks to both, esp. 'S'. All clear now - and thanks for the stds
reference - although to find it you must *not* be using Adobe Reader -
or maybe you've the patience of a saint.
I am actually using Acrobat Reader. The standard has an excellent index
(try this before anything else), and is well-organized in chapters.
(Also, having looked up various things, I have a feeling for where stuff
goes.)

You're right that the search function is pretty much useless in this
case, except when I recall part of the exact wording of something.

S.
 
Tim Prince

Skarmander said:
In 7.17, paragraph 4, the standard recommends:
"The types used for size_t and ptrdiff_t should not have an integer
conversion rank greater than that of signed long int unless the
implementation supports objects large enough to make this necessary."

This explicitly acknowledges the possibility of size_t being greater
than a long (though it is recommended that this not be so unless
actually necessary, because older programs or badly written newer
programs might break if this does not hold).

It is "actually necessary" for implementations like 64-bit Windows,
where long was chosen as a 32-bit data type, but 40 bits may be required
to hold the size of an object. I won't argue whether the 32-bit long
makes it broken, but I can't agree with those who claim that shortening
size_t would fix it.
 
Jordan Abel

Nope. A long is at least 32 bits. An int could be 32 bits as well, but a
long may not be 16 bits, and an int may.

thus the = in ">="
sizeof(long)*CHAR_BIT >= sizeof(int)*CHAR_BIT
/* clearly what he meant, and what you were debating anyway. */
32 >= 16
32 >= 32
64 >= 32
36 >= 18
36 >= 21
and so on.

Reading further, it does seem that he forgot the minimum size rules,
though.

for a concise representation of the rules:

8 <= char <= short <= int <= long <= long long
16 <= short
32 <= long
64 <= long long
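
A throwaway sketch to check those minima against a real implementation
(the figures it prints are storage widths, which may include padding bits
for the non-char types):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Storage widths in bits; the guaranteed minima are 8 / 16 / 16 / 32 / 64. */
    printf("char:      %d\n", (int)(CHAR_BIT * sizeof(char)));
    printf("short:     %d\n", (int)(CHAR_BIT * sizeof(short)));
    printf("int:       %d\n", (int)(CHAR_BIT * sizeof(int)));
    printf("long:      %d\n", (int)(CHAR_BIT * sizeof(long)));
    printf("long long: %d\n", (int)(CHAR_BIT * sizeof(long long)));
    return 0;
}
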
size_t is usually an unsigned long because 32-bit platforms (with
32-bit integers, 32-bit longs and 32-bit addresses) are very common.

A common name for such platforms is ILP32
On such platforms "unsigned long" is the natural choice for size_t.

There are also standards [not the C standard itself, but others which
build on it] which require sizeof(size_t) <= sizeof(long)
The reason size_t is not simply a long on all platforms is that an
arithmetic type of at least 32 bits is not necessarily a natural choice
for the type used to hold the size of an object.

For example, I believe that on PDP-11 UNIX [which was before the C
standard and size_t] it uses an unsigned int [for the result of sizeof
and the parameter to malloc, etc]
 
CoffeeGood

Just a comment, CHAR_BIT is not defined by gcc and setting it with
-DCHAR_BIT=16
has no effect on the size of a char.
 
Richard Heathfield

CoffeeGood said:
Just a comment, CHAR_BIT is not defined by gcc

#include <limits.h>

CoffeeGood said:
and setting it with
-DCHAR_BIT=16
has no effect on the size of a char.

Of course not. CHAR_BIT is descriptive. It's telling you how many bits are
in a char on that platform, not inviting you to make up your own figure.
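
If you want the figure for your implementation, just read it back out
(a trivial sketch):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT reports how many bits a char has on this implementation;
       it is read-only information, not a knob you can turn with -D. */
    printf("a char here is %d bits\n", CHAR_BIT);
    return 0;
}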
 
Skarmander

CoffeeGood said:
Just a comment, CHAR_BIT is not defined by gcc and setting it with
-DCHAR_BIT=16
has no effect on the size of a char.
CHAR_BIT is defined in <limits.h>.

Defining it yourself makes no sense. The size macros reflect the
platform's details; they do not configure it.

S.
 
Randy Howard

CoffeeGood wrote:
Just a comment, CHAR_BIT is not defined by gcc and setting it with
-DCHAR_BIT=16
has no effect on the size of a char.

Huh? Why on earth would you think it is a tunable parameter?

Try looking in limits.h on your implementation and seeing what
it says.
 
pete

CoffeeGood said:
Right. That's what I was saying. Your point is?

I couldn't understand what you were saying.
It seemed to me as though you were saying
that CHAR_BIT wasn't defined by the implementation.

What do you think CHAR_BIT is defined by?
 
rayw

CoffeeGood said:
Right. That's what I was saying. Your point is?

I think the point was that limits.h is just a file that [hopefully]
describes your compiler's limits.

My limits.h says that UINT_MAX is 0xffffffff, so a UINT is 32 bits
[according to my limits.h]

However, limits.h might be misleading - e.g., if I've somehow nuked my
include files - and sizeof(unsigned int) is the actual truth of the
matter.
 
CoffeeGood

Just beyond your grasp.

What's beyond your grasp is that you use Usenet
to insult people in order to feel better about
yourself because you have low self-esteem.
 
Randy Howard

CoffeeGood wrote:
What's beyond your grasp is that you use Usenet
to insult people in order to feel better about
yourself because you have low self-esteem.

Once again (after being asked not to) you have nuked the
attributions, but you are referring to Richard Heathfield above.
He is one of the most respected "regulars" in the group, and
does not insult people typically, in fact he is often the one
asking others to calm down.

It is not insulting for people (typically) asking for help to
find out that the reason they need help is that they don't
understand it fully yet. In fact, if that were not the case,
they wouldn't be needing help at all.

He probably objected to what looks, on paper, like a lie: in a response
to him you claimed that CHAR_BIT being descriptive was what you had been
saying all along, after you had pretty adequately demonstrated the
opposite position by trying to set CHAR_BIT with a -DCHAR_BIT directive
to your compiler. Most people get a bit snippy when lied to.

Perhaps I misunderstood, and you can explain to me and the rest
of the readers of this thread how those posts should be
interpreted differently.
 
Zoran Cutura

rayw said:
CoffeeGood said:
Right. That's what I was saying. Your point is?

I think the point was that limits.h is just a file that [hopefully]
describes your compiler's limits.

My limits.h says that UINT_MAX is 0xffffffff, so a UINT is 32 bits
[according to my limits.h]

Actually it means UINT is at least 32 bits but could be bigger even
though the allowable maximum value is 0xffffffff.
On a binary computer one can get the number of bits of an unsigned int by
CHAR_BIT*sizeof(unsigned int).
 
Keith Thompson

Zoran Cutura said:
rayw said:
CoffeeGood said:
CHAR_BIT is descriptive.

Right. That's what I was saying. Your point is?

I think the point was that limits.h is just a file that [hopefully]
describes your compiler's limits.

My limits.h says that UINT_MAX is 0xffffffff, so a UINT is 32 bits
[according to my limits.h]

Actually it means UINT is at least 32 bits but could be bigger even
though the allowable maximum value is 0xffffffff.
On a binary computer one can get the number of bits of an unsigned int by
CHAR_BIT*sizeof(unsigned int).

And on a non-binary computer, one can't have a conforming C
implementation (unless it emulates binary on top of whatever the
hardware uses). Your statement is equally true without the "On a
binary computer" qualification.
 
