size_t literals?


James Kanze

James said:
James Kanze wrote:
[..] (To tell the truth, I
can't imagine a compiler vendor being stupid enough not to make
long 64 bits if the hardware supported it.)
Just imagine Microsoft.
I prefer not to:).
Seriously, the Microsoft compilers I've used all had size_t as a
32 bit quantity. Presumably, if and when Microsoft extends
their compiler to support 64 bit systems, both size_t and long
will become 64 bit quantities. Making long smaller than a
pointer will break a lot of code. (Admittedly already broken,
but that's not the point---it works with today's compilers.)
What relationship is there between size_t and "Making long
smaller than a pointer"? How can you compare a pointer with a
long? You can only compare a long with ptrdiff_t.

None. This is a parenthetical remark (in the context of this
thread.) In practice, there's a lot of code out there that
assumes that you can cast a pointer to a long (although the
standard doesn't guarantee it). If the size of a long is
smaller than the size of a pointer, this code is broken. Thus,
on a 64 bit machine, with 64 bit linear addressing, it would
take real perversion on the part of a vendor to make long
anything other than 64 bits.
 

Ioannis Vranos

James said:
What do you mean by "built in"? And what relevance does it have
here? The C++03 standard defines "integral type" as one of
bool, char, unsigned char, signed char, unsigned short, short,
unsigned int, int, unsigned long, long and wchar_t. That's an
exhaustive list. There aren't any more.


ptrdiff_t is also an implementation-defined, signed integer type.



K&R2 includes C89 towards the end of the book, and there, size_t is
mentioned as "implementation-defined".

No. I mean 34 bits. The value can't be represented in less
than 33 bits, and of course, it must fit into a long to be
legal, and a long also needs a bit for the sign, so it is only
legal (according to C++03) if long is at least 34 bits.


Can you mention the page of the C++03 where this is mentioned?
 

James Kanze

James said:
James Kanze wrote:
[..] (To tell the truth, I
can't imagine a compiler vendor being stupid enough not to make
long 64 bits if the hardware supported it.)
Just imagine Microsoft.
I prefer not to:).
Seriously, the Microsoft compilers I've used all had size_t as a
32 bit quantity. Presumably, if and when Microsoft extends
their compiler to support 64 bit systems,
So, the fact that they "support" Windows XP x64 and Windows Vista
x64 editions doesn't count? Are you expecting them to "support"
64-bit Solaris? Or AIX? Or HP-UX? Or Linux? Or Tru64 Unix?

Is there an x64 version of Windows? I didn't know that. Maybe
they do support 64 bit implementations, then. It's news to me.
Seriously, they have had a 64 bit compiler as long as they had
a 64-bit OS, several years now.

Well, there was a 64 bit Windows a long time ago, on the DEC
Alphas. But that's dead, and has been for a while. I didn't
know that they were supporting x64, though; all of the x64
systems I have access to are running Linux.
And, yes, imagine that on x64 Windows 'long' is still 32 bits
and 'size_t' is 64 bits.

I'll bet that breaks a lot of code.
I am not sure whether you want to, but you might, read about
why Microsoft did *not* make their 'long' 64 bits on Windows
x64. It's amusing. They claim backward compatibility, i.e.
they did not want to break any existing code...

Now that does sound amusing. And contradictory. And strange,
given all of the discussions which went on about the issues when
mainstream Unix moved to 64 bits, and above all in the C
committee, when C99 introduced the extended integral types.
It's not like the issues hadn't been considered. The only way
to completely avoid breaking code, of course, is to make
everything 32 bits: pointers, long, and int. And all of the 64
bit systems I know have options to allow this, precisely so that
code won't be broken. Beyond that, there is an awful lot of
code (mostly C, I suspect, but probably some C++ as well) which
assumes that 1) unsigned long is the longest integral type, and
2) that you can cast a pointer to it and back without loss of
value. If you don't care about existing code, of course,
there's nothing to stop you from making pointers 64 bits, and
long 32 bits. But only if you don't care about existing code.
 

Alf P. Steinbach

* James Kanze:
Now that does sound amusing. And contradictory. And strange,
given all of the discussions which went on about the issues when
mainstream Unix moved to 64 bits, and above all in the C
committee, when C99 introduced the extended integral types.
It's not like the issues hadn't been considered. The only way
to completely avoid breaking code, of course, is to make
everything 32 bits: pointers, long, and int. And all of the 64
bit systems I know have options to allow this, precisely so that
code won't be broken. Beyond that, there is an awful lot of
code (mostly C, I suspect, but probably some C++ as well) which
assumes that 1) unsigned long is the longest integral type, and
2) that you can cast a pointer to it and back without loss of
value. If you don't care about existing code, of course,
there's nothing to stop you from making pointers 64 bits, and
long 32 bits. But only if you don't care about existing code.

Microsoft instead went the way of outfitting their 32-bit compiler with a lot of
warnings about 64-bit compatibility.

Cheers,

- Alf
 

James Kanze

Victor Bazarov wrote:
The same stupid "reasons" for which we will get stuck with long
long and its unsigned equivalent.

It's part of C99, and will be part of C++0x. You're not the
only one who isn't particularly happy about that, though.
Precisely because of the reason you give.
Code that assumes long is exactly 32 bits, without first
checking numeric_limits from <limits>, or the <climits> values,
is targeting a specific implementation.
They could have changed the size of long to 64-bits on their
64-bit systems. Them and other companies.

Everyone else did. I've never seen a compiler where pointers
were 64 bits, but long was only 32. (I did once use a compiler
where pointers were 48 bits, but long was only 32. But that was
a bit exceptional, and I think, in that special case, justified.
Still, it broke an awful lot of existing code.)
Now we will get long long, breaking the real existing
fundamental assumption/guarantee that long is the largest
signed integral built in type.

The C standards committee argued at length about this, precisely
because they didn't want to break this guarantee. (In C, in
particular, it's very important. If you want to output a
typedef'ed unsigned integral type, the only way to do so was to
cast it to unsigned long, and use the "%lu" format specifier.
(In C99, of course, you have to cast it to uintmax_t, and use
the "%ju" specifier.) In the end, C accepted long long because
it was existing practice, and they recognized the need for an
ever increasing number of integral types. But they insisted on
creating a framework with things like intmax_t and uintptr_t, so
that programmers would have some guarantees in the future.
Completely irrational.

There are conflicting requirements. It's hard to find a perfect
solution. (Of course, making long 32 bits on a 64 bit system is
about the worst thing you could do. While it's clear that the
future will probably hold 128 bit machines, and then 256 bit
machines, etc., etc., there's no need to break everyone's code
immediately, just for the fun of it.)
 

James Kanze

James Kanze wrote:
ptrdiff_t is also an implementation-defined, signed integer type.

Not in C++03. It's a typedef to an implementation-defined,
signed integral type. Typedef does not create a new type.
K&R2 includes C89 towards the end of the book, and there, size_t is
mentioned as "implementation-defined".

Yes, but implementation-defined comes with constraints. It's a
typedef to an implementation-defined unsigned integral type. It
must be an unsigned integral type. In §6.1.2.5 (C90): "There are four
signed integer types[...] For each signed integer type, there is
a corresponding (but different) unsigned integer type[...] "The
type char, the signed and unsigned integer types, and enumerated
types are collectively called integral types." And in the
above, "integral types" is in italics, which means that this is
the definition of the term (as used in the standard).

C++ excludes enumeration types (since int's don't convert
implicitly to enums), and adds wchar_t and bool (the first is a
typedef in C, and the second didn't exist in C90). See §3.9.1.
Can you mention the page of the C++03 where this is mentioned?

§2.13.1/2,3:

The type of an integer literal depends on its form,
value and suffix. If it is decimal and has no suffix,
it has the first of these types in which its value can
be represented: int, long int; if the value cannot be
represented as a long int, the behavior is
undefined.[...]

A program is ill-formed if one of its translation units
contains an integer literal that cannot be represented
by any of the allowed types.

(And yes, there is a blatant contradiction there. In the first
paragraph, it's undefined behavior, and in the second, the
program is ill-formed; i.e. requires a diagnostic. I've always
based my interpretation on this last paragraph.)

Note that this is an explicit change from C, where the order for
a decimal constant is int, long int, unsigned long int. And no
statement with regard to whether it is undefined or ill-formed
(instead of two contradicting statements).
 

Ioannis Vranos

James said:
Not in C++03. It's a typedef to an implementation-defined,
signed integral type. Typedef does not create a new type.


That's what I meant; you are playing with words here. :)


Yes, but implementation-defined comes with constraints.

No, it doesn't.


It [is] a typedef
to an implementation defined unsigned integral type.

which is not one of the built in types necessarily.


Can you mention the page of the C++03 where this is mentioned?

§2.13.1/2,3:

The type of an integer literal depends on its form,
value and suffix. If it is decimal and has no suffix,
it has the first of these types in which its value can
be represented: int, long int; if the value cannot be
represented as a long int, the behavior is
undefined.[...]

A program is ill-formed if one of its translation units
contains an integer literal that cannot be represented
by any of the allowed types.

(And yes, there is a blatant contradiction there. In the first
paragraph, it's undefined behavior, and in the second, the
program is ill-formed; i.e. requires a diagnostic. I've always
based my interpretation on this last paragraph.)


I do not see anything about 34 bits in the text you quoted.
 

Micah Cowan

Ioannis Vranos said:
That's what I meant; you are playing with words here. :)




No, it doesn't.

It most certainly does. Implementation defined behavior is
_unspecified_ behavior for which the implementation must provide
documentation. Unspecified behavior means that the implementation must
choose from the allowed set of possibilities.

If there were no limitations on implementation-defined, then an
implementation could choose to ignore the requirement of an unsigned
integer type (which is what it would be doing for your case anyway),
and use a float.

C++03 quite clearly states that there are exactly four unsigned
integer types. Therefore, if an implementation were to choose a type
that were not one of those four, it would not, in fact, be an
"unsigned integer type", as defined by the standard.
It [is] a typedef
to an implementation defined unsigned integral type.

which is not one of the built in types necessarily.

James is quite correct that C++ has not made allowances for other
integer types. Which is probably an oversight on the part of the
standard's authors (and one which will be corrected in C++0x), but
that doesn't change the fact.
 

Ioannis Vranos

Micah said:
James is quite correct that C++ has not made allowances for other
integer types. Which is probably an oversight on the part of the
standard's authors (and one which will be corrected in C++0x), but
that doesn't change the fact.


So an implementation-defined unsigned integer type is always one of the
4 built in unsigned integer types?
 

Ioannis Vranos

Ioannis said:
So an implementation-defined unsigned integer type is always one of the
4 built in unsigned integer types?



C++03 mentions:

"1.3.5 implementation-defined behavior [defns.impl.defined]
behavior, for a well-formed program construct and correct data, that
depends on the implementation and that each implementation shall document".


This means that this can be true:

sizeof(size_t) > sizeof(unsigned long)
 

Pete Becker

Ioannis said:
So an implementation-defined unsigned integer type is always one of the
4 built in unsigned integer types?



C++03 mentions:

"1.3.5 implementation-defined behavior [defns.impl.defined]
behavior, for a well-formed program construct and correct data, that
depends on the implementation and that each implementation shall
document".


This means that this can be true:

sizeof(size_t) > sizeof(unsigned long)

By itself it doesn't say that. Implementation-defined can mean that the
implementation must choose something within stated constraints. For
example, [support.types]/3 in the C++0x draft says: "The macro NULL is
an implementation-defined C++ null pointer constant ..." The phrase
"implementation-defined" does not mean that the macro NULL need not be
a C++ null pointer constant. Similarly, an "implementation-defined
unsigned integer type" is an unsigned integer type, as defined by the
standard, and the choice of which one is up to the implementation.
 

Victor Bazarov

Pete said:
[..] Implementation-defined can mean that
the implementation must choose something within stated constraints.

Well, the "stated constraints" are open to interpretation, AFAICS.
For example, [support.types]/3 in the C++0x draft says: "The macro
NULL is an implementation-defined C++ null pointer constant ..." The
phrase "implementation-defined" does not mean that the macro NULL
need not be a C++ null pointer constant. Similarly, an
"implementation-defined unsigned integer type" is an unsigned integer
type, as defined by the standard, and the choice of which one is up
to the implementation.

If the implementation provides additional unsigned integer types,
it can choose to make 'size_t' a synonym for one of those, and not
one of the four it _shall_ provide according to [basic.fundamental].
At least that's how I read the "implementation-defined" in this
particular case.

And if there is any doubt, here is the relevant quote:
[basic.fundamental]
2 "There are four signed integer types: ... ; the other signed
integer types are provided to meet special needs.

3 For each of the signed integer types, there exists a
corresponding (but different) unsigned integer type: ..."

In my understanding, if the implementation chooses to provide the
"other signed integer types", it should also provide unsigned
equivalents for those.

Then, according to the 'size_t' definition, it is *absolutely free*
to make 'size_t' a typedef of one of those "other" unsigned integer
types, and /not/ limit itself to the four explicitly enumerated in
[basic.fundamental]/2 or /3.

V
 

Ioannis Vranos

Pete said:
C++03 mentions:

"1.3.5 implementation-defined behavior [defns.impl.defined]
behavior, for a well-formed program construct and correct data, that
depends on the implementation and that each implementation shall
document".


This means that this can be true:

sizeof(size_t) > sizeof(unsigned long)

By itself it doesn't say that. Implementation-defined can mean that the
implementation must choose something within stated constraints. For
example, [support.types]/3 in the C++0x draft says: "The macro NULL is
an implementation-defined C++ null pointer constant ..." The phrase
"implementation-defined" does not mean that the macro NULL need not be a
C++ null pointer constant. Similarly, an "implementation-defined
unsigned integer type" is an unsigned integer type, as defined by the
standard, and the choice of which one is up to the implementation.


Let's avoid talking about C++0x until it gets ratified. So do you also
agree that the following condition can never be true?

sizeof(size_t) > sizeof(unsigned long)
 

Jeff Schwab

James said:
Wait a minute, Victor.

I believe Victor was wittily pointing out that what he meant by "2^64 -
1" should have been clear already to the thinking reader; Juha's
interpretation was correct, but (as noted) pedantic. There ought to be
an emoticon for sarcasm. :)
 

Pete Becker

Pete said:
C++03 mentions:

"1.3.5 implementation-defined behavior [defns.impl.defined]
behavior, for a well-formed program construct and correct data, that
depends on the implementation and that each implementation shall
document".


This means that this can be true:

sizeof(size_t) > sizeof(unsigned long)

By itself it doesn't say that. Implementation-defined can mean that the
implementation must choose something within stated constraints. For
example, [support.types]/3 in the C++0x draft says: "The macro NULL is
an implementation-defined C++ null pointer constant ..." The phrase
"implementation-defined" does not mean that the macro NULL need not be
a C++ null pointer constant. Similarly, an "implementation-defined
unsigned integer type" is an unsigned integer type, as defined by the
standard, and the choice of which one is up to the implementation.


Let's avoid talking about C++0x until it gets ratified. So do you also
agree that the following condition can never be true?

sizeof(size_t) > sizeof(unsigned long)

If we're talking about C++03, no, it is not allowed. The
"implementation-defined unsigned integer type" must be one of the
unsigned integer types, and the largest of those is unsigned long.
 

Pete Becker

If the implementation provides additional unsigned integer types,
it can choose to make 'size_t' a synonym for one of those, and not
one of the four it _shall_ provide according to [basic.fundamental].
At least that's how I read the "implementation-defined" in this
particular case.

Yes, indeed. But the argument that Ioannis made was not that
"implementation-defined unsigned integral type" means any of the types,
including extended types, specified by the standard. Extended types
didn't exist in C90, and C++03 relies on C90.
And if there is any doubt, here is the relevant quote:
[basic.fundamental]
2 "There are four signed integer types: ... ; the other signed
integer types are provided to meet special needs.

3 For each of the signed integer types, there exists a
corresponding (but different) unsigned integer type: ..."

Yes, those are extended integer types. They're new in C++0x, in large
part because C99 added them to C90.
In my understanding, if the implementation chooses to provide the "other
signed integer types", it should also provide unsigned equivalents for
those.

Then, according to the 'size_t' definition, it is *absolutely free*
to make 'size_t' a typedef of one of those "other" unsigned integer
types, and /not/ limit itself to the four explicitly enumerated in
[basic.fundamental]/2 or /3.

Yes, to one of those (if they exist). Not to an arbitrary made-up type.
The requirement is that it be one of the unsigned integral types, which
now includes extended integral types, provided that the implementation
provides them.
 

James Kanze

[...]
C++03 quite clearly states that there are exactly four unsigned
integer types. Therefore, if an implementation were to choose a type
that were not one of those four, it would not, in fact, be an
"unsigned integer type", as defined by the standard.

The authors of the C++ standard quite clearly intend for the
fundamental types to remain 100% compatible with C. When C++98
was being written, C99 hadn't appeared yet; the wording in the
C++ standard reflects C90, which didn't have the extended
types.

The wording in C++0x reflects C99. (C++03 was just a bug fix
release, and didn't change anything fundamental.)
 

James Kanze

Pete said:
[..] Implementation-defined can mean that
the implementation must choose something within stated constraints.
Well, the "stated constraints" are open to interpretation, AFAICS.

Not really.
For example, [support.types]/3 in the C++0x draft says: "The macro
NULL is an implementation-defined C++ null pointer constant ..." The
phrase "implementation-defined" does not mean that the macro NULL
need not be a C++ null pointer constant. Similarly, an
"implementation-defined unsigned integer type" is an unsigned integer
type, as defined by the standard, and the choice of which one is up
to the implementation.
If the implementation provides additional unsigned integer types,

If the type isn't described in §3.9.1, it isn't an "unsigned
integer type". The standard uses words in very strictly defined
senses, and in this case, it defines clearly what "unsigned
integer type" means in the standard in §3.9.1. An
implementation may define additional types (e.g. __uint64), and
these types may behave like unsigned integers, but they don't
meet the definition of "unsigned integer types" given in §3.9.1.
it can choose to make 'size_t' a synonym for one of those, and not
one of the four it _shall_ provide according to [basic.fundamental].
At least that's how I read the "implementation-defined" in this
particular case.
And if there is any doubt, here is the relevant quote:
[basic.fundamental]
2 "There are four signed integer types: ... ; the other signed
integer types are provided to meet special needs.
3 For each of the signed integer types, there exists a
corresponding (but different) unsigned integer type: ..."
In my understanding, if the implementation chooses to provide the
"other signed integer types", it should also provide unsigned
equivalents for those.

I think you're missing a subtlety of the English language. The
last clause in the sentence you quote from paragraph 2 has a
definite article. It's not talking about any old "other signed
integer types"; it's talking about those just mentioned (and
only those). The part you cut talked about int; this part talks
about the other signed integral types, to wit signed char,
signed short and signed long. And no others.

Note that all of this was discussed at length in the C committee
prior to C99. Before C99, there was an absolute guarantee that
long was the largest signed integer, and unsigned long the
largest unsigned integer. It was an important guarantee,
especially in C (think of printf), and the C committee didn't
break it lightly.
 

Micah Cowan

Victor Bazarov said:
Pete said:
[..] Implementation-defined can mean that
the implementation must choose something within stated constraints.

Well, the "stated constraints" are open to interpretation, AFAICS.
For example, [support.types]/3 in the C++0x draft says: "The macro
NULL is an implementation-defined C++ null pointer constant ..." The
phrase "implementation-defined" does not mean that the macro NULL
need not be a C++ null pointer constant. Similarly, an
"implementation-defined unsigned integer type" is an unsigned integer
type, as defined by the standard, and the choice of which one is up
to the implementation.

If the implementation provides additional unsigned integer types,
it can choose to make 'size_t' a synonym for one of those, and not
one of the four it _shall_ provide according to [basic.fundamental].
At least that's how I read the "implementation-defined" in this
particular case.

Except that it's not possible for there to _be_ such a thing as
unsigned integer types that aren't provided by the Standard, since it
purports to define the set of "unsigned integer types" in
full. Therefore anything outside of that set is _not_ an unsigned
integer type, by the Standard's definition.

Using some type other than these four is no different from using
a float. Neither is an "unsigned integer type".
And if there is any doubt, here is the relevant quote:
[basic.fundamental]
2 "There are four signed integer types: ... ; the other signed
integer types are provided to meet special needs.

3 For each of the signed integer types, there exists a
corresponding (but different) unsigned integer type: ..."

In my understanding, if the implementation chooses to provide the "other
signed integer types", it should also provide unsigned equivalents for
those.

Except that "other", in this specific context, unambiguously means
"other than int", since it directly follows an explanation of "plain
ints". If this had been written by anyone else, I'd suspect you were
being deliberately misleading.

Note that, in all this, I'm speaking only of the actual text of the
Standard. I have my suspicions that this isn't what was actually
intended by the writers (you'd have to ask them); I think this is
especially likely since they appear to have remedied it for the
upcoming standard.
 

Ioannis Vranos

James said:
The wording in C++0x reflects C99. (C++03 was just a bug fix
release, and didn't change anything fundamental.)

+ guaranteeing that most containers allocate contiguous memory,
and thus their elements can also be accessed with regular pointers.
 
