And so should I, but here goes...
If you convert from an int to a size_t, the compiler is quite
justified in warning about a signed-to-unsigned conversion. If you
convert from a size_t to an int, it is quite justified in warning
about the reverse, or maybe about a size truncation.
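To make the two directions concrete, here is a minimal sketch; the
exact diagnostics, and the flags that enable them (gcc's -Wconversion
and -Wsign-conversion, for instance), vary from compiler to compiler:

    #include <stddef.h>

    /* Both directions of the conversion the warnings are about. */
    void demo(int n, size_t len)
    {
        size_t s = n;    /* int -> size_t: signed-to-unsigned conversion */
        int    i = len;  /* size_t -> int: possible truncation or sign change */
        (void)s;
        (void)i;
    }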
The compiler is also quite justified in warning if you use a signed
type to index an array.[1] By convention, they don't, but if they did,
would you be advocating a change to the language to solve the
"problem"?
You can get round this problem by littering your code with casts.
The alternative of messing with warning levels isn't a route you can
go down, because you might need a higher warning level for other
purposes.
Warnings should not be controlled by "level" -- there is no
reasonable total ordering of the severity of warnings. I think you
have had bad luck with your tools. However, it is tolerable.
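For what it's worth, the cast-littering workaround mentioned above
looks something like this; the range check is my addition -- the cast
alone merely silences the diagnostic:

    #include <limits.h>
    #include <stddef.h>

    /* Explicit narrowing from size_t to int: the cast keeps the
       compiler quiet, but correctness still depends on the check. */
    int narrow(size_t len)
    {
        if (len > (size_t)INT_MAX)
            return -1;          /* caller must treat -1 as "too big" */
        return (int)len;
    }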
The real fun starts when you start passing integers about by
indirection, or printing them out, or saving them to files. You can of
course make code work, though it is very easy to write something that
will in fact break if a size is greater than the range of an int, or
sizeof(size_t) doesn't equal sizeof(int). Array operations are so
common that by introducing size_t you have made a fundamental change
to the language.
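None of the following is from anyone's argument above; it is just a
sketch of the three trouble spots (printing, files, indirection) on a
machine where size_t is wider than int:

    #include <stdio.h>
    #include <stddef.h>

    void sketch(void)
    {
        size_t len = 42;

        /* Printing: the %d form is undefined once the types differ in
           size; %zu (C99) is the portable spelling. */
        /* printf("size = %d\n", len); */
        printf("size = %zu\n", len);

        /* Saving to a file: the record size now depends on
           sizeof(size_t), so 32 bit and 64 bit builds of the same
           program write incompatible files. */
        FILE *f = fopen("len.bin", "wb");
        if (f) {
            fwrite(&len, sizeof len, 1, f);
            fclose(f);
        }

        /* Indirection: an int* is not a size_t*; this "works" only
           while the two types happen to be the same size. */
        int n = 0;
        size_t *p = (size_t *)&n;
        (void)p;
    }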
The real answer is to deprecate size_t, make int big enough to address
any array except huge char arrays (we can live with this little
inconsistency), and then introduce smaller types on 64 bit machines to
aid the micro-optimiser, who might want a fast but small integer. A
64 bit int will be reasonably fast on any practical 64 bit
architecture; we are talking about shaving off cache usage and cycles
to squeeze out the last drop of efficiency.
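For what it's worth, C99's <stdint.h> already offers something along
the lines of those smaller, optimiser-friendly types; a sketch of the
family:

    #include <stdint.h>

    /* Exact-width, smallest-that-fits and fastest-that-fits variants.
       On a typical LP64 system int_fast16_t may well be 64 bits wide,
       while int_least16_t stays at 16. */
    int32_t       exact;     /* exactly 32 bits, where the width exists */
    int_least16_t smallest;  /* smallest type with at least 16 bits */
    int_fast16_t  fastest;   /* fastest type with at least 16 bits */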
At the time, size_t was a huge relief. Every project had decided how
to represent sizes in its own way, and this was very bad for
portability. Mandating int as the type for sizes would have meant
that millions of lines of code would have to be at least checked to
make sure that it would not break (either because of the performance
or because of pre-standards assumptions about the sizes of types).
size_t was good, and for those of us whom it helped, it has very few
negative connotations.
Now that we have it, we have to compare the costs and benefits of (a)
continuing to use it; (b) going the Malcolm McLean route. This is
where I get stuck on your argument. Used properly, size_t has almost
no costs. You can re-name it if you don't like the name and you can
reserve a few values for error returns if you like to play such tricks
(I raise these two because ugliness and negative error returns have
been cited as advantages of the MM way). What is the "size_t problem"
that your proposal tries to "solve"?
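On reserving values for error returns: the usual trick, sketched here
rather than prescribed, is to use (size_t)-1 (i.e. SIZE_MAX) as the
not-a-size sentinel, much as mbstowcs does on error; NOT_FOUND is a
made-up name:

    #include <stdint.h>
    #include <stddef.h>

    #define NOT_FOUND ((size_t)-1)   /* equal to SIZE_MAX */

    /* Linear search returning an index, or NOT_FOUND if key is absent. */
    size_t find(const int *a, size_t n, int key)
    {
        for (size_t i = 0; i < n; i++)
            if (a[i] == key)
                return i;
        return NOT_FOUND;
    }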
Sure, if you refuse to use it you get problems: some compilers will
warn you, and you will be incompatible with size_t pointers. What
else could you expect?
[1] Obviously this is true in the trivial sense that a compiler can
complain about anything it likes. What I mean is that after declaring
'int i, a[3];', an expression like 'a[i]' is certainly wrong if i is
negative, but passing a negative i where a size_t is expected is only
probably wrong. I am not advocating this warning, just pointing out
that it is as justifiable as many others.