James Kanze
If you are in the "use int everywhere" camp, the unsigned
size_type of the standard containers is arguably a mistake (or
a non-optimal design, at best).
Many people think so.
You only need the extra range if you have things like a char
array larger than half the addressable space. And when you do, *that*
is probably a mistake too.
Practically speaking, you never need the extra range, today at
least. I think the choice of unsigned was made when development
was on a 16-bit MS-DOS machine, and I've personally written code
which used arrays of over 32K char on such machines.
Having size unsigned is a problem if you try to compute it by
subtracting two pointers, because that result is signed. You then
quickly end up mixing signed and unsigned arithmetic, which is
definitely *not* good.
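To make that concrete, here is a minimal sketch (the buffer size
and the deliberately reversed pointers are just made up for
illustration) of how a signed pointer difference silently goes
wrong once it meets an unsigned size:

    #include <cstddef>
    #include <iostream>

    int main()
    {
        char buffer[100];
        char* first = buffer + 10;
        char* last  = buffer;              // deliberately "before" first

        std::ptrdiff_t diff  = last - first;  // -10: pointer subtraction is signed
        std::size_t    limit = 100;           // sizes and lengths are unsigned

        // The usual arithmetic conversions turn `diff` into an unsigned
        // value for this comparison, so -10 becomes a huge positive
        // number and the test is false on typical platforms (most
        // compilers warn about exactly this).
        if (diff < limit) {
            std::cout << "in range\n";
        } else {
            std::cout << "out of range: -10 was compared as unsigned\n";
        }
    }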
Historically, there was a very difficult problem to solve.
size_t must be big enough to contain the size of any possible
object, which usually means you either make it unsigned, or make
it have more bits than a pointer has. Given its role, the choice
is obvious, but it does have the problem you mention: if
pointers are 32 bits, and you can use most of the memory (more
than half) in a single object, then you need 33 bits to
represent the difference between two pointers. But doing this
only affects a very small minority of programs, and comes at
a not insignificant runtime cost (or did, on older 32-bit
machines). The choice made by the C committee (sometime before
1989) represents a valid compromise, even if it has some serious
drawbacks.
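One way to see the asymmetry the committee had to live with is to
print the two limits side by side; this small sketch uses nothing
beyond the standard <cstdint> macros:

    #include <cstdint>
    #include <iostream>

    int main()
    {
        // size_t must be able to hold the size of any object, so on
        // common platforms SIZE_MAX is roughly 2 * PTRDIFF_MAX + 1.
        std::cout << "PTRDIFF_MAX = " << PTRDIFF_MAX << '\n';
        std::cout << "SIZE_MAX    = " << SIZE_MAX    << '\n';

        // An object bigger than PTRDIFF_MAX bytes is representable in
        // size_t, but subtracting two char* into such an object can
        // overflow ptrdiff_t: the extra-bit problem described above.
    }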
Using size_t as the default size_type in the C++ standard
library is a different issue, and it's definitely not a good
choice.
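As a concrete illustration of why (the empty vector and the loops
below are just an example): the unsigned size_type turns an
innocent-looking bound like size() - 1 into a wraparound.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v;                // empty on purpose

        // v.size() is unsigned, so v.size() - 1 wraps to a huge value
        // instead of -1; a loop written against that bound runs (and
        // crashes) instead of being skipped:
        //     for (std::size_t i = 0; i <= v.size() - 1; ++i) { ... }

        // With a signed index and an explicit conversion, the bound
        // behaves the way most people expect.
        for (std::ptrdiff_t i = 0;
             i < static_cast<std::ptrdiff_t>(v.size()); ++i) {
            std::cout << v[i] << '\n';
        }
        std::cout << "done\n";
    }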