Why? It's a classic application of "fail fast" at work: going
into an array with -x __happens__. E.g. a bad decrement
somewhere gives you -1, or a bad difference gives a (typically
small!) -x. Now, that typically ends in reading/writing bad
memory, which, with small negatives, is detected quickly only if
you're lucky. If, however, that decrement/subtraction is done
unsigned, you typically explode immediately, because there's a
very big chance that memory close to 0xFFFF... ain't yours.
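To make the failure mode concrete, a minimal sketch (it prints
the wrapped value rather than dereferencing it, since the
dereference itself would be undefined behaviour):

    #include <cstdio>
    #include <cstddef>

    int main() {
        std::size_t i = 0;
        --i;  // the "bad decrement": wraps to SIZE_MAX, as the standard guarantees
        // The same bug with a signed index yields -1, which may land in valid
        // memory just before the array; this value points near the top of the
        // address space, which on typical platforms faults at once.
        std::printf("wrapped index: %zu\n", i);
    }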
Sorry, but the array class will certainly catch a negative index
(provided it uses a signed type for indexes).
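Something along these lines, say (the class name and interface
here are made up for illustration):

    #include <cassert>
    #include <vector>

    // Hypothetical bounds-checked array class with a signed index type.
    template <typename T>
    class CheckedArray {
        std::vector<T> data_;
    public:
        explicit CheckedArray(int n) : data_(n) {}
        T& operator[](int i) {
            // A signed index lets the check reject -1 directly.
            assert(i >= 0 && i < static_cast<int>(data_.size()));
            return data_[i];
        }
    };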
Conceptually, there is an argument in favor of using a cardinal,
rather than an integer, as the index type, given that the
language (and the library) forces indexes to start at 0. (My
pre-standard array classes didn't, but that's another issue.)
But C++ doesn't have a type which emulates a cardinal, so we're
stuck here. The fact remains that the "natural" type for all
integral values is int: it's what you get from an integral
literal by default, for example, and it's what short, char, etc.
(and their unsigned equivalents, if they fit in an int, which
they usually do) promote to. And mixing signed and unsigned
types in arithmetic expressions is something to be avoided. So
you want to avoid an unsigned type in this context.
True, but why are signed and unsigned mixed in the first
place? Because of poor design, I say! IOW, the mixing is bad
only in a poor design. So how about clearing that up first?
That's what we're trying to do. Since integral literals have
signed type and, in contexts where the usual arithmetic
conversions apply, unsigned char and unsigned short promote to a
signed type, you're pretty much stuck.
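For instance (the static_assert encodes the usual case, where
int can represent every unsigned short value; on such platforms
the difference below really is a signed -1):

    #include <type_traits>

    int main() {
        unsigned short a = 1, b = 2;
        // Usual arithmetic conversions: both operands promote to int,
        // so the subtraction is done in a signed type.
        static_assert(std::is_same<decltype(a - b), int>::value,
                      "unsigned short promotes to int here");
        return (a - b) == -1 ? 0 : 1;
    }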
I might add that a compiler is allowed to check for arithmetic
overflow in the case of signed arithmetic, and not in the case
of unsigned arithmetic. Realistically, I've only heard of one
that did, however, so this is more a theoretical argument than a
practical one.
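A sketch of the asymmetry (the gcc/clang options named in the
comment exist, but whether any given implementation traps is
entirely its choice):

    #include <climits>
    #include <cstdio>

    int main() {
        unsigned u = UINT_MAX;
        ++u;  // well defined: unsigned arithmetic is modulo 2^N, so u == 0
        std::printf("%u\n", u);

        int i = INT_MAX;
        // ++i would be undefined behaviour: a checking implementation
        // (e.g. g++ -ftrapv, or -fsanitize=signed-integer-overflow) is
        // free to trap, which is the permission referred to above.
        (void)i;
    }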
True, but overflow problems exist for signed types, too. The
only additional problem with unsigned is that subtraction is
trickier (you must know that a >= b before doing a - b). But
then, I question the frequency at which e.g. sizes are
subtracted.
Indexes are often subtracted. And there's no point in
supporting a size larger than what you can index.
And even then (get this!), it's fine. Result is __signed__ and
it all works.
Since when? And with what compiler? The standard states
clearly that for *all* binary operators between operands of the
same type, the result has that type.
(Hey, look! Basic math at work: subtract two natural numbers
and you don't get a natural number!)
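Easy to check (the exact value assumes the usual 32-bit unsigned
int, but the wraparound itself is guaranteed):

    #include <cstdio>

    int main() {
        unsigned a = 3, b = 5;
        // unsigned - unsigned is unsigned: the mathematical -2 is reduced
        // modulo 2^N, so this prints 4294967294 (UINT_MAX - 1), not -2.
        std::printf("%u\n", a - b);
    }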
C++ arithmetic doesn't quite conform to the rules of basic
arithmetic. To a certain degree, it can't, since basic
arithmetic deals with infinite sets: you can't get overflow.
Unsigned arithmetic in C++ explicitly follows completely
different rules. (In passing: if you do happen to port to a
machine not using 2's complement, unsigned arithmetic is likely
to be significantly slower than signed. The C++ compiler for
the Unisys 2200 even has an option to turn off conformance here,
because of the performance penalty it exacts.)
Well, it works unless you actually work on an array of bytes,
but that example is contrived and irrelevant, I readily agree
with you there.
I also question the relevance of signed for subtraction of
indices, because going into an array with a - b where a < b is
just as much of a bug as with unsigned. So with signed, there
has to be a check (if (a - b >= 0)); with unsigned, there has to
be a check (if (a >= b)). So I see no gain with signed, only
different forms.
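Side by side, the two forms look something like this (a sketch;
the function names are made up):

    #include <cstddef>

    // Signed: subtract first, test the result afterwards.
    bool distance_signed(int a, int b, int& out) {
        int d = a - b;
        if (d >= 0) { out = d; return true; }
        return false;
    }

    // Unsigned: test the operands *before* subtracting.
    bool distance_unsigned(std::size_t a, std::size_t b, std::size_t& out) {
        if (a >= b) { out = a - b; return true; }
        return false;
    }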
There's a fundamental problem with unsigned. Suppose I have an
index into an array, and a function which, given that index,
returns how many elements forward or back I should move. With
unsigned indexes, the function must return some sort of struct,
with a flag indicating whether the offset is positive or
negative, and the calling code needs an if. With signed
indexes, no problem: the function just returns a negative value
to go backwards.
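Roughly like this (a hypothetical function with a dummy movement
rule, just to show the shape of the two interfaces):

    #include <cstddef>

    // With a signed type, one return value covers both directions.
    std::ptrdiff_t next_offset(std::ptrdiff_t i) {
        return (i % 2 == 0) ? 3 : -2;  // dummy rule, illustration only
    }

    void step_signed(std::ptrdiff_t& i) {
        i += next_offset(i);  // no branch needed at the call site
    }

    // With unsigned indexes, the same information needs a discriminated
    // result, and every caller needs an if.
    struct Offset {
        std::size_t magnitude;
        bool backwards;
    };

    Offset next_offset_unsigned(std::size_t i) {
        Offset o = { (i % 2 == 0) ? std::size_t(3) : std::size_t(2),
                     i % 2 != 0 };
        return o;
    }

    void step_unsigned(std::size_t& i) {
        Offset o = next_offset_unsigned(i);
        if (o.backwards) i -= o.magnitude;  // the if the signed version avoids
        else             i += o.magnitude;
    }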
[...]
You claim that these potential bugs are important. I claim
that they are not, because I see very little subtraction of
indices in the code I work with, and very few backward-going
loops.
So we work with different types of code.
Note that if you subtract pointers, you also get a signed value
(possibly undefined, if you allow arrays to have a size greater
than std::numeric_limits<ptrdiff_t>::max()).
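For example (well defined here, since both pointers stay inside
the same small array):

    #include <cstddef>
    #include <type_traits>

    int main() {
        int a[10];
        int* p = a + 7;
        int* q = a + 2;
        // Pointer subtraction yields std::ptrdiff_t, a signed type;
        // q - p is a well-defined -5.
        static_assert(std::is_same<decltype(q - p), std::ptrdiff_t>::value,
                      "pointer difference is ptrdiff_t");
        return (q - p) == -5 ? 0 : 1;
    }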
That may be different for you, but I'll still wager that these
are overall in low percentiles.
You also conveniently chose to overlook (or worse yet, call it
hand-waving) the true nature of a count and an index (they
are natural numbers). I can't see how designing closer to
reality can be pointless.
They are subsets of the natural numbers (cardinals), and the
natural numbers are a subset of the integers. C++ has a type
which sort of approximates the integers; it doesn't have a type
which approximates the cardinals. The special characteristics
of unsigned types mean that they are best limited to raw memory
(no calculations), bit maps and such (only bitwise operations),
and cases where you need those special characteristics (modulo
arithmetic). Generally speaking, when I see code which uses
arithmetic operators on unsigned types, and doesn't actually
need modulo arithmetic, I suppose that the author didn't really
understand unsigned in C++.
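For contrast, the sort of code where unsigned's special
characteristics are actually wanted (a hypothetical ring buffer
sketch; modulo arithmetic is the point):

    #include <cstdint>

    // A free-running counter whose wraparound *is* the modulo arithmetic
    // you want. Because N is a power of two, it divides 2^32, so the
    // computed slot stays consistent even when head wraps around.
    struct Ring {
        static const std::uint32_t N = 16;
        int data[N];
        std::uint32_t head;

        Ring() : head(0) {}
        void push(int v) { data[head++ % N] = v; }
    };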