Richard Bos
jacob navia said:
> Randy Howard wrote:
> Not at all. Please just see what the proposal really was
> before answering fantasy proposals.
> The proposal discussed the idea of using a signed type to
> avoid problems with small negative numbers that get converted
> into huge unsigned ones.
And that's where it breaks down. You see, where are you going to _get_
these small negative numbers? Are you going to get a negative
multiplicand from sizeof? No, because that's defined as giving a
positive number under all circumstances. Is your programmer going to
specify a negative number of objects? Hardly likely. That would be a
blunder of the first order.
So whence the negative number? Probably, one supposes, from multiplying
two largish positive numbers and getting a signed integer overflow. Ah,
but! But signed integer overflow causes undefined behaviour. So the
error is not trying to allocate a negative number of bytes, the error is
computing the negative in the first place, and it's an error that is
allowed to be fatal and cannot reliably be caught.
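To make the trap concrete, here is a sketch (untested, and the name
alloc_signed is made up for the occasion) of what the signed version
looks like on a system with 32-bit int:

#include <stdlib.h>

void *alloc_signed(int nmemb, int size)
{
    int total = nmemb * size;  /* undefined behaviour the moment
                                  the product exceeds INT_MAX */
    if (total < 0)             /* too late: by now, anything at
                                  all may have happened */
        return NULL;
    return malloc((size_t)total);
}

The test for a negative total is exactly the test the proposal relies
on, and the Standard does not guarantee that the program even survives
long enough to reach it.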
Of course, there _is_ an easy way to stop the undefined behaviour. That
way is not to use signed integers for sizes in the first place.
Multiplying an unsigned integer by an (unsigned) size_t gives you
another unsigned integer. The multiplication cannot overflow, and cannot
cause UB. It _can_ wrap around, but that error is fairly easy to detect;
the way to do this is left as an exercise for the reader, but should not
elude any first-year student of C.
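For the record, here is one common way to do it, as a sketch only (the
name xmalloc_array is invented here, and a good calloc() implementation
performs much the same check internally):

#include <stdlib.h>

void *xmalloc_array(size_t nmemb, size_t size)
{
    /* If nmemb * size would wrap around, nmemb must be greater
       than the largest size_t divided by size. */
    if (size != 0 && nmemb > (size_t)-1 / size)
        return NULL;  /* would wrap: refuse, rather than hand
                         back a too-small object */
    return malloc(nmemb * size);
}

One division and one comparison, and the wraparound is caught before
any damage is done.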
So, by suggesting that instead of the unsigned size_t, we should use
signed int or ssize_t, you are effectively advocating replacing a safe
method of handling malloc(), in which overly large sizes are easily
spotted, with an unsafe method in which overly large sizes cause
errors that cannot be trapped until after the damage has been done,
and in which the program may crash before you even get to check
whether the result is negative at all. Is that wise? Seems to me
that it's not.
Richard