Richard Heathfield
jacob navia said:
Who cares about rings?
Anyone who cares about how unsigned integer arithmetic works in C.
We are speaking about overflow in a well-defined context.
Overflow doesn't occur with unsigned integer types in C. We covered that.
Yes, the C standard defines the behavior on overflow: for unsigned
integers, the result is defined.
No, overflow *doesn't happen* for unsigned integers.
This doesn't mean that it
doesn't happen or that this "ring" stuff is meaningful.
Yes, it does.
Or are you implying that
65521 * 65552 is 65296 and NOT 4295032592?
It's implementation-defined. The result of multiplying two unsigned values
together depends on their values and on the width of the type, since the
product is reduced modulo 2^N for an unsigned type with N value bits.
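
For concreteness, a minimal sketch of the arithmetic both posters are
describing, with the width pinned to 32 bits so the result is the same
everywhere, whatever the native width of size_t:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 65521;
    uint32_t b = 65552;
    uint32_t product = a * b;           /* defined: reduced modulo 2^32 */

    /* The exact product is 4295032592 = 65296 + 2^32, so the stored
       value is 65296. */
    printf("%" PRIu32 "\n", product);   /* prints 65296 */
    return 0;
}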
Heathfield's nonsense becomes worse given the context, where
he is saying that we should go on using this construct:
malloc(n * sizeof *p)
to allocate memory instead of using calloc, which should test
for overflow!
What you call nonsense is in fact good sense. Using calloc is almost always
the wrong thing, since calloc writes 0 to every byte in the allocated
block, which is hardly ever the right thing to do. (If all-bits-zero meant
NULL for pointers and 0.0 for floating point values, that would be a
different matter, but they don't, so it isn't.) Furthermore, if n is an
unsigned type (as it should be, in my opinion), n * sizeof *p can't
overflow so there is nothing for calloc to test.
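
For readers who want the wraparound test without calloc's zero-fill, one
common idiom is to check the count against SIZE_MAX before multiplying.
A minimal sketch; checked_malloc is a hypothetical name, not a standard
function:

#include <stdint.h>
#include <stdlib.h>

/* Allocate n objects of elem_size bytes each, refusing any request
   whose byte count would wrap modulo SIZE_MAX + 1. */
void *checked_malloc(size_t n, size_t elem_size)
{
    if (elem_size != 0 && n > SIZE_MAX / elem_size)
        return NULL;                    /* n * elem_size would wrap */
    return malloc(n * elem_size);
}

A caller would use it in place of the disputed construct, e.g.
int *p = checked_malloc(n, sizeof *p);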
The bug I am referring to shows up when you multiply:
65521 * 65552 --> 65296
If you can meaningfully allocate 65521*65552 bytes in a single aggregate
object, then sizeof has to be able to report the size of such an object,
which means size_t has to be at least 33 bits, which means the "bug" you
refer to doesn't occur. If size_t is no more than 32 bits, it doesn't make
sense to try to allocate an aggregate object > 2^32-1 bytes in size.
Since malloc doesn't see anything wrong, it will succeed,
giving you a piece of RAM that is NOT as long as you
would think it is, but several orders of magnitude SMALLER.
As a matter of fact, it will give you at least the number of bytes you
requested (if it gives you any bytes at all). It is not malloc's fault if
you have misinterpreted unsigned integer arithmetic.
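
A sketch of why the two descriptions are consistent, assuming an
implementation where size_t is 32 bits wide (with a wider size_t the
product does not wrap and the point is moot):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 65521;
    size_t request = n * 65552;   /* with 32-bit size_t: 65296 */

    printf("requesting %zu bytes\n", request);

    /* malloc never sees 4295032592; the wrap happened in the caller's
       arithmetic, so a 65296-byte block does satisfy the request that
       was actually made. */
    char *p = malloc(request);
    free(p);
    return 0;
}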
Even when Heathfield says something patently insane, he is
"the guru" and people here will justify his ramblings.
If what I say is insane, it should be easy to disprove. But you've never
managed it yet.
I am well aware of what the standard says about overflow.
Then you will understand that to use a signed type instead of an unsigned
type as an argument to malloc is to introduce the potential for undefined
behaviour without solving the problem you were setting out to solve, and is
therefore a bad idea.
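
A minimal sketch of the hazard being described, assuming a 32-bit int: a
size computed in a signed type can overflow, which is undefined
behaviour, while the unsigned computation is at worst defined to wrap:

#include <limits.h>
#include <stddef.h>

int main(void)
{
    int n = INT_MAX;                          /* a signed element count */

    /* int bytes = n * (int)sizeof(int); */   /* signed overflow: undefined
                                                 behaviour, anything may
                                                 happen */

    /* The same computation in size_t is defined for any n >= 0: at worst
       it wraps, and the caller can test for that, as sketched earlier. */
    size_t bytes = (size_t)n * sizeof(int);
    (void)bytes;
    return 0;
}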