Two questions

Richard Bos

Capstar said:
unsigned int a = UINT_MAX;

printf("%u\n", a);
printf("%u\n", ++a);
When I compile and run this the result is:
4294967295
0

This doesn't surprise me because I overflowed a. But I was wondering if
the standard defines this behaviour,

For unsigned types, yes. Not for signed types or non-integers.
Another question I have is about the assert macro. Is it commonly seen
as a good practice to use this or is it better/safer to just always do a
check yourself and return if something is wrong?

If the answer depends on data you get from your users (even if
indirectly, for example via a file), check, and report _nicely_. Don't
bluntly abort just because someone made a typo.
If the answer depends _only_ on data you have generated yourself, or
which might have come from your user but which you have already
validated and should be correct - in other words, if the error in
question is a programming error, not a data error - then you can use
assert(). Even so, it's never nice to lose half an hour of your user's
entered data because _you_ made a programming error, so be careful.

Richard
 

Capstar

Hi NG,

I have the following program:

#include <stdio.h>
#include <limits.h>

int main(void)
{
unsigned int a = UINT_MAX;

printf("%u\n", a);
printf("%u\n", ++a);

return 0;
}

When I compile and run this the result is:
4294967295
0

This doesn't surprise me, because I overflowed a. But I was wondering
whether the standard defines this behaviour, or whether it depends on
the CPU used. All CPUs I know (not that many) simply wrap around when
you add one to the maximum value the type can hold, but I don't think it
is unreasonable to design a CPU that just leaves the value at UINT_MAX.

Another question I have is about the assert macro. Is it commonly seen
as a good practice to use this or is it better/safer to just always do a
check yourself and return if something is wrong?

Thanks,
Mark
 

Joona I Palaste

Capstar said:
I have the following program:
#include <stdio.h>
#include <limits.h>
int main(void)
{
unsigned int a = UINT_MAX;
printf("%u\n", a);
printf("%u\n", ++a);
return 0;
}
When I compile and run this the result is:
4294967295
0
This doesn't surprise me, because I overflowed a. But I was wondering
whether the standard defines this behaviour, or whether it depends on
the CPU used. All CPUs I know (not that many) simply wrap around when
you add one to the maximum value the type can hold, but I don't think it
is unreasonable to design a CPU that just leaves the value at UINT_MAX.

The standard defines that unsigned types wrap around safely when they
overflow. Signed types don't have to; they can do anything at all.
 

Sidney Cadot

Capstar said:
Hi NG,

I have the following program:

#include <stdio.h>
#include <limits.h>

int main(void)
{
unsigned int a = UINT_MAX;

printf("%u\n", a);
printf("%u\n", ++a);

return 0;
}

When I compile and run this the result is:
4294967295
0

This doesn't surprise me, because I overflowed a. But I was wondering
whether the standard defines this behaviour, or whether it depends on
the CPU used. All CPUs I know (not that many) simply wrap around when
you add one to the maximum value the type can hold, but I don't think it
is unreasonable to design a CPU that just leaves the value at UINT_MAX.

As already pointed out, behavior such as this is actually mandated by
the standard. However, it may be useful to point out why leaving
UINT_MAX at UINT_MAX upon increment wouldn't be a very good choice.

The integers modulo (UINT_MAX+1), with addition and multiplication as
usually defined, form what is mathematically called a "ring". This gives
a number of properties that are very desirable, such as:

a*(b+c) = a*b + a*c
For each (a,b), there exists an element "-b" such that a+b+(-b)==a

[[...and many more...]]

Guarantees such as these would be violated by introducing a barrier at
UINT_MAX. It's perhaps a bit difficult to appreciate why this would
necessarily be "bad" unless you have a bit of algebra background.
Suffice it to say that many algorithms (e.g., arbitrary precision
arithmetic) would be very difficult to get right without the ring
properties.


Best regards,

Sidney
 

Capstar

Sidney said:
Capstar said:
Hi NG,

I have the following program:

#include <stdio.h>
#include <limits.h>

int main(void)
{
unsigned int a = UINT_MAX;

printf("%u\n", a);
printf("%u\n", ++a);

return 0;
}

When I compile and run this the result is:
4294967295
0

This doesn't surprise me, because I overflowed a. But I was wondering
whether the standard defines this behaviour, or whether it depends on
the CPU used. All CPUs I know (not that many) simply wrap around when
you add one to the maximum value the type can hold, but I don't think it
is unreasonable to design a CPU that just leaves the value at UINT_MAX.


As already pointed out, behavior such as this is actually mandated by
the standard. However, it may be useful to point out why leaving
UINT_MAX at UINT_MAX upon increment wouldn't be a very good choice.

The integers modulo (UINT_MAX+1), with addition and multiplication as
usually defined, form what is mathematically called a "ring". This gives
a number of properties that are very desirable, such as:

a*(b+c) = a*b + a*c
For each (a,b), there exists an element "-b" such that a+b+(-b)==a

[[...and many more...]]

Guarantees such as these would be violated by introducing a barrier at
UINT_MAX. It's perhaps a bit difficult to appreciate why this would
necessarily be "bad" unless you have a bit of algebra background.
Suffice it to say that many algorithms (e.g., arbitrary precision
arithmetic) would be very difficult to get right without the ring
properties.


Best regards,

Sidney

Thanks for all of your quick replies.
This info is very helpful.

Mark
 

Martin Ambuhl

Capstar wrote:
[...]
This doesn't surprise me because I overflowed a. [declared as unsigned int]
But I was wondering if
the standard defines this behaviour,
Yes

or if it depends on the cpu used.
No

Another question I have is about the assert macro. Is it commonly seen
as a good practice to use this or is it better/safer to just always do a
check yourself and return if something is wrong?

This is a religious issue. The angels are on the side that uses assert()
during development and debugging; it should never be used to (mis)handle
run-time errors in production code. But there are heathens among us in clc.
 
