However, on a ones' complement system,
    -n == ~n
and hence, using a three-bit word for illustration,
    -0 == ~000 == 111
But, mathematically, -0 == 0, so the bit pattern 111 must equal 000.
So either 111 is an illegal value (representing a number that has
no legal existence), and thus a "trap value", or the language must
arrange to make -0 compare equal to 0.
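To see the two patterns concretely, here is a minimal sketch that
simulates ones' complement negation with ordinary unsigned
arithmetic; the 16-bit width and the variable names are purely
illustrative, and no ones' complement hardware is assumed:

    #include <stdio.h>

    /* Simulate ones' complement negation in a 16-bit word:
       flip every bit, then mask to the word width. */
    int main(void)
    {
        unsigned zero    = 0u;                 /* bit pattern of +0   */
        unsigned negzero = ~zero & 0xFFFFu;    /* ones' complement -0 */
        printf("+0 pattern: %04X\n", zero);    /* prints 0000 */
        printf("-0 pattern: %04X\n", negzero); /* prints FFFF */
        return 0;
    }

Whether FFFF compares equal to 0000, traps, or acts as a distinct
value is exactly the choice the hardware and the language must make.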
Incidentally, this is not *necessarily* a purely theoretical
discussion. The Univac 1100 series of computers (whose native OS
was EXEC-8, at least as used at the University of Maryland) had
a 9-bit "char", an 18-bit "short", a 36-bit "int", and ones' complement
arithmetic in their C compiler(s). This was all before the first
C standard came out in 1989, and I do not know if the C compiler(s)
was/were ever made C89-conformant -- but the machines did exist
and did have at least one C compiler.
I never used these machines enough to learn precisely how they
treated negative zero in integer operations. One might note,
however, that machines with ones' complement or sign-and-magnitude
integer arithmetic avoid a nasty problem that occurs in today's
two's complement hardware:
#include <limits.h>
#include <stdio.h>
int main(void) {
    int x = INT_MIN;
    printf("x = %d\n", x);   /* outputs -32768 or -2147483648, for instance */
    x = -x;                  /* what happens here? */
    printf("-x = %d\n", x);  /* what gets printed? */
}
If you try this on your machine (for the generic "you" reading
this), you will probably see that -x is *still* negative, and that
x == -x. The reason is the asymmetry of the two's complement range:
INT_MIN has no positive counterpart among the ints, so the negation
overflows -- and signed integer overflow is undefined behavior in C.
If your machine used one of the other two representations, -x would
be positive.
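For code that has to negate arbitrary ints on two's complement
hardware, the usual defense is to check for INT_MIN before negating.
Here is a minimal sketch; the name safe_negate is just an
illustration, not a standard function:

    #include <limits.h>
    #include <stdio.h>

    /* Negate x, reporting failure instead of overflowing on INT_MIN.
       Returns 0 on success, -1 if the result is not representable. */
    static int safe_negate(int x, int *result)
    {
        if (x == INT_MIN)    /* -INT_MIN does not fit in an int */
            return -1;
        *result = -x;
        return 0;
    }

    int main(void)
    {
        int r;
        if (safe_negate(INT_MIN, &r) != 0)
            printf("cannot negate INT_MIN\n");
        return 0;
    }

Widening to a larger type (e.g. long long) before negating is
another common choice, when a larger type exists.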