C programming in 2011

K

Kleuskes & Moos

http://en.wikipedia.org/wiki/Burroughs_large_systems describes
a family of non-hypothetical systems that used sign-and-magnitude
representation for integers (integers were simply floating-point
numbers with a zero exponent). These systems existed before the
C language became popular (Algol was their main high level language),
but UINT_MAX==INT_MAX would be a logical choice for a C implementation
on such a system.

Dating back to the '60s. Ok. Welcome back to the stone age.

In short: UINT_MAX==INT_MAX is only found in hypothetical and
prehistoric machines. Which still does not invalidate the point that
the situation in question is (anno domini 2011) hypothetical at best.

There is a reason, and a very good one, too, that sign-and-magnitude
isn't used anymore. If you have a different opinion, please post the
hardware in question. I'd be curious.
 
A

Angel

In short: UINT_MAX==INT_MAX is only found in hypothetical and
prehistoric machines. Which still does not invalidate the point that
the situation in question is (anno domini 2011) hypothetical at best.

There is a reason, and a very good one, too, that sign-and-magnitude
isn't used anymore. If you have a different opinion, please post the
hardware in question. I'd be curious.

Regardless of what is available now, you don't know what the future
might bring. In order to ensure maximum portability, it's best not to
make any assumptions about the implementation unless you really have to.

There is already a lot of broken software out there because programmers
assumed 32-bit integers (even though 64-bit systems have been around
longer than most people think), we really don't need more broken
software because programmers make assumptions about UINT_MAX that may
or may not be true in the future.
 
N

Noob

J. J. Farrell said:
Why on earth should he? Who ever claimed such an environment exists, or
isn't silly? It's legal in C for such an environment to exist, that's
all. If someone is choosing to write code which is portable to all
possible legal C implementations, they have to allow for it. If they
only need to make their code portable to all environments which there is
the slightest predictable possibility of ever coming across, they can
ignore it.

Would this work (I'm uncertain about pre-processor arithmetic type promotion)

#if UINT_MAX <= INT_MAX
#error UNSUPPORTED PLATFORM !!!
#endif
 
A

Angel

Would this work (I'm uncertain about pre-processor arithmetic type promotion)

#if UINT_MAX <= INT_MAX
#error UNSUPPORTED PLATFORM !!!
#endif

Assuming that you include limits.h before this test, I don't see any
reason why this shouldn't work.
 
J

James Kuyper

Such a system would not be able to operate, since every relative jmp
instruction involves an addition (or subtraction, which is basically
the same). ...

You can't do addition or subtraction with signed integers? That's news
to me.
... So name that fabled platform.

Can you not read? I said "I claim no familiarity with any such system".
I suppose I could assign an arbitrary name to a purely hypothetical
system, but to what end?
 
J

James Kuyper

What other methods of doing relative jumps did you have in mind? Since
you think there's an alternative, it's up to you to name it.

How about adding a signed integer value to the current address, which is
also a signed integer, giving a result which is also a signed integer,
and therefore a valid address on that platform?
Ah. Another hypothetical system that defies the laws of common
sense...

No, it's not another hypothetical system, it's a more specific example
of the same hypothetical. Keith said as much: "On the hypothetical
system in question ...". You might try reading more closely.

....
To me, the flurry of hypotheticals merely indicates that i was right,
but you don't like admitting it, so you're grasping at hypothetical
straws.

No, it simply means that we prefer to avoid writing code that relies
upon guarantees not provided by the standard. You don't care about that,
and you have every right not to care about it. With that right comes the
responsibility of accepting the consequences in the event (which you
consider very unlikely to occur) that your code needs to be ported to an
implementation which violates them.
Besides, a pattern of bits on the address bus, and this would be the
best objection you can make, can be interpreted as signed _or_
unsigned at the whim of whomsoever is interpreting it. It really makes
no difference, it's just a question of how you interpret them.

It makes no difference in 2's complement; it can make a big difference
in the other two permitted representations of signed integers. But you
are, of course, free to assume that 2's complement is the only
representation in use now or ever again in the future, if that's what
you want to assume.

....
Actually, Keith's example is somewhat poorly chosen. Because of the
equivalence between 2's complement signed operations and corresponding
unsigned operations, a platform which could only natively support signed
math would have to be using one of the other two representations.
Ok.

Let me pose two simple questions...

How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?

We postulated that this hypothetical platform has built-in hardware
support for signed arithmetic. Do you need details in the form of
hardware layout and a list of the supported machine instructions? I'm
not a hardware designer, I'm sure I'd make lots of mistakes in any
attempt to provide such a specification. Are you really in doubt about
the feasibility of implementing signed arithmetic?
How do signed and unsigned versions of this operation differ?

The signed instructions interpret the memory as signed integers in,
let's say, 1's complement representation. Unsigned versions of the
operation are non-existent, and unsigned arithmetic must therefore be
emulated, since the signed operations would not produce the correct
result if the unsigned result would be greater than INT_MAX, or if the
result of applying those operations would be negative.
The question "why doesn't anyone design a computer that does not
support unsigned arithmatic" should be obvious, if you got the above
two questions right, and hypothetical, highly unpractical systems
should then be laid to rest.

You'll need to explain the point you're trying to make. It's not at all
clear to me what problem you're thinking of, so I can't evaluate whether
it makes any sense to worry about that problem.
 
K

Kleuskes & Moos

Regardless of what is available now, you don't know what the future
might bring. In order to ensure maximum portability, it's best not to
make any assumptions about the implementation unless you really have to.

There is already a lot of broken software out there because programmers
assumed 32-bit integers (even though 64-bit systems have been around
longer than most people think), we really don't need more broken
software because programmers make assumptions about UINT_MAX that may
or may not be true in the future.

Blahdiblahdiblah...

First off, i'm not assuming anything, just making the point that
UINT_MAX == INT_MAX is valid in hypothetical cases only and when an
(alleged) counterexample is finally found, it turns out to be
straight out of the Jurassic age and never supported C in the first
place. Instead it ran ALGOL and (with some prodding) COBOL.

There's good programming practice, such as not assuming any type to be
of any particular size, unless dictated by the standard, and there's
the sheer silliness portrayed in this subthread.

Of course the standards guys allow for all kinds of stuff nobody would
normally use anymore, just to suit the fringe cases, but then there's
fringe cases and there's "CPUs that don't support unsigned integers".
 
N

Noob

Kleuskes said:
How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?
How do signed and unsigned versions of this operation differ?

Tangential nit :)

Not all operations can ignore the "signed-ness" of their operands.
Consider a platform (such as x86) with 32-bit registers, and a
32x32->64 multiply operation.

Regards.
 
K

Kleuskes & Moos

How about adding a signed integer value to the current address, which is
also a signed integer, giving a result which is also a signed integer,
and therefore a valid address on that platform

As i already pointed out elsethread, it doesn't make any difference.
Most relative jumps rely on signed integers for the offset anyway, but
ultimately any idea of 'signed integers' is something _we_ assign to a
specified set of bits; it's not inherent in the contents of the
address bus, and memory chips only look to see whether an address line
is high or low. So you can interpret addresses any way you want;
treating them as unsigned integers is simply more convenient.

But hey... If you prefer to view them as signed integers...
No, it's not another hypothetical system, it's a more specific example
of the same hypothetical. Keith said as much: "On the hypothetical
system in question ...". You might try reading more closely.

The 'hypothetical' was enough. 'Hypothetical' means 'I have no
counterexample so i'll just start playing games'. At least Ike dug up
a nice fossil of a brontosaur.
No, it simply means that we prefer to avoid writing code that relies
upon guarantees not provided by the standard.

How nice.
You don't care about that,

Nope. I just program _real_ computers instead of hypothetical ones.
Besides, your claim to know what i care about or not is not only
presumptuous, but also quite pigheaded.
and you have every right not to care about it.

How nice.
With that right comes the
responsibility of accepting the consequences in the event (which you
consider very unlikely to occur) that your code needs to be ported to an
implementation which violates them.

So far, we've got one example out of the dark ages which _might_
satisfy the condition that started this subthread, and a host of
hypothetical and highly implausible systems which might, if the
designers of the CPU were _very_ silly.

Your rather pompous assertions about my sense of responsibility do
nothing to convince me otherwise, but rather achieve the opposite and
convince me you have no arguments but ad-hominem ones.
It makes no difference in 2's complement; it can make a big difference
in the other two permitted representations of signed integers. But you
are, of course, free to assume that 2's complement is the only
representation in use now or ever again in the future, if that's what
you want to assume.

If i ever encounter a UNIVAC 7094 again, i'll be on my toes. And
thanks for pointing out in such clarity why the other two mandated
number systems are obsolete by now.
We postulated that this hypothetical platform has built-in hardware
support for signed arithmetic.

Anyone can postulate anything, but no-one can run any software on
postulated machines, so the answer is already wrong.

Do you need details in the form of
hardware layout and a list of the supported machine instructions?

Just the basic circuit will do. Even a list of components for a 2-bit
adder.
I'm
not a hardware designer, I'm sure I'd make lots of mistakes in any
attempt to provide such a specification. Are you really in doubt about
the feasibility of implementing signed arithmetic?

Nope. I just wanted to know whether or not you have any idea what
you're talking about. It appears you have no idea.

See http://en.wikipedia.org/wiki/Full_adder
The signed instructions interpret the memory as signed integers in,
let's say, 1's complement representation. Unsigned versions of the
operation are non-existent, and unsigned arithmetic must therefore be
emulated, since the signed operations would not produce the correct
result if the unsigned result would be greater than INT_MAX, or if the
result of applying those operations would be negative.


You'll need to explain the point you're trying to make. It's not at all
clear to me what problem you're thinking of, so I can't evaluate whether
it makes any sense to worry about that problem.

In terms of hardware, the actual gates being used, both 1's complement
(mainly because of the double 0) and signed magnitude are expensive.
You need extra hardware to take care of a lot of things and extra
hardware costs time, money and dissipates power.
2's complement can use the same circuits to do addition and subtraction
(and hence comparisons), signed and unsigned alike.
That, basically, is why every processor uses 2's complement nowadays,
instead of one of the others.

And that's why i feel quite comfortable knowing UINT_MAX==INT_MAX only
in hypothetical cases.
 
K

Keith Thompson

Kleuskes & Moos said:
Kleuskes & Moos said:
On 05/31/2011 08:57 AM, Kleuskes & Moos wrote:
...
The closest the standard comes to directly constraining UINT_MAX
relative to INT_MAX is in 6.2.5p9: "The range of nonnegative values of a
signed integer type is a subrange of the corresponding unsigned integer
type, ...". That corresponds to the constraint UINT_MAX >= INT_MAX; it
is not violated by UINT_MAX==INT_MAX.
True. But then there's "not violating a constraint" and there's sheer,
unadulterated silliness of actually creating an implementation in
which UINT_MAX==INT_MAX, since you would be throwing away half the
range of the unsigned integer (and that goes for all three mandated
number systems).
So while it may not violate the standard, it _is_ silly, and i would
very much like to know which compiler meets UINT_MAX==INT_MAX. Just so
that i can avoid it. [...]
An obvious possibility is a platform with no hardware support for
unsigned integers - depending upon the precise details of its
instruction set, it could be easiest to implement unsigned ints as
signed ints with a range restricted to non-negative values. However, I
claim no familiarity with any such system.
Such a system would not be able to operate, since every relative jmp
instruction involves an addition (or subtraction, which is basically
the same). So name that fabled platform.

How do you know what's involved in relative jump instructions on a
hypothetical system that might not even exist?

What other methods of doing relative jumps did you have in mind? Since
you think there's an alternative, it's up to you to name it.

I have no idea, and no, it's not up to me to name it. You've made an
assertion about how relative jump instructions work, presumably on *all*
systems.
Ah. Another hypothetical system that defies the laws of common
sense... Great, what advantages does your hypothetical system with
signed memory addresses have? More importantly, how does that jive
with the electronics of the address-bus of that hypothetical system?

("jibe")

I wasn't aware that signed addresses defied the laws of common
sense. Perhaps they do. I do not claim that such systems have
any particular advantages, or even that they necessarily exist (I
honestly don't know whether they do or not), merely that they're
possible.

Now that I think about it, perhaps a virtual machine would be more
likely to have such characteristics, especially if it's implemented
in a language that doesn't support unsigned arithmetic (Pascal,
for example). I have some documentation on the old UCSD Pascal system;
I'll look into it later.
Objections concerning some hypothetical system which uses signed
integers as memory addresses and does not support unsigned integers are
not quite taken seriously on my side of the NNTP-server. Why not
invent a hypothetical system which uses pink bunnies to address
memory?

Why not? There's no particular reason such a system couldn't
support a conforming C implementation.
Yes. And the chip-select signal might be delivered by invisible pink
unicorns, the Carry Flag might be hoisted by Daffy Duck and interrupts
might be implemented by Yosemite Sam, firing his guns at the hootenest-
tootenest-shootenest programmable interrupt controller north, east,
south AAAAND west of the Pecos.

And you find these as plausible as UINT_MAX==INT_MAX?
To me, the flurry of hypotheticals merely indicates that i was right,
but you don't like admitting it, so you're grasping at hypothetical
straws.

You were right about what, exactly?

This is not a zero-sum game where one of us wins and one of us loses.
It's a technical discussion.

Do any systems with UINT_MAX==INT_MAX actually exist? I don't know; I
make no claim one way or the other. You claim that there are no such
systems; as far as I know, you may be right.

But this is comp.lang.c, not comp.arch, and the main point I've been
making is that such systems are permitted by the C standard. (I'm
reasonably sure I'm right about that; if you agree, that doesn't mean
you're admitting defeat.)
Ok. So name a (non-hypothetical) example of addresses not being
unsigned integers. I wager a case of Grolsch you won't be able to. Not
because it's not possible, just because it's impractical.

You're asking me to support a claim I never made.
Besides, a pattern of bits on the address bus, and this would be the
best objection you can make, can be interpreted as signed _or_
unsigned at the whim of whomsoever is interpreting it. It really makes
no difference, it's just a question of how you interpret them.

But ok, let's say we have a system with a 16-bit address space,
with addresses ranging from 0 to 65535, and with each byte in that
range being addressable. What happens if you start with a pointer
to memory location 65535, and then increment it? Unless there's
hardware checking, the result will probably point to address 0.
Or, equivalently, we started with address -1 and incremented it to
address 0.
But, given the practicalities of hardware design, they are usually
interpreted as being unsigned, simply because it's more convenient.

If address arithmetic doesn't check for overflow or wraparound, it may
be *conceptually* more convenient to think of addresses as unsigned, but
the hardware doesn't necessarily care.
Ok.

Let me pose two simple questions...

How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?

Magic. :-) Seriously, I'm not much of a hardware guy, but we can
agree that CPUs can add.
How do signed and unsigned versions of this operation differ?

Depends on the system. On some systems, there might be a hardware trap
on overflow; the conditions in which that trap will be triggered might
differ for signed and unsigned add instructions. Maybe. And no, I
don't have concrete examples.
The question "why doesn't anyone design a computer that does not
support unsigned arithmatic" should be obvious, if you got the above
two questions right, and hypothetical, highly unpractical systems
should then be laid to rest.


True. But i doubt you'll find ANY that match the exotic, nay,
eccentric hardware you describe. I still dare you to name a single
CPU that does not support unsigned integers, and i'm very confident
you won't find any.

Again, I've never claimed that such a CPU exists.
 
K

Keith Thompson

James Kuyper said:
Actually, Keith's example is somewhat poorly chosen. Because of the
equivalence between 2's complement signed operations and corresponding
unsigned operations, a platform which could only natively support signed
math would have to be using one of the other two representations.

I was thinking of a (hypothetical) system that traps on signed overflow.
This might be more plausible for a virtual machine than for real hardware.

[...]
 
K

Kleuskes & Moos

Tangential nit :)

Not all operations can ignore the "signed-ness" of their operands.
Consider a platform (such as x86) with 32-bit registers, and a
32x32->64 multiply operation.

Regards.

Absolutely right.
 
K

Kleuskes & Moos

And that's why i feel quite comfortable knowing UINT_MAX==INT_MAX only
in hypothetical cases.

Adding to that, i've never in my career encountered a situation where
i had to assume anything beyond what's in the standard about either
UINT_MAX or INT_MAX.
 
J

James Kuyper

The 'hypothetical' was enough. 'Hypothetical' means 'I have no
counterexample so i'll just start playing games'.

It doesn't bother me that I have no counter-example, because the point
I'm making only concerns what the standard mandates and allows.
Keeping track of what's actually true on all current implementations of
C is something that would require more work than I have time for.

If that constitutes "playing games" in your book, then feel free to
think of it that way; I'll continue to think of what you're doing as
"making unjustified assumptions".

....
How nice.


Nope. I just program _real_ computers instead of hypothetical ones.

I program real computers with code that will work even on hypothetical
ones, so long as there is a compiler on those machines which is at least
backwardly compatible with C99; which means my code will continue to
work even if your assumptions about what "real machines" do become
inaccurate.

....
Besides, your claim to know what i care about or not is not only
presumptuous, but also quite pigheaded.

You're correct, I can only infer what you care about from your actual
words, you could secretly care about these issues far more than your
words imply. But I can only pay attention to your words, I don't have
the power to read your mind. I'll have to form my guesses as to what you
care about based upon your actual words. If those guesses are
inaccurate, it's due to a failure of your words to correctly reflect
your true beliefs.

....
....
Your rather pompous assertions about my sense of responsibility

I said nothing about your sense of responsibility, only your
responsibilities themselves. That you might be insensible of those
responsibilities seems entirely plausible, and even likely.

....
Nope. I just wanted to know whether or not you have any idea what
you're talking about. It appears you have no idea.

I have an idea, it's just not my specialty. I know enough to know that
all three of the methods permitted by the C standard for representing
signed integers have actually been implemented, on the hardware level,
on real machines, and that only one of those methods (admittedly, the
most popular) provides any support for your arguments. That is, it would
provide such support for your arguments, if it were the only possible
way of doing signed integer arithmetic; but it isn't.

....
In terms of hardware, the actual gates being used, both 1's complement
(mainly because of the double 0) and signed magnitude are expensive.

Which is very different from being impossible, which is what would need
to be the case to support your argument. The other ways of representing
signed integers have their own peculiar advantages, or they would never
have been tried. When our current technologies for making computer chips
have become obsolete, and something entirely different comes along to
replace them, with its own unique cost structure, the costs could
easily swing the other way.

I don't know whether this particular assumption will ever fail. But if
you make a habit of relying on such assumptions, as seems likely, I can
virtually guarantee that one of your assumptions will fail. Code which
avoids making unnecessary assumptions about such things will survive
such a transition; the programmers who have to port the other code will
spend a lot of time swearing at you.
 
J

James Kuyper

On 06/01/2011 12:37 PM, Kleuskes & Moos wrote:
....
Adding to that, i've never in my career encountered a situation where
i had to assume anything beyond what's in the standard about either
UINT_MAX or INT_MAX.

Then why in the world are you arguing against restricting one's
assumptions to those guaranteed by the standard?
 
S

Seebs

Dating back to the '60s. Ok. Welcome back to the stone age.

Okay, key insight:

What makes sense in hardware has been known to change RADICALLY over the
course of a couple of decades.
In short: UINT_MAX==INT_MAX is only found in hypothetical and
prehistoric machines. Which still does not invalidate the point that
the situation in question is (anno domini 2011) hypothetical at best.

But it does invalidate the presumed argument that it will *remain*
hypothetical and that it is reasonable to assume that it's irrelevant.

-s
 
K

Kleuskes & Moos

On 06/01/2011 12:37 PM, Kleuskes & Moos wrote:
...


Then why in the world are you arguing against restricting one's
assumptions to those guaranteed by the standard?

I wasn't. I was just making the point.
 
K

Kleuskes & Moos

It doesn't bother me that I have no counter-example, because the point
I'm making only concerns what the standard mandates and allows.
Keeping track of what's actually true on all current implementations of
C is something that would require more work than I have time for.

If that constitutes "playing games" in your book, then feel free to
think of it that way; I'll continue to think of what you're doing as
"making unjustified assumptions".

...




I program real computers with code that will work even on hypothetical
ones, so long as there is a compiler on those machines which is at least
backwardly compatible with C99; which means my code will continue to
work even if your assumptions about what "real machines" do become
inaccurate.

...


You're correct, I can only infer what you care about from your actual
words, you could secretly care about these issues far more than your
words imply. But I can only pay attention to your words, I don't have
the power to read your mind. I'll have to form my guesses as to what you
care about based upon your actual words. If those guesses are
inaccurate, it's due to a failure of your words to correctly reflect
your true beliefs.

...


I said nothing about your sense of responsibility, only your
responsibilities themselves. That you might be insensible of those
responsibilities seems entirely plausible, and even likely.

<snip>

Ok. Now the debate has degraded to ad-hominem remarks, any further
discussion is a waste of time.

Have a nice day.
 
L

lawrence.jones

Kleuskes & Moos said:
There is a reason, and a very good one, too, that sign-and-magnitude
isn't used anymore. If you have a different opinion, please post the
hardware in question. I'd be curious.

Note that decimal arithmetic was dead, buried, and fossilized until IBM
resurrected it a couple of years ago and it suddenly became the hottest
thing in hardware design.
 
