Ok, a relative jump is basically
PC <-- PC+offset
Not that it's necessarily the case that a CPU must directly support
relative jumps, but that's beside the point. (And no, I'm not claiming
that there are CPUs that don't support relative jumps, or that there
aren't; it's just not particularly relevant to C.)
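In C terms it boils down to something like this (a rough sketch; the
names and widths are made up, and a real PC lives in hardware, not in
a C object):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t pc     = 0x8000;  /* program counter, viewed as unsigned */
        int16_t  offset = -16;     /* signed offset taken from the opcode */

        /* The same bit pattern comes out whether you call the PC signed
           or unsigned; the addition is effectively modulo 2^16.          */
        pc = (uint16_t)(pc + offset);

        printf("new pc = 0x%04X\n", (unsigned)pc);   /* prints 0x7FF0 */
        return 0;
    }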
But nothing in that description implies unsigned arithmetic.
Presumably the offset is signed. Why can't the PC (program counter)
be signed?
Basically, it makes no big difference; if you insist on viewing
the PC as signed, you end up with a perfectly consistent picture. It
may just not be that convenient. In a simple, straightforward addressing
scheme, say a 16-bit address bus, your memory layout would be slightly
weird if you install 48k (16k ROM / 32k RAM) of actual memory: you would
have one range [0..32k> and one [-32k..-16k>, assuming the layout of
your address bus isn't something weird, too. And trust me... the
hardware guys don't like that and will probably call you "very silly,
indeed" (in polite terms, and/or behind your back, of course) if you
ask them to design something like that.
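If you want to see what that split looks like, here's a small sketch
(hypothetical machine, made-up numbers; the conversion to int16_t is
implementation-defined in C, but wraps on typical 2's-complement
systems):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 16-bit bus, 48k populated: 16k ROM + 32k RAM at 0x0000..0xBFFF */
        uint16_t first = 0x0000, last = 0xBFFF;

        printf("unsigned view: %u .. %u\n",
               (unsigned)first, (unsigned)last);
        /* -> 0 .. 49151: one contiguous range [0..48k>                   */

        printf("signed view:   %d .. %d\n",
               (int)(int16_t)first, (int)(int16_t)last);
        /* -> 0 .. -16385: the populated memory splits into [0..32k>
              plus [-32k..-16k>                                           */
        return 0;
    }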
(Asserting that it's unsigned does not answer the above question.)
I hope the above satisfies your demand.
As pointed out elsethread, signed or unsigned is in the eye of the
beholder. Viewing addresses as unsigned is simply more convenient,
giving an address range [0..MemSize>.
Sometimes you argue that signed addresses defy the laws of common sense.
And sometimes you argue that addresses can be viewed as either signed or
unsigned, whichever is more convenient. I see a contradiction.
In theory, you can view them as whatever you want. What matters is
not what numerical value _we_ assign to an address, but what is
presented to the hardware. On many systems, especially if you're
dealing with Nk x 8 memory chips, no single memory chip is connected
to every address line, i.e. no single chip sees the entire address.
The numerical values are something we impose, and you can think of
them as signed if you like, but as explained above, it just ain't
practical.
Virtual processors are just as valid a target for a C implementation as
real ones.
True.
You spend a great deal of time arguing about something that doesn't
interest you.
True. But then again, discussion sharpens the mind, and every
participant may end up having learned something, so I still consider
it worthwhile to bicker about not-so-interesting hypotheticals.
And besides... It's fun.
As you've already *asserted* several times.
Yup. I've spent quite some time explaining why I asserted that,
written extensive posts, was accused of logical fallacies, rebuked for
not strictly adhering to the standard, and called irresponsible by
insinuation, but, so far, nobody's proven the simple assertion wrong.
So please. Bring an example and I will stand corrected.
[...]
The C standard, by its very nature, is stated as generally as
possible, and, as I've stated upthread, the standard allows for a lot
of implausible implementations, including ones in which the condition
in question is true, simply to give compiler makers enough leeway to
cater to even the most obscure hardware. That's a good thing.
That does not mean that every possibility allowed by the standard
should be taken into consideration, especially those that, by their
very nature, are _very_ implausible.
The C standard is stated very generally, but *not* "as generally as
possible". There are plenty of characteristics, some of them things
that have existed in real-world systems, that are excluded (some of
which can be worked around in software). Some examples: Bytes must
be at least 8 bits. Type int must be at least 16 bits. Data must
be represented in binary. Signed integers must be represented in
either 2's-complement, 1s'-complement, or sign-and-magnitude (biased
representations are not permitted). Floating-point is mandatory.
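For what it's worth, a few of those guarantees can be spelled out
directly against <limits.h> (a minimal C11 sketch, using static_assert
from <assert.h>):

    #include <assert.h>
    #include <limits.h>

    static_assert(CHAR_BIT >= 8,      "a byte is at least 8 bits");
    static_assert(INT_MAX  >= 32767,  "int spans at least -32767..32767");
    static_assert(INT_MIN  <= -32767, "int spans at least -32767..32767");
    static_assert(UCHAR_MAX >= 255,   "unsigned char holds at least 0..255");

    int main(void) { return 0; }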
Ok, you've got a nit: it's _only_ stated _very_ generally and not "as
generally as possible". Don't you get a strange feeling that
you might be arguing over semantics here? I would.
Most of the flexibility is there either because there have been
real-world systems that need it, or because a greater range of
permitted characteristics is simpler to describe. It's fairly
safe to assume that there are no systems with 137-bit bytes, but
the standard doesn't go out of its way to forbid it.
Fortunately not, because there's some _really_ weird hardware out
there.
Especially in the museum. Did you ever hear of "bit-slicers"? It's
perfectly feasible to build a 137-bit computer using those. The 68HC11
has 13-bit addresses and a Harvard architecture.
But then again, it just ain't practical. The last use of a bit-slicer
I heard of was as the fire control in a Goalkeeper anti-missile gun,
but I don't remember much of the detail.
I digress...
Not if the hardware doesn't actually interact with the nuclear
arsenal. But some systems do exactly that, and if they're
implemented in C (which, personally, I hope they're not) they had
bloody well better avoid anything approaching undefined behavior.
Perhaps more realistically, you might assume that a C program with
undefined behavior can't possibly reformat your hard drive -- but
if the process it's running in has the capability of invoking the
"please reformat my hard drive" routine, and a function pointer
somewhere gets clobbered, it could very well do that.
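To make the function-pointer scenario concrete, here's a deliberately
broken sketch (all names are made up; the point is only that once the
pointer is clobbered, nothing constrains what gets called):

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the "please reformat my hard drive" routine that
       the process happens to be able to invoke.                       */
    static void reformat_disk(void) { puts("reformatting..."); }
    static void log_message(void)   { puts("logging...");      }

    struct gadget {
        char name[8];
        void (*action)(void);  /* function pointer right after the buffer */
    };

    int main(void)
    {
        struct gadget g = { "log", log_message };

        /* Undefined behaviour: the overlong string overruns g.name and
           may well overwrite g.action.  After that, the call below is
           allowed to do anything at all, including ending up at
           reformat_disk().                                             */
        strcpy(g.name, "an overlong string");

        g.action();
        return 0;
    }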
The point to be made was: "not everything allowed by the standard is
likely, or even possible".
I'm sorry I made you think that I was trying to imply that _any_
invocation of undefined behavior is allowable, but sometimes I
think people arguing the details of The Standard are also
capable of reading a simple post without making undue inferences.
I will try to spell it out next time.
[...]
I'm sure you can figure that one out for yourself.
That wasn't the point of the question.
That seems plausible. What's the point?
The point, which you've acknowledged, is that it can be perfectly
reasonable to think of memory addresses as signed integers.
Yes. But it's just not practical, and I've explained above _why_ it's
not practical. Besides, I've pointed out on several occasions that
it's possible. That it's not practical is the point.
Which means that you can't reasonably use memory addresses as an
argument that all systems *must* support unsigned arithmetic.
I don't argue they *must*, I argue they *do*. Not for theoretical
reasons, but for very practical reasons. There's a shitload of weird
and eccentric systems one might, hypothetically, build, and which
would, theoretically, work, but the point is, nobody does, since
there's no reason to.
That's one of the two contradictory points you made. The other point
you made is that the "practicalities of hardware design" somehow imply
that addresses are necessarily unsigned.
I said it's more practical to view them as such, because it's simpler.
You can just view the n bits of the address lines as a plain n-bit
number instead of a 2's complement number. And if you're designing
electronics, you like things like that to be simple and
straightforward, and not be hindered by positive and negative
addresses when there's no good reason for them.
And since the CPU designers view addresses as unsigned integers, and
the board designers view addresses as unsigned integers, it's best if
the software guys don't try to invent signed addresses, since that
only complicates things for no good reason whatsoever.
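As it happens, C itself doesn't take sides: on implementations that
provide them, both uintptr_t and intptr_t from <stdint.h> can hold a
converted pointer, so you can take either view of the same address
(a sketch; note that both types are optional in the standard):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int object = 42;
        void *p = &object;

        uintptr_t u = (uintptr_t)p;  /* the unsigned view of the address */
        intptr_t  s = (intptr_t)p;   /* the signed view of the same bits */

        printf("unsigned view: %ju\n", (uintmax_t)u);
        printf("signed view:   %jd\n", (intmax_t)s);
        return 0;
    }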
And as I explained, *if the system traps on signed overflow*,
there is a very real difference between signed and unsigned addition.
Not that big a difference.
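For reference, this is the distinction in C terms (a sketch): unsigned
arithmetic is defined to wrap, while signed overflow is undefined
behaviour and so is allowed to trap:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;
        u = u + 1;                  /* well defined: wraps around to 0  */
        printf("%u\n", u);

        int s = INT_MAX;
        /* s = s + 1; */            /* undefined behaviour: may wrap,
                                       may trap, may do anything else   */
        (void)s;
        return 0;
    }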
Again, you've spent a great deal of time discussing something that you
find non-interesting.
I try not to, but then again, you must eat the porridge to get the
raisins.
Are you trying to convince me that I shouldn't be interested in them either?
Not really. I'm merely trying to avoid getting drawn into a quagmire
of ill-conceived, ad-hoc hypotheticals.