yeti said:
LoL...behaviour will still be perfectly predictable (if not easy to
predict). If you take the same computer with the same initial state and run
the same crap code which messes up the control information, then you
WILL end up with the same sequence of instructions and output EVERY TIME.
You are basically arguing for the mechanistic view of the universe --
that if you just knew the state of everything precisely enough,
you could predict -exactly- what would happen. That view has
been disproven by quantum physics experiments. And basically,
every modern fast computer is a quantum physics experiment -- modern
chip designers put a lot of work into reducing the influence of
random quantum behaviour (except where they -want- random quantum
behaviour).
You also have a restricted view of what a computer is, and of how
computing is performed. There are, for example, biological computers,
in which starting and ending protein hooks are inserted into vats of
proteins, allowing the answer to self-assemble biologically. And qubit
(quantum bit) based computers are actively being worked on;
unfortunately they aren't particularly stable as yet.
It is a basic assumption of all programming languages (including assembly
and machine code) that computers should have perfectly predictable
behaviour. Without this assumption you won't be sure what a computer
would do after you gave it an instruction to execute.
You are ignoring multiprocessor computers: when you have multiple
processors, the relative order that things happen in becomes
indeterminate, with one run seldom being like the next; nanosecond
differences in interrupts determine which processor gets a shared
resource first, and the happenstance orders *do* propagate into
the problem as a whole.
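To see it in miniature, here's a quick sketch using C11 threads and
atomics (assuming a toolchain that ships <threads.h>; with POSIX threads
the idea is identical). Two threads grab numbered slots in a shared log,
and the pattern of 1s and 2s they leave behind is decided by the
scheduler, not by the program:

/* Sketch of scheduler-dependent ordering with C11 <threads.h>.
   Build with something like:  cc -std=c11 -pthread interleave.c */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

#define STEPS 10

static atomic_int next_slot = 0;   /* hands out positions in the log */
static int order[2 * STEPS];       /* who grabbed each position      */

static int worker(void *arg)
{
    int id = *(int *)arg;
    for (int i = 0; i < STEPS; i++) {
        int slot = atomic_fetch_add(&next_slot, 1);
        order[slot] = id;          /* each slot is written exactly once */
    }
    return 0;
}

int main(void)
{
    thrd_t a, b;
    int id_a = 1, id_b = 2;

    thrd_create(&a, worker, &id_a);
    thrd_create(&b, worker, &id_b);
    thrd_join(a, NULL);
    thrd_join(b, NULL);

    /* The pattern of 1s and 2s depends on nanosecond-level scheduling;
       repeated runs are free to disagree. */
    for (int i = 0; i < 2 * STEPS; i++)
        printf("%d", order[i]);
    printf("\n");
    return 0;
}

Run it a few times on a multicore machine and the output can differ from
run to run, even though every individual operation is perfectly well
defined.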
You are also not taking into account network computers. When
computer X sends a message to computer Y, the timing with which
Y receives the message is variable, and it is trivial to construct
examples in which the timing variations trigger different actions.
Y might not receive the message at all. Y might receive two (or
more) copies of the message. Y might sometimes be able to detect
that something went missing (or is taking the slow boat), but
for some protocols Y won't be able to tell. Y might request
a resend -- that's different behaviour, not predictable by X.
Possibly everything will be straightened out by the time you
get to the application layer, but there are important applications
(such as video broadcasting) in which losses are expected and
a lot of work goes into making the data stream robust "enough"
for use.
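Here's a bare-bones illustration for a POSIX system with UDP (the port
number and the "ask for a resend" reaction are just made up for the
example). Whether Y's next step is "process the data" or "request a
resend" is decided by whether a datagram happens to land inside the
timeout window, and X has no say in that:

/* Sketch: a UDP receiver whose behaviour depends on message timing. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);                 /* arbitrary port */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    /* Give up on a datagram after 100 ms. */
    struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 };
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char buf[1500];
    ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
    if (n >= 0) {
        /* The datagram made it in time -- and a later recvfrom may hand
           us a duplicate if the sender retransmitted anyway. */
        printf("got %zd bytes\n", n);
    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
        /* Nothing arrived within 100 ms: lost, delayed, or never sent.
           Y can't tell which, and asking for a resend is behaviour
           that X could not have predicted. */
        printf("timed out, requesting resend\n");
    }
    close(fd);
    return 0;
}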
And you are not taking into account that some CPUs have
(deliberate) random-number generators, and that those random
numbers are important to a variety of activities, including
various kinds of security (or even just making a computer
game less predictable).
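On x86, for instance, that generator is exposed as the RDRAND
instruction. A minimal sketch (assuming a CPU that has it, and building
with something like cc -mrdrnd rdrand.c):

/* Sketch: pull a number from the chip's hardware random-number generator. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    unsigned int value;

    /* RDRAND can transiently fail when the on-chip entropy source is
       busy, so the convention is to retry a few times. */
    for (int tries = 0; tries < 10; tries++) {
        if (_rdrand32_step(&value)) {
            /* This number comes from physical noise in the silicon;
               running the same binary again will not reproduce it. */
            printf("%u\n", value);
            return 0;
        }
    }
    fprintf(stderr, "hardware RNG gave no data\n");
    return 1;
}

Two runs of that program on the same machine, started from the same
initial state, will by design not print the same number.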
Non-determinism happens in a wide variety of circumstances
in computing, and there are different strategies for dealing
with it, down to chip doping strategies and up to the application
layers.
When the C standard says that something has undefined behaviour and
that the implementation can define that behaviour if it wants, the
standard *does* mean to include the possibility that the implementation
will behave non-deterministically. There is a different kind of
behaviour, "unspecified behaviour" if my mind isn't too asleep yet, for
which the standard allows two or more possible outcomes without saying
which one you get, but for which the implementation is not allowed to
make the program fail. For undefined behaviour, the results really are
subject to change without notice.
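A small C illustration of the difference (my own examples, not wording
from the standard):

#include <limits.h>
#include <stdio.h>

static int say(const char *s) { printf("%s ", s); return 0; }

int main(void)
{
    /* Unspecified behaviour: the two calls to say() may be made in
       either order, so the output may read "left right" or
       "right left" -- but the program itself stays valid. */
    printf("\n%d\n", say("left") + say("right"));

    /* Undefined behaviour: signed overflow. The standard places no
       requirement at all on what happens next -- wrap-around, a trap,
       or the compiler assuming this line can never execute are all
       "correct" outcomes. */
    int n = INT_MAX;
    n = n + 1;
    printf("%d\n", n);

    return 0;
}

The first printf is allowed to come out either way round, but it must
come out; the second assignment gives the implementation permission to
do anything at all.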