(snip on quantum randomness)
OK, I will try anyway. (Even though I am also not a specialist.)
You want statistical independence.
Actually, no: what you really want is unpredictability. If you have
the previous N bits, can you predict the next bit, especially as N
gets larger?
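To make that next-bit framing concrete, here is a toy sketch in
Python (the helper names and the simple majority-vote predictor are
my own illustration, not any standard randomness test): a source is
unpredictable if no predictor, given the previous N bits, does
noticeably better than 50% on the next one.

# Toy next-bit predictability check -- illustrative only.
import os

def bits_from_bytes(data):
    # Expand a byte string into a flat list of 0/1 bits.
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def majority_predictor(history):
    # Guess the next bit as the majority of the last N bits seen.
    return 1 if 2 * sum(history) >= len(history) else 0

def predictability(bits, n=16):
    # Fraction of next-bit guesses the predictor gets right.
    hits = sum(majority_predictor(bits[i - n:i]) == bits[i]
               for i in range(n, len(bits)))
    return hits / (len(bits) - n)

good = bits_from_bytes(os.urandom(4096))             # OS entropy source
biased = [1 if i % 3 else 0 for i in range(32_768)]  # correlated source

print("os.urandom :", predictability(good))    # hovers near 0.5
print("correlated :", predictability(biased))  # well above 0.5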
The favorite example is radioactivity, where at a given time,
a radioactive nucleus has some probability to decay. The assumption
is that the decay of any nucleus is statistically independent of
any other; that is, that there is no overlap between their wave
functions. But there is overlap; it is just extremely small.
Wave-function overlap falls off exponentially with distance
(or maybe exponentially with the square of the distance).
The ratio of atomic radius to nuclear radius is about 1e5, so
the overlap might be exp(-1e5).
Simplifying drastically: if you had the previous exp(1e5) bits, you
might be able to predict something about the next one. (And that is
only for two atoms; a real source involves a lot more of them.)
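Just to put a number like exp(-1e5) in perspective (taking the 1e5
figure above at face value, which is only a rough order-of-magnitude
guess), a quick back-of-the-envelope in Python:

# How small is exp(-1e5)? Taking the rough 1e5 ratio above at face
# value; this is only about scale, not the actual physics.
import math

log10_overlap = -1e5 / math.log(10)                  # log10 of exp(-1e5)
print(f"exp(-1e5) is about 10^{log10_overlap:.0f}")  # ~ 10^-43429

# The smallest positive IEEE-754 double is about 5e-324, so trying to
# compute exp(-1e5) directly just underflows to zero.
print(math.exp(-1e5) == 0.0)                         # True

# And exp(1e5) "previous bits" is correspondingly astronomical:
# about 10^43429 bits, far more than could ever be collected.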
You can predict something based upon that overlap, but what you can
predict is only a shift in the probability distribution. The actual
decay time is still a perfectly random selection from that distribution.
You cannot predict the actual time until the next decay. No matter how
much information you have, the time until the next decay could be either
arbitrarily long or arbitrarily short, without violating any of the laws
of quantum physics as they are currently understood. That is the
fundamental distinction between quantum randomness and
pseudo-randomness. If you knew the full internal state of a
pseudo-random number generator, and the algorithm it uses, you could
determine the next random number precisely.
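To make that last contrast concrete, here is a toy sketch in Python.
The little linear congruential generator below (using the old
Park-Miller constants) is just my choice for illustration, not
anything anyone should actually use; the point is only that once you
know a PRNG's state and algorithm, every future output is already
determined, whereas an exponentially distributed decay time stays
unpredictable no matter how long you have already waited.

# A pseudo-random generator is fully determined by its internal
# state. Toy linear congruential generator (LCG), Park-Miller
# constants, purely for illustration.
import random

class TinyLCG:
    M = 2**31 - 1
    A = 16807

    def __init__(self, seed):
        self.state = seed % self.M

    def next(self):
        self.state = (self.A * self.state) % self.M
        return self.state

victim = TinyLCG(seed=123456789)
outputs = [victim.next() for _ in range(5)]

# Someone who learns the internal state (for this toy LCG, the state
# is simply the last output) can reproduce every future value exactly.
attacker = TinyLCG(seed=1)
attacker.state = outputs[-1]
predicted = [attacker.next() for _ in range(5)]
actual = [victim.next() for _ in range(5)]
print(predicted == actual)        # True: nothing left to "predict"

# Contrast: an exponential waiting time (the usual model for the time
# until the next decay) is memoryless. Among samples that have already
# lasted longer than 1.0, the *remaining* time has the same mean as
# the original distribution.
samples = [random.expovariate(1.0) for _ in range(200_000)]
remaining = [t - 1.0 for t in samples if t > 1.0]
print(sum(samples) / len(samples))      # ~1.0
print(sum(remaining) / len(remaining))  # also ~1.0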
It's not just a matter of some of the universe's state information being
hidden from us. Einstein, Podolsky and Rosen (EPR) tried to interpret
quantum uncertainty as being due to "hidden variables" - state
information about the universe that we were unaware of (and which we
might inherently be incapable of being aware of). They deliberately left
the details of what that state information was and how it influences the
measurements completely unspecified. Even with those details left
unspecified, it was possible to describe a quantum-mechanical
experiment, and a statistic that could be calculated from the
measurements taken while running that experiment, and to rigorously
derive a requirement that this statistic must be greater than or
equal to 1, regardless of how the hidden variables actually worked.
(Strictly speaking, that derivation came decades later, from John
Bell, but it rests entirely on the kind of hidden variables EPR had
in mind.) Quantum mechanics, on the other hand, predicted that the
value of that statistic should be 0.5.
From this line of reasoning, EPR concluded that quantum mechanics
could not be a realistic description, and could therefore be, at
best, only an approximation to reality.
At the time their paper was published, it was not possible to conduct
the experiment with sufficient precision to clearly distinguish a value
of 1 from a value of 0.5. Many years later, when scientists were finally
able to perform it, reality decided not to cooperate with EPR's concept
of "realism". The measured value unambiguously confirmed the quantum
mechanical prediction, violating the constraint that EPR had derived
from assuming that hidden variables were involved.
Scientists still believe that quantum mechanics can only be an
approximation to reality - but it's no longer because of the fundamental
role that true randomness plays in the theory.
I don't want to start an extended discussion of EPR - even experts get
into long pointless arguments talking about it. I just want to say that,
when I talk about "really random", I'm talking about the kind of thing
that EPR were implicitly assuming was inherently impossible, the
assumption from which that limit equation follows.