Eric Sosman
> I'm getting a tiny-cum-microscopic, but nevertheless fatal,
> difference in the behavior of the exact same C code compiled
> on one 64-bit linux machine...
I concur with Ben Bacarisse's assessment of the code's
readability; my head hurts, too! But I toughed it out with
Tylenol long enough to notice one possible source of trouble:
Your pseudo-random number generator produces `float' values.
Even if the internal mechanics of the generator are accurately
reproduced on all systems, the actual values used in subsequent
computations might not be. The C Standard gives implementations
some freedom in floating-point calculation, in particular:
"Except for assignment and cast (which remove all extra
range and precision), the values yielded by operators with
floating operands and values subject to the usual arithmetic
conversions and of floating constants are evaluated to a
format whose range and precision may be greater than
required by the type. [...]" -- 5.2.4.2.2p9
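For concreteness, here's a small illustration of that freedom (my
own example, not anything from your code). FLT_EVAL_METHOD in
<float.h> reports how the implementation evaluates floating
expressions, and the two results below can differ depending on
whether the intermediate product is kept in a wider format:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* 0: float math done in float, 1: in double, 2: in long
           double, -1: indeterminate. */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);

        float a = 1.0f / 3.0f;   /* nearest float to 1/3 */
        float b = 3.0f;

        /* The product may be carried in a wider format before the
           subtraction; with extended precision this comes out as a
           tiny nonzero value, with pure float precision it's 0.
           (Contraction into a fused multiply-add can have a similar
           effect.) */
        float x = a * b - 1.0f;

        /* The cast forces a rounding to float before subtracting,
           so this is 0 on every conforming implementation. */
        float y = (float)(a * b) - 1.0f;

        printf("x = %g  y = %g  (x == y: %d)\n", x, y, x == y);
        return 0;
    }

Same source, same `float' inputs, different answers -- all within
what the Standard permits.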
So: The computations involving the `float' numbers you generate
might come out differently on different machines, even if you
manage to generate exactly the same `float' numbers. One machine
could use `float' precision, another could use `double', yet
another might use `long double' or even some hardware-internal
precision not directly accessible from C. The upshot is that a
low-order bit might round differently every now and again. (And
because of the way your program operates, there's a decent chance
that the damage could affect only a single block and leave any
subsequent blocks intact.)
I'm not saying this *does* happen, only that it might. Unless
you find some other smoking gun -- or even if you do! -- I'd suggest
eliminating all floating-point calculation from the program. Use
a purely-integer PRNG, and use purely-integer arithmetic on what
it generates.
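If it helps, here's a minimal sketch of that approach (again my own
illustration, not your program): a 32-bit xorshift generator plus an
all-integer range reduction, so every machine with a uint32_t gets
bit-identical results:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t rng_state = 2463534242u;   /* any nonzero seed */

    /* Marsaglia's xorshift32: purely-integer, fully reproducible. */
    static uint32_t xorshift32(void)
    {
        uint32_t x = rng_state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return rng_state = x;
    }

    /* Map a raw 32-bit value into [0, n) without touching
       floating point. */
    static uint32_t rand_below(uint32_t n)
    {
        return (uint32_t)(((uint64_t)xorshift32() * n) >> 32);
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            printf("%u\n", (unsigned)rand_below(1000));
        return 0;
    }

With nothing but exact integer operations in the chain, the "extra
range and precision" latitude above simply never comes into play.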