32/64 bit cc differences


Ben Bacarisse

James Dow Allen said:
One specific suggestion is to avoid all floating-point.
Use an integer-only PRNG.

Yes, and in fact that's what he's got. The floating point PRNG includes
an integer PRNG with a final division step. I don't think the floating
point numbers are needed at all. In every case, the result of ran1 is
used to generate an integer. It would be very simple to switch to using
an integer-only PRNG.
 

Eric Sosman

Yes, good spot. It might not be a problem though because the program
scatters "random" data about which is then filtered out on decryption.
FP differences will just add another dimension to the "randomness".
However, if the RNG is also used for deterministic but chaotic data in
the heart of the algorithm then, yes, there will be trouble ahead.

The RNG also governs whether the bytes are XOR'ed or XNOR'ed
with mask values, and how the eventual block of bits is permuted.
If encryption swaps bits X and Y while "the same" decryption
swaps X and (Y+1), there's a 25% chance of corruption. Sometimes
the corrupted byte will be in the discarded "noise," but sometimes
it will be payload.
 

James Dow Allen

One specific suggestion is to avoid all floating-point.

Oops. I now see this suggestion was already made.
Excuse me for skimming.

(My excuse was that after loading the code I figured
few would be masochistic enough to study it. :)
I wasn't very masochistic myself, but the use of
'float' glared at me. Use of any floating-point
seems to me a no-no in general when results must
be reproduced across platforms.)

James
 

Aleksandar Kuktin

Aleksandar Kuktin said:
[snip]

(By the way, the 32-vs-64-bit
encrypted files are also ~one-in-a-thousand different, so both stages
exhibit this small problem.)

[snip]

Your #2 is wrong, as explicitly stated in the last parenthetical "btw"
sentence you quoted from me. Correct that, and it now can be.
In fact, it is.

In my defense, the description confused me. (Which is why I asked.)

I'd post something about "well, that's your problem", and "turn off the
RNG to see if it will work that way" and so on, but you already solved
the problem so whatever...
 

JohnF

Aleksandar Kuktin said:
JohnF said:
Aleksandar Kuktin said:
JohnF wrote:
[snip]
(By the way, the 32-vs-64-bit
encrypted files are also ~one-in-a-thousand different, so both stages
exhibit this small problem.)
[snip]

Your #2 is wrong, as explicitly stated in the last parenthetical "btw"
sentence you quoted from me. Correct that, and it now can be.
In fact, it is.

In my defense, the description confused me. (Which is why I asked.)

Sorry if followup sounded harsh. It wasn't meant to, but I was posting
several replies, maybe too quickly, and probably unintentionally
worded it poorly.
 

JohnF

Eric Sosman said:
The RNG also governs whether the bytes are XOR'ed or XNOR'ed
with mask values, and how the eventual block of bits is permuted.
If encryption swaps bits X and Y while "the same" decryption
swaps X and (Y+1), there's a 25% chance of corruption. Sometimes
the corrupted byte will be in the discarded "noise," but sometimes
it will be payload.

I'm not sure I'm following that. So I'm equally unsure the following
remark is relevant: the rng ran1() was copied from Numerical Recipes
(as per attribution in comment block), but modified to generate several
independent streams. The xor/xnor and permute use two different
streams, for exactly the purpose of guaranteeing that decryption
reverses encryption. I'd think there's pretty much a 100% chance
things would immediately mess up otherwise. But a 0% chance (modulo
my bugs:) using the separate rng streams. Is that addressing what
you're saying?
 

JohnF

BartC said:
How did you fix that?

Excellent question. I wasn't sure exactly how when I posted,
but realized problem was eminently fixable, as follows.
As per forkosh.com/fm.html, with default max block size
and max noise bytes, biggest possible permutation is 67584 bits.
So I need to take an rng float (or double) 0.0-1.0, and use it to
choose between one of 67584 "bins". Since 1.0/67584.0 is well within
the "resolution"/precision of any conceivable float representation,
it's gotta be easily doable, one way or another.

The problem arises with current "formula" when the generated rn
gets near the "edge" of one of those "bins", and 32-bit apparently
truncates to lower-numbered bin, whereas 64-bit goes higher.
I say "apparently" because the first thing I tried was a little test,
as follows: changed the current little bin-choosing "formula"
int j = i + (int)(((float)(n-i))*ran1(stream,seed)); /*ran#i...n-1*/
to
int j = i + (int)(((double)(n-i))*((double)ran1(stream,seed)));

I expected that greater precision would maybe push the 32-bit
compiled executable "over the edge", so to speak, reproducing
the 64-bit numbers. It obviously couldn't change things the
other way round. Right??? ... Wrong. The 32-bit results remained
unchanged, and the 64-bit executable now chooses the lower
bin number wherever there used to be a diff/discrepancy.
How the heck that's happening, I don't know. You could say it
kind of solves the original problem, but since it's so weird
and unexpected, I have no faith it's a truly portable fix.
So I'll get around to a better, more robust, bin-choosing
formula. But since I now know -- thanks to you all -- exactly
where and what the problem is, and know that it's obviously
theoretically easily fixable, I'm not really worried about
it any longer.
 

Aleksandar Kuktin

Sorry if followup sounded harsh. It wasn't meant to, but I was posting
several replies, maybe too quickly, and probably unintentionally worded
it poorly.

No problem about it. All is well in the land of C. :)
 

JohnF

Ben Bacarisse said:
Yes, and in fact that's what he's got. The floating point PRNG includes
an integer PRNG with a final division step. I don't think the floating
point numbers are needed at all. In every case, the result of ran1 is
used to generate an integer. It would be very simple to switch to using
an integer-only PRNG.

Yeah, as per remarks in preceding followup explaining
"temporary fix/patch", I think the best permanent fix is probably what
you're saying (note: a preceding followup made the same "integer prng"
recommendation, but I'm not seeing who said it at the moment).
As mentioned in the rng's comment block, it's copied from
Numerical Recipes (with a little change to allow for several
concurrent but independent streams of random numbers).
And as with most such rng's, its output is a float 0.0-1.0. I just left it
that way since that's what most rng's do anyway, and it was easy
enough to accommodate. Or so I thought. And so it appeared until now.
 

Aleksandar Kuktin

So I need to take an rng float (or double) 0.0-1.0, and use it to choose
between one of 67584 "bins". Since 1.0/67584.0 is well within the
"resolution"/precision of any conceivable float representation, it's
gotta be easily doable, one way or another.

Isn't C supposed to have a standard construct for rounding one way or the
other? I don't normally use floating point calculations if I don't
explicitly need to, but I think I remember reading in somefunc() man page
that you can request somefunc() to round this or that way.
 

JohnF

James Dow Allen said:
Oops. I now see this suggestion was already made.
Excuse me for skimming.

(My excuse was that after loading the code I figured
few would be masochistic enough to study it. :)
I wasn't very masochistic myself, but the use of
'float' glared at me. Use of any floating-point
seems to me a no-no in general when results must
be reproduced across platforms.) James

That kind of thought (if I recall) had crossed my
mind when copying ran1() from Numerical Recipes
back around y2k (to replace an even earlier rng
copied from the IBM Scientific Subroutine Package --
the guy who remarked that my coding style reminded
him of Fortran was right on:). But as pointed out
in preceding followup, I realized that distinguishing
between the max possible number of bins only required
1.0e-5 (or less) precision, and all 32-bit architectures
supplied some 100x better than that. What I failed
to think about was truncation behavior near bin edges:
the edges might look "fuzzier" to 32-bit executables.
But I guess using the rng's integer result should
get rid of that once and for all.
 

Eric Sosman

I'm not sure I'm following that. So I'm equally unsure the following
remark is relevant: the rng ran1() was copied from Numerical Recipes
(as per attribution in comment block), but modified to generate several
independent streams. The xor/xnor and permute use two different
streams, for exactly the purpose of guaranteeing that decryption
reverses encryption. I'd think there's pretty much a 100% chance
things would immediately mess up otherwise. But a 0% chance (modulo
my bugs:) using the separate rng streams. Is that addressing what
you're saying?

Not quite. I'm saying that even if ran1() returns the exact
same stream of `float' values on all machines (C does not guarantee
this, and "even as a practical matter" I'm not sure it will do so),
subsequent calculations involving those `float' values may produce
different results due to being carried out at different precisions
before rounding or truncating to produce a final integer answer.

My remark about "a 25% chance" was incorrect -- as I said,
the code seems unnecessarily obfuscated, and I hadn't the patience
to give it a really thorough reading. My thoughts about the swap
part went something like this:

- On machine A the program swaps the bits in positions X and Y,
while on machine B (due to different rounding) it swaps X
and Y+1. (Or X and Y-1, but that's the same case: Just
relabel A and B. Or X±1 and Y, but that's the same case:
Just relabel X and Y.)

- I'm assuming the off-by-one disagreements are rare, so the
likelihood of two disagreements in one swap is surpassingly
rare. It's conceivable that X±1 could swap with Y±1, but I
chose to ignore the possibility as negligible.

- If the bits at positions X, Y, and Y+1 all have the same value,
both swaps are no-ops and there's no corruption. This happens
25% of the time -- but that was my error: There's a 25% chance
of *no* damage, not a 25% chance damage will occur.

- Otherwise, the swaps will damage one, two, or three bytes
(where by "damage" I mean "come out differently on A and B").
Sometimes the damage will be to noise bytes, sometimes it
will be to payload. (In the "negligible" case of swapping
X±1 with Y±1 as many as four bytes could be damaged.)

- Subsequent "correct" swaps involving a damaged bit will not
increase the amount of damage, but will only move it around.

I don't see why you need multiple random streams -- it seems that
if you made the same sequence of calls on a single generator you'd
get the same results each time. But then, I haven't read the code
carefully.
 

JohnF

Eric Sosman said:
Not quite. I'm saying that even if ran1() returns the exact
same stream of `float' values on all machines (C does not guarantee
this, and "even as a practical matter" I'm not sure it will do so),
subsequent calculations involving those `float' values may produce
different results due to being carried out at different precisions
before rounding or truncating to produce a final integer answer.

Okay, got it.
My remark about "a 25% chance" was incorrect

Yeah, that's what I was trying to figure. It sounded like
the result of some simple combinatoric/stochastic argument
that I just couldn't see.
(See bottom for answer to your other question.)
-- as I said,
the code seems unnecessarily obfuscated, and I hadn't the patience
to give it a really thorough reading. My thoughts about the swap
part went something like this:

- On machine A the program swaps the bits in positions X and Y,
while on machine B (due to different rounding) it swaps X
and Y+1. (Or X and Y-1, but that's the same case: Just
relabel A and B. Or X±1 and Y, but that's the same case:
Just relabel X and Y.)

- I'm assuming the off-by-one disagreements are rare, so the
likelihood of two disagreements in one swap is surpassingly
rare. It's conceivable that X±1 could swap with Y±1, but I
chose to ignore the possibility as negligible.

- If the bits at positions X, Y, and Y+1 all have the same value,
both swaps are no-ops and there's no corruption. This happens
25% of the time -- but that was my error: There's a 25% chance
of *no* damage, not a 25% chance damage will occur.

- Otherwise, the swaps will damage one, two, or three bytes
(where by "damage" I mean "come out differently on A and B").
Sometimes the damage will be to noise bytes, sometimes it
will be to payload. (In the "negligible" case of swapping
X±1 with Y±1 as many as four bytes could be damaged.)

- Subsequent "correct" swaps involving a damaged bit will not
increase the amount of damage, but will only move it around.

I don't see why you need multiple random streams -- it seems that
if you made the same sequence of calls on a single generator you'd
get the same results each time. But then, I haven't read the code
carefully.

Should I leave the answer to that as a problem for the reader?
Code reading not really necessary to see the answer...
o On encryption it first xor/xnor's and then permutes.
o Therefore, on decryption [I bet you just figured it out:)]
it must first permute and then xor/xnor.
o If just one stream were used, the encryption xor/xnor random
numbers would be applied to the permute. What a mess.
o And for similar reasons, the generation of noise bytes
also requires its own separate rng stream.
 

Aleksandar Kuktin

As per forkosh.com/fm.html, with default max block size and max noise
bytes, biggest possible permutation is 67584 bits.
So I need to take an rng float (or double) 0.0-1.0, and use it to choose
between one of 67584 "bins". Since 1.0/67584.0 is well within the
"resolution"/precision of any conceivable float representation, it's
gotta be easily doable, one way or another.

Out of curiosity - how did you come up with 67584? Its hexadecimal
representation - 0x10800 - isn't particularly round and I can't think of
any other magic properties that number could have.

I didn't read the code, if the answer is hiding in there.
 

BartC

JohnF said:
Excellent question. I wasn't sure exactly how when I posted,
but realized problem was eminently fixable, as follows.
As per forkosh.com/fm.html, with default max block size
and max noise bytes, biggest possible permutation is 67584 bits.
So I need to take an rng float (or double) 0.0-1.0, and use it to
choose between one of 67584 "bins". Since 1.0/67584.0 is well within
the "resolution"/precision of any conceivable float representation,
it's gotta be easily doable, one way or another.

The problem arises with current "formula" when the generated rn
gets near the "edge" of one of those "bins", and 32-bit apparently
truncates to lower-numbered bin, whereas 64-bit goes higher.

If you're using ran1() to generate 0.0 to 1.0, couldn't differences start
from there? In that in one instance, the ran1() might return, say,
0.249999... and in another, 0.250000... from the same sets of integer start
values.

Looking at ran1(), it only seems to use floats to convert a 0 to 2147483647
integer result, to a 0.0 to 1.0 one. Could you make use of an integer random
function which returns the range 0 to 2147483647 as it is?

(If you integer-divide 0...2147483647 by 31775, you do get 0...67584 (just
about), which is close to the 0...67583 you seem to need. I don't know how
to get rid of that one out-of-range result without skewing the results.)

(BTW, there seems to be something 'off' about using a 23-bit float value
(1.0/2147483647) to multiply a 31-bit random integer. But probably OK unless
you try and recreate the 31-bit value from the result....)
 

Ben Bacarisse

JohnF said:
Yeah, as per remarks in preceding followup explaining
"temporary fix/patch", I think the best permanent fix is probably what
you're saying (note: a preceding followup made the same "integer prng"
recommendation, but I'm not seeing who said it at the moment).

Well there are some very good ones around (search for KISS and George
Marsaglia) but you don't need a new one. Your PRNG is an integer one,
it just does a final divide to return a float rather than an int. If it
is a good PRNG, the final int can be used instead of the divided float.
Unfortunately, for cryptographic work, you should have very strong
guarantees about the way it behaves, but since this PRNG is designed for
numerical work, you have probably tacitly assumed it is good enough.

To get some more confidence, test the integer PRNG using any one of the
standard random test suites. It won't give you cryptographic levels of
confidence, but it will ensure that you can use all the bits with equal
confidence.

<snip>
 

Jorgen Grahn

The code's supposed to be portable, and not care as long as int>=32.
The one place it wanted strictly >32 I used long long (despite
obnoxious -Wall warnings about it).

-Wno-long-long is a decent cure for that. Or better, switch to C99
and tell the compiler you're doing it.

/Jorgen
 

glen herrmannsfeldt

Isn't C supposed to have a standard construct for rounding one way or the
other? I don't normally use floating point calculations if I don't
explicitly need to, but I think I remember reading in somefunc() man page
that you can request somefunc() to round this or that way.

I think C is a little stricter, but there is a claim that a valid
Fortran floating point system could return 42.0 (or 42.D0) for all
floating point expressions.

Floating point is great for quantities that have a relative
uncertainty, and you want a value that is close to the right one.

This calculation is better done in fixed point.

The RNG already computes an integer between 0 and 21747483647,
the easy way is to take that modulo one more than the largest
value you want. The other way, when you have 64 bit arithmetic
available, is to multiply by the maximum+1 and divide by
2147483648 (the divide can be done as a shift).

Both will reliably and independent of any floating point
arithmetic give consistent values.

-- glen
 

Eric Sosman

[...]
This calculation is better done in fixed point.
A-men.

The RNG already computes an integer between 0 and 21747483647,

Unlikely: The larger value requires >=35 bits, which isn't
usual these days. ;-) Also, isn't his RNG based on Park & Miller's
"Minimal Standard" generator? If so, the lower limit is not 0
but 1, and the upper is not 2147483647 but 2147483646.
the easy way is to take that modulo one more than the largest
value you want.

Many authors recommend against this, because the low-order
bits of maximum-period linear congruential generators have short
periods. But the "Minimal Standard" generator is not such a
generator: It's a pure congruential generator with prime modulus,
and its low-order bits are "as random as" the others.
The other way, when you have 64 bit arithmetic
available, is to multiply by the maximum+1 and divide by
2147483648 (the divide can be done as a shift).

Both will reliably and independent of any floating point
arithmetic give consistent values.

Yet another method is to use a rejection technique. If you
want a value in the range 0<=V<N and the generator produces values
in LO<=R<HI, you generate an R and compute V=(R-LO)/((HI-LO)/N).
If that's <N you're done; if not, throw it away, generate another R,
and keep trying. (In the worst case you'll reject a hair fewer
than half of the R's, so the average quantity of R's you need is
no worse than 1 + 1/2 + 1/4 + ... = 2.)
 
