jdog1016
Recently I coded something for a class that uses a LOT of doubles with
precision out to the thousandths place. I did most of the arithmetic by
multiplying each by a thousand and converting to an int, but not all of
it. In any case, coding it under FreeBSD, I didn't have any problems.
However, when I moved it to Linux and compiled, I got drastically
different numbers. I eventually fixed the problem by multiplying
everything by 1000 and storing the values as ints, then converting back
when printing to the screen, but I'm not sure even now if I fully
understand why this happened. Is this a compiler thing? I was using gcc
3.3.3 on FreeBSD and 3.3.2 on Linux... Or is this an OS thing?
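
For reference, here's a minimal sketch of the workaround I described
above (the variable names and the +0.5 rounding are just illustrative,
not my exact code):

    #include <stdio.h>

    int main(void)
    {
        /* Values with three decimal places of precision. */
        double a = 1.337, b = 2.005;

        /* Scale by 1000 and round to int so the arithmetic
           happens in exact integer math. */
        int ai = (int)(a * 1000.0 + 0.5);   /* 1337 */
        int bi = (int)(b * 1000.0 + 0.5);   /* 2005 */
        int sum = ai + bi;                  /* 3342 */

        /* Convert back to a double only when printing. */
        printf("%.3f\n", sum / 1000.0);     /* prints 3.342 */
        return 0;
    }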