max / min / smallest float value on Python 2.5


duncan smith

Hello,
I'm trying to find a clean and reliable way of uncovering
information about 'extremal' values for floats on versions of Python
earlier than 2.6 (just 2.5 actually). I don't want to add a dependence
on 3rd party modules just for this purpose. e.g. For the smallest
positive float I'm using,


import platform
if platform.architecture()[0].startswith('64'):
    TINY = 2.2250738585072014e-308
else:
    TINY = 1.1754943508222875e-38


where I've extracted the values for TINY from numpy in IDLE,



I'm not 100% sure how reliable this will be across platforms. Any ideas
about the cleanest, most reliable way of uncovering this type of
information? (I can always invoke numpy, or use Python 2.6, on my home
machine and hardcode the retrieved values, but I need the code to run on
2.5 without 3rd-party dependencies.) Cheers.

Duncan
 

Benjamin Kaplan

Hello,
     I'm trying to find a clean and reliable way of uncovering information
about 'extremal' values for floats on versions of Python earlier than 2.6
(just 2.5 actually).  I don't want to add a dependence on 3rd party modules
just for this purpose.  e.g. For the smallest positive float I'm using,


import platform
if platform.architecture()[0].startswith('64'):
   TINY = 2.2250738585072014e-308
else:
   TINY = 1.1754943508222875e-38


where I've extracted the values for TINY from numpy in IDLE,



I'm not 100% sure how reliable this will be across platforms. Any ideas
about the cleanest, most reliable way of uncovering this type of
information? (I can always invoke numpy, or use Python 2.6, on my home
machine and hardcode the retrieved values, but I need the code to run on
2.5 without 3rd-party dependencies.) Cheers.

Duncan

float32 vs. float64 has nothing to do with a 32-bit vs. a 64-bit
platform. It's single-precision floating point (C float) vs.
double-precision floating point (C double). It's used in numpy because
numpy optimizes everything like crazy. Python always uses doubles.
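A quick way to confirm this on any build (a sketch: `struct.calcsize('d')` reports the size of the C double that backs a Python float, whatever the pointer size of the interpreter):

```python
import struct

# A CPython float wraps a C double: struct reports 8 bytes
# (64 bits) for format 'd' on both 32-bit and 64-bit builds.
print(struct.calcsize('d') * 8)  # 64
```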


 
S

Steven D'Aprano

Hello,
I'm trying to find a clean and reliable way of uncovering
information about 'extremal' values for floats on versions of Python
earlier than 2.6 (just 2.5 actually). I don't want to add a dependence
>>> x = 1.0
>>> while x:
...     smallest = x
...     x /= 2.0
...
>>> smallest
4.9406564584124654e-324

which is the smallest number that can be distinguished from zero on my
system.

If you're running on some weird platform with non-binary floats (perhaps
a Russian ternary mainframe, or an old supercomputer with decimal floats)
then you're on your own.

I calculated this using Python 2.5. In 2.6, I see this:
sys.floatinfo(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308,
min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15,
mant_dig=53, epsilon=2.2204460492503131e-16, radix=2, rounds=1)

So there's obviously a difference between how I calculate the smallest
number and what Python thinks. The reason for this is left as an exercise.
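A version-tolerant way to pick up the constant (a sketch, assuming IEEE 754 doubles: use `sys.float_info` where it exists, i.e. 2.6+, and fall back to the exact power of two on 2.5):

```python
import sys

try:
    TINY = sys.float_info.min   # Python 2.6+
except AttributeError:
    TINY = 2.0 ** -1022         # smallest positive normal double

print(repr(TINY))  # 2.2250738585072014e-308
```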
 

duncan smith

Christian said:
duncan said:
Hello,
I'm trying to find a clean and reliable way of uncovering
information about 'extremal' values for floats on versions of Python
earlier than 2.6 (just 2.5 actually). I don't want to add a dependence
on 3rd party modules just for this purpose. e.g. For the smallest
positive float I'm using,


import platform
if platform.architecture()[0].startswith('64'):
    TINY = 2.2250738585072014e-308
else:
    TINY = 1.1754943508222875e-38


where I've extracted the values for TINY from numpy in IDLE,

>>> float(numpy.finfo(numpy.float32).tiny)
1.1754943508222875e-38
>>> float(numpy.finfo(numpy.float64).tiny)
2.2250738585072014e-308

You are confusing a 32/64-bit build with 32/64-bit floats. Python's
float type is built upon C's double-precision float type on both 32-
and 64-bit builds. The single-precision 32-bit float type isn't used.
The DBL_MIN and DBL_MAX values are equal on all platforms that have
full IEEE 754 floating-point support. The radix may be different, though.

Christian
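Since Python floats are IEEE 754 binary64 on any platform you're likely to meet, the C limits can be computed exactly on 2.5 with no third-party modules (a sketch; the expressions are the textbook formulas for DBL_MIN and DBL_MAX):

```python
# Smallest positive normal double (C's DBL_MIN): 2**-1022.
DBL_MIN = 2.0 ** -1022

# Largest finite double (C's DBL_MAX): (2 - 2**-52) * 2**1023.
DBL_MAX = (2.0 - 2.0 ** -52) * 2.0 ** 1023

print(repr(DBL_MIN))  # 2.2250738585072014e-308
print(repr(DBL_MAX))  # 1.7976931348623157e+308
```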

OK, this is the sort of confusion I suspected. I wasn't thinking
straight. The precise issue is that I'm supplying a default value of
2.2250738585072014e-308 for a parameter (finishing temperature for a
simulated annealing algorithm) in an application. I develop on
Ubuntu64, but (I am told) it's too small a value when run on a Win32
server. I assume it's being interpreted as zero and raising an
exception. Thanks.

Duncan
 

Steven D'Aprano

The precise issue is that I'm supplying a default value of
2.2250738585072014e-308 for a parameter (finishing temperature for a
simulated annealing algorithm) in an application. I develop on
Ubuntu64, but (I am told) it's too small a value when run on a Win32
server. I assume it's being interpreted as zero and raising an
exception. Thanks.

I'm trying to think of what sort of experiment would be able to measure
temperatures accurate to less than 3e-308 Kelvin, and my brain boiled.

Surely 1e-100 would be close enough to zero as to make no practical
difference? Or even 1e-30? Whatever you're simulating surely isn't going
to require 300+ decimal points of accuracy.

I must admit I'm not really familiar with simulated annealing, so I could
be completely out of line, but my copy of "Numerical Recipes ..." by
Press et al has an example, and they take the temperature down to about
1e-6 before halting. Even a trillion times lower than that is 1e-18.
 

Mark Dickinson

import platform
if platform.architecture()[0].startswith('64'):
     TINY = 2.2250738585072014e-308
else:
     TINY = 1.1754943508222875e-38

As Christian said, whether you're using 32-bit or 64-bit shouldn't
make a difference here. Just use the first TINY value you give.
I'm not 100% sure how reliable this will be across platforms.  Any ideas
about the cleanest, reliable way of uncovering this type of information?

In practice, it's safe to assume that your 2.225....e-308 value is
reliable across platforms. That value is the one that's appropriate
for the IEEE 754 binary64 format, and it's difficult these days to
find CPython running on a machine that uses any other format for C
doubles (and hence for Python floats).

The smallest positive *normal* number representable in IEEE 754
binary64 is exactly 2**-1022 (or approximately
2.2250738585072014e-308). The smallest positive *subnormal* number
representable is exactly 2**-1074, or approximately
4.9406564584124654e-324. (Subnormals have fewer bits of precision
than normal numbers; it's the presence of subnormals that allows for
'gradual underflow'.) Some machines can/will treat subnormal numbers
specially for speed reasons, either flushing a subnormal result of a
floating-point operation to 0, or replacing subnormal inputs to a
floating-point operation with 0, or both. So for maximal portability,
and to avoid numerical problems, it's best to avoid the subnormal
region.
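Mark's boundary values can be checked directly (a sketch; it assumes the platform does not flush subnormals to zero):

```python
smallest_normal = 2.0 ** -1022       # below this, gradual underflow begins
smallest_subnormal = 2.0 ** -1074    # the last value before 0

print(smallest_subnormal > 0.0)       # True: still distinguishable from 0
print(smallest_subnormal / 2 == 0.0)  # True: one more halving underflows
```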
The precise issue is that I'm supplying a default value of
2.2250738585072014e-308 for a parameter (finishing temperature for a
simulated annealing algorithm) in an application. I develop on
Ubuntu64, but (I am told) it's too small a value when run on a Win32
server. I assume it's being interpreted as zero and raising an
exception.

This is a bit surprising. What's the precise form of the error you
get? Do you still get the same error if you replace your TINY value
by something fractionally larger? (E.g., 2.23e-308.)
 

Steve Holden

duncan said:
Christian said:
duncan said:
Hello,
I'm trying to find a clean and reliable way of uncovering
information about 'extremal' values for floats on versions of Python
earlier than 2.6 (just 2.5 actually). I don't want to add a
dependence on 3rd party modules just for this purpose. e.g. For the
smallest positive float I'm using,


import platform
if platform.architecture()[0].startswith('64'):
    TINY = 2.2250738585072014e-308
else:
    TINY = 1.1754943508222875e-38


where I've extracted the values for TINY from numpy in IDLE,


float(numpy.finfo(numpy.float32).tiny)
1.1754943508222875e-38
float(numpy.finfo(numpy.float64).tiny)
2.2250738585072014e-308

You are confusing a 32/64-bit build with 32/64-bit floats. Python's
float type is built upon C's double-precision float type on both 32-
and 64-bit builds. The single-precision 32-bit float type isn't used.
The DBL_MIN and DBL_MAX values are equal on all platforms that have
full IEEE 754 floating-point support. The radix may be different, though.

Christian

OK, this is the sort of confusion I suspected. I wasn't thinking
straight. The precise issue is that I'm supplying a default value of
2.2250738585072014e-308 for a parameter (finishing temperature for a
simulated annealing algorithm) in an application. I develop on
Ubuntu64, but (I am told) it's too small a value when run on a Win32
server. I assume it's being interpreted as zero and raising an
exception. Thanks.
Whether this is relevant or not I can't say, but you must be careful to
note that the smallest representable floating-point value (i.e. the
smallest number distinguishable from zero) is not the same as the
smallest difference between two numbers of a given magnitude.

Consider a decimal floating-point system with a two-digit exponent and a
four-digit mantissa, and for convenience ignore negative mantissas. The
range of representable non-zero values runs from 1E-99 to 9999E99. But
adding 1E-99 to (say) 1 will just give you 1 because the system has
insufficient precision to represent the true result.
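The same effect in binary64 terms (a sketch): the smallest representable double is about 5e-324, but the gap between doubles just above 1.0 is 2**-52, so adding the former to 1.0 is invisible:

```python
tiny = 2.0 ** -1074    # smallest positive double
eps = 2.0 ** -52       # spacing of doubles just above 1.0

print(tiny > 0.0)          # True: tiny is distinguishable from zero
print(1.0 + tiny == 1.0)   # True: but it vanishes next to 1.0
print(1.0 + eps > 1.0)     # True: eps is the smallest visible step at 1.0
```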

regards
Steve
 
 

duncan smith

Steven said:
I'm trying to think of what sort of experiment would be able to measure
temperatures accurate to less than 3e-308 Kelvin, and my brain boiled.

Surely 1e-100 would be close enough to zero as to make no practical
difference? Or even 1e-30? Whatever you're simulating surely isn't going
to require 300+ decimal points of accuracy.

I must admit I'm not really familiar with simulated annealing, so I could
be completely out of line, but my copy of "Numerical Recipes ..." by
Press et al has an example, and they take the temperature down to about
1e-6 before halting. Even a trillion times lower than that is 1e-18.

It depends on the optimisation problem, but I suppose the fitness
functions could be tweaked. I could paste the actual code if anyone's
interested, but the following pseudo-python gives the idea. For an
exponential cooling schedule the temperatures are generated as below.
The lower the final temperature the greater the number of iterations,
and the longer the algorithm spends searching locally for an optimal
solution (having already searched more widely for areas of high fitness
at higher temperatures). The probability of moving to a less fit
solution is given by exp(dF/temp) where dF is a (negative) change in
fitness and temp is the current temperature. So I could scale the
fitness function to cope with higher finishing temperatures.

I'm going to have to think about the point raised by Steve (Holden).

I also think I can probably improve on raising StopIteration if
exp(dF/temp) overflows by yielding False instead (although if it does
overflow it probably indicates a poor choice of cooling schedule for the
given problem). Stuff to think about. Cheers.

Duncan


import random
import math

def temps(start, final, mult):
    t = start
    while t > final:
        yield t
        t *= mult

def sim_anneal(permuter, start, final, mult):
    rand = random.random
    exp = math.exp
    for temp in temps(start, final, mult):
        dF = permuter.next()
        if dF >= 0:
            yield True
        else:
            try:
                yield rand() < exp(dF / temp)
            except OverflowError:
                raise StopIteration

class Permuter(object):
    def __init__(self, obj):
        self.obj = obj
        self.proposed = None

    def run(self, start, final, mult):
        for decision in sim_anneal(self, start, final, mult):
            if decision:
                # commit proposed change to self.obj
                pass

    def next(self):
        # propose a change to self.obj
        # calculate and return the change in fitness
        self.proposed = proposed
        return dF
 

Mark Dickinson

[...]
interested, but the following pseudo-python gives the idea.  For an [...]

    try:
        yield rand() < exp(dF / temp)

Practically speaking, the condition rand() < exp(dF / temp) is never
going to be satisfied if dF / temp < -40 (in fact, the output of
rand() is always an exact multiple of 2**-53, so the condition rand()
< exp(-40) is identical to the condition rand() == 0.0, which should
occur for one random sample out of every 9 thousand million million or
so).

So assuming that your fitness delta dF can't get smaller than 1e-16 or
so in absolute value (which seems reasonable, given that dF is
presumably the result of subtracting two numbers of 'normal'
magnitude), there would be little point having temp go much smaller
than, say, 1e-20.

IOW, I agree with Steven: 2.2e-308 seems extreme.
 
