Patricia said:
The objective in setting epsilon is to look for a value with the
following two properties:
1. If the infinite-precision answer would be expected, then the rounding
error in the calculation is no greater than epsilon. Ideally, this is true
under worst-case assumptions. In many cases we have to be content with
very high probability. Unless the calculation is exact, any value
satisfying this condition will be big enough to ensure that expected +/-
epsilon is not equal to expected.
2. For purposes of the application, a value within epsilon of expected
might just as well be expected.
Patricia
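To make Patricia's first property concrete, here is a quick illustration of
my own (not her code): if epsilon is too small for the magnitude of expected,
the addition rounds away entirely, so expected + epsilon == expected and the
test collapses into an exact-equality test. For example:

    float expected = 500000.0f;
    float tinyEps  = 1e-5f;                               // far below the float spacing at 500000
    System.out.println( Math.ulp(expected) );             // prints 0.03125, the float spacing near 500000
    System.out.println( expected + tinyEps == expected ); // prints true: tinyEps is rounded away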
I thought I'd post the results of comparing the two tests:
=================== Code ===================
// Comparison of FP equality testing methods
import static java.lang.System.out;
public class Epsilon {
    public static void main( String[] args ) {
        float x = 500000.0f, expected = 500000.05f, epsilon = 0.05f;
        out.printf( "x = %14.7f, expected = %14.7f, epsilon = %1.7f%n",
                    x, expected, epsilon );
        // Test1: range check (Patricia's), evaluated in float arithmetic
        out.println( "Test1: x >= expected "
                     + "- epsilon && x <= expected + epsilon" );
        out.print( " " +
                   (x >= expected - epsilon && x <= expected + epsilon) );
        out.printf( " (expected - epsilon = %1.7f)%n",
                    (expected - epsilon) );
        // Test2: absolute difference compared against epsilon (mine)
        out.println( "Test2: Math.abs(x - expected) <= epsilon" );
        out.println( " " + (Math.abs(x - expected) <= epsilon) );
        out.printf( " (Math.abs(x - expected) = %1.7f)%n",
                    Math.abs(x - expected) );
    }
}
================== Output ==================
C:\Temp>java Epsilon
x = 500000.0000000, expected = 500000.0625000, epsilon = 0.0500000
Test1: x >= expected - epsilon && x <= expected + epsilon
 true (expected - epsilon = 500000.0000000)
Test2: Math.abs(x - expected) <= epsilon
 false
 (Math.abs(x - expected) = 0.0625000)
=============== End =======================
As can be seen, Patricia's test returns true whenever x equals expected
to the limit of the precision of those values. That can happen even when
the true difference is greater than epsilon, as long as both x and
expected are >> (much greater than) epsilon, because at that magnitude
expected - epsilon can round back down to a value no greater than x (here
it rounds to exactly 500000.0). My test, by contrast, fails whenever the
round-off error is greater than epsilon.
I think this means test2 (my test) checks whether the FP round-off
error is significant, whereas test1 (Patricia's test) checks whether
two FP numbers can be considered equal, to the limit of their
precision or to within epsilon, whichever is greater.
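The output makes sense once you look at what the floats actually hold. Here
is a quick check of my own (not part of the program above): 500000.05f cannot
be represented exactly and is stored as 500000.0625f, and subtracting epsilon
from it rounds back down to 500000.0f, which is exactly x, so Test1's range
check passes.

    float x = 500000.0f, expected = 500000.05f, epsilon = 0.05f;
    System.out.println( expected );            // 500000.06  (stored as 500000.0625f)
    System.out.println( Math.ulp(expected) );  // 0.03125    (float spacing at this magnitude)
    System.out.println( expected - epsilon );  // 500000.0   (so x >= expected - epsilon is true)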
I think that for most applications test1 is what is wanted. (That is,
who cares what the error is as long as it is "insignificant"?) When
using type float, you only get about 6-7 significant decimal digits, so
in a number of the form "x000000y" the digit y carries no significance
no matter where you put the decimal point. So, if two floats agree in
their 6 leftmost significant digits, they should be considered equal
regardless of epsilon. Patricia's test has this property; mine doesn't.
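One way to state that directly in code is a relative comparison (a generic
technique, not something from Patricia's post): scale the tolerance by the
magnitude of the values, so that "equal" means "agree in the leading
significant digits". A minimal sketch:

    // Relative comparison: true when a and b differ by less than relTol
    // times their magnitude. (Zeros and NaN would need extra handling.)
    static boolean nearlyEqual( float a, float b, float relTol ) {
        float diff  = Math.abs( a - b );
        float scale = Math.max( Math.abs(a), Math.abs(b) );
        return diff <= relTol * scale;
    }

With relTol = 1e-6f, nearlyEqual( 500000.0f, 500000.05f, 1e-6f ) is true,
because at that magnitude the tolerance works out to about 0.5, while
nearlyEqual( 1.0f, 1.1f, 1e-6f ) is false.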
The only case where an epsilon smaller than the precision of the values
matters is when errors can accumulate, say in code such as the loop below:
float total = 0.0f;
for ( many iterations ) {
    float val = some_expression_with_round_off_error;
    total += val;
}
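A runnable version of that sketch, with assumed values of my own: 0.1f is not
exactly representable, and each addition rounds again, so after a million
iterations the total drifts visibly away from the exact 100000.

    float total = 0.0f;
    for ( int i = 0; i < 1000000; i++ ) {
        total += 0.1f;               // each add rounds; the errors accumulate
    }
    System.out.println( total );     // prints a value noticeably off from 100000.0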
(Patricia, did I get it right this time? I certainly appreciate
your thoughtful and insightful posts in c.l.j.p.) I dimly remember
I once took a numerical methods course that dealt with this issue;
I wish I could remember the details.
-Wayne