Increase the value 100 significantly (e.g. to INT_MAX),
and you'll likely (but not necessarily) see a difference.
With only 100 iterations, the error has not accumulated
enough to be apparent in your code's output.
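For example, a quick sketch along those lines (my own
illustration, not your code): sum 0.1 a million times and
print the difference from the mathematically expected total:

#include <iostream>

int main()
{
    // Add 0.1 one million times.  Mathematically the sum
    // should be exactly 100000, but each addition rounds,
    // and the accumulated difference is clearly nonzero.
    double sum = 0.0;
    for (long n = 0; n < 1000000; ++n)
        sum += 0.1;

    std::cout << "sum - 100000 = " << sum - 100000.0 << '\n';
    return 0;
}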
Another way to show the problem (even with a smaller
number of iterations) is to increase the output precision,
e.g.:
cout << fixed << setprecision(15);
I.e. the call to operator<< outputs a value rounded to
the stream's precision, not necessarily the exact value
of 'i'. E.g. the following program:
#include <ios>
#include <iomanip>
#include <iostream>

int main()
{
    std::cout << std::fixed
              << std::setprecision(15);

    for (double i = 0; i < 10; i += 0.1)
        std::cout << i << '\n';

    return 0;
}
... when built with VC++ and executed on an Intel
Pentium, gives the output of:
0.000000000000000
0.100000000000000
0.200000000000000
0.300000000000000
0.400000000000000
0.500000000000000
0.600000000000000
0.700000000000000
0.800000000000000
0.900000000000000
1.000000000000000
1.100000000000000
1.200000000000000
1.300000000000000
1.400000000000000
1.500000000000000
1.600000000000000
1.700000000000000
1.800000000000001
1.900000000000001
2.000000000000000
2.100000000000001
2.200000000000001
2.300000000000001
2.400000000000001
2.500000000000001
2.600000000000001
2.700000000000001
2.800000000000001
2.900000000000001
3.000000000000001
3.100000000000001
3.200000000000002
3.300000000000002
3.400000000000002
3.500000000000002
3.600000000000002
3.700000000000002
3.800000000000002
3.900000000000002
4.000000000000002
4.100000000000001
4.200000000000001
4.300000000000001
4.400000000000000
4.500000000000000
4.600000000000000
4.699999999999999
4.799999999999999
4.899999999999999
4.999999999999998
5.099999999999998
5.199999999999998
5.299999999999997
5.399999999999997
5.499999999999996
5.599999999999996
5.699999999999996
5.799999999999995
5.899999999999995
5.999999999999995
6.099999999999994
6.199999999999994
6.299999999999994
6.399999999999993
6.499999999999993
6.599999999999993
6.699999999999992
6.799999999999992
6.899999999999992
6.999999999999991
7.099999999999991
7.199999999999990
7.299999999999990
7.399999999999990
7.499999999999989
7.599999999999989
7.699999999999989
7.799999999999988
7.899999999999988
7.999999999999988
8.099999999999987
8.199999999999987
8.299999999999987
8.399999999999986
8.499999999999986
8.599999999999985
8.699999999999985
8.799999999999985
8.899999999999984
8.999999999999984
9.099999999999984
9.199999999999983
9.299999999999983
9.399999999999983
9.499999999999982
9.599999999999982
9.699999999999982
9.799999999999981
9.899999999999981
9.999999999999981
If you increment 'i' by 0.1 enough times, the error
will eventually work its way into the more significant
digits and show up even with cout's default precision.
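As a rough sketch (again, my own illustration), you can
watch the drift grow by comparing the running sum against
n * 0.1, which rounds only once, at a few checkpoints:

#include <iostream>

int main()
{
    double i = 0.0;
    for (long n = 1; n <= 10000000; ++n)
    {
        i += 0.1;                 // accumulates rounding error
        if (n % 1000000 == 0)     // report every millionth step
            std::cout << "after " << n << " steps, drift = "
                      << i - n * 0.1 << '\n';
    }
    return 0;
}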
Again, don't depend upon the result from 'cout' to
tell you the exact value of a floating point object.
An implementation could even have more than 15
significant digits (check your <limits> header for
your implementation's limits).
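For instance, a minimal query of the implementation's
limits (digits10 is classic; max_digits10 needs C++11):

#include <iostream>
#include <limits>

int main()
{
    // Decimal digits a double is guaranteed to preserve
    // (typically 15 for IEEE 754 double precision):
    std::cout << "digits10     = "
              << std::numeric_limits<double>::digits10 << '\n';

    // Digits needed to print a double so it reads back
    // to the identical value (typically 17, C++11):
    std::cout << "max_digits10 = "
              << std::numeric_limits<double>::max_digits10 << '\n';
    return 0;
}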
It's simply not possible to represent every decimal
floating point value exactly in binary. The behavior
of output functions cannot change this fact.
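For example (again, my own illustration), printing the
literal 0.1 with plenty of digits exposes the nearest
representable double that is actually stored:

#include <iomanip>
#include <iostream>

int main()
{
    // 0.1 has no exact binary representation; the stored
    // value is the nearest representable double.
    std::cout << std::setprecision(25) << 0.1 << '\n';
    // Typical IEEE 754 output: 0.1000000000000000055511151
    return 0;
}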