How to Solve a Root...

Mark

always wondered how a computer might go about doing this...
i'm taking calculus at university... and we went over newton's method.
i found this interesting...

------------------------------
#include <cstdlib>
#include <iostream>
#include <cmath>

using namespace std;

double f1(double x);
double f2(double x, double y);

int main(int argc, char *argv[])
{
    double x = 5;
    cout << f1(x) << endl;
    cout << sqrt(x) << endl;

    system("PAUSE");
    return EXIT_SUCCESS;
}

double f1(double x)
{
    return f2(x, x/2);
}

double f2(double x, double y)
{
    double z = y - (y*y - x)/(2*x);
    if( y == z ) return z;
    else return f2(x, z);
}
 
int2str

Mark said:
always wondered how a computer might go about doing this...
i'm taking calculus at university... and we went over newton's method.
i found this interesting...
[snip]
if( y == z ) return z;

I thought it was bad to directly compare floats?

I love recursion as much as the next guy, but in this mathematical case
I see huge potential for a stack overflow. Maybe this should be rolled
into a loop instead. Might actually look cleaner.

Cheers,
Andre
 
gottlobfrege

Mark said:
always wondered how a computer might go about doing this...
i'm taking calculus at university... and we went over newton's method.
will work for cube roots and others if you just tweak the formula.

You might want to look into the Runge-Kutta method. Quite robust. I
coded it up once many years ago...
 
Mark

int2str said:
I thought it was bad to directly compare floats?

i have no idea... never heard that before.

int2str said:
I love recursion as much as the next guy, but in this mathematical case
I see huge potential for a stack overflow. Maybe this should be rolled
into a loop instead. Might actually look cleaner.

well, when i've worked these out on paper... they only took about 6 or
8 "loops" to get it accurate to about 8 decimal places... it really
shouldn't stack that much at all. *i think*

but yes, maybe a for loop would be more efficient... but... oh well.

gottlobfrege said:
You might want to look into the Runge-Kutta method. Quite robust. I
coded it up once many years ago...

Runge-Kutta, eh? perhaps i shall look into it. you wouldn't happen to
know what method cmath uses, would you?

i suppose i could open the library and check myself... if i can
understand it.
 
Mark P

Mark said:
i have no idea... never heard that before.

"Bad" may be too strong, but it should be done with care. A computer
will compare 2.0 and 1.999999 as unequal; sometimes this is undesirable.
Often rather than comparing for equality it makes more sense to see if
the absolute value of the difference is less than some small number (say
0.0001).

Runge-Kutta is a numerical method for solving differential equations.
Quite different from root finding.

-Mark
 
Julián Albo

Mark said:
well. when i've worked these out on paper... they only took about 6 or
8 "loops" to get it accurate to about 8 decimal places.... it really
shouldn't stack that much at all. *i think*

Seems not fast enough. Some years ago, talking about fixed-point
calculations for writing graphics demos, I recommended trying the Hero
method (a variant of Newton's method specific to square roots), and the
result was that just 4 iterations were enough for 3-D calculations.

But it would be better to talk about these things in a group about
graphics programming.
 
Michiel.Salters

Mark said:
Runge-Kutta is a numerical method for solving differential equations.
Quite different from root finding.

Not really. Often these forms can be converted into each other,
certainly for the trivial forms used here. And sqrt(x) = y implies
y*y - x = 0, which is truly trivial. Runge-Kutta takes three points,
IIRC, which means it's a single iteration for second-degree functions
like that. Newton uses two points, which means it's only a single
iteration for first-degree functions (lines).

HTH,
Michiel.
 
int2str

Mark said:
well. when i've worked these out on paper... they only took about 6 or
8 "loops" to get it accurate to about 8 decimal places.... it really
shouldn't stack that much at all. *i think*

I've measured it with a non-trivial number like '12345.6789' and it
took 3669 iterations. That's quite a bit of stack pressure.

Mark said:
but yes, maybe a for loop would be more efficient...but... oh well.

Here's my entry :D (based on your formula):

double my_sqrt( const double & x )
{
    double y = 0;
    double z = 1;

    while ( y != z )
    {
        z = y;
        y = y - (y * y - x) / (2 * x);
    }

    return y;
}

Cheers,
Andre
 
mlimber

Neil said:
This algorithm will not be helped much by a better equality
operation.
[snip]

Sure, but it still does need a better "equality" operation.

Cheers! --M
 
Greg

Mark said:
always wondered how a computer might go about doing this...
i'm taking calculus at university... and we went over newton's method.
i found this interesting...
[snip]
if( y == z ) return z;

I thought it was bad to directly compare floats?

It would depend on why the floats are being compared. Testing two
different floating point values for equality is a bad idea since the
values are compared exactly even though the values may not be
represented exactly. So two values that would be considered equal by
the programmer may not always compare as equal in the program.

In this case, however, the comparison is fine. One floating value is
being compared against itself, or more precisely, the result of one
iteration is being compared against the result of the previous
iteration. Since the difference between the values produced at each
iteration becomes smaller and smaller, at some point the difference
will have become so minute that the floating point variable can no
longer detect it.

At that point, the two floating point values will compare equal (since
the new value appears unchanged from the previous one). Since there is
no point in any further iterations when that happens, the function
stops iterating and returns with its result.

Greg
 
Kai-Uwe Bux

Greg said:
Mark said:
[snip]
if( y == z ) return z;

I thought it was bad to directly compare floats?

It would depend on why the floats are being compared. Testing two
different floating point values for equality is a bad idea since the
values are compared exactly even though the values may not be
represented exactly. So two values that would be considered equal by
the programmer may not always compare as equal in the program.

In this case, however, the comparison is fine. One floating value is
being compared against itself, or more precisely, the result of one
iteration is being compared against the result of the previous
iteration. Since the difference between the values produced at each
iteration becomes smaller and smaller, at some point the difference
will have become so minute that the floating point variable can no
longer detect it.

This reasoning is based on calculus and does not apply to floating-point
numbers of limited precision.
At that point, the two floating point values will compare equal (since
the new value appears unchanged from the previous one). Since there is
no point in any further iterations when that happens, the function
stops iterating and returns with its result.

Nope:

#include <iostream>
#include <iomanip>
#include <limits>
#include <cmath>

double my_sqrt( const double & x )
{
    double y = 0;
    double z = 1;

    while ( y != z )
    {
        z = y;
        y = y - (y * y - x) / (2 * x);
        std::cout << std::setprecision(17)
                  << y << " "
                  << z << '\n';
    }

    return y;
}

double my_sqrt_b ( const double & x )
{
    double root = 0;
    double quot = x;

    while ( std::abs( root - quot ) >
            std::numeric_limits<double>::epsilon() )
    {
        root = ( root + quot ) / 2;
        quot = x / root;
        std::cout << root << " " << quot << '\n';
    }

    return root;
}

int main ( void )
{
    std::cout << my_sqrt( 0.5 ) << '\n';
}

prints on my machine:

0.5 0
0.75 0.5
0.6875 0.75
0.71484375 0.6875
0.7038421630859375 0.71484375
0.70844837254844606 0.7038421630859375
0.7065492759819042 0.70844837254844606
0.7073373965913512 0.7065492759819042
0.70701120397472073 0.7073373965913512
0.70714636142893661 0.70701120397472073
0.70709038494675236 0.70714636142893661
0.70711357246260598 0.70709038494675236
0.70710396810177689 0.70711357246260598
0.70710794639649821 0.70710396810177689
0.70710629853942519 0.70710794639649821
0.70710698110529846 0.70710629853942519
0.70710669837744955 0.70710698110529846
0.70710681548719212 0.70710669837744955
0.70710676697875419 0.70710681548719212
0.70710678707160801 0.70710676697875419
0.70710677874887562 0.70710678707160801
0.70710678219626433 0.70710677874887562
0.70710678076830913 0.70710678219626433
0.70710678135978755 0.70710678076830913
0.7071067811147892 0.70710678135978755
0.7071067812162708 0.7071067811147892
0.70710678117423575 0.7071067812162708
0.70710678119164727 0.70710678117423575
0.70710678118443515 0.70710678119164727
0.70710678118742254 0.70710678118443515
0.70710678118618508 0.70710678118742254
0.70710678118669767 0.70710678118618508
0.70710678118648529 0.70710678118669767
0.70710678118657333 0.70710678118648529
0.7071067811865368 0.70710678118657333
0.70710678118655201 0.7071067811865368
0.70710678118654569 0.70710678118655201
0.70710678118654824 0.70710678118654569
0.70710678118654724 0.70710678118654824
0.70710678118654768 0.70710678118654724
0.70710678118654746 0.70710678118654768
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757
0.70710678118654757 0.70710678118654746
0.70710678118654746 0.70710678118654757

As you can see, you run the risk that y and z enter a cycle. At this point,
their difference does not get any smaller.


I know just enough about numerical analysis to stay away from these
problems. Dealing with floats and doubles is *hard* and best left to
experts who can cope with a total breakdown of calculus-based intuition
and reasoning.


Best

Kai-Uwe Bux
 
