How to program this in C++?

mike3

Hi.

I was making a bignum package in C++ for a program I've got, that will
generate images of fractals. Currently, it has two parts: a "raw"
unsigned integer type, and a big floating point type, the latter of
which is used to do all the fractal calculations. The purpose of a
bignum package is to allow for deep zooming (beyond the range of
hardware floating point). Although it's obviously going to be really
slow, it does allow for that deep stuff that I want.

But I'm wondering about how to set this up. The question I have right
now is should the big floating point type be a derived class (ie. use
inheritance) from the raw integer type, or should it just include an
object of type raw integer in it?
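A minimal sketch of the two layouts under discussion (all names hypothetical; the arithmetic itself is elided):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical "raw" arbitrary-precision unsigned integer.
class RawInt {
    std::vector<std::uint32_t> digits_;   // base-2^32 digits, least significant first
public:
    // ... digit-level arithmetic ...
};

// Option 1: inheritance -- the big float *is a* RawInt.
class BigFloatInherit : private RawInt {
    int  exponent_ = 0;
    bool negative_ = false;
};

// Option 2: composition -- the big float *has a* RawInt mantissa.
class BigFloatCompose {
    RawInt mantissa_;
    int    exponent_ = 0;
    bool   negative_ = false;
};
```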
 
Jim Langston

mike3 said:
Hi.

I was making a bignum package in C++ for a program I've got, that will
generate images of fractals. Currently, it has two parts: a "raw"
unsigned integer type, and a big floating point type, the latter of
which is used to do all the fractal calculations. The purpose of a
bignum package is to allow for deep zooming (beyond the range of
hardware floating point). Although it's obviously going to be really
slow, it does allow for that deep stuff that I want.

But I'm wondering about how to set this up. The question I have right
now is should the big floating point type be a derived class (ie. use
inheritance) from the raw integer type, or should it just include an
object of type raw integer in it?

There are already a number of big number libraries out there for this type
of thing. Rather than reinvent the wheel, why don't you try to find one you
like? I believe boost has one also.

Hmm... I went to the Boost site and don't see a big number library.

Googling for "bignum floating point c++" gave some good hits, one was for
GMP. http://gmplib.org/
I haven't used it but it's an arbitrary length big num library.
 
Kai-Uwe Bux

mike3 said:
Hi.

I was making a bignum package in C++ for a program I've got, that will
generate images of fractals. Currently, it has two parts: a "raw"
unsigned integer type, and a big floating point type, the latter of
which is used to do all the fractal calculations. The purpose of a
bignum package is to allow for deep zooming (beyond the range of
hardware floating point). Although it's obviously going to be really
slow, it does allow for that deep stuff that I want.

But I'm wondering about how to set this up. The question I have right
now is should the big floating point type be a derived class (ie. use
inheritance) from the raw integer type,

Probably not.
or should it just include an object of type raw integer in it?

I take it that you represent a float by an integer part and some additional
data that give you a fractional part. In that case, an integer data member
is the most natural and cleanest way to go.

With regard to inheritance, you have essentially two options:

a) public inheritance. That would expose the integer part of your float to
independent manipulation (essentially like making the integer data member
public, BadIdea(tm)). On top of that, you get into all sorts of trouble
with pointers to integer objects that now could point to floats, and more
importantly, your float type would match any function

integer some_function ( integer const & )

which you probably don't want.

b) private inheritance. That does not have the problems from (a), but in
this case it wouldn't buy you anything. It will just obscure the code.



BTW: why do you reinvent the wheel? There are decent bignum libraries out
there, and it will be very hard to beat them performance-wise.


Best

Kai-Uwe Bux
 
Pavel Shved

a) public inheritance. That would expose the integer part of your float to
independent manipulation (essentially like making the integer data member
public, BadIdea(tm)). On top of that, you get into all sorts of trouble
with pointers to integer objects that now could point to floats, and more
importantly, your float type would match any function

integer some_function ( integer const & )

which you probably don't want.

I think matching floats in integer functions is the thing one likely
does want. That's done not by inheritance (which won't make floats act
as integers in the way you expect), but rather by specifying a
user-defined conversion.
 
Kai-Uwe Bux

Pavel said:
I think matching floats in integer functions is the thing one likely
does want. That's done not by inheritance (which won't make floats act
as integers in the way you expect), but rather by specifying a
user-defined conversion.

I am not sure that I want floats to silently match integer functions. Have a
look at the return type. For a simple function like

integer sqr ( integer const & i ) {
    return ( i*i );
}

the result could be rather surprising when you feed in a float. Life will
get even more surprising if you also provide a conversion constructor
integer -> float. Then, you could do:

float a = ...
a = sqr( a );

and what you get is not exactly what you see.


Best

Kai-Uwe Bux
 
Kira Yamato

Hi.

I was making a bignum package in C++ for a program I've got, that will
generate images of fractals. Currently, it has two parts: a "raw"
unsigned integer type, and a big floating point type, the latter of
which is used to do all the fractal calculations. The purpose of a
bignum package is to allow for deep zooming (beyond the range of
hardware floating point).

For fractal computation, especially when you want to do deep zooming,
using an arbitrary precision floating type is likely not going to work.
The problem is that arbitrary precision does not mean infinite
precision. For example, a simple number like 1/3 has binary
representation .0101010101... repeating indefinitely. It is obvious
that even with an arbitrary precision floating type you cannot
represent 1/3 precisely. So, when you do deep zooming, you may still
accumulate too much rounding error.

One way to resolve this is to use exact computation. That's right.
You can really do exact computation with no round-off errors
what-so-ever. What is the catch? Well, you can only do exact
computation over algebraic numbers instead of all of the reals.
Therefore, if your fractal is generated by an iterated function which
happens to be an algebraic function with coefficients in the rationals,
then you are already working within the field of algebraic numbers. If
this is indeed your case, then you can do exact computation.

For a particular C++ implementation of exact computation of algebraic
numbers, I refer you to

http://cs.nyu.edu/exact/core_pages/intro.html

I think there are links there which lead to documentation for both the
API and the mathematical theory behind it.
 
Pavel Shved

I am not sure that I want floats to silently match integer functions. Have a
look at the return type. For a simple function like

integer sqr ( integer const & i ) {
    return ( i*i );
}

the result could be rather surprising when you feed in a float. Life will
get even more surprising if you also provide a conversion constructor
integer -> float. Then, you could do:

float a = ...
a = sqr( a );

and what you get is not exactly what you see.

I see the following: a float variable is passed to an integer function,
and I expect the same behaviour as with intrinsic C++ types. And I do
get it. Of course if you don't like this behaviour you may fix it,
but I prefer writing integrality-independent functions with
templates, like

template <typename T> T sqr (T const& _t)
{
    return _t*_t;
}

letting integers calculated with the aid of floating algorithms fit
into greatest-common-divisor functions or the like with no explicit
conversions.
 
Diego Martins

I see the following: a float variable is passed to an integer function,
and I expect the same behaviour as with intrinsic C++ types. And I do
get it. Of course if you don't like this behaviour you may fix it,
but I prefer writing integrality-independent functions with
templates, like

template <typename T> T sqr (T const& _t)
{
    return _t*_t;
}

letting integers calculated with the aid of floating algorithms fit
into greatest-common-divisor functions or the like with no explicit
conversions.

_t ?
 
mike3

Kira Yamato said:
For fractal computation, especially when you want to do deep zooming,
using an arbitrary precision floating type is likely not going to work.
The problem is that arbitrary precision does not mean infinite
precision. For example, a simple number like 1/3 has binary
representation .0101010101... repeating indefinitely. It is obvious
that even with an arbitrary precision floating type you cannot
represent 1/3 precisely. So, when you do deep zooming, you may still
accumulate too much rounding error.

One way to resolve this is to use exact computation. That's right.
You can really do exact computation with no round-off errors
what-so-ever. What is the catch? Well, you can only do exact
computation over algebraic numbers instead of all of the reals.
Therefore, if your fractal is generated by an iterated function which
happens to be an algebraic function with coefficients in the rationals,
then you are already working within the field of algebraic numbers. If
this is indeed your case, then you can do exact computation.

For a particular C++ implementation of exact computation of algebraic
numbers, I refer you to

http://cs.nyu.edu/exact/core_pages/intro.html

The function on some types I want to support consists
of iterated transcendental functions (sine, cosine,
etc.) as opposed to algebraic functions, and hence
approximations will have to be used there. Also,
I'm not sure if you do need exact arithmetic in the
first place. One can zoom down to around 10^11-10^12
on ordinary "double" floating point. However, even
with algebraic iterations like the Mandelbrot set,
I noticed this:

--
Here's a series of iterations of the point c = -1/3 + 1/3i
through the Mandelbrot map: z_n = z_(n-1)^2 + c, where
z_0 = 0.

z_0 = 0
z_1 = -1/3 + 1/3i
z_2 = -1/3 + 1/9i
z_3 = -19/81 + 7/27i
z_4 = -2267/6561 + 463/2187i
z_5 = -11138939/43046721 + 2683727/14348907i
z_6 = -558418949732987/1853020188851841 + 146103389403343/617673396283947i
....

Look at how huge the numbers in the fractions are getting
after only 6 cycles.

The number of digits in the numerators is:

1, 1, 1, 2, 4, 8, 15, 31 ...

which is clearly exponential growth, approximately
2^(n-2) for z_n where n >= 2. For n = 100,000,
a not-so-unreasonable number of iterations for
a *deep* zoom, we have tens of thousands of orders
of magnitude more digits than there are atoms
in the entire universe.
--

How do you circumvent that problem? And that's just to
determine whether or not a single point is in the
Mandelbrot set. The instant you make an approximation
somewhere is the instant you've moved out of exact
arithmetic anyway.
 
mike3

Probably not.


I take it that you represent a float by an integer part and some additional
data that give you a fractional part. In that case, an integer data member
is the most natural and cleanest way to go.

With regard to inheritance, you have essentially two options:

a) public inheritance. That would expose the integer part of your float to
independent manipulation (essentially like making the integer data member
public, BadIdea(tm)). On top of that, you get into all sorts of trouble
with pointers to integer objects that now could point to floats, and more
importantly, your float type would match any function

integer some_function ( integer const & )

which you probably don't want.

b) private inheritance. That does not have the problems from (a), but in
this case it wouldn't buy you anything. It will just obscure the code.

This makes more sense. Furthermore, in my case I
was talking about using (b). This is the discussion
that inspired the idea:

http://groups.google.com/group/comp.lang.c++/msg/a66278b77597d648

The part that is relevant here is this:

"> The Init functions are *only* and I say again, *ONLY*
called in CONSTRUCTORS, period. And *always* called from
them. They're just there to save a bit of cutting-and-
pasting. At no point are they called anywhere else.

By expressing them as constructors (of e.g. a private base class) you
express that in code, /ensuring/ that they're not called from anywhere
else, and so making it completely safe to remove initialization
checks. "

(The "Init" fns mentioned, by the way, were to
initialize the BigFloats, allocate memory for the
digits, etc. This discussion revolved around an
older implementation of big floating arithmetic
and part of the fractal program that I was having
a tough bug with.)

In response to this, I thought, "Hey! Let's make
a 'raw integer' type that will be inherited,
since when one says 'base class' that implies
'inherited', by the big floating point type,
and it'll be private. And we'll use std::vector
like he says to go and encapsulate the digit array!
This 'raw integer' thing will then alleviate the
need for those annoying 'Init' functions plus
handle the arithmetic on the digits to boot
as well.", insofar as implementing the big number
arithmetic part of the program goes.
BTW: why do you reinvent the wheel? There are decent bignum libraries out
there, and it will be very hard to beat them performance-wise.

Well, for one, I'd own all the copyrights this way.
For another, I do not need to *beat* them, either.
Even getting a significant fraction of their performance
would be acceptable.
 
Kai-Uwe Bux

Pavel said:
I see the following: float variable is passed to an integer function
and i expect the same behaviour as with intrinsic C++ types. And i do
get it.

A problem is of course that you would need to remember exactly which
functions are integer and which are not. When you see gcd() or sin(), it's
probably not a problem. But for something like sqr() it becomes more iffy;
and I don't even want to think about operators like a*b.

In fact, I don't think that mimicking the behavior of built-in numeric types
is that good an idea: (a) it's not feasible to copy their behavior exactly
because overload resolution and user-defined conversions behave differently
from promotion and arithmetic conversion rules; and (b) even for the
built-in types it causes all sorts of headaches (like when you mix signed
and unsigned types).


Of course if you don't like this behaviour you may fix it,
but I prefer writing integrality-independent functions with
templates, like

template <typename T> T sqr (T const& _t)
{
    return _t*_t;
}

I usually do the same.

letting integers calculated with the aid of floating algorithms fit
into greatest-common-divisor functions or the like with no explicit
conversions.

but I don't understand this part.


Best

Kai-Uwe Bux
 
mike3


Any comment?
 
Kai-Uwe Bux

mike3 said:
This makes more sense. Furthermore, in my case I
was talking about using (b). This is the discussion
that inspired the idea:

http://groups.google.com/group/comp.lang.c++/msg/a66278b77597d648

The part that is relevant here is this:

"> The Init functions are *only* and I say again, *ONLY*

By expressing them as constructors (of e.g. a private base class) you
express that in code, /ensuring/ that they're not called from anywhere
else, and so making it completely safe to remove initialization
checks. "

(The "Init" fns mentioned, by the way, were to
initialize the BigFloats, allocate memory for the
digits, etc. This discussion revolved around an
older implementation of big floating arithmetic
and part of the fractal program that I was having
a tough bug with.)

In response to this, I thought, "Hey! Let's make
a 'raw integer' type that will be inherited,
since when one says 'base class' that implies
'inherited', by the big floating point type,
and it'll be private. And we'll use std::vector
like he says to go and encapsulate the digit array!
This 'raw integer' thing will then alleviate the
need for those annoying 'Init' functions plus
handle the arithmetic on the digits to boot
as well.", insofar as implementing the big number
arithmetic part of the program goes.

To have a private base class for the purpose of factoring out code common to
various constructors is fine. However, that is an implementation detail and
by no means indicates that such a private base class for big_float should
be a big_int (even if in a first draft of the class all the code factored
out pertains to the integer part of the big_float). You lose flexibility
if you insist that this base class has a prescribed behavior.

Moreover, if those init functions deal with the integer part only, then
having a big_int member would take care of factoring them out, too: the
big_int member has a constructor and that constructor is called when the
member is initialized. The init-code for the integer part just goes there
and disappears from the big_float class.

Private inheritance buys you nothing in this regard. The only effect that
inheritance has compared to composition is the handling of member
functions: with composition you write

integer_part.add ( other.integer_part );

and with inheritance you write

add( other.integer_part );

or (in case of a name-clash, which is not too unlikely given that both
classes are numbers)

my_integer_base::add( other.integer_part );

Of course, as soon as operator overloading enters the picture, the
name-collision problem can become a real hassle.

Well, for one, I'd own all the copyrights this way.
True.

For another, I do not need to *beat* them, either.
Even getting a significant fraction of their performance
would be acceptable.

From experience, I predict that you will have a hard time coming close even
within an order of magnitude (if not two). However, it is a fun experience.
So by all means, go for it (if you have the time).


Best

Kai-Uwe Bux
 
mike3

Kai-Uwe Bux wrote:
To have a private base class for the purpose of factoring out code common to
various constructors is fine. However, that is an implementation detail and
by no means indicates that such a private base class for big_float should
be a big_int (even if in a first draft of the class all the code factored
out pertains to the integer part of the big_float). You lose flexibility
if you insist that this base class has a prescribed behavior.

Moreover, if those init functions deal with the integer part only, then
having a big_int member would take care of factoring them out, too: the
big_int member has a constructor and that constructor is called when the
member is initialized. The init-code for the integer part just goes there
and disappears from the big_float class.

Well, what would be a suitable "base" class then for the
big integer part, though? There's really nothing to that
except a length field plus the digits, the latter of which
is held in an object of type std::vector.

Scrapping the inheritance method and replacing it with
using a raw integer object inside the big floating point,
I'd then have this: "BigFloat" contains objects of types
"RawInt" and "SignExp", while "RawInt" contains an object
of type std::vector. The array allocation, etc. that would've
otherwise been handled by Init functions is then handled
by the functions in std::vector, and the assignment and
preparation of the raw integers is handled by their own
constructors. "SignExp"'s constructors (the sign/exponent
combination wrapped up into a little class all its own, so
that exponent overflow/underflow checking code is made
reusable) handle the preparation of the floating point
number's sign/exponent, with the floating point class's own
constructors finally finishing off the remaining work (for
example, if one requests a BigFloat preinitialized to a
specific value, the setting of that value would be achieved
by those constructors).

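That layout can be sketched roughly as follows (class names taken from the post; every member and signature here is an assumption):

```cpp
#include <cstdint>
#include <vector>

// "RawInt" and "SignExp" as named in the post; details are guesses.
class RawInt {
    std::vector<std::uint32_t> digits_;        // std::vector handles allocation
public:
    explicit RawInt(std::size_t len) : digits_(len, 0) {}
    std::size_t size() const { return digits_.size(); }
    // ... digit-level arithmetic ...
};

class SignExp {                                // sign/exponent in its own class,
    bool negative_ = false;                    // so overflow/underflow checks
    long exponent_ = 0;                        // are written once and reused
public:
    // ... exponent range checking ...
};

class BigFloat {
    RawInt  mantissa_;
    SignExp signexp_;
public:
    explicit BigFloat(std::size_t precision)   // members construct themselves:
        : mantissa_(precision) {}              // no separate Init() calls needed
    std::size_t precision() const { return mantissa_.size(); }
};
```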
Private inheritance buys you nothing in this regard. The only effect that
inheritance has compared to composition is the handling of member
functions: with composition you write

integer_part.add ( other.integer_part );

and with inheritance you write

add( other.integer_part );

or (in case of a name-clash, which is not too unlikely given that both
classes are numbers)

my_integer_base::add( other.integer_part );

Of course, as soon as operator overloading enters the picture, the
name-collision problem can become a real hassle.

Yes. Like right now, with the inheritance-based system,
I have two "add" routines inside a BigFloat object: one
"rawAdd" and the other just "Add", the former of which
is private and is used to operate on the digits of the
mantissas, and the latter of which is public and can
be used to do the floating-point arithmetic (there will
also be a special, but public, "FastAdd" routine that is
used for the calculation-intensive parts of the program
and that requires all inputs/outputs to be stored at equal
precision, which allows for some stops to be pulled out).

Which is important, as I have not yet decided on the
licensing model for the software if I actually release it.
The best way to avoid copyright trouble is to make all your
work original, so you own it all.
From experience, I predict that you will have a hard time coming close even
within an order of magnitude (if not two). However, it is a fun experience.
So by all means, go for it (if you have the time).

Well, I like doing it.
 
