mike3
Hi.
I once got the following response to a query of mine here:
http://groups.google.com/group/comp.lang.c++/msg/b80993cb0f735810?dmode=source
After trying that approach, with all these "Bignum" and "slice" objects wrapping
my primitives, I found the performance was dismal, and I need maximum performance.

What I'm wondering is: why not keep the passed length parameters and offload the
checks to whatever routine calls these primitives? They are just that: primitive
operations designed to be used to build the more sophisticated operations. Writing
a full-blown integer package (the approach I ended up adopting based on those
suggestions) and then never using it for anything except to build a floating-point
package has proven not only inefficient performance-wise, but bloated with
functionality I never use. All that abstraction seems to eat more time than the
underlying digit operations themselves!
So my idea is this:

1. Write primitives that take explicit length parameters (add, sub, copy, etc.
work on runs of the same length; mul and div on runs of differing lengths).
These primitives take iterators into a std::vector<> as operands and work
directly on that storage (see the sketch after this list).

2. Use those *primitives* to build the more complex BigInt and BigFloat classes,
which implement all the bounds-checking. This eliminates redundant bounds checks
in a "RawInt" class used to build the Float, for example, as well as integer-only
assumptions, the things that make the code messy, complicated, and slow.
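
To make this concrete, here is a rough sketch of the kind of primitive I have in
mind. The names (Digit, add_n) and the 32-bit digit type are just placeholders I
made up for illustration; the point is that the routine takes iterators plus a
length and does no checking of its own:

#include <vector>
#include <cstdint>
#include <cstddef>

typedef std::uint32_t Digit;
typedef std::vector<Digit>::iterator DigitIter;
typedef std::vector<Digit>::const_iterator ConstDigitIter;

// r = a + b over len digits; returns the carry out of the top digit.
// No bounds checking: the caller guarantees that [a, a+len),
// [b, b+len) and [r, r+len) are all valid ranges.
Digit add_n(ConstDigitIter a, ConstDigitIter b, DigitIter r,
            std::size_t len)
{
    Digit carry = 0;
    for (std::size_t i = 0; i < len; ++i) {
        std::uint64_t s = std::uint64_t(a[i]) + b[i] + carry;
        r[i]  = static_cast<Digit>(s);        // low 32 bits of the sum
        carry = static_cast<Digit>(s >> 32);  // carry into the next digit
    }
    return carry;
}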
What exactly is so bad about this approach?
With the methods presented there, we have something that creates several "slices"
of an object, does the bounds-checking there (never mind that the initial
selections made in the floating-point operation already imply bounds-checking by
their very nature), then feeds those slices to the bignum routines, does the
operations, destroys the slices, and then repeats all of this billions and
billions of times in the calculation loop. With very long operands (1024 bits or
more) the overhead does not really matter, but I do not expect to use operands
that long very often, since they are intrinsically time-consuming to operate on.
Shorter operands -- 128 bits or so -- are far more common, and there I notice a
big performance hit.
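
For comparison, here is roughly how I picture the higher-level class using such a
primitive: one length check up front, then raw iterators handed straight to the
routine, with nothing constructed or destroyed per call. Again, the class and
member names are just illustrative, and I've left out the exponent/sign handling:

#include <vector>
#include <cstdint>
#include <cstddef>
#include <stdexcept>

typedef std::uint32_t Digit;   // same digit type as in the sketch above

// The unchecked primitive from the sketch above (declaration only).
Digit add_n(std::vector<Digit>::const_iterator a,
            std::vector<Digit>::const_iterator b,
            std::vector<Digit>::iterator r,
            std::size_t len);

class BigFloat {
public:
    // Adds other's mantissa into this one, in place.  The length check
    // happens once here; the primitive itself checks nothing.
    void add_mantissas(const BigFloat& other)
    {
        if (digits_.size() != other.digits_.size())
            throw std::invalid_argument("mantissa lengths differ");

        // In-place add is safe: each digit is read before it is written.
        Digit carry = add_n(other.digits_.begin(), digits_.begin(),
                            digits_.begin(), digits_.size());
        // ... use carry to renormalize / adjust the exponent ...
        (void)carry;
    }

    // ... exponent, sign, rounding, etc. omitted ...

private:
    std::vector<Digit> digits_;   // mantissa digits, least significant first
};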
What is the best bignum-package design for maximum performance?