Bill said:
> Subject: Re: Who uses clapack?
>
> hi Jentje,
>
> Bill wrote
>
> class and then some more complicated classes like a vector of vectors and
>
>
> Jentje wrote
> This is not a very common operation which I have only
>
>
>
> I believe that operator overloading is one of the primary advantages of
> C++. The ability to write simple equations like A = B + C*D makes the
> underlying mathematics stand out and the code easy to debug and remember.
Hi Bill,
I am somewhat sceptical and don't think it is such a great
advantage, although it is nice if you can use it.
The problem is that C++ does not allow you to define new
operators so it is best suited for algebras which closely
resemble the algebra for real numbers such as the one for
complex numbers.
However, the Matrix/Vector/Scalar algebra over the real and
complex number fields is not so trivial: it includes operations
such as transpose, trace, dyadic product, vector outer
product, minors, compound matrices, block matrices, even the
3D cross product, which may be more complicated than C++ allows.
Aside from the number field there are many shapes of matrices:
rectangular ones, banded, bidiagonal, tridiagonal, triangular
and so on.
I use C with plain function calls, and write a comment above
each call which shows in Matrix Notation what the call does,
which is of course not quite as nice.
C++ is a grandiose language but I find C adequate for matrix
linear algebra. Maybe, if it were not for the name mangling
which makes linking with Assembler more difficult, I might
have used C++.
> I haven't had many problems with the shorthand notation. It is however
> challenging to do in complex variables.
>
> Bill wrote
> ie solution of equations, decompositions, SVD eigenvectors etc. etc.
>
>
> Jentje wrote
> You must be kidding, by the time I am done with Schur, SVD
>
>
>
> I am not kidding !!!
> Remember I am writing templated code so that one function description
> can in most cases handle float, double, complex<float>, and
> complex<double>...
> it's all the same function....this reduces code length by a factor of 4.
Yes, I had understood that, but don't you have to write
separate implementations of those template functions for
complex numbers for example. OK, I understand now, the
complex float and complex double are the same code and the
real float and double are another code, understood.
> Also remember that I am using the standard template library which handles
> the pointers and data types. As an example
>
> template <class T> inline
> SMatrix<T>& SMatrix<T>::operator +=(const SMatrix<T> &B){
>   SYS_ASSERT(( (rows() == B.rows()) & (cols() == B.cols()) ),
>     "error += matrices not equal size \n");
>   std::transform(begin(), end(), B.begin(), begin(), std::plus<T>());
>   return *this;
> }
Good heavens!
> here in 8 lines we have the code to do an update add += for all data types.
I still maintain that this is a relatively rare operation
except for block matrices and of course for vectors where
operator-=() would be natural for projection.
> It also
> checks for proper matrix size. The C code for this would be twice as long.
> I also reuse the code in SVD's and eigenvectors to save space.
> Bill
>
> Jentje
>
>
>
> Remember that my lib file contains instances of code for all data types.
> For a particular problem only a small fraction of this is used and appears
> in the object file of a particular app.
>
> Jentje
>
> Have you tested those functions by constructing pathological
> matrices from their eigenvalues (for symmetric matrices)
> or from their singular values (in the case of the SVD) and
> likewise for Linear Equations and evaluated the accuracy?
>
> Bill
>
> I have tested all functions up to size 500 x 500 using
> matrices filled with random numbers. I haven't done anything
> with pathological matrices yet.
In addition to those "random" matrices, which are not really
that random, you should test with matrices having clusters of
equal or extremely closely spaced Eigen/Singular values, or
with logarithmically distributed ones.
You construct those from the Eigenvalues/Singular Values
by multiplying with very accurately constructed orthogonal
matrices.
It is amazing how tests like this can break a program which
has tested correctly using a billion random inputs.
Lapack passes these brutal tests, by the way.
> I hope that this encourages others to consider using C++ for
> their linear algebra problems
Maybe, but most likely your code is slow unless you use
BLAS-type functions.
There is a thriving cottage industry of people who have written
C++ templates for Linear Algebra code, mostly for existing code.
Maybe your code is better, but to gain acceptance you will have
to supply a suite of tests, one for each factorization or linear
problem, where you circular-test your functions and also test
them against Lapack, both in terms of speed and accuracy.
I have test generators such as these for the Symmetric Eigenvalue
problem and the SVD where I run Lapack versus my code and display
the error in ULP and the speed. For example, for the SVD I show the
maximum error in ULP for: UT.U-I, VT.V-I, UT.A.V-L and L*-L.
> Regards...Bill Shortall
regards,
Jentje Goslinga