Who uses clapack?

Victor Eijkhout

The authors of lapack/scalapack are starting to work on a new release of
these packages. One of the things we want to address is the silliness
that C users, instead of linking to the binaries of the Fortran version,
use an automatically translated C version.

Therefore we'd like to know precisely what the reasons for this are.

Naively one would say, compile the Fortran version, append an underscore
(when appropriate) to routine names, and just link those libraries to
your object files.

What are the gotchas, and what are the real stumbling blocks here?
Please spell it out in as much detail as you can muster. This situation
really needs to improve.

V.
 
Tino

Victor said:
The authors of lapack/scalapack are starting to work on a new release of
these packages. One of the things we want to address is the silliness
that C users, instead of linking to the binaries of the Fortran version,
use an automatically translated C version.

Therefore we'd like to know precisely what the reasons for this are.

Naively one would say, compile the Fortran version, append an underscore
(when appropriate) to routine names, and just link those libraries to
your object files.

What are the gotchas, and what are the real stumbling blocks here?
Please spell it out in as much detail as you can muster. This situation
really needs to improve.

V.

As a person who used to use CLAPACK until I found it was just as easy
to link to the FORTRAN libs, I would dare say there are those who use
CLAPACK because they don't realize there are other alternatives. Also,
CLAPACK comes with the header file needed to use either the FORTRAN or
CLAPACK libraries, whereas if you download the pre-built libraries from
http://www.netlib.org/lapack/archives/, the header files are not
present. This is a small issue, but could be a problem for people who
are less familiar with the library structure or the semantics for
calling a FORTRAN library from C/C++. Just my thoughts...

On a slightly related note, is there any effort (or even thought) being
put into the development of a "standardized" C++ object-oriented LAPACK
library? Something to wrap around the BLAS and LAPACK, perhaps?
Best,
Ryan
 
Victor Eijkhout

Tino said:
On a slightly related note, is there any effort (or even thought) being
put into the development of a "standardized" C++ object-oriented LAPACK
library? Something to wrap around the BLAS and LAPACK, perhaps?

The thought has occurred to many people. I mean, passing LDA into a C++
routine is the programming equivalent of turning a crank handle on the
space shuttle. Now, what to do about it....

I've started looking at Sidl/Babel to get a more native-looking
interface in languages other than F77. Not sure to what extent that will
satisfy you.

V.
 
Dan Pop

Victor said:
The authors of lapack/scalapack are starting to work on a new release of
these packages. One of the things we want to address is the silliness
that C users, instead of linking to the binaries of the Fortran version,
use an automatically translated C version.

Therefore we'd like to know precisely what the reasons for this are.

Naively one would say, compile the Fortran version, append an underscore
(when appropriate) to routine names, and just link those libraries to
your object files.

What are the gotchas, and what are the real stumbling blocks here?
Please spell it out in as much detail as you can muster. This situation
really needs to improve.

The issue that springs to mind is that Fortran and C store matrices
differently: Fortran does it column-wise (the first index varies fastest),
C does it row-wise (the last index varies fastest).

The fact that Fortran array indexing is 1-based (by default) and C array
indexing is 0-based shouldn't cause any problems when using the compiled
Fortran code from C.

Dan
 
Victor Eijkhout

Dan Pop said:
The issue that springs to mind is that Fortran and C store matrices
differently: Fortran does it column-wise (the first index varies fastest),
C does it row-wise (the last index varies fastest).

The fact that Fortran array indexing is 1-based (by default) and C array
indexing is 0-based shouldn't cause any problems when using the compiled
Fortran code from C.

These sorts of issues make me wonder why clapack is considered so great:
it does not provide arrays in the C-natural order, and in some places
the 1-based indexing is exposed, such as in the pivoting of the
factorization.

When we write a more C-native interface we'll try to cover these issues.
It will mean some minimal bit of glue code that will hardly impact
performance.

Thanks for your response.

V.
 
Kamaraju Kusumanchi

Victor said:
The authors of lapack/scalapack are starting to work on a new release of
these packages. One of the things we want to address is the silliness
that C users, instead of linking to the binaries of the Fortran version,
use an automatically translated C version.

Therefore we'd like to know precisely what the reasons for this are.

Naively one would say, compile the Fortran version, append an underscore
(when appropriate) to routine names, and just link those libraries to
your object files.

What are the gotchas, and what are the real stumbling blocks here?
Please spell it out in as much detail as you can muster. This situation
really needs to improve.

V.

The following are just my opinions based on my experience.

One reason is that clapack's installation instructions are crystal
clear: they are very easy to follow and give step-by-step instructions
on what to do.

Lapack, on the other hand, gave me a lot of trouble while compiling.
Compiling with the Intel compiler gave some IEEE errors (I can't recall them
exactly, but if you are interested, they can be found in the c.l.f
archives). The lapack/lapack95 instructions are cryptic and can
probably be understood only by experts. But now I have the Intel MKL
library available for free, so I don't have to compile them myself.

raju
 
glen herrmannsfeldt

Dan Pop wrote:

(snip regarding lapack and clapack)
The issue that springs to mind is that Fortran and C store matrices
differently: Fortran does it column-wise (the first index varies fastest),
C does it row-wise (the last index varies fastest).
The fact that Fortran array indexing is 1-based (by default) and C array
indexing is 0-based shouldn't cause any problems when using the compiled
Fortran code from C.

Some people who write numerical algorithms in Fortran exchange
the subscripts from the mathematical description, and others don't.

With cache memory and virtual memory one should always be sure that
the loops correspond to the memory layout in the appropriate language.

-- glen
 
Bill Shortall

Victor Eijkhout said:
The thought has occurred to many people. I mean, passing LDA into a C++
routine is the programming equivalent of turning a crank handle on the
space shuttle. Now, what to do about it....

I've started looking at Sidl/Babel to get a more native looking
interface in other languages than F77. Not sure to what extent that will
satisfy you.

V.

Hi Victor,

For the last few years I have been working on a
set of C++ classes to do linear algebra. I was trying to
achieve most of the functionality of Lapack/Blas but
in a very user-friendly fashion. The classes center
around a general-purpose vector class, along with a dense rectangular Matrix
class, and then some more complicated classes like a vector of vectors and
a vector of matrices.
All of them use operator overloading, so you can write
A = B + C * D where A, B, C, D are matrices, and all of the operators
work for both real and complex matrices. All of the basic linear algebra
functions are supported,
i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
The entire library is less than 5000 lines of code
and compiles into a static library of ~3 megabytes.
You don't have to compile it, but you will need either a
Microsoft VC6 or Linux GCC compiler to use it.
I have a demo version available, and if anyone in the group wants to play
with it and has a compatible compiler, send me a line. Once you've used it
you will never go back to LAPACK.
(e-mail address removed)
regards...Bill
 
Victor Eijkhout

Bill Shortall said:
For the last few years I have been working on a
set of C++ classes to do linear algebra. I was trying to
achieve most of the functionality of Lapack/Blas but
in a very user-friendly fashion. The classes center
around a general-purpose vector class along with a dense rectangular Matrix
class and then some more complicated classes like a vector of vectors and
vector of matrices.

Bill,

please mail me a pointer to your software.

However, without wanting to be disrespectful of your software, I wonder
if this is such a wise approach. Lapack is a high quality code base that
supports several data formats: dense, symmetric, banded, tridiagonal.
There is use for that, so before you can state "Once you've used it
you will never go back to LAPACK",
you have to duplicate a lot of that functionality.

Then, Lapack has (especially in the eigenvalue part) state-of-the-art
algorithms. The people coming up with the best algorithms are
implementing them straight in Lapack. What eigenvalue algorithms do you
use? What about condition estimation, iterative refinement, et cetera?

V.
 
Michael Hosea

I can't say whether my experience is rare or not, but there has not always
been a FORTRAN compiler available for the target platforms that I was
working on, namely the embedded ones. The C versions based on F2C weren't
very inviting, either. I can run F2C myself, and I have. I've even
hand-translated LAPACK code to efficient C. Even when a FORTRAN compiler is
available, it can be an expensive option when its only purpose is to compile
a FORTRAN library or two on a very occasional basis, and we might be talking
about more than one FORTRAN compiler if the C source were cross-compiled on
multiple platforms. In some cases it makes perfect economic sense to build
and use a FORTRAN library, and in others it doesn't seem as appealing.
Unfortunately for me, I've usually found myself in positions where it seemed
quite unappealing for one reason or another. On one system there was even a
custom C compiler that used a different floating point model. To my
knowledge, no compatible FORTRAN compiler ever even existed for that
platform!

What I have often wished for was a C implementation that represented the
same quality and attention to detail as the FORTRAN version.

I have no idea what you could do about any of that as long as FORTRAN is the
highest priority language to support, the "mother language", if you will.
Obviously FORTRAN is still heavily used in some niches. I'm not questioning
this, nor passing any kind of value judgement on it, but I would definitely
look very hard at the question of what the priority *should* be, basing this
on what you are trying to accomplish with a new version of LAPACK. You
might get one answer if, for example, you were primarily interested in
supercomputing or were only planning a relatively minor incremental update,
and you might reach a different conclusion if you wanted use of the library
to be easy and convenient to incorporate for the widest possible audience.
 
Bill Shortall

Victor Eijkhout said:
Bill,

please mail me a pointer to your software.

However, without wanting to be disrespectful of your software, I wonder
if this is such a wise approach. Lapack is a high quality code base that
supports several data formats: dense, symmetric, banded, tridiagonal.
There is use for that, so before you can state


you have to duplicate a lot of that functionality.

Then, Lapack has (especially in the eigenvalue part) state of the art
algorithms. The people coming up with the best algorithms are
implementing them straight in Lapack. What eigenvalue algorithms do you
use? Condition estimation, iterative refinement, et cetera.

V.

Hi Victor,
I need an e-mail address to send you the algebra package;
it's not on the internet.
Do you want the Microsoft VC6 or the Linux GCC 2.7
version? The package is about 750 kilobytes. I wonder
how big Lapack/Blas is?
Sure, the Lapack people might have some more sophisticated algorithms, but
only a few problems need them. The advantages of working in C++ and having
all its tremendous libraries available to you are overwhelming.
The eigenvalue methods I use are the old standbys:
reduction to either tridiagonal or bidiagonal form, then QR iteration. For
real general matrices I use reduction to Hessenberg form, then QR iteration for the
eigenvalues and inverse iteration for the eigenvectors.
 
Victor Eijkhout

Very interesting response. Thanks.

I'm not sure that what we're doing right now is going to help you any.
However, Lapack has traditionally had a very clean, rigidly enforced
coding style, so I imagine that automatic translation will
always remain a possibility.

V.
 
Allin Cottrell

Dan said:
The issue that springs to mind is that Fortran and C store matrices
differently: Fortran does it column-wise (the first index varies fastest),
C does it row-wise (the last index varies fastest).

I don't think that is the issue, since the clapack translation (as I
understand it) is not an idiomatic translation into C (actually, that
would be much too confusing, if you think about it). It's a perfectly
literal translation, which preserves Fortran-style array indexing by
brutal mangling of C notation.

Allin Cottrell
Wake Forest University
 
Jentje Goslinga

Bill said:
Hi Victor,

For the last few years I have been working on a
set of C++ classes to do linear algebra. I was trying to
achieve most of the functionality of Lapack/Blas but
in a very user-friendly fashion. The classes center
around a general-purpose vector class along with a dense rectangular Matrix
class and then some more complicated classes like a vector of vectors and
vector of matrices.
All of them use operator overloading so you can write
A = B + C * D where A,B,C, D are matrices and all of the operators
work for both real and complex matrices.

This is not a very common operation; I have only
encountered it when working with block matrices.
C++ is a powerful language, but the algebra of
matrices, vectors, and scalars is sufficiently complex
that using C++ shorthand notation is not always possible.
Have you looked at OON, the Object Oriented Numerics
Group?

All of the basic linear algebra
functions are supported,
i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
The entire library is less than 5000 lines of code

You must be kidding: by the time I am done with Schur, SVD,
and the block reflector, I will have 25-30k lines of C and a few
thousand lines of Intel assembler. And that is without linear
equations or a complex data type,
and it includes testing modules for everything.
and compiles into a static library of ~3 megabytes

Isn't that a bit large? My executable, which contains all of
the said factorizations and comprehensive testing software,
weighs in at 76 kilobytes. You generate 40 times the object
code with one fifth the source, a factor of 200!
You don't have to compile it but you will need either a
Microsoft VC6 or Linux GCC compiler to use it.
I have demo version available and if anyone in the group wants to play
with it and has a compatible compiler send me a line. Once you've used it
you will never go back to LAPACK
(e-mail address removed)
regards...Bill

Have you tested those functions by constructing pathological
matrices from their eigenvalues (for symmetric matrices)
or from their singular values (in the case of the SVD),
and likewise for linear equations, and evaluated the accuracy?


Jentje Goslinga
 
Bill Shortall

hi Jentje,

Bill wrote

class and then some more complicated classes like a vector of vectors and
Jentje wrote
This is not a very common operation which I have only
encountered when working with block matrices.
C++ is a powerful language but the simple algebra of
Matrices, vectors and Scalars is sufficiently complex
that using C++ shorthand notation is not always possible.

I believe that operator overloading is one of the primary advantages of
C++. The ability to write simple equations like A = B + C*D makes the
underlying mathematics stand out and the code easy to debug and remember.
I haven't had many problems with the shorthand notation. It is, however,
challenging to do in complex variables.

Bill wrote
i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
Jentje wrote
You must be kidding, by the time I am done with Schur, SVD
and Block Reflector, I will have 25-30k lines of C and a few

I am not kidding!!!
Remember, I am writing templated code, so that one function definition
can in most cases handle float, double, complex<float>, and complex<double>;
it's all the same function. This reduces code length by a factor of 4.
Also remember that I am using the standard template library, which handles
the pointers and data types. As an example:

template <class T> inline
SMatrix<T>& SMatrix<T>::operator+=(const SMatrix<T> &B) {
    SYS_ASSERT(((rows() == B.rows()) && (cols() == B.cols())),
               "error: += matrices not of equal size\n");
    std::transform(begin(), end(), B.begin(), begin(), std::plus<T>());
    return *this;
}

Here, in eight lines, we have the code to do an update-add (+=) for all
data types. It also
checks for proper matrix size. The C code for this would be twice as long.
I also reuse the code in SVDs and eigenvectors to save space.
Bill
> and compiles into a static library of ~3 megabytes
Jentje
Isn't that a bit large? My executable which contains all of
the said factorizations and comprehensive testing software

Remember that my lib file contains instances of the code for all data types.
For a particular problem, only a small fraction of this is used and appears
in the object file of a particular app.

Jentje

Have you tested those functions by constructing pathological
matrices from their eigenvalues (for symmetric matrices)
or from their singular values (in the case of the SVD) and
likewise for Linear Equations and evaluated the accuracy?

Bill

I have tested all functions up to size 500 x 500 using
matrices filled with random numbers. I haven't done anything
with pathological matrices yet.

I hope that this encourages others to consider using C++ for
their linear algebra problems.

Regards...Bill Shortall
 
Jentje Goslinga

Bill said:
> Subject: Re: Who uses clapack?
>
> hi Jentje,
>
> Bill wrote
>
> class and then some more complicated classes like a vector of vectors and
>
>
> Jentje wrote
> This is not a very common operation which I have only
>
>
>
> I believe that operator overloading is one of the primary advantages of
> C++ The ability to write simple equations like A = B + C*D makes the
> underlying mathematics stand out and the code easy to debug and remember.

Hi Bill,

I am somewhat sceptical and don't think it is such a great
advantage, although it is nice if you can use it.
The problem is that C++ does not allow you to define new
operators, so it is best suited for algebras which closely
resemble the algebra of real numbers, such as the one for
complex numbers.
However, the matrix/vector/scalar algebra over the real and
complex number fields is not so trivial: it includes operations
such as transpose, trace, dyadic product, vector outer
product, minors, compound matrices, block matrices, even the
3D cross product, which may be more than C++ notation allows.
Aside from the number field, there are many shapes of matrices:
rectangular ones, banded, bidiagonal, tridiagonal, triangular,
and so on.
I use C with function calls, and above each function call I write
a comment which shows in matrix notation what the call
does -- which is not quite as nice, of course.
C++ is a grandiose language, but I find C adequate for matrix
linear algebra. Maybe, if it were not for the name mangling,
which makes linking with assembler more difficult, I might
have used C++.
> I haven't had many problems with the shorthand notation. It is however
> challenging to do in complex variables.
>
> Bill wrote
> i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
>
>
> Jentje wrote
> You must be kidding, by the time I am done with Schur, SVD
>
>
>
> I am not kidding !!!
> Remember I am writing templated code so that one function description
> can in most cases handle float, double, complex<float>, and complex<double>;
> its all the same function....this reduces code length by a factor of 4.

Yes, I had understood that, but don't you have to write
separate implementations of those template functions for
complex numbers, for example? OK, I understand now: the
complex float and complex double are the same code, and the
real float and double are another code. Understood.

> Also remember that I am using the standard template library which handles
> the pointers and data types. As an example
>
> template <class T> inline
> SMatrix<T>& SMatrix<T>::operator+=(const SMatrix<T> &B){
>     SYS_ASSERT(((rows() == B.rows()) && (cols() == B.cols())),
>                "error += matrices not equal size \n");
>     std::transform(begin(), end(), B.begin(), begin(), std::plus<T>());
>     return *this;
> }

Good heavens!
> here in 8 lines we have the code to do an update add += for all data types.

I still maintain that this is a relatively rare operation,
except for block matrices and of course for vectors, where
operator-=() would be natural for projection.
> It also
> checks for proper matrix size. The C code for this would be twice as long.
> I also reuse the code in SVD's and eigenvectors to save space.
> Bill
>
> Jentje
>
>
>
> Remember that my lib file contains instances of code for all data types
> For a particular problem only a small fraction of this is used and appears
> in the object file of a particular app.
>
> Jentje
>
> Have you tested those functions by constructing pathological
> matrices from their eigenvalues (for symmetric matrices)
> or from their singular values (in the case of the SVD) and
> likewise for Linear Equations and evaluated the accuracy?
>
> Bill
>
> I have tested all functions up to size 500 x 500 using
> matrices filled with random numbers. I haven't done anything
> with pathological matrices yet.

In addition to those "random" matrices, which are not really
that random, you should test with matrices with clusters of
equal or extremely closely spaced eigen/singular values,
or logarithmically distributed ones.
You construct those from the eigenvalues/singular values
by multiplying with very accurately constructed orthogonal
matrices.
It is amazing how tests like this can break a program which
has tested correctly using a billion random inputs.
Lapack passes these brutal tests, by the way.
> I hope that this encourages others to consider using C++ for
> their linear algebra problems

Maybe, but most likely your code is slow unless you use BLAS-type
functions.
There is a thriving cottage industry of people who have written
C++ templates for linear algebra code, mostly for existing code.

Maybe your code is better, but to gain acceptance you will have
to supply a suite of tests, one for each factorization or linear
problem, where you circular-test your functions and also test
them against Lapack, both in terms of speed and accuracy.
I have test generators such as these for the symmetric eigenvalue
problem and the SVD, where I run Lapack versus my code and display
the error in ULP and the speed. For example, for the SVD I show the
maximum error in ULP for: UT.U-I, VT.V-I, UT.A.V-L and L*-L.
> Regards...Bill Shortall

regards,

Jentje Goslinga
 
James Van Buskirk

Yes, I had understood that, but don't you have to write
separate implementations of those template functions for
complex numbers for example. OK, I understand now, the
complex float and complex double are the same code and the
real float and double are another code, understood.

Now I don't think I get it. The Fortran version needs two
separate codes to handle the 6 possibilities: a real code
for single, double, and quad precision real and a complex
code for single, double, and quad precision complex. But
the previous poster seems to be saying that in the C++
version, a single code suffices for the four cases of
(single vs. double precision) and (real vs. complex).
There are at least some level 1 blas subprograms where
there are two complex versions and only one real version,
e.g. cdotc/zdotc/ydotc & cdotu/zdotu/ydotu for complex
data but only sdot/ddot/qdot for real data. It seems to
me that you aren't quite going to get a 4X code reduction
factor for several of the blas subprograms.
 
kia

Victor said:
The authors of lapack/scalapack are starting to work on a new release of
these packages. One of the things we want to address is the silliness
that C users, instead of linking to the binaries of the Fortran version,
use an automatically translated C version.

Your concerns seem oddly out of step, now that clc's own gatekeeper
has declared them *off topic*. I agree: they should concentrate on doing
the things C is good for -- numerical computing is not one of them.
Consequently, don't dilute your efforts to accommodate a technically
illiterate crowd that could neither appreciate nor effectively use what
you have to offer.
 
Tino

Bill said:
Hi Victor,

For the last few years I have been working on a
set of C++ classes to do linear algebra. I was trying to
achieve most of the functionality of Lapack/Blas but
in a very user-friendly fashion. The classes center
around a general-purpose vector class along with a dense rectangular Matrix
class and then some more complicated classes like a vector of vectors and
vector of matrices.
All of them use operator overloading so you can write
A = B + C * D where A,B,C, D are matrices and all of the operators
work for both real and complex matrices. All of the basic linear algebra
functions are supported,
i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
The entire library is less than 5000 lines of code
and compiles into a static library of ~3 megabytes.
You don't have to compile it but you will need either a
Microsoft VC6 or Linux GCC compiler to use it.
I have demo version available and if anyone in the group wants to play
with it and has a compatible compiler send me a line. Once you've used it
you will never go back to LAPACK
(e-mail address removed)
regards...Bill

I have also developed a C++ library which acts as nothing more than a
wrapper for the BLAS/LAPACK or Intel MKL libraries. In my case, I
claim only to support a subset of the BLAS/LAPACK functionality (though
it is 100% of the functionality that I use). I also support all of the
different matrix formats (dense, banded, symmetric), though only in
double precision (no complex matrices). The reason I was interested in
a more "official" effort is that I believe that LAPACK's popularity
and utility lie in its generality, and I would welcome a sort of
standardization of the C++ interface, as there seem to be dozens if not
hundreds of them floating around.

Ryan
 
Bill Shortall

Tino said:
I have also developed a C++ library which acts as nothing more than a
wrapper for the BLAS/LAPACK or Intel MKL libraries. In my case, I
claim only to support a subset of the BLAS/LAPACK functionality (though
it is 100% of the functionality that I use). I also support all of the
different matrix formats (dense, banded, symmetric), though only in
double precision (no complex matrices). The reason I was interested in
a more "official" effort is that I believe that LAPACK's popularity
and utility lie in its generality, and I would welcome a sort of
standardization of the C++ interface, as there seem to be dozens if not
hundreds of them floating around.

Ryan

Yes...the writers of LAPACK should write a C++
version. After all, they are the experts. It might even
be fun for them !
Bill ............. (e-mail address removed)
 
