lcw1964
Greetings, groups! I am a rank novice in both C programming and
numerical analysis, so I ask for your indulgence in advance. Also, this
question is directed specifically at those familiar with Numerical
Recipes in C (not C++), in practice or at least in theory.
I have taken an interest in the least-squares SVD alternative to
the Remes algorithm offered in section 5.13 of NR, 2nd ed. (see
http://www.library.cornell.edu/nr/bookcpdf/c5-13.pdf for reference).
I own the NR code files (so, yes, I am legal!) and, by paying scrupulous
attention to the various file dependencies, I have been able to set up
and compile a project in Borland C++ 3.1 (yes, an old compiler, but the
code is about as old) that is supposed to demonstrate the ratlsq
routine. Those who own the same file set will know about the xratlsq.c
file, which is not in the book. Since the ratlsq routine uses double
rather than float versions of the key SVD procedures, I have had to go
in and scrupulously change the headers and the relevant variable
declarations from float to double.
Things compile fine, but at run time, when I feed the example program
values for a small test problem, I get a runtime error straight out of
the nrutil file: "allocation error in matrix()" or, at times,
"allocation error in vector()", or something similar.
I know this means the routine is having trouble allocating memory for
the various vectors and matrices it requires, but on a new, fast
Intel-type machine there should be heap memory to spare.
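For reference, here is roughly what the failing allocator does, as I
understand my copy of nrutil.c (a simplified sketch from memory, not
the verbatim NR source). The detail that worries me is that the second
malloc() requests the entire matrix as one contiguous block:

    #include <stdlib.h>

    #define NR_END 1

    void nrerror(char error_text[]);   /* NR's error handler, in nrutil.c */

    /* Sketch of NR's dmatrix(): allocate a double matrix with
       subscript range m[nrl..nrh][ncl..nch]. */
    double **dmatrix(long nrl, long nrh, long ncl, long nch)
    {
        long i, nrow = nrh - nrl + 1, ncol = nch - ncl + 1;
        double **m;

        /* allocate the array of row pointers */
        m = (double **) malloc((size_t)((nrow + NR_END) * sizeof(double *)));
        if (!m) nrerror("allocation failure 1 in matrix()");
        m += NR_END;
        m -= nrl;

        /* allocate ALL the rows as ONE contiguous block of doubles,
           i.e. nrow*ncol*sizeof(double) bytes in a single malloc() */
        m[nrl] = (double *) malloc((size_t)((nrow * ncol + NR_END) * sizeof(double)));
        if (!m[nrl]) nrerror("allocation failure 2 in matrix()");
        m[nrl] += NR_END;
        m[nrl] -= ncl;

        /* point the remaining row pointers into that block */
        for (i = nrl + 1; i <= nrh; i++) m[i] = m[i-1] + ncol;

        return m;
    }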
I have compiled and run other sample programs that allocate memory for
vectors and matrices through the NR "wrappers" provided in the nrutil
files, and they work fine. Mind you, they are a little less complicated
(the SVD routines are among the lengthiest and most complex in NR), and
there was no post hoc fiddling with the float-to-double issue. I am
just wondering if I am gobbling up memory by trying to allocate space
for arrays of doubles, which take twice the space of floats, but for
the problem sizes in the examples I am testing, this should not be too
taxing for any modern computer.
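To put a number on that worry, here is a back-of-the-envelope check
(the 100-by-100 size is hypothetical; I have not worked out the exact
dimensions the routine allocates internally). What I keep coming back
to is that on a 16-bit DOS compiler a single malloc()'d block cannot
span a 64K segment, no matter how much memory the machine has:

    #include <stdio.h>

    int main(void)
    {
        long nrow = 100, ncol = 100;   /* hypothetical matrix size */
        long bytes = nrow * ncol * (long) sizeof(double);   /* 8-byte doubles */

        /* 80,000 bytes for one matrix -- already past the 65,536-byte
           limit of a single 16-bit segment */
        printf("%ld x %ld doubles = %ld bytes (one segment = 65536 bytes)\n",
               nrow, ncol, bytes);
        return 0;
    }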
Is this related to the memory model under which I compile? I have tried
the options ranging from tiny to huge, but to no avail!
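In case it helps with diagnosis, here is the little probe I have been
meaning to run, using what I believe are Borland-specific heap
functions from alloc.h (please correct me if the 3.1 runtime names
them differently):

    #include <stdio.h>
    #include <alloc.h>   /* Borland-specific: farcoreleft() */

    int main(void)
    {
        /* farcoreleft() should report the unused space left on the
           far heap under the current memory model */
        printf("far heap free: %lu bytes\n", (unsigned long) farcoreleft());
        return 0;
    }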
This is a highly specialized question, I know, and I have probably
made little sense to any but those directly familiar with the routines
that perplex me. But if you are familiar with dynamic memory allocation
issues in NR or, better yet, specifically interested in the very code
in question, maybe you can help demystify this for me.
Many thanks in advance,
Les