Vectorized computation in C++ like that in Matlab (Matlab to C++)?


Luna Moon

Dear all,

Can C++/STL/Boost do vectorized calculations like those in Matlab?

For example, in the following code, what I really want to do is to
send in a vector of u's.

All other parameters such as t, l1, l2, l3, etc. are scalars...

But u is a vector.

Thus, t6 becomes a vector.

t9 is an element-wise multiplication...

The following code was actually converted from Matlab.

If vectorized computation is not facilitated, then I have to call this
function millions of times.

But if vectorized computation is okay, then I can just send in a u
vector with a batch of elements at a time.

I have a lot of such Matlab code that needs to be converted into C++
with vectorization.

Any thoughts?

Thank you!

double t5, t6, t7, t9, t11, t13, t16, t20, t23, t27, t32, t34, t36, t37,
    t38, t42, t44, t47, t48, t51, t52, t54, t59, t60, t61, t66, t67, t69,
    t74, t75, t76, t81, t82, t84, t87, t105, t106, t110, t112;

t5 = exp(-t * l1 - t * l2 - t * l3);
t6 = t * u;
t7 = mu1 * mu1;
t9 = u * u;
t11 = kappa * kappa;
t13 = 0.1e1 / (t9 * t7 + t11);
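[Editorially, the batched idea can be sketched in plain C++. The function name compute_t13 and the restriction to the t13 term are illustrative; t, l1-l3, mu1 and kappa are assumed to be scalar parameters, as in the fragment above.]

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical wrapper: applies the scalar computation above to every
// element of u. Only t13 is computed here, to keep the sketch short.
std::vector<double> compute_t13(const std::vector<double>& u,
                                double t, double l1, double l2, double l3,
                                double mu1, double kappa)
{
    const double t5  = std::exp(-t * l1 - t * l2 - t * l3); // scalar, hoisted
    const double t7  = mu1 * mu1;
    const double t11 = kappa * kappa;
    (void)t5; // t5 feeds later terms that are omitted from this sketch

    std::vector<double> t13(u.size());
    for (std::size_t i = 0; i < u.size(); ++i) {
        const double t9 = u[i] * u[i];   // element-wise, like Matlab's u .* u
        t13[i] = 1.0 / (t9 * t7 + t11);
    }
    return t13;
}
```

Calling this once with a batch of u values replaces millions of scalar calls, and an optimizing compiler will typically vectorize the inner loop on its own.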
 

Leandro Melo

Luna said:
Can C++/STL/Boost do the vectorized calculation as those in Matlab?
[rest of the original post and code quoted in full; snipped here]

Hi.

I think Matlab provides a C++ API. Have you checked it out? There's
also the Matrix Template Library for general algebra computations. You
might find it useful.
 

Luna Moon

I don't think Matlab's C++ API can do that. I think it is just a C
interface. It does not have STL, Boost, etc.

Also, we are not talking about anything as complicated as high-speed
matrix computation; it's just vectorized computation...
 

Bart van Ingen Schenau

Luna said:
Dear all,

Can C++/STL/Boost do the vectorized calculation as those in Matlab?

I don't know what Boost has in the field of matrix & vector
computations, but standard C++ does not have anything even remotely
resembling the capabilities of Matlab.

The closest you can get with standard C++ is to use std::valarray<>,
which was intended to facilitate computations that can potentially be
executed in parallel.
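[A tiny sketch of that, mirroring the original poster's t13 expression with valarray's element-wise operators; the function name is illustrative.]

```cpp
#include <valarray>

// std::valarray overloads arithmetic operators element-wise, so whole-array
// expressions read much like the Matlab originals (u .* u, etc.).
std::valarray<double> t13_valarray(const std::valarray<double>& u,
                                   double mu1, double kappa)
{
    std::valarray<double> t9 = u * u;                // element-wise square
    return 1.0 / (t9 * (mu1 * mu1) + kappa * kappa); // element-wise divide
}
```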

Bart v Ingen Schenau
 

Lionel B

Dear all,

Can C++/STL/Boost do the vectorized calculation as those in Matlab?

What exactly do you mean by "vectorized calculation as those in Matlab"?
Do you just mean that Matlab has a native vector type and does
calculations with it, or were you suggesting that Matlab processes
vectors in some special way that C++ cannot?

Matlab, AFAIK, does a lot of its matrix/vector arithmetic, such as dot
products and matrix-matrix or matrix-vector multiplication, using a BLAS
library - that is highly optimised linear algebra code (generally written
in Fortran) - which is accessible via C++, since there is a well-defined
interface for C++ (C, really) and Fortran. There is a good chance you
will already have a BLAS library on your system; if not, there are open
source (e.g., the ATLAS project) as well as vendor-supplied versions
(e.g. Intel, AMD, etc supply BLAS libraries).

It is possible that Matlab will also make use of very machine-specific
optimisations such as SSE/MMX for floating point computation. You can use
these too from C++ if you can persuade your compiler to play ball.

The bottom line is that there's nothing Matlab can do that you can't do
in C++, equally (if not more) efficiently. It's more a question of
convenience: Matlab is designed specifically for vector/matrix
manipulation - C++ is a general-purpose programming language.
For example, in the following code, what I really want to do is to send
in a vector of u's.

All other parameters such as t, l1, l2, l3, etc. are scalars...

But u is a vector.

Thus, t6 becomes a vector.

t9 is an element-wise multiplication...

The following code was actually converted from Matlab.

If vectorized computation is not facilitated, then I have to call this
function millions of times.

But if vectorized computation is okay, then I can just send in a u
vector with a batch of elements at a time.

I'm really not quite sure what you mean here.

The closest thing in C++ to a Matlab vector is probably the
std::valarray<double> class, although it seems a bit of a bodge and hence
rather unpopular. The std::vector<double> class will probably do you
quite well; it doesn't implement functionality such as element-wise
multiplication, so you will have to do that yourself - but that's pretty
simple.
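[For instance, a minimal element-wise multiply over std::vector; the function name is made up for the sketch.]

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Element-wise product of two equally sized vectors: Matlab's a .* b.
std::vector<double> elementwise_mul(const std::vector<double>& a,
                                    const std::vector<double>& b)
{
    std::vector<double> out(a.size());
    std::transform(a.begin(), a.end(), b.begin(), out.begin(),
                   std::multiplies<double>());
    return out;
}
```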

There are also various matrix/vector C++ libraries knocking around (e.g.
Blitz++) that you might want to look at.

In terms of efficiency, if you are doing a lot of large matrix
multiplications or more sophisticated linear algebra a la Matlab, then
you might want to investigate the BLAS and possibly LAPACK (Linear
Algebra Package), but I suspect that might be overkill in your case. And
it is ugly.

FWIW, I recently ported a lot of Matlab code to C++ and have to say that
C++ generally kicks Matlab's a*se in terms of efficiency - but not in
ease of coding (Matlab appears to suffer performance-wise from a lot of
internal copying which you can eliminate in hand-coded C++).
 

Rune Allnor

What exactly do you mean by "vectorized calculation as those in Matlab"?
Do you just mean that Matlab has a native vector type and does
calculations with it, or were you suggesting that Matlab processes
vectors in some special way that C++ cannot?

It is a common misconception amongst matlab users that there is
something special about vectors. Matlab has historically been
very slow when executing explicit for-loops and while-loops etc.
The 'standard' matlab way to deal with this is to bypass the
interpreter and call compiled code, often from BLAS or LAPACK,
by 'vectorizing' the matlab code. I commented on that just a
few days ago on comp.soft-sys.matlab:

http://groups.google.no/group/comp.soft-sys.matlab/msg/e9699dcd19dcbe49?hl=no&dmode=source

The problem is that users who only know matlab and no other
programming languages are conditioned to believe that the problem
lies with for-loops as such, and not with matlab.

Rune
 

Lionel B

It is a common misconception amongst matlab users that there is
something special about vectors. Matlab has historically been very slow
when executing explicit for-loops and while-loops etc. The 'standard'
matlab way to deal with this is to bypass the interpreter and call
compiled code, often from BLAS or LAPACK, by 'vectorizing' the matlab
code. I commented on that just a few days ago on comp.soft-sys.matlab:

Indeed. And if you look inside any BLAS or LAPACK you'll see... loops.
Cleverly structured loops, to be sure - to exploit processor
architecture features such as cache structure and special floating point
facilities, maybe even some true parallelization if you're on a
multiprocessor system - but loops nonetheless. How could it be otherwise
on a serial processing CPU?
The problem is that users who only know matlab and no other programming
languages are conditioned to believe that the problem lies with
for-loops as such, and not with matlab.

Compilers are getting pretty clever these days and can often achieve
optimizations similar to those Matlab no doubt deploys - as well as some
of the more sophisticated optimizations implemented in modern BLAS and
LAPACK libraries - with C++ (or C or Fortran) loops over vectors.
[Intel's compilers in particular are pretty impressive with floating
point optimization, GCC and Microsoft not far behind].

A recent experience of mine involved re-writing some Matlab code in C++.
For straightforward vector and matrix operations (essentially BLAS levels
1 & 2) I used explicit for loops while for matrix-matrix multiplication
(BLAS level 3) and higher order linear algebra calculations (like SVD and
eigenvalues) I plugged into a BLAS/LAPACK library (the same BLAS/LAPACK
that my Matlab installation uses). The resultant code (compiled by a
recent GCC) ran on the order of 10-20x faster than the Matlab code. My
suspicion is that Matlab's extra overhead was incurred through unnecessary
(from an algorithmic perspective) copying of large vectors and matrices.
 

allnor

Indeed. And if you look inside any BLAS or LAPACK you'll see... loops.

Exactly. I attended a conference on underwater acoustics many years
ago, where one of the presentations dealt with 'efficient computation.'
In effect, the matlab code was rewritten from readable code (i.e.
for-loops) to 'vectorized' matlab code. That presentation was, in fact,
the inspiration for making the test I pointed to yesterday.

Rune
 

rocksportrocker

Luna said:
Can C++/STL/Boost do the vectorized calculation as those in Matlab?
[rest of the original post and code quoted in full; snipped here]

Why do you want that? Is it because the code is easier to read, or do
you hope to get better performance?

If it is for performance: writing native loops in C/C++ with some
optimization flags will give you the best performance in most cases.
Sometimes an optimized BLAS like ATLAS will improve performance further.

Vectorized code in Matlab is faster than looping code because, in the
latter, the loops are interpreted, which slows things down. Internally,
Matlab works as described above.

Greetings, Uwe
 

Bo Schwarzstein

Luna said:
Can C++/STL/Boost do the vectorized calculation as those in Matlab?
[rest of the original post and code quoted in full; snipped here]

GSL, GNU Octave, Boost.
 

Giovanni Gherdovich

Hello,

my question is pretty much related to this topic so I continue
the thread instead of opening another one.
I'm a former Matlab coder and a wannabe C++ coder.

I've read about std::valarray<> in Stroustrup's book
"The C++ Programming Language", discovering that some operators
(like multiplication, *) and some basic math functions
like sin and cos are overloaded to behave very similarly
to the Matlab ones when dealing with valarrays (mainly, they act
component-wise).

I'm aware of Matlab "vectorization" techniques; I use them to
avoid for-loops.
But when I do that I _don't_ do linear algebra: I just do
component-wise operations between matrices, in order to
save Matlab from doing serial calls to the same routine (via
for-loops).
I don't think such code gains anything from using BLAS or
the like.
I mean: taking the inverse of a matrix is linear algebra,
but multiplying two vectors component-wise is just... multiplying.

rocksportrocker
If it is for performance: Writing native loops in C/C++ with some
optimization flags will give you best performance in most cases.

Better performance than Matlab, or better performance than
"vectorized" C++?

Rune Allnor:
The problem is that users who only know matlab and no other
programming languages are conditioned to believe that the
problem lies with for-loops as such, and not with matlab.

My question:
When writing C++ code, do you think I can have faster code
if I use std::valarray<> in "the Matlab way", instead
of using, say, std::vector<> and for-loops?

I must admit that this curiosity comes from my previous Matlab
experience, when I used to think of for-loops as the devil;
but from a naive point of view I can imagine that "vectorized"
operations on std::valarray<> are optimized by smart compilation
techniques... after all, the programmer doesn't specify the
order in which the operations have to be done, as in a for-loop...
Is this just fantasy?
--NOTE that the scenario I have in mind is a single core machine.

Regards,
Giovanni Gherdovich
 

Rune Allnor

Hello,

my question is pretty much related to this topic so I continue
the thread instead of opening another one.
I'm a former Matlab coder and a wannabe C++ coder.

I've read about std::valarray<> in Stroustrup's book
"The C++ Programming Language", discovering that some operators
(like the multiplication *), and some basic math functions
like sin and cos, are overloaded in order to behave very similar
to Matlab ones when dealing with valarrays (mainly, they act
component-wise).

I'm aware of Matlab "vectorization" techniques; I use them to
avoid for-loops.

That's a *matlab* problem. 'Vectorization' is a concept
exclusive to matlab, which historically was caused by
what I consider to be bugs in the matlab interpreter.
But when I do that I _don't_ do linear algebra: I just do
component-wise operations between matrices, in order to
save Matlab from doing serial calls to the same routine (via
for-loops).
I don't think such code gains anything from using BLAS or
the like.
I mean: taking the inverse of a matrix is linear algebra,
but multiplying two vectors component-wise is just... multiplying.

Better performance than Matlab,

Depending on exactly what you do, matlab *can* get very
close to best-possible performance since it uses highly
tuned low-level libraries. If your operation is covered by
such a function, you might find it difficult to beat matlab.
If not, don't be surprised if C++ code beats matlab by
a factor of 5-10 or more.
or better performance than
"vectorized" C++?

There is no such thing as 'vectorized C++.'


My question:
When writing C++ code, do you thing I can have faster code
if I use std::valarray<> in "the Matlab way", instead
of using, say, std::vector<> and for-loops?

I don't know. I haven't used std::valarray<>. I know I have
seen a comment somewhere that std::valarray<> was an early
attempt at a standardized way to handle number crunching in C++,
which was, well, not quite as successful as one might have
wished for.
I must admit that this curiosity comes from my previous Matlab
experience, when I used to think of for-loops as the devil;
but from a naive point of view I can imagine that "vectorized"
operations on std::valarray<> are optimized by smart compilation
techniques...

You would be surprised: There are for-loops at the core of all
those libraries, even the BLAS libraries matlab is based on.
There are smart compilation techniques involved, but to *optimize*
the for-loops, not to *eliminate* them.
after all, the programmer doesn't specify the
order in which the operations have to be done, as in a for-loop...

I wrote a sketch to illustrate how this is done in a previous
post in this thread, which was posted only to comp.soft-sys.matlab:

http://groups.google.no/group/comp.soft-sys.matlab/msg/2ab57a27d663e1fa?hl=no

As you can see, the 'vector' version myfunction(std::vector<double>)
calls the scalar version myfunction(double) in a for-loop. This is
essentially what is done in all the libraries you use, including
matlab.

The for-loops are at the core, and the smart compiler techniques
optimize the executable code to avoid any unnecessary run-time
overhead.
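[The linked sketch is essentially this pattern; the body of the scalar function here is a placeholder, not Rune's actual code.]

```cpp
#include <cmath>
#include <vector>

// Scalar version: the real work happens here (placeholder body).
double myfunction(double u)
{
    return std::exp(-u) * u;
}

// 'Vector' version: nothing more than the scalar version in a for-loop.
// This is essentially what vectorized library routines do internally.
std::vector<double> myfunction(const std::vector<double>& u)
{
    std::vector<double> result;
    result.reserve(u.size());
    for (double x : u)
        result.push_back(myfunction(x));
    return result;
}
```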
Is this just fantasy?

You might want to have a look at the basic texts on modern C++.
Try "Accelerated C++" by Koenig & Moo, or "You can do it!" by
Glassborow. Or both.

Rune
 

dj3vande

On 6 Aug, 14:54, Giovanni Gherdovich


That's a *matlab* problem. 'Vectorization' is a concept
exclusive to matlab, which historically was caused by
what I consider to be bugs in the matlab interpreter.

To describe this property of matlab as "buggy" is inordinately harsh.

It's an inherent property of interpreters: If you're interpreting a
loop, you have to look at the loop condition code, and the loop
bookkeeping code, and the code inside the loop, every time through.
Unless you go out of your way to make this fast, you end up having to
do a lookup-decode-process for each of those steps.
Compiling to native code lets you do the lookup-decode at compile time,
and for typical loops only generates a few machine-code instructions
for the loop bookkeeping and condition checking, which substantially
reduces the total amount of work the processor is doing. But making an
interpreter clever enough to do interpreted loops that fast is a Much
Harder Problem.

(So, the answer to the OP's question is (as already noted): Don't worry
about vectorizing, write loops and ask the compiler to optimize it, and
you'll probably come close enough to Matlab's performance that you
won't be able to tell the difference.)

Since Matlab is targeting numerical work with large arrays anyways,
there's not much benefit to speeding up this part of the interpreter;
if the program is spending most of its time inside the large-matrix
code (which is compiled to native code, aggressively optimized by the
compiler, and probably hand-tuned for speed), then speeding up the
interpreter's handling of the loop won't gain you any noticeable
speedup anyways. If you're writing loopy code to do things Matlab has
primitives for, you're probably better off vectorizing it anyways,
since that will make it both clearer and faster.
So (unlike with general-purpose interpreted languages that don't have
primitives that replace common loop idioms) there's no real benefit to
speeding up the Matlab interpreter's loop handling, and there are
obvious costs (development time, increased complexity, more potential
for bugs), so there are good reasons not to bother.

If you do have code that doesn't fit Matlab's vectorization model, you
can always write it in C or Fortran and wrap it up in a Matlab FFI
wrapper; Matlab's FFI is not hard to use on the compiled-to-native-code
side, and looks exactly like a Matlab function on the Matlab code side,
so it's almost always the Right Tool For The Job in that case.
(At my day job, I've been asked to do this for the Matlab programmers a
few times, and for hard-to-vectorize loopy code getting a speedup of
two or three orders of magnitude just by doing a reasonably direct
translation into C and compiling to native code with an optimizing
compiler is pretty much expected.)



dave
 

Uwe Schmitt

Giovanni said:
Hello,

my question is pretty much related to this topic so I continue
the thread instead of opening another one.
I'm a former Matlab coder and a wannabe C++ coder.

I've read about std::valarray<> in the Stroustrup's book
"The C++ Programming Language", discovering that some operators
(like the multiplication *), and some basic math functions
like sin and cos, are overloaded in order to behave very similar
to Matlab ones when dealing with valarrays (mainly, they act
component-wise).

I'm aware of Matlab "vectorization" techniques; I use them to
avoid for-loops.
But when I do that I _don't_ do linear algebra: I just do
component-wise operations between matrices, in order to
save Matlab from doing serial calls to the same routine (via for-
loops).
I dont't think such code gains anything from using BLAS or
similarities.
I mean: taking the inverse of a matrix is linear algebra,
but multiplying two vectors component-wise is just... multiplying.
Yes, but you can do some loop unrolling or other
access-pattern transformations to optimize cache access.
This is a broad field; look at:
http://en.wikipedia.org/wiki/Loop_transformation
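[A hand-written example of one such transformation, four-way loop unrolling; modern compilers will often do this themselves at higher optimization levels.]

```cpp
#include <cstddef>
#include <vector>

// Sum with 4-way manual unrolling: fewer branch checks per element, and
// four independent accumulators expose instruction-level parallelism.
double sum_unrolled(const std::vector<double>& v)
{
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= v.size(); i += 4) {
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < v.size(); ++i)  // handle the leftover elements
        s0 += v[i];
    return s0 + s1 + s2 + s3;
}
```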
Best performance than Matlab, or best performance than
"vectorized" C++?
Best performance compared to vectorized code.
Your compiler's optimization flags can enable
loop optimization and other strategies.
My question:
When writing C++ code, do you thing I can have faster code
if I use std::valarray<> in "the Matlab way", instead
of using, say, std::vector<> and for-loops?
I do not know how optimized valarray<> is. You
should compare it using different matrix/vector sizes
and different optimization flags
of your compiler, and post your results.

If you use the GNU compilers, the flags are -O0
up to -O3, AFAIK.

And you should compare it to
http://math-atlas.sourceforge.net/

which is supposed to gain very good performance.
I must admit that this curiosity comes from my previous Matlab
experiences, when I used to think to for-loops as being the devil,
but from a naive point of view I can image that "vectorized"
operations on std::valarray<> are optimized by smart compilation
techniques... after all the programmer doesn't specify the
order with which the operation as to be done, like in a foor-loop...
Is this just fantasy?
In Matlab, for-loops are the devil because the interpreter
has to handle the loops, which slows things down.
If you make a naive C implementation, your for-loops
are compiled to machine code, which runs much faster
than the interpreted Matlab for-loop.
Vectorization gives Matlab the ability to push the
operation into an optimized C routine, where the
essential, fast looping happens.

Greetings, Uwe

--
Dr. rer. nat. Uwe Schmitt
F&E Mathematik

mineway GmbH
Science Park 2
D-66123 Saarbrücken

Telefon: +49 (0)681 8390 5334
Telefax: +49 (0)681 830 4376

(e-mail address removed)
www.mineway.de

Geschäftsführung: Dr.-Ing. Mathias Bauer
Amtsgericht Saarbrücken HRB 12339
 

Giovanni Gherdovich

Hello,

thank you for your answers.

Rune Allnor:
Depending on exactly what you do, matlab *can* get very
close to best-possible performance since it uses highly
tuned low-level libraries. If your operation is covered by
such a function, you might find it difficult to beat matlab.
If not, don't be surprised if C++ code beats matlab by
a factor 5-10 or more.
dave:
(So, the answer to the OP's question is (as already noted): Don't worry
about vectorizing, write loops and ask the compiler to optimize it, and
you'll probably come close enough to Matlab's performance that you
won't be able to tell the difference.)
Uwe:
I mean: taking the inverse of a matrix is linear algebra,
but multiplying two vectors component-wise is just... multiplying.

Rune Allnor:
You would be surprised: There are for-loops at the core of all
those libraries, even the BLAS libraries matlab is based on.
There are smart compilation techniques involved, but to *optimize*
the for-loops, not to *eliminate* them.

I was among the user who are "conditioned to believe that the
problem lies with for-loops as such, and not with matlab",
to use Rune's words.
Thank you all to point it out.

About the performance of numerical computation done using
std::valarray<>'s features:

Uwe:
My question:
When writing C++ code, do you thing I can have faster code
if I use std::valarray<> in "the Matlab way", instead

Rune Allnor:
I don't know. I haven't used std::valarray<>. I know I have
seen some comment somewhere that std::valarray<> was an early
attempt at a standardized way to handle numbercrunching in C++,
which was, well, not quite as successful as one might have
whished for.

It seems that nobody knows whether it's worth using std::valarray<>
and the related "vectorized" operators (provided by the standard
library) to do numerical computing in C++.

Googling this topic, I've found this interesting thread in
a forum of a site called "www.velocityreviews.com"
http://www.velocityreviews.com/forums/t277285-p-c-stl-valarrays-vs-vectors.html

One of the posters, who (like me) took the chapter "Vector Arithmetic"
in Stroustrup's book as The Truth, says that with valarray<>
you can do math at the speed of light, blah blah optimization,
blah blah vectorization, and so on.

Another user answers with what I find a more reasonable argument:
std::valarray<> was designed to match the characteristics of vector
machines, like the Cray. If you don't have a Cray, there is
no point in doing math with valarray<> and the related operators.

Anyway, as soon as I have some spare time I will check it on
my own, comparing the results with ATLAS as Uwe suggests.

Regards,
Giovanni Gherdovich
 

Jerry Coffin


[ ... ]
Another user answers with what I find a more reasonable argument:
std::valarray<> was designed to match the characteristics of vector
machines, like the Cray. If you don't have a Cray, there is
no point in doing math with valarray<> and the related operators.

In theory that's right: the basic idea was to provide something that
could be implemented quite efficiently on vector machines. In fact, I've
never heard of anybody optimizing the code for a vector machine, so it
may be open to question whether it provides any real advantage on them.

OTOH, valarray _can_ make some code quite readable, so it's not always a
complete loss anyway.
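[For example, an illustrative snippet, not from any particular codebase: a saxpy-style update over a whole array reads as a single expression.]

```cpp
#include <valarray>

// y = a*x + b, applied to every element, written the way the maths reads.
std::valarray<double> axpb(double a, const std::valarray<double>& x, double b)
{
    return a * x + b;
}
```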
 

Giovanni Gherdovich

Hello,

During the mid-90s both C and C++ were involved in adding features that
would support numerically intense programming. Unfortunately, a couple of
years later the companies whose numerical experts were doing the grunt
work withdrew support.

Just for the sake of historical investigation, I found a thread on
this newsgroup from as far back as 1991, where Walter Bright (who might
be the same Walter Bright who designed the D programming language,
http://www.walterbright.com/
http://en.wikipedia.org/wiki/Walter_Bright , but I'm not sure)
lists some shortcomings for the C++ numerical programmer,
and item #6 is

"Optimization of array operations is inhibited by the 'aliasing'
problems."
http://en.wikipedia.org/wiki/Aliasing_(computing)

(retrieved from
http://groups.google.com/group/comp...lnk=gst&q=numerical+analysis#feb0ec8ea7b24189

Then he mentions some solutions to this (two libraries, which
might be completely out of date nowadays).
Just to say that the Original Poster isn't the first to
raise this issue...
By hindsight it might have been better to have shelved
the work but both WG14 and WG21 opted to continue hoping that they would
still produce something useful.

Mmmh... I skimmed the pages of Working Groups 14 and 21
http://www.open-std.org/jtc1/sc22/wg14
http://www.open-std.org/jtc1/sc22/wg21
and they don't seem to have vector arithmetic among their priorities.
Anyway, from what I've learned in this thread, the whole idea may
well make little sense, given CPU characteristics.

Regards,
Giovanni Gh.
 
