Boost


Herman Viracocha

Juha said:
It does not make things easier because it takes hours to compile?

No, it definitely is not easier. Are you sure you clearly understand what
"hours to compile" means?
 

J. Clarke

Many larger systems are available. Not too many years ago, I was running
-j384 compiles regularly on a 192-core shared memory system.

People tend to assume that the consumer processors are the high end and
ignore the existence of the Xeon and Opteron lines. Using off-the-shelf
parts from Amazon you can build a 64-core machine with a terabyte of
RAM. Not _cheap_ but doable.
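For anyone who has not driven a parallel build before, the `-j` flag is all there is to it: `make -jN` runs up to N recipes at once wherever the dependency graph allows. A minimal sketch (assuming GNU make and a POSIX shell; the four targets and the 1-second sleeps are invented purely for the demonstration):

```shell
# Four independent targets, each "compiling" for 1 second.
# Built serially this takes about 4 s; with -j4 the recipes run
# concurrently and the build finishes in roughly 1 s.
workdir=$(mktemp -d)
printf 'all: a b c d\na b c d:\n\tsleep 1\n\ttouch $@\n' > "$workdir/Makefile"

start=$(date +%s)
make -C "$workdir" -j4 --silent
end=$(date +%s)
echo "parallel build finished in $((end - start))s"
```

The same flag scales to -j384; whether that helps depends on the dependency graph actually containing hundreds of independent compilation units.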
 

Walter

J. Clarke said:
People tend to assume that the consumer processors are the high end and
ignore the existence of the Xeon and Opteron lines. Using off-the-shelf
parts from Amazon you can build a 64-core machine with a terabyte of
RAM.
Not _cheap_ but doable.

What off-the-shelf parts from Amazon are you talking about? Are you sure
you know what distributed shared memory is all about?

You seemingly agree with something that is stupid. Most people on Usenet
use what is out there available. Thus, for now, a 4-core / 8-thread
machine is the best buy.

Moreover, if the code is not properly granulated for shared-memory
distributed systems, that -j384 makes no sense.
 

Bob Hammerman

Drew said:
I agree that it is possible (though apparently not necessary) for Boost
to take hours to build.

His repeated "you are noise and do not answer the question" song and
dance is, IMO, a lame act.

Rather, you agree but do not know what you are agreeing to. Total confusion.
 

Bob Hammerman

Robert said:
"Arraigning" to compiler over a cluster? Is that anything like being
sentenced to hard labor? Spell checkers are fun... *sigh*

Not to forget that a "cluster" has nothing to do with multicore and shared
memory.
 

Bob Hammerman

Robert said:
Interesting just how similar the NNTP headers for Nick Baumbach's and
Herman Viracocha's posts are...

What does this have to do with the subject under discussion? You seem confused.
 

Jorgen Grahn

J. Clarke said:
Many larger systems are available. Not too many years ago, I was running
-j384 compiles regularly on a 192-core shared memory system.

Part of the point being: it's sometimes optimal to use more make
processes than the number of CPUs. (A coworker pointed that out to me
15 years ago or so.)

/Jorgen
 

Bob Hammerman

Scott said:
If you have one source file, then yes, the obvious is obvious. If you
have thousands of source files, well, then.... Try building oracle11i,
or glibc, or linux, or a custom hypervisor instead of hello world
sometime.

Not sure at all. Remember, with shared memory, if the threads or processes
are interrelated, and threads are, then they will wait in some queue for
ready data.

Back to the point. Are you implying that Boost is good only for high-end
machines, since it compiles faster there? I don't need 192 cores for
coding in C/C++.

It looks like no one knows why Boost would be good to anything.
 

Victor Bazarov

Bob Hammerman said:
Not sure at all. Remember, with shared memory, if the threads or processes
are interrelated, and threads are, then they will wait in some queue for
ready data.

Back to the point. Are you implying that Boost is good only for high-end
machines, since it compiles faster there? I don't need 192 cores for
coding in C/C++.

It looks like no one knows why Boost would be good to anything.

It's "good to" Usenet trolling, as your example clearly shows. ;-)

V
 

Bob Hammerman

Scott said:
If you have one source file, then yes, the obvious is obvious. If you
have thousands of source files, well, then.... Try building oracle11i,
or glibc, or linux, or a custom hypervisor instead of hello world
sometime.

I did compile the Linux kernel years ago, all the time. It took merely
10 minutes on an i386 and similar.
 

Ian Collins

Jorgen said:
Part of the point being: it's sometimes optimal to use more make
processes than the number of CPUs. (A coworker pointed that out to me
15 years ago or so.)

That's right. With adequate RAM, 2x the physical cores is about right
for C++ on most x86/64 processors.
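As a rule-of-thumb sketch of the advice above (assuming GNU coreutils' `nproc`, which reports logical CPUs, i.e. hardware threads rather than physical cores):

```shell
# Pick a make job count of roughly 2x the CPU count, so that jobs
# stalled on disk I/O leave fewer cores idle. Note that nproc counts
# logical CPUs; on an SMT machine halve it first if you specifically
# want 2x physical cores.
cpus=$(nproc)
jobs=$((cpus * 2))
echo "make -j$jobs"
```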
 

J. Clarke

Walter said:
What off-the-shelf parts from Amazon are you talking about? Are you sure
you know what distributed shared memory is all about?

Supermicro H8QG7 board, four AMD Opteron 6274 processors; to max out the
RAM you have to go outside of Amazon to get 32 sticks of Samsung
M386B4G70BM0-YK0. Power supply, case, etc. I'll leave you to find on your
own.

You're going to have about $50,000 sunk in the machine by the time it's
finished--like I said, it's not going to be cheap.
You seemingly agree with something that is stupid. Most people on Usenet
use what is out there available. Thus, for now, a 4-core / 8-thread
machine is the best buy.

It's only "stupid" to people who don't have a clue what hardware is
currently available or its limitations. If you can buy it off of Amazon
then it's "out there available". That most of us can't afford it is a
separate issue.
Moreover, if the code is not properly granulated for shared-memory
distributed systems, that -j384 makes no sense.

In that case any use of memory makes no sense because all multicore
processors use shared memory, whether the cores are on a single chip or
spread across four or more.

You seem to be stuck in the 32-bit era, when addressing anything beyond 4
GB meant using smoke and mirrors. While NUMA is a worthwhile
performance booster in some cases, it is certainly not _necessary_ in
order to gain improved compilation times vs using smaller amounts of
RAM.
 

J. Clarke

Bob Hammerman said:
Not sure at all. Remember, with shared memory, if the threads or processes
are interrelated, and threads are, then they will wait in some queue for
ready data.

Back to the point. Are you implying that Boost is good only for high-end
machines, since it compiles faster there? I don't need 192 cores for
coding in C/C++.

It looks like no one knows why Boost would be good to anything.

If compilation time were a showstopper, then Linux would not be good for
anything but "high ends"--it can take days to build, as anybody who has
set up a Gentoo system knows.

What it's good for is avoiding reinventing the wheel. If you have a
tested library already available that provides the function that you
need with acceptable performance, why rewrite it?
 

Bob Hammerman

J. Clarke said:
You seem to be stuck in the 32-bit era, when addressing anything beyond 4
GB meant using smoke and mirrors. While NUMA is a worthwhile
performance booster in some cases, it is certainly not _necessary_ in
order to gain improved compilation times vs using smaller amounts of
RAM.

Is about code granulation able to make use of the multicores. You looks
like an idiot not knowing his ass what he is talking about.

Then "out there" means you go out down town and buy something like that,
not crap on Amazon and such. I bet I did run code on super computers at a
time you were not even born.

Please leave this thread, you do not contribute in any way.
 

Bob Hammerman

J. Clarke said:
What it's good for is avoiding reinventing the wheel. If you have a
tested library already available that provides the function that you
need with acceptable performance, why rewrite it?

I don't know, but reverse engineering other people's crap is more
time-demanding than going through your own crap. Don't you know that?
Where have you been?

You seem to be a beginner in computing, with a big mouth.
 

J. Clarke

Bob Hammerman said:
Is about code granulation able to make use of the multicores.

If you would actually write sentences that contained subjects and verbs
you might actually learn how to communicate with others. You may be
trying to communicate something meaningful here but you have failed to
do so.
You looks
like an idiot not knowing his ass what he is talking about.

I will be plonking you after I finish commenting on your puerile post.
Then "out there" means you go out down town and buy something like that,
not crap on Amazon and such. I bet I did run code on super computers at a
time you were not even born.

I see. So if it's not in stock in some "down town" store it does not
exist?

As for your "betting you did run code on super computers at a time when I
was not even born": since one cannot now, and never has been able to, "go
out down town and buy" a supercomputer, by your own logic they are not "out
there" and thus you could not possibly have used one.
Please leave this thread, you do not contribute in any way.

<plonk>
 

J. Clarke

Bob Hammerman said:
I don't know, but reverse engineering other people's crap is more
time-demanding than going through your own crap. Don't you know that?
Where have you been?

Why would you want to reverse engineer a stock library? Are you trying
to pirate it or something?
You seem to be a beginner in computing, with a big mouth.

Yep, <plonk> was the right decision.
 

Öö Tiib

No. Such a person does not exist, so he can say nothing. So far it is just

Random Male Name <random letters@random letters.org>

See examples like Herman Viracocha, Bob Hammerman or Nick Baumbach.

Good luck filling your killfile with random troll-generated names and
addresses and announcing it every time.
 

Oscar Chesnutt

J. Clarke said:

Indeed, this supposedly means that you are going to use a computer program
to help you not read. Otherwise you just keep reading. You must be
terribly clever, like your friends.

We never got the answer to the Boost concern. Do I have to be stupid to
use that?
 

Daniel

Oscar Chesnutt said:
We never got the answer to the Boost concern. Do I have to be stupid to
use that?

Not at all. Even if you are not stupid, you can still use it.
Hope that helps.

Daniel
 
