Ruby for massively multi-core chips?

M. Edward (Ed) Borasky

Eric said:
Keep Koichi employed?

I think it's time I posted my "we've been here before" rant about
concurrency and massively parallel computers on my blog. :) For
starters, do a Google search for the writings of Dr. John Gustafson, who
is now a senior researcher at Sun Microsystems. :)
 

Matt Lawrence

M. Edward (Ed) Borasky said:
I think it's time I posted my "we've been here before" rant about concurrency
and massively parallel computers on my blog. :) For starters, do a Google
search for the writings of Dr. John Gustafson, who is now a senior researcher
at Sun Microsystems. :)

Or, somebody could port Ruby to Erlang! :)

-- Matt
It's not what I know that counts.
It's what I can remember in time to use.
 

Ron M

Bil said:
How to best evolve Ruby to accommodate 80-core
CPU programming?

A version of NArray that parallelizes its
work (a task that could be made easier using
OpenMP or similar) would work especially well
if the CPU-intensive part of your application
is math intensive.

For more mundane tasks (web serving) an
obvious answer would be to simply fork
off 80 Ruby processes, which would efficiently
use the 80 cores.
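A rough sketch of the fork-per-core idea on a Unix-like system; the worker
count and the serve_requests method are made-up placeholders for whatever
the real application actually does:

    # Sketch: pre-fork one worker per core (4 here instead of 80 for brevity).
    # serve_requests is a hypothetical stand-in for the real per-worker loop.
    WORKERS = 4

    def serve_requests(id)
      3.times do |n|                        # pretend to handle a few requests, then exit
        puts "worker #{id} (pid #{Process.pid}) handled request #{n}"
      end
    end

    pids = (1..WORKERS).map do |id|
      fork { serve_requests(id) }           # each child is a full OS process on its own core
    end

    pids.each { |pid| Process.wait(pid) }   # reap the children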
 

Eric Hodel

M. Edward (Ed) Borasky said:
I think it's time I posted my "we've been here before" rant about
concurrency and massively parallel computers on my blog. :) For
starters, do a Google search for the writings of Dr. John
Gustafson, who is now a senior researcher at Sun Microsystems. :)

See also Koichi's 2005 RubyConf presentation.
 

gga

Daniel Berger wrote:
Possible but not easy with fork + ipc I think. Otherwise, no. Neither
does Perl or Python.

So far the only language I've seen specifically designed for multiple
cpus/cores is Fortress, and it's alpha.

Fortress, eh? Never heard of it...

Actually, there are a couple of languages you could use on that machine
that are far, far past beta.

Your best bet for that machine is Lua at this point in time. Lua is
multi-thread ready and pretty stable. As long as you don't do any OO
and squint a little, Lua's syntax looks like Ruby.
It's nowhere near as nice to do OO in as Ruby (or Python, for that
matter), but doable. It is a tiny bit nicer than Perl's OO (but
not by much).

And good old and somewhat dusty Tcl has always been thread friendly.
Tcl's OO is kind of a big mess, as it is not native to the language and
there are two or three competing frameworks for it. However, Tcl's big plus
is that it has been around the block for a long, long time.
 

Tom Pollard

M. Edward (Ed) Borasky said:
I think it's time I posted my "we've been here before" rant about
concurrency and massively parallel computers on my blog. :) For
starters, do a Google search for the writings of Dr. John
Gustafson, who is now a senior researcher at Sun Microsystems. :)

SUN'S GUSTAFSON ON ENVISIONING HPC ROADMAPS FOR THE FUTURE
http://www.taborcommunications.com/hpcwire/hpcwireWWW/05/0114/109060.html

[...]
You may recall that Sun acquired the part of Cray that used to be
Floating Point Systems. When I was at FPS in the 1980s, I managed the
development of a machine called the FPS-164/MAX, where MAX stood for
Matrix Algebra Accelerator. It was a general scientific computer with
special-purpose hardware optimized for matrix multiplication (hence,
dense matrix factoring as well). One of our field analysts, a well-read
guy named Ed Borasky, pointed out to me that our architecture
had precedent in this machine developed a long time ago in Ames,
Iowa. He showed me a collection of original papers reprinted by Brian
Randell, and when I read Atanasoff's monograph I just about fell off
my chair. It was a SIMD architecture, with 30 multiply-add units
operating in parallel. The FPS-164/MAX used 31 multiply-add units,
made with Weitek parts that were about a billion times faster than
vacuum tubes, but the architectural similarity was uncanny. It gave
me a new respect for historical computers, and Atanasoff's work in
particular. And I realized I shouldn't have been such a cynic about
the historical display at Iowa State.
[...]

I can see why you're a fan. ;-)

Tom
 

M. Edward (Ed) Borasky

Tom said:
I think it's time I posted my "we've been here before" rant about
concurrency and massively parallel computers on my blog. :) For
starters, do a Google search for the writings of Dr. John Gustafson,
who is now a senior researcher at Sun Microsystems. :)

SUN'S GUSTAFSON ON ENVISIONING HPC ROADMAPS FOR THE FUTURE
http://www.taborcommunications.com/hpcwire/hpcwireWWW/05/0114/109060.html

[...]
You may recall that Sun acquired the part of Cray that used to be
Floating Point Systems. When I was at FPS in the 1980s, I managed the
development of a machine called the FPS-164/MAX, where MAX stood for
Matrix Algebra Accelerator. It was a general scientific computer with
special-purpose hardware optimized for matrix multiplication (hence,
dense matrix factoring as well). One of our field analysts, a
well-read guy named Ed Borasky, pointed out to me that our
architecture had precedent in this machine developed a long time ago
in Ames, Iowa. He showed me a collection of original papers reprinted
by Brian Randell, and when I read Atanasoff's monograph I just about
fell off my chair. It was a SIMD architecture, with 30 multiply-add
units operating in parallel. The FPS-164/MAX used 31 multiply-add
units, made with Weitek parts that were about a billion times faster
than vacuum tubes, but the architectural similarity was uncanny. It
gave me a new respect for historical computers, and Atanasoff's work
in particular. And I realized I shouldn't have been such a cynic about
the historical display at Iowa State.
[...]

I can see why you're a fan. ;-)

Tom

Yeah, John and I worked together at FPS. But what I'm getting at is that
John and I (and others within FPS and elsewhere in the supercomputing
segment) would have endless discussions about the future of
high-performance computing, with some saying it just *had* to be
massively parallel SIMD, others saying it just *had* to be moderately
parallel MIMD, and others saying, "programming parallel vector machines
is just too hard -- the guys over at Intel are doubling the uniprocessor
clock speed every 18 months -- in five years you'll have a Cray on your
desktop".

That was "only" about 20 years ago ... I've got a 1.3 gigaflop Athlon
Tbird that's still more horsepower than I need, but back then if you
wanted 1.3 gigaflops you had to chain together multiple vector machines.
But my real point is that no matter what solution you proposed, "the
programmers weren't ready", "the languages weren't ready", "the
compilers weren't ready", "the architectures weren't ready", "the
components weren't ready", etc. I hear the same whining today about
dual-cores, clusters, scripting languages and today's generation of
programmers. And it's just as bogus now as it was then. Except that
there's 20 years more practical experience and theoretical knowledge
about how to do parallel and concurrent computing. So actually it's
*more* bogus now!
 

M. Edward (Ed) Borasky

Ron said:
A version of NArray that parallelizes its
work (a task that could be made easier using
OpenMP or similar) would work especially well
if the CPU-intensive part of your application
is math intensive.

For more mundane tasks (web serving) an
obvious answer would be to simply fork
off 80 Ruby processes, which would efficiently
use the 80 cores.

Uh ... be careful ... processes take up space in cache and in RAM. The
only thing that would be sharable is the memory used for code ("text" in
Linux terms). I think what you want is *lightweight* processes a la
Erlang, which Ruby doesn't have yet. It does have *threads*, though.
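For comparison, a minimal sketch of the threads Ruby does have; the heap is
shared between them by default, so any shared mutable state needs a lock
(and in the MRI of this era they are green threads, so they won't spread
work across cores the way Erlang-style lightweight processes do):

    # Sketch: Ruby threads share the whole heap; only Thread#[] storage is per-thread.
    require 'thread'

    counter = 0
    mutex   = Mutex.new

    threads = (1..4).map do
      Thread.new do
        1_000.times { mutex.synchronize { counter += 1 } }
      end
    end

    threads.each { |t| t.join }
    puts counter                 # => 4000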
 

M. Edward (Ed) Borasky

Francis said:
Decompose large tasks into a number of cooperating processes (not threads)
that approximates the number of cores available. Knit them together with
asynchronous messaging and write your programs in an event-driven style.

Best practices for doing the above? Not quite there yet, but a lot of
people are working on them.

1. Lightweight processes, please :)

2. A lot of people have been working on them for *decades*. We were
forced into hiding by increasing clock speeds, huge caches and multiple
copies of all the register sets on chip, DSP chips to do the audio,
graphics chips to do the video, and the lure of other technologies like
the Internet and databases. :)

I just wonder how long we'll be out of hiding this time. :)
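A bare-bones sketch of the structure Francis describes above, using nothing
but fork and one pipe per worker as the messaging layer; a real system would
put a proper event loop and sockets where the gets call is:

    # Sketch: one coordinating parent, several worker processes, messages over pipes.
    WORKERS = 4

    writers = (1..WORKERS).map do |id|
      reader, writer = IO.pipe
      fork do
        writer.close
        while (line = reader.gets)          # worker "event loop": react to each message
          puts "worker #{id}: got #{line}"
        end
      end
      reader.close
      writer
    end

    writers.each_with_index { |w, i| w.puts "task #{i}" }
    writers.each { |w| w.close }            # EOF tells the workers to exit
    Process.waitall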
 

Joel VanderWerf

M. Edward (Ed) Borasky said:
Uh ... be careful ... processes take up space in cache and in RAM. The
only thing that would be sharable is the memory used for code ("text" in
Linux terms). I think what you want is *lightweight* processes a la
Erlang, which Ruby doesn't have yet. It does have *threads*, though.

Part of the heap may be sharable, but GC makes that part small:

http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/186561
 

Ron M

M. Edward (Ed) Borasky said:
Uh ... be careful ... processes take up space in cache and in RAM.

I said process intentionally.

The only thing that would be sharable is the memory used for code ("text" in
Linux terms).

Nope. Many (all?) OSes do copy-on-write. If the parent
forked off the children after a lot of initialization
was done, all the data initialized by the parent (including,
for example, loaded Ruby modules) would be shared too.

I think a lot of highly scalable servers (Oracle, PostgreSQL,
Apache 1.x, etc.) use this approach.


Fundamentally the difference between threads and processes seems
to be the following. With processes, most memory is unshared
unless you explicitly create a shared memory segment. With
threads, most memory is shared unless you explicitly create
thread-local storage. It's often easier to explicitly specify
the shared memory parts, since it makes you very aware of which
data structures need the special care of locking. And since
the VM system will protect the private memory of processes,
while AFAIK you'd have to go through some hoops to make the
OS enforce access to thread-local storage, you'd be safer
with the multi-process model too.
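A sketch of the preload-then-fork pattern Ron is describing: do the expensive
requires and setup once in the parent, and let copy-on-write share those pages
with every child until something writes to them (BIG_TABLE is just a stand-in
for loaded libraries, parsed config, warm caches, and so on):

    # Sketch: expensive setup happens once, before the fork.
    BIG_TABLE = Array.new(100_000) { |i| i * i }   # shared with children via copy-on-write

    children = (1..4).map do
      fork do
        # Reads touch the shared pages; only writes force the kernel to copy them.
        puts "pid #{Process.pid} sees BIG_TABLE[99_999] = #{BIG_TABLE[99_999]}"
      end
    end

    children.each { |pid| Process.wait(pid) }

The caveat, raised by Joel below, is that MRI's mark phase writes to every
reachable object, which quietly un-shares many of those pages again.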
 

Joel VanderWerf

Ron said:
M. Edward (Ed) Borasky wrote: ...

Nope. Many (all?) OSes do copy-on-write. If the parent
forked off the children after a lot of initialization
was done, all the data initialized by the parent (including,
for example, loaded Ruby modules) would be shared too.

That doesn't play well with Ruby's GC, which touches reachable objects
(parse trees are another matter).

Maybe there's a way to tell GC that all objects existing at the time of
fork should be considered permanently reachable, so that their memory is
never copied in the child due to mark(). This way you could set up a
basic set of objects that children could add to but never collect. Might
be useful for special purposes, but not in general.
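A rough, Linux-only way to watch the effect Joel describes: compare the
Private_Dirty total from /proc/self/smaps in a forked child before and after
a GC run (how much actually un-shares depends heavily on the interpreter
version):

    # Sketch: show how a mark pass can un-share copy-on-write pages in a child.
    STUFF = Array.new(200_000) { |i| "object #{i}" }   # built before fork, shared at first

    def private_dirty_kb
      File.readlines("/proc/self/smaps").
        grep(/^Private_Dirty:/).
        inject(0) { |sum, line| sum + line.split[1].to_i }
    end

    pid = fork do
      before = private_dirty_kb
      GC.start                    # marking writes to reachable objects
      after  = private_dirty_kb
      puts "private dirty: #{before} KB before GC, #{after} KB after"
    end
    Process.wait(pid)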
 
