Very interesting paper about future programming models


Keith Fahlgren


Yeah, big +1. The most thought-provoking part for me was this (from page 35):

It is striking, however, that research from
psychology has had almost no impact, despite the obvious
fact that the success of these models will be strongly
affected by the human beings who use them. Testing
methods derived from the psychology research community
have been used to great effect for HCI, but are sorely
lacking in language design and software engineering. For
example, there is a rich theory investigating the causes of
human errors, which is well known in the human-computer
interface community, but apparently it has not penetrated
the programming model and language design community.
…
We believe that integrating research on human psychology
and problem solving into the broad problem of designing,
programming, debugging, and maintaining complex parallel
systems will be critical to developing broadly successful
parallel programming models and environments.


Keith
 

M. Edward (Ed) Borasky

Keith said:
Yeah, big +1. The most thought-provoking part for me was this (from
page 35):

It is striking, however, that research from
psychology has had almost no impact, despite the obvious
fact that the success of these models will be strongly
affected by the human beings who use them. Testing
methods derived from the psychology research community
have been used to great effect for HCI, but are sorely
lacking in language design and software engineering. For
example, there is a rich theory investigating the causes of
human errors, which is well known in the human-computer
interface community, but apparently it has not penetrated
the programming model and language design community.

We believe that integrating research on human psychology
and problem solving into the broad problem of designing,
programming, debugging, and maintaining complex parallel
systems will be critical to developing broadly successful
parallel programming models and environments.


Keith
Well ... yes and no ... we should probably take this to "pragprog", and
I'm going to, but ...

1. I've been here before -- at the point where general-purpose SISD
architectures ran out of steam and special-purpose machines abounded. I
spent ten years working for a company, Floating Point Systems, that
*made* special-purpose machines. There's a whole generation of people
out there, myself among them, that ended up finding other things to do
when the general-purpose SISD (and *CISC*) machine known as the Pentium
essentially wiped everything else off the map. So I view the current
"trend" to multicore systems and more dreams of massively parallel
computers becoming mainstream as only a temporary thing ... a swing of a
pendulum to one extreme ... general purpose SISD machines will be back!

2. There's an awful lot of specialized hardware in a modern PC for
audio and graphics already. By some strange coincidence, the sound card
architecture looks a *lot* like an FPS array processor. :) I'm not a
graphics geek, but I'd be willing to bet that what's inside the
graphics chipsets looks a lot like the specialized image processing
computers folks came up with in the 1960s and 1970s. Somebody programs
these parallel and concurrent gizmos, and they obviously are getting it
right and have tools to help them get it right.

3. To bring this back to Ruby, what I think Ruby needs, *independent* of
any trends in the underlying hardware, is

a. support for all the commonly-used concurrency primitives, made as
efficient as possible in the underlying implementations. Most of them
are already there, including some, like Rinda/tuplespace, that aren't
common in other languages (see the sketch after this list), and

b. efficient implementation of the low-level core types -- integers,
rationals, multi-precision numbers, real and complex floating-point
multi-dimensional arrays, hashes, bit vectors. In short, it should *not*
be necessary to escape to C -- *ever*.
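
To make (a) concrete, here's a minimal tuplespace sketch using Rinda
from Ruby's standard library. The :job tuples and the toy workload are
my own illustration, not anything Rinda prescribes:

  require 'rinda/tuplespace'

  ts = Rinda::TupleSpace.new

  # Producer: write a few work items into the space.
  3.times { |i| ts.write([:job, i, i * i]) }

  # Consumer: take() atomically removes the first tuple matching the
  # pattern (nil is a wildcard), blocking until one is available --
  # that blocking rendezvous is the coordination primitive itself.
  consumer = Thread.new do
    3.times do
      _tag, id, payload = ts.take([:job, nil, nil])
      puts "job #{id} -> #{payload}"
    end
  end

  consumer.join

In a real deployment you'd serve the TupleSpace over DRb so separate
processes -- or machines -- could share it.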
 

M. Edward (Ed) Borasky

M. Edward (Ed) Borasky wrote:

[snip]

One other little piece of flame bait :) the fact that Berkeley has a
combined EE-CS department that produced this paper is another symptom of
what's wrong. Computer Science has become subordinated to Electrical
Engineering. I personally think that's very very wrong.
 

Francis Cianfrocca

1. I've been here before -- at the point where general-purpose SISD
architectures ran out of steam and special-purpose machines abounded. I
spent ten years working for a company, Floating Point Systems, that
*made* special-purpose machines. There's a whole generation of people
out there, myself among them, that ended up finding other things to do
when the general-purpose SISD (and *CISC*) machine known as the Pentium
essentially wiped everything else off the map. So I view the current
"trend" to multicore systems and more dreams of massively parallel
computers becoming mainstream as only a temporary thing ... a swing of a
pendulum to one extreme ... general purpose SISD machines will be back!


Are you expecting another round (or more) of massive improvements in
uniprocessor performance?
 

Charles Thornton

Francis said:
Are you expecting another round (or more) of massive improvements in
uniprocessor performance?

I can think of a number of possibilities -- let's really get out there ...

1) Multivalue logic (nasty)
2) Extremely long instruction words
3) Massively deep look-ahead threads

Most likely, however: general multi-core processors with various
specialized processors.
 

Austin Ziegler

M. Edward (Ed) Borasky wrote:

[snip]

One other little piece of flame bait :) the fact that Berkeley has a
combined EE-CS department that produced this paper is another symptom of
what's wrong. Computer Science has become subordinated to Electrical
Engineering. I personally think that's very very wrong.

I'd rather have an EE-CS department than a Math-CS department. More
than that, though, I'd rather have a CS department that recognizes
that it, like IT, touches almost everything else.

-austin, has been through both styles before
 

M. Edward (Ed) Borasky

Francis said:
Are you expecting another round (or more) of massive improvements in
uniprocessor performance?
I'll invoke Arthur C. Clarke's laws: "When a distinguished but elderly
scientist says something is impossible, he is usually proven wrong. When
he says something is possible, he is usually proven right." I don't know
how distinguished I am -- after all, I don't even have a PhD -- but I
think I have the elderly part down. :)

But seriously, there are plenty of projects going on to improve
uniprocessor performance at the hardware and technology level, and
there's *no* doubt in my mind that one or more of them will pay off. In
addition, there is a massive *existing* body of knowledge on exploiting
parallel computing in the three domains where it is most needed --
large-scale numerical computing, multi-media processing and large databases.

And finally, today's PC *is* very much a parallel machine even before
you put a dual-core processor package in it. As I noted before, your
multi-media work is mostly done by specialized processors, with the CPU
being a control processor only. And when you look inside the chip,
you'll find a RISC/microprogrammed/"LIW-like" architecture capable of
dealing with multiple copies of the i386 SISD base. Somehow all this
parallelism gets designed, built and debugged, and *manufactured* on a
massive scale.

So yes, I am not only expecting faster uniprocessors, I am also
expecting an *evolution*, not a *revolution*, in the programming
language area. And as an alumnus of the previous parallel and RISC
"revolution", I very much resent statements like "Researchers have the
rare opportunity to re-invent these cornerstones of computing, provided
they simplify the efficient programming of highly parallel systems." and
"We concluded that sneaking up on the problem of parallelism via
multicore solutions was likely to fail and we desperately need a new
solution for parallel hardware and software."

Bluntly put, it's a simple matter of economics. The people who genuinely
need massive parallelism already have it, and all that has changed is
that the cost in dollars and watts is coming down. And the people who
*don't* need it -- people who can do their jobs or live their lives
without a dozen 3 gigaflop general purpose CPUs and a programming
language to exploit them -- are not going to pay for them.
 

M. Edward (Ed) Borasky

Austin said:
M. Edward (Ed) Borasky wrote:

[snip]

One other little piece of flame bait :) the fact that Berkeley has a
combined EE-CS department that produced this paper is another symptom of
what's wrong. Computer Science has become subordinated to Electrical
Engineering. I personally think that's very very wrong.

I'd rather have an EE-CS department than a Math-CS department. More
than that, though, I'd rather have a CS department that recognizes
that it, like IT, touches almost everything else.

-austin, has been through both styles before
Well ... I'd rather have a math department, with both theoretical and
applied branches, a computer science department, a software engineering
department *and* an electrical engineering department. What I have a
problem with is one of these disciplines subordinating the others. But
it's surely not a coincidence that UC Berkeley is near *Silicon* Valley. :)

Is there an Applied Mathematics Valley somewhere? :)
 

Benjohn Barnes

I'll invoke Arthur C. Clarke's laws: "When a distinguished but
elderly scientist says something is impossible, he is usually
proven wrong. When he says something is possible, he is usually
proven right." I don't know how distinguished I am -- after all, I
don't even have a PhD -- but I think I have the elderly part down. :)

Quite so :)

Having not even read the piece...

I was following up on Software Transactional Memory from an earlier
Ruby Talk posting, and it looks extremely promising. I've got this
thought that programming languages might change quite a lot in the
concepts they provide and the emphasis they place on them. Perhaps
when that happens, not only will it get much easier to program, but
parallel solutions will no longer be hard. In fact, they might even
be easier for a lot of situations.
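
To make the STM idea concrete, here's a minimal sketch using
Concurrent::TVar from the concurrent-ruby gem -- my choice of library,
since core Ruby has no STM, and the toy accounts are just illustration.
Inside atomically, the reads and writes to TVars either all commit or
the whole block retries; there are no explicit locks in user code:

  require 'concurrent'   # gem install concurrent-ruby

  account_a = Concurrent::TVar.new(100)
  account_b = Concurrent::TVar.new(0)

  # Transfer atomically: no interleaving can observe a state where
  # the amount exists in both accounts or in neither.
  def transfer(from, to, amount)
    Concurrent.atomically do
      from.value = from.value - amount
      to.value   = to.value + amount
    end
  end

  threads = 10.times.map { Thread.new { transfer(account_a, account_b, 10) } }
  threads.each(&:join)

  puts account_a.value  # => 0
  puts account_b.value  # => 100

The appeal is composability: two correct atomic operations compose
into a correct larger one, which is exactly where lock-based designs
tend to break down.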

Cheers,
Benjohn
 

M. Edward (Ed) Borasky

Benjohn said:
And Quantum computers on Tuesday this week, apparently:

http://www.techworld.com/opsys/news/index.cfm?newsID=7972&pagtype=all

Seeing as it makes use of a Tuneable Flux Transformer, I'm pretty sure
it's going to work like a dream ;-)
Yeah ... anybody remember gallium arsenide? Liquid nitrogen cooled CMOS
and the GF-10? Occam? People who said it was impossible to build a
vectorizing C compiler? Lisp machines?

Speaking of the "need" for faster computers, I keep thinking of the
story of the two hunters who suddenly discovered they were being chased
by a bear. One of them sat down and started to change into his running
shoes. The other one said, "I don't care what kind of shoes you have,
you aren't going to be able to outrun that bear." To which the first
replied, "It's not the *bear* I have to outrun." :)
 

Tim Bray

1. I've been here before -- at the point where general-purpose SISD
architectures ran out of steam and special-purpose machines
abounded. I spent ten years working for a company, Floating Point
Systems, that *made* special-purpose machines. There's a whole
generation of people out there, myself among them, that ended up
finding other things to do when the general-purpose SISD (and
*CISC*) machine known as the Pentium essentially wiped everything
else off the map. So I view the current "trend" to multicore
systems and more dreams of massively parallel computers becoming
mainstream as only a temporary thing ... a swing of a pendulum to
one extreme ... general purpose SISD machines will be back!

Let me rephrase that: general-purpose computers always beat
special-purpose computers. I've seen LISP machines, database
machines, even full-text-search machines come and go. Right now,
it's hard to be a single-core server chip. They're all still very
general-purpose; it's just that silicon builders, at the moment,
*can* take advantage of Moore's law to build wider (multicore) CPUs,
but have failed to use it to make any one thread faster, with the
occasional exception of the IBM Power chips.

So I think the problem is real. [Disclosure: I work for Sun, maker
of the 8-core/32-thread T1, and there are more where that came from].

-Tim
 
