Francis Cianfrocca
M. Edward (Ed) Borasky said: First of all, while a Turing machine is a great "experimental animal",
our "practical computing machines" are Von Neumann machines rather than
Turing machines. And Von Neumann machines derive from a model of a human
working with a mechanical desk calculator. In fact, the people -- rooms
full of people -- who operated them were called "computers". Properly
speaking, a Von Neumann machine is an "electronic" computer or
"automatic" computer -- a machine doing what people used to do.
Second, I don't think the actual unification of Church and Turing models
occurred *long* before digital computers existed. The basic Turing and
Godel and Church papers were written in the early 1930s, and by the late
1930s there were working relay-based digital computers. Vacuum tube
machines started showing up (in public, anyhow) in the early 1950s and
in "war rooms" shortly after the end of WW II.
Ed, I'll certainly defer to the memory of a guy who was there ;-). I
wasn't, but here's my recollection of the history: You're absolutely
right about the big stuff being done by Turing and Church in the early
30s (and don't forget Godel's papers published in the middle of the
decade, although he was addressing different questions). Turing was at
Princeton for the 1937-38 academic year, and there he and Church worked
together, and they successfully unified Church's lambda calculus with
Turing's tape-based universal machine that winter.
By 1948, von Neumann had suggested the stored-program digital computer,
the design we now call the "von Neumann architecture", which is the
model for all of the "computers" we recognize as such today. The
ten-year interval
between 1938 and 1948 is what I had in mind when I said "long before."
You're also right about the early impetus for mechanized calculation. It
was ballistic equations for artillery, and also the hydrodynamics of
chemical explosives for plutonium-based atomic bombs. The various
machines that were invented to solve these problems basically had
programs hard-wired into them, with solder. von Neumann's suggestion was
to store what would later be called "software" directly in the "memory
cells" of the machines, and in so doing he invented the model that made
computers valuable for general use.
But note how completely reminiscent both models (the hard-wired "ENIAC"
style and the stored-program style) are of classic Turing machines!
Computer memory is
just a tessellation structure, and a computer program (even a
hard-soldered one) is a linear sequence of values corresponding
precisely to Turing's "tape."
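To make the tape analogy concrete, here is a minimal Ruby sketch (a toy of my own, with made-up names like RULES and run, not anything out of Turing's paper): the "tape" is nothing more than an ordinary array of cells, and the "head" is just an index into it.

  # Toy example: the "tape" is a plain Ruby array, the "head" is an index.
  # One rule table drives the machine: scan right over 1s, and when the
  # first 0 is found, write a 1 there and halt (i.e. append a 1 to the run).
  RULES = {
    [:scan, 1] => [:scan, 1, :right],   # keep moving right over 1s
    [:scan, 0] => [:halt, 1, :stay]     # write 1 into the first empty cell, stop
  }

  def run(tape)
    state, head = :scan, 0
    until state == :halt
      state, symbol, move = RULES[[state, tape[head]]]
      tape[head] = symbol
      head += 1 if move == :right
    end
    tape
  end

  p run([1, 1, 1, 0, 0])   # => [1, 1, 1, 1, 0]

Swap the array for RAM and the rule table for an instruction decoder and you have, in caricature, exactly the machine described above.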
The fact is that a Turing machine is described in terms of components
with direct physical analogs (a tape cell with 29 possible states can
easily be modeled with five of what we now call "bits", and you will
please pardon the pun on "analogs"), and those physical analogs
(electronic circuits) turned out to be both easy to make and blazingly
fast once the transistor came into wide use in the mid-Fifties. None of
this is true of a logical computing machine based on the lambda
calculus. And that, logically enough, is why for the very longest time
languages like Lisp suffered from horrible performance and incredible
memory requirements, problems that were solved only recently, and only
by extreme cleverness on the part of compiler writers.
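To give a flavor of the mismatch, here is a toy illustration in Ruby lambdas (names like zero, succ, add, and to_int are mine; no real Lisp represents numbers this way): in the pure lambda calculus even a number is a function, so adding two numbers means building and unwinding nested function applications rather than running one ADD instruction on a couple of memory cells.

  # Church numerals: a "number" n is the function that applies f n times.
  # There is no bit pattern sitting in a memory cell for the hardware to add.
  zero = ->(f) { ->(x) { x } }
  succ = ->(n) { ->(f) { ->(x) { f.(n.(f).(x)) } } }
  add  = ->(m) { ->(n) { ->(f) { ->(x) { m.(f).(n.(f).(x)) } } } }

  # Recover an ordinary integer by counting applications.
  to_int = ->(n) { n.(->(x) { x + 1 }).(0) }

  two   = succ.(succ.(zero))
  three = succ.(two)
  p to_int.(add.(two).(three))   # => 5

Getting from something like that down to hardware speed is exactly the sort of compiler cleverness I mean.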
Whereas almost 50 years ago, John Backus and his crew already had
Fortran compilers that could execute numerical methods at a pretty high
fraction of the available machine speed. (Interestingly, according to
Backus' notes, the first Fortran compiler took 18 man-years to build,
and *most* of that time went into hand-writing the front-end, a job that
became automated and near-trivial by the mid-Seventies.)
When I was a kid, I spent all my allowances buying TTL gates and
hand-wiring digital logic. Then when I first got my hands on a computer,
the process of learning machine language and then assembler was
near-instantaneous, because I knew exactly what those instructions were
doing physically in the hardware. Memory-addressing and pointers made
complete sense from the first moment. I was shocked when I first
encountered C, years later, that it was possible to program computers at
such a high level! My point is that putting values into memory cells,
moving them around, and adding and subtracting them is not only the
easiest known way to build a useful (Turing-complete) computer, but may
also be the easiest way for most people to model computations. As an
economic proposition, then, regardless of all this theory and history,
I'm no longer interested in FP for mainstream use. Ruby, on the other
hand, is a step in the right direction.
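For contrast with the Church-numeral toy above, here is the "values in memory cells" picture of a trivial job (summing a few numbers), again just an illustrative Ruby sketch with variable names of my own, written the way a machine-language programmer thinks about it:

  memory = [3, 1, 4, 1, 5, 9, 2, 6]   # "memory cells" holding the data
  acc    = 0                          # an accumulator register
  addr   = 0                          # an address register, i.e. a pointer
  while addr < memory.length
    acc  += memory[addr]              # fetch a cell and add it in
    addr += 1                         # bump the pointer to the next cell
  end
  puts acc                            # => 31

A pointer is nothing more than that address register, which is why the idea seems so natural when you come at it from the hardware side.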
Aside: I used to think the fad of teaching only Scheme that swept our
university CS departments ten or so years ago was extraordinarily
wasteful and damaging. They thought that teaching people how to understand
computation using the "more-pure" functional style was preferable to
just teaching people how to program. Well, compared to what they
replaced it with (algorithm cookbooks in Java), it may not have been so
bad.
But now that I've declared Lisp to be a fascinating part of computing's
past (<dons asbestos underwear />), the question is: what lies ahead? It
would be charitable to compare our current computer technology, and the
applications we put it to, to the Model T and the candlestick telephone.
The more interesting challenge is to go beyond the solved problem of
Turing-complete computations, and figure out how to linguistically and
metaphorically express the large-scale information-driven processes that
humans work with every day. Someone upthread started hinting at that.
There are already some beautiful but flawed examples of this new
thinking (Erlang for one, the original conception of Smalltalk for
another), but it's still very, very early.