the parens in lisp dialects are there for a reason... macros.


ara.t.howard

Francis Cianfrocca said:
I'm glad you enjoyed the trip down memory lane ;-). I gave up on functional
programming over a dozen years ago, for two reasons: first, I learned the
hard way that writing really good compilers for lambda-based languages
requires a huge amount of mathematical knowledge, and mine doesn't go beyond
partial diffeqs. Second, my experiments with ordinary professional
programmers proved that the proportion of them who are mentally receptive to
the functional style is too small to be commercially valuable. I was
completely taken with FP myself, but one programmer does not a team make.

can you elaborate? i'm very interested in your personal opinion without
regard to other team members (i work with only one other person and she's a
math genius).

regards.

-a
 

M. Edward (Ed) Borasky

Francis said:
I'm glad you enjoyed the trip down memory lane ;-). I gave up on
functional programming over a dozen years ago, for two reasons: first, I
learned the hard way that writing really good compilers for lambda-based
languages requires a huge amount of mathematical knowledge, and mine
doesn't go beyond partial diffeqs.
Ah, but you can *buy* or download for free really good compilers for
functional languages. See the conference announcement I posted a couple
of days ago. Maybe you'd even like to come visit the City of Roses? :)

Francis said:
To your point about regexes: remember that they are Type-3 (regular)
languages in the Chomsky hierarchy, whereas most programming languages
(including useful DSLs) are Type-2 languages (context-free grammars).
AFAIK, a language whose expressions can be reduced to NFAs cannot be
Turing-complete, but correct me if I'm wrong on that.
Right ... I don't know who first made the point about the tradeoff
between a language's expressive power and the ability to reason about
programs in that language -- regular languages have the least expressive
power and are the easiest to reason about. However, I first saw it
elegantly expressed in James Peterson's book on Petri nets. I don't have
it here, but I can dig it up.
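
Concretely, and just as a toy Ruby sketch: balanced parentheses are the
textbook context-free-but-not-regular language. No finite automaton can
track unbounded nesting, but a counter standing in for a stack handles
it in a few lines:

    def balanced?(s)
      depth = 0
      s.each_char do |c|
        depth += 1 if c == '('
        depth -= 1 if c == ')'
        return false if depth < 0   # a closer with no matching opener
      end
      depth.zero?
    end

    balanced?("(()())")   # => true
    balanced?("(()")      # => false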
Francis said:
With DSLs, I'm thinking about tools for programmers, not for
non-programmers. (Although there is a very interesting class of
highly-successful mini-languages for non-programmers, like MS Office
macros.) My point about "tooling" was in regard to automatic programs
that can reason about other programs.
1. MS Office itself is a useful domain-specific language for
non-programmers. I haven't found VBA usable by non-programmers, however.
2. Regarding "tooling" -- if programmers are willing to restrict
themselves to constructs that can be tooled, of course. They tend not to
be willing to use functional languages, they tend not to be willing to
declare types of variables, and they tend to balk at *any* restriction
on their ability to shoot themselves and their customers in the foot. :)

But of course, the Rails folks rediscovered all of this -- the power
that comes from restricting some freedoms. Maybe the Python motto --
"There's only one way to do it" is the right thing after all.

Francis said:
DSLs and other mini-languages are
far easier to write workbenches and IDEs for, which I think is really
interesting and potentially a huge productivity boost. And again, it's
the reduction from full Turing-completeness that may be the secret
sauce. I wish I could prove that, but I've already told you that I'm no
mathematician, so it's a hunch at best.
It's an obvious conclusion -- I'll dig up the Peterson reference. :)
 

Francis Cianfrocca

M. Edward (Ed) Borasky said:
Ah, but you can *buy* or download for free really good compilers for
functional languages. See the conference announcement I posted a couple
of days ago. Maybe you'd even like to come visit the City of Roses? :)

I'm awfully impressed by the recent quality of the compilers for
lambda-languages, especially Lisp. This was far from true a dozen years
ago when I was going through the exercise. Also, I was a professional
compiler-writer at the time. I ditched that career about five minutes
after I read the Dr. Dobb's article announcing Java, in May 1995. (Well,
not completely. That summer I wrote a bytecode->native x86 compiler. But
the handwriting was on the wall.)

Interesting, your comment about Rails. (Although now we're mixing CoC up
with my intuitions about fundamentally restricting language power.) You
notice how people talk about Rails as the "DSL of the web"?

Part of what I'm interested in is permitting arbitrary Ruby code to run
in IoC containers with DI. That does imply some restrictions on how you
write your classes, not least so the framework can automatically give
you diagnostics and management tools. The simple fact that people use
TDD effectively in the agile world shows that there is something to
this: you're writing code-objects that are designed to be testable.
Well, I'd like to discover what the best practices are for writing
code-objects that are designed to be distributable and composable. (Of
course we already know about an approach that *won't* work --
convention-based reflection in the manner of Java Beans.)
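
A minimal sketch of the kind of thing I mean -- the Container class and
the registration names below are invented for illustration, not taken
from any real framework. A component declares its dependencies as plain
constructor arguments, so the framework can see them, supply them, and
swap them out for diagnostics or tests:

    require 'logger'

    class Service
      def initialize(logger)   # the dependency arrives from outside
        @logger = logger
      end

      def run
        @logger.info("running")
      end
    end

    # the container maps names to factories; resolving a name builds
    # the object and, recursively, everything it depends on
    class Container
      def initialize
        @factories = {}
      end

      def register(name, &factory)
        @factories[name] = factory
      end

      def resolve(name)
        @factories[name].call(self)
      end
    end

    c = Container.new
    c.register(:logger)  { Logger.new($stdout) }
    c.register(:service) { |box| Service.new(box.resolve(:logger)) }
    c.resolve(:service).run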
 

Chad Perrin

M. Edward (Ed) Borasky said:
But of course, the Rails folks rediscovered all of this -- the power
that comes from restricting some freedoms. Maybe the Python motto --
"There's only one way to do it" is the right thing after all.

It's not restriction that does this for you -- it's well-designed
default abstractions. With some languages and for some tasks, those
defaults must exist in your head, and with others they exist in the
toolset (the language interpreter, the framework, et cetera). In either
case, they need not be restrictions, per se. Restrictions are
inviolable defaults, and Python's inviolable defaults are reasonably
well-designed, but they're still limiting when one must do something
that is best accomplished in a manner incompatible with those defaults.

Perl has a "horn of plenty" approach to defaults, where you pick and
choose your default behavior set from an incredibly large menu of
discrete defaults to suit your personal taste and your problem domain.
This has its ups and downs: on the upside, you have more flexibility
when you must accomplish something not covered by the vision of the guys
that created the language's defaults and, on the downside, you have a
language so complex that you're almost guaranteed to end up using a
suboptimal set of chosen default abstractions for your task because you
don't know the entire language (Larry Wall has, himself, said he doesn't
know all of Perl).

Lisp, in my understanding, achieves greater flexibility than Perl by
making it absurdly easy to arbitrarily generate the defaults that best
suit the task at hand. Ruby, meanwhile, is a great compromise between
all three of those approaches: there are default abstractions that are
clearly defined, and it encourages the programmer to use them for
solving problems, but it makes it possible to use a plethora of other
abstractions that are nondefault and still available to the programmer.
When that's not enough, it provides macros for building your own
defaults from "scratch" with notable ease.
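
To be clear about terms: what Rubyists usually mean by a "macro" is a
class-level method that writes other methods at class-definition time.
A minimal sketch, hand-rolling Ruby's built-in attr_accessor under an
invented name:

    # my_attr does at class-definition time what a Lisp macro does at
    # macroexpansion time: it generates code (a reader and a writer)
    class Module
      def my_attr(name)
        define_method(name) { instance_variable_get("@#{name}") }
        define_method("#{name}=") do |val|
          instance_variable_set("@#{name}", val)
        end
      end
    end

    class Point
      my_attr :x
      my_attr :y
    end

    pt = Point.new
    pt.x = 3
    pt.x   # => 3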

Obviously, this is all oversimplified, and I'm hampered by varying
levels of familiarity with the various languages discussed when I
explain my take on them, but I'm pretty sure nobody knows everything
about all these languages so I'll just chalk it up to "the human
condition".
 

Francis Cianfrocca

Steven said:
Turing machines derive from a model of a human working with pencil and
paper. That's pretty fundamental.

Steve

So then you would argue that imperative languages are fundamentally more
natural for people to program in than functional languages?
Notwithstanding that they are mathematically equivalent?

I'm not an FP partisan myself so I won't argue with you, but I have a
feeling some Lisp and Ocaml fanboys are firing up their blowtorches for
you ;-)
 

Francis Cianfrocca

Chad said:
It's not restriction that does this for you -- it's well-designed
default abstractions. With some languages and for some tasks, those
defaults must exist in your head, and with others they exist in the
toolset (the language interpreter, the framework, et cetera). In either
case, they need not be restrictions, per se. Restrictions are
inviolable defaults, and Python's inviolable defaults are reasonably
well-designed, but they're still limiting when one must do something
that is best accomplished in a manner incompatible with those defaults.


That's all perfectly fine. Every language gives you a set of metaphors
with which you can model the real-world domains you care about, and
that's why we use languages in the first place. Because you get leverage
from their conventions and restrictions. And of course this applies to
natural languages like English and French as well as to context-free
programming languages.

What I was reaching for was some insight into why full-powered
(Turing-complete) computer languages are so difficult to write automated
tools and workbenches for. The challenge is not how humans can reason
about programs (undecidability is not a big hurdle for us), but how
programs can reason about programs. And my (unproved) intuition is that
mathematically restricted DSLs (whether they are Lisp programs, Ruby
programs, graphical widgets, physical constructions, or any other such
equivalent thing) may be so much easier to tool that they can give a
vast productivity improvement.

Low-grade example: I've been working on a component-based
web-development framework in Ruby, and most of it is DSL-driven. So far,
it's been remarkably easy to build graphical tools for it.
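
Just to illustrate the principle -- the names below are invented, not
from my actual framework. When declarations are plain data, a graphical
tool can enumerate a component's structure without executing anything:

    class Component
      # each declaration is recorded as data on the class...
      def self.field(name, type)
        fields << [name, type]
      end

      def self.fields
        @fields ||= []
      end
    end

    class LoginForm < Component
      field :username, :string
      field :password, :password
    end

    # ...so a tool can reason about the component instead of running it
    LoginForm.fields   # => [[:username, :string], [:password, :password]]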
 

Chad Perrin

M. Edward (Ed) Borasky said:
I did something similar a long time ago in macro assembler, which I
learned a long time before I learned Lisp. Interestingly enough, I dug
out my copy of Leo Brodie's "Thinking Forth" last night. That
programming style, and the term "factoring", are prominent in that work
(which, by the way, is available online now as a PDF). I think it's
something all programmers of a certain level of maturity do regardless
of language.

That's true, but the advantage to Lisp, it seems to me, isn't merely a
perception that has grown from "we did it first" syndrome. I think it
boils down to the fact of Greenspun's Tenth Rule:

Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C
or Fortran program contains an ad hoc, informally-specified, bug-ridden,
slow implementation of half of Common Lisp."

More generalized, any sufficiently complicated program written in a
language that doesn't have the same facilities for (as I put it earlier)
jargon-definition as Lisp (which means basically all languages, still,
though some like Ruby are getting awfully close) must essentially
implement part of a Lisp semantic structure for certain types of highly
complex problems. In the case of something sufficiently low-level, such
as C or Fortran, that pretty much means a very scattered Lisp
implementation for particularly complex problems of any type.

For a significantly Lispy language like Ruby, however, there's a
nontrivial percentage of potential problem domains for which Ruby
already contains all the necessary behaviors of Lisp to solve them in
the most efficient manner possible, and in addition to that Ruby
implements some excellent features that are not default to Lisp but
might be created within Lisp to solve certain types of problems. This
makes Ruby more convenient for solving some problems and (roughly)
equally convenient for solving other problems. There is then a
fairly small set of problem types for which one would have to
reimplement Lisp to solve the problem most efficiently in Ruby.

The key, of course, is to pick the language that best suits what you do
when programming: if your language of choice already does (almost)
everything for which you'd have to create your own syntax in Lisp, you
probably have the right language for the job. If not, you might need to
find a different language that does so -- or use Lisp to abstract the
problem sufficiently to make it easy to solve.

The way this all works can be analogized to trigonometry: in trig, there
are three basic "laws" from which all other rules for doing trigonometry
are derived. Lisp is like those "laws" with a syntax designed to
simplify the process of derivation. Other languages (such as Fortran,
to pick a target at semi-random) are like a set of useful derivations
that you've memorized to take a test in a trig class. One can work
backward from those derivations to arrive at the original three "laws"
of trigonometry, and from there arrive at other derivations that can be
used to solve certain problems, and that tends to be necessary in a lot
of complex problem solving to some degree at least. Some languages
provide closer approximations to those three basic principles by
default, so that less work is necessary to solve problems not central to
the design of the language (I'm looking at you, Ruby), and so on.

These opinions are not representative of the management. No programmers
were harmed in the making of this lengthy ramble. We now return you to
your regularly scheduled programming. Hah. Programming. Get it?
 

Chad Perrin

Francis Cianfrocca said:
What I was reaching for was some insight into why full-powered
(Turing-complete) computer languages are so difficult to write automated
tools and workbenches for. The challenge is not how humans can reason
about programs (undecidability is not a big hurdle for us), but how
programs can reason about programs. And my (unproved) intuition is that
mathematically restricted DSLs (whether they are Lisp programs, Ruby
programs, graphical widgets, physical constructions, or any other such
equivalent thing) may be so much easier to tool that they can give a
vast productivity improvement.

Low-grade example: I've been working on a component-based
web-development framework in Ruby, and most of it is DSL-driven. So far,
it's been remarkably easy to build graphical tools for it.

I think object-oriented programming is just a very limited form of what
we're really trying to accomplish when we do things correctly:
componentized abstraction. It's easiest to solve complex problems when
we create abstractions for dealing with them, and abstractions
themselves are often complex problems. Thus, we build abstractions with
which to build abstractions, edging closer to the actual solution to the
problem. In practice, this pretty much boils down to creating a
language within one's language -- a task to which Lisp is particularly
suited, and to which Ruby's design devotes significant attention as
well. A language like Ruby allows rather more to be done without having
to create such abstractions than a "pure" Lisp, however, which makes it
a lot more approachable, and it may thus be a better compromise for the
programming of our time for most problems we're attempting to solve by
programming.
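
A small sketch of what "a language within one's language" can look like
in Ruby -- the route-map vocabulary here is invented purely for
illustration:

    # instance_eval reads the block against a builder object, so the
    # block's contents become a tiny special-purpose notation
    class RouteSet
      attr_reader :routes

      def initialize(&block)
        @routes = {}
        instance_eval(&block)
      end

      def get(path, opts)
        @routes[[:get, path]] = opts[:to]
      end
    end

    routes = RouteSet.new do
      get "/users", :to => "users#index"
      get "/posts", :to => "posts#index"
    end

    routes.routes   # => {[:get, "/users"]=>"users#index", ...}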

Abstractions are like levers: you use a longer lever to move a bigger
problem. Something like Lisp is an incredibly efficient lever-creator.
That's my take, at any rate. The way this relates to Turing
completeness is in the fact that when creating abstractions specific to
your problem, you don't need Turing completeness. Trying to implement
it within your abstraction is a waste of time. Turing completeness
might, however, be more important for the overarching framework within
which you are creating these DSLs that are your abstractions.

An important distinction should be made, as well, between Turing
completeness and Turing equivalency. A "pure" Lisp, for instance, can
be used to create a Turing complete language, but isn't precisely a
Turing complete language itself. Similarly, a "pure" Turing machine can
be used to create a Lisp-complete language, but isn't precisely a Lisp
itself. Thus, Lisp is Turing equivalent, and a Turing machine is Lisp
equivalent, but neither is "complete" within the realm of the other.
Without being a computer science PhD that knows all, I flatter myself to
think I understand the relationship between the two: each is just a
different way of approaching the same problem. The Turing way of doing
it is like trying to ensure you have all the tools you need to solve any
problem with relative ease, while the Lisp way of doing things is more
like trying to ensure you have all the tools you need to create tools
that provide optimal ease of accomplishing the task at hand. Please
excuse the flawed oversimplification.
 

Chad Perrin

Francis Cianfrocca said:
So then you would argue that imperative languages are fundamentally more
natural for people to program in than functional languages?
Notwithstanding that they are mathematically equivalent?

I'm not an FP partisan myself so I won't argue with you, but I have a
feeling some Lisp and Ocaml fanboys are firing up their blowtorches for
you ;-)

I don't know what Steven meant, but I know what I'd mean if I said that:

Turing machines and Lisp machines are equivalently fundamental. After
all, I can work with mathematical functions with pencil and paper, too.
 

gabriele renzi

Francis Cianfrocca wrote:
I've always been fascinated by one specific aspect of functional
languages: in theory, it's possible to use tools to prove programs
correct. What a productivity win that would be.

notice that to prove some properties of a program you don't necessarily
need a functional language.
For example, you may love to take a look at Spark[1], a statically
verifiable subset of Ada, or at Vault[2].

Other languages can prove correctness for *some* functionality.
I had the chance to work with NesC[3], which is a nifty language for
embedded systems, basically C+concurrency+components, where the compiler
checks that there are no race conditions in concurrent code (it gets
some false positives, but no wrong code passes).

[1] http://www.praxis-his.com/sparkada/intro.asp
[2] http://research.microsoft.com/vault/intro_page.htm
[3] http://nescc.sourceforge.net/
 

Francis Cianfrocca

gabriele said:
notice that to prove some properties of a program you don't necessarily
need a functional language.
For example, you may love to take a look at Spark[1], a statically
verifiable subset of Ada, or at Vault[2].

I'm not surprised that someone would mention Diana, the Ada subset with
a formal semantics. People have been working on things like this for
decades, and all of the work is interesting. However, I'm more
interested in the economic value of high-quality software. And the
economic dimensions include such things as widespread availability of
reasonably-priced talent, high productivity, good stability in
production, etc. I think that because of Rails, Ruby may be poised to
break out into the programming mainstream, which is interesting because
of Ruby's underappreciated potential to fundamentally improve the
dynamics of software delivery.

Best wishes, Gabriele, and thank you for writing ;-)
 

Francis Cianfrocca

Chad said:
I don't know what Steven meant, but I know what I'd mean if I said that:

Turing machines and Lisp machines are equivalently fundamental. After
all, I can work with mathematical functions with pencil and paper, too.


You may be right, but my guess is that Steve was trying to say something
like this:

It's fundamentally more natural for people to approach many computations
by imagining a working set of discrete values and constructing recipes
for manipulating them in discrete ways. It's fundamentally less natural
to construct what amounts to a mathematical field to do the same thing.
A perhaps poor analogy: most people have an easier time with arithmetic
on natural numbers than they do with group theory (even though they are
mathematically unified).
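
A trivial Ruby sketch of the two mindsets, for whatever it's worth --
same arithmetic, different mental model:

    # the "Turing" habit: a working set of values, mutated step by step
    total = 0
    [1, 2, 3, 4].each { |n| total += n }

    # the "Church" habit: the answer described as one folded expression
    total = [1, 2, 3, 4].inject(0) { |sum, n| sum + n }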

The contentious word in that paragraph is "natural," and it gets to what
I care about the most (writing all the software that needs to get
written with measurably high quality at a reasonable cost in dollars and
time). Is the Turing approach more "natural" or just more "familiar"
than the Church approach (even though they are mathematically unified)?
 

M. Edward (Ed) Borasky

Francis said:
It's fundamentally more natural for people to approach many computations
by imagining a working set of discrete values and constructing recipes
for manipulating them in discrete ways. It's fundamentally less natural
to construct what amounts to a mathematical field to do the same thing.
A perhaps poor analogy: most people have an easier time with arithmetic
on natural numbers than they do with group theory (even though they are
mathematically unified).
Well ... actually I think the "new math" experiment aka disaster showed
the contrary. It was just as *easy* for grade school children to learn
set theory, binary arithmetic, symbolic logic, group theory, etc., as
it was for them to learn addition, subtraction, multiplication,
division, etc. But it was all useless when they got into the real world
and couldn't balance a checkbook. :)
Francis said:
The contentious word in that paragraph is "natural," and it gets to what
I care about the most (writing all the software that needs to get
written with measurably high quality at a reasonable cost in dollars and
time). Is the Turing approach more "natural" or just more "familiar"
than the Church approach (even though they are mathematically unified)?
Well ... first of all, let's look at why there are computers at all. I
won't expound on it here, since I posted my opinion at
http://borasky-research.blogspot.com/2006/06/why-are-there-computers-attempt-to-be.html.
But given that posting, I think the reason we followed the Turing and
Von Neumann path rather than the Church/Curry/etc. path is simply that
Turing and Von Neumann were both instrumental in applying computer
technology to winning World War II, and Church wasn't. (At least as far
as I know, Church wasn't ... if that's incorrect, please let me know.)

To be a little more accurate, most of the things in computer technology
we remember Von Neumann for occurred *after* WW II and so properly
belong to the early days of the Cold War. Moving on, when it comes to
"writing all the software that needs to get written", there just aren't
any silver bullets, free lunches, etc. There aren't any "global optima"
and I'm not sure there are even local optima. In short, we *satisfice*.
The IAS machines were "good enough", the Cray 1 was "good enough", Lisp
1.5 was "good enough", (holding nose) FORTRAN II was "good enough", and
Ruby 1.8 is "good enough." Ruby 2.0 will be better than 1.8 but it will
still be "good enough."
 

M. Edward (Ed) Borasky

Francis said:
The simple fact that people use
TDD effectively in the agile world shows that there is something to
this: you're writing code-objects that are designed to be testable.
Well, I'd like to discover what the best practices are for writing
code-objects that are designed to be distributable and composable.
Well, the "big deal" these days is Robin Milner's "Pi Calculus", which
is an extension of his Calculus of Communicating Systems (CCS) to
include "mobile" processes. It's essentially a process algebra. As you
might expect, it imposes some heavy declaration burdens on the
programmer in return for being able to prove properties about one's
code. Still, it's getting a lot of hype -- i.e., companies are being
started to "productize" it, etc.
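
If you want the flavor: the whole calculus turns on a single
communication rule. Quoting the standard notation from memory, so
double-check a real reference:

    \bar{x}\langle y \rangle.P \mid x(z).Q \longrightarrow P \mid Q\{y/z\}

That is, a process sending the name y on channel x, in parallel with a
process waiting to receive z on x, steps to P alongside Q with y
substituted for z. Everything provable about mobile processes is built
on that one reduction -- and since channel names themselves can be
sent, processes can rewire their own communication topology, which is
the "mobile" part.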
 

M. Edward (Ed) Borasky

Steven said:
Turing machines derive from a model of a human working with pencil and
paper. That's pretty fundamental.
First of all, while a Turing machine is a great "experimental animal",
our "practical computing machines" are Von Neumann machines rather than
Turing machines. And Von Neumann machines derive from a model of a human
working with a mechanical desk calculator. In fact, the people -- rooms
full of people -- who operated them were called "computers". Properly
speaking, a Von Neumann machine is an "electronic" computer or
"automatic" computer -- a machine doing what people used to do.

Second, I don't think the actual unification of Church and Turing models
occurred *long* before digital computers existed. The basic Turing and
Gödel and Church papers were written in the early 1930s, and by the late
1930s there were working relay-based digital computers. Vacuum tube
machines started showing up (in public, anyhow) in the early 1950s and
in "war rooms" shortly after the end of WW II.
 

M. Edward (Ed) Borasky

gabriele said:
notice that to prove some properties of a program you don't
necessarily need a functional language.
For example, you may love to take a look at Spark[1], a statically
verifiable subset of Ada, or at Vault[2].
IIRC the *original* exercise in proving a program correct was in fact
not for a functional language, but for a flowchart. Was it Floyd??

gabriele also said:
Other languages can prove correctness for *some* functionality.
I had the chance to work with NesC[3], which is a nifty language for
embedded systems, basically C+concurrency+components, where the
compiler checks that there are no race conditions in concurrent code
(it gets some false positives, but no wrong code passes).
I think my philosophy boils down to this:
1. A false positive -- an indication from the tool that a program
contains a race when it in fact does not -- is an indication that the
program is too difficult to read and should be rewritten so it generates
a true negative.
2. By the same token, failure of the verifying tool to prove correctness
of a program in an economically viable amount of time should also be
taken as an indication that the program is too difficult to read and
should be rewritten.
 

M. Edward (Ed) Borasky

Chad said:
That's true, but the advantage to Lisp, it seems to me, isn't merely a
perception that has grown from "we did it first" syndrome. I think it
boils down to the fact of Greenspun's Tenth Rule:

Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C
or Fortran program contains an ad hoc, informally-specified, bug-ridden,
slow implementation of half of Common Lisp."
Well ... I think I disagree with this, having written some moderately
complicated FORTRAN programs and having read through many more. What
they all almost always contain, however, is a domain-specific language
that controls where they find their inputs, what processing they do on
them, where they put their outputs and in what formats, etc., on what we
used to call "control cards". Some of the parsers and DSLs of this
nature are quite complex and rich, but they are in no sense of the word
"Lisp".

Curiously enough, if you read Chuck Moore's description of the evolution
of FORTH, you'll find something similar -- every sufficiently
complicated FORTRAN program contained a huge "case" statement which was
a "language interpreter".

By the way, one of the classic "demonstrations of Artificial
Intelligence", ELIZA, was written not in Lisp but in a list-processing
language patterned on Lisp but implemented in (pregnant pause) FORTRAN.
But the SLIP interpreter, rather than being "sufficiently complicated"
was in fact "delightfully simple."
 

M. Edward (Ed) Borasky

Chad said:
(Larry Wall has, himself, said he doesn't
know all of Perl).
At one point, I pretty much knew "all of Lisp 1.5". When I had an
opportunity to do some (Macintosh) Common Lisp programming two summers
ago, I discovered that, although a lot of "new stuff" had been tacked
on, Lisp 1.5 still works!! I don't think K&R C "still works", I don't
think FORTRAN I "still works", although FORTRAN IV probably does. And I
know one piece of Perl 4 that stopped working somewhere in the 5.6 --
5.8 range -- byte semantics. :) The part of Perl that Larry Wall does
know may not "still work". :)
Chad also said:
Lisp, in my understanding, achieves greater flexibility than Perl by
making it absurdly easy to arbitrarily generate the defaults that best
suit the task at hand. Ruby, meanwhile, is a great compromise between
all three of those approaches: there are default abstractions that are
clearly defined, and it encourages the programmer to use them for
solving problems, but it makes it possible to use a plethora of other
abstractions that are nondefault and still available to the programmer.
When that's not enough, it provides macros for building your own
defaults from "scratch" with notable ease.
With all of the rantings of the Lispniks about how wonderful Lisp macros
are, I've actually never seen what Rubyists refer to as "macros". Where
could I find that -- Ruby constructs called "macros" by Ruby programmers?
 

Chad Perrin

M. Edward (Ed) Borasky said:
With all of the rantings of the Lispniks about how wonderful Lisp macros
are, I've actually never seen what Rubyists refer to as "macros". Where
could I find that -- Ruby constructs called "macros" by Ruby programmers?

I'm no expert in the subject (haven't been so well-blessed as to be
working with a language that provides the mechanisms of proper macros
for most of my programming experience), but my impression is that what
differentiates a "proper macro" from . . . other stuff . . . is the fact
that with Lisp you can treat a collection of characters in your program
as a string, an array-like list, an atomic Thing, or code, all without
having to do any special gymnastics. In other words, there's no
differentiation between code and data, as I understand it.

I at least know you can do such stuff, whether or not that's actually
the defining characteristic of Lisp macros -- but it sure seems like a
good point of differentiation from "other stuff". I suppose Ruby's
macros must be similarly differentiated from other Ruby code.
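
A rough Ruby approximation of that distinction, with the caveat that
it's only an approximation: Ruby keeps code-as-text and code-as-object
as separate kinds of things, where Lisp hands you one list that is both:

    # code as text: inert data until it passes through eval
    code_as_text = "1 + 2"
    eval(code_as_text)     # => 3

    # code as object: a block is already code, passed around as a value
    code_as_value = proc { 1 + 2 }
    code_as_value.call     # => 3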
 

Tom Copeland

M. Edward (Ed) Borasky said:
By the way, one of the classic "demonstrations of Artificial
Intelligence", ELIZA, was written not in Lisp but in a list-processing
language patterned on Lisp but implemented in (pregnant pause) FORTRAN.
But the SLIP interpreter, rather than being "sufficiently complicated"
was in fact "delightfully simple."

Way OT here, but Joseph Weizenbaum (who wrote ELIZA) also wrote an
excellent book "Computer Power and Human Reason". Here's a review:

http://www.ischool.utexas.edu/~palmquis/courses/reviews/amy.htm

Definitely worth picking up if you can find a copy somewhere...

Yours,

Tom
 
