Python syntax in Lisp and Scheme


Peter Seibel

Alexander Schmolck said:
Well, maybe he's seen things like IF*, MVB, RECEIVE, AIF, (or as far as
simplicity is concerned LOOP)...?

I'm not saying that macros always have ill-effects, but the actual
examples above demonstrate that they *are* clearly used to by people
to create idiosyncratic versions of standard functionality. Do you
really think clarity, interoperability or expressiveness is served
if person A writes MULTIPLE-VALUE-BIND, person B MVB and person C
RECEIVE?

Yes. But that's no different with macros than if someone decided that
they like BEGIN and END better than FIRST and REST (or CAR/CDR) and so wrote:

(defun begin (list) (first list))
(defun end (list) (rest list))

As almost everyone who has stuck up for Lisp-style macros has
said--they are just another way of creating abstractions and thus, by
necessity, allow for the possibility of people creating bad
abstractions. But if I come up with examples of bad functional
abstractions or poorly designed classes, are you going to abandon
functions and classes? Probably not. It really is the same thing.

I think the example isn't a bad one, in principle; in practice,
however, I guess you could handle this better in python.

Well, I admire your faith in Python. ;-)
I develop my testing code like this:

# like python's unittest.TestCase, only that it doesn't "disarm"
# exceptions
TestCase = awmstest.PermeableTestCase
#TestCase = unittest.TestCase

class BarTest(TestCase):
    ...
    def test_foos(self):
        assert foo(1,2,3) == 42
        assert foo(4,5,6) == 99

Now if you run this in emacs/ipython with '@pdb on' a failure will
raise an Exception, the debugger is entered and emacs automatically
will jump to the right source file and line of code (I am not
mistaken in thinking that you can't achieve this using emacs/CL,
right?)

No, you're mistaken. In my test framework, test results are signaled
with "conditions" which are the Common Lisp version of exceptions. Run
in interactive mode, I will be dropped into the debugger at the point
the test case fails where I can use all the facilities of the debugger
to figure out what went wrong including jumping to the code in
question, examining stack frames, and then if I think I've figured out
the problem, I can redefine a function or two and retry the test case
and proceed with the rest of my test run with the fixed code.
(Obviously, after such a run you'd want to re-run the earlier tests to
make sure you hadn't regressed. If I really wanted, I could keep track
of the tests that had been run prior to such a change and offer to
rerun them automatically.)
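In Python one could approximate this retry-and-proceed behavior with an explicit runner; the function names here are invented for illustration, and a real version would drop into an interactive debugger rather than take a fix as a callback:

```python
def run_tests(tests, fix_hook=None):
    """Run (name, test) pairs in order. On a failure, optionally call
    fix_hook (e.g. to redefine a broken function), retry once, and then
    proceed with the remaining tests -- a rough analogue of invoking a
    RETRY restart from the Common Lisp debugger."""
    results = {}
    for name, test in tests:
        try:
            test()
            results[name] = 'pass'
        except AssertionError:
            if fix_hook is not None:
                fix_hook(name)  # chance to redefine the broken code
                try:
                    test()
                    results[name] = 'pass-after-fix'
                    continue
                except AssertionError:
                    pass
            results[name] = 'fail'
    return results
```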
and I can interactively inspect the stackframes and objects that
were involved in the failure.

Yup. Me too. Can you retry the test case and proceed with the rest of
your tests?
I find this *very* handy (much handier than having the wrong result
printed out, because in many cases I'm dealing with objects such as
large arrays which are not easily visualized).

Once the code and test code works I can easily switch to mere
reporting behavior (as described by andrew dalke) by uncommenting
unittest.TestCase back in.

Yup. That's really handy. I agree.

So, in all sincere curiosity, why did you assume that this couldn't be
done in Lisp? I really am interested, as I'm writing a book about
Common Lisp and part of the challenge is dealing with people's
existing ideas about the language. Feel free to email me directly if
you consider that too far offtopic for c.l.python.

-Peter
 

Jon S. Anthony

Alex Martelli said:
At the moment the only thing I am willing to denounce as idiotic are
your clueless rants.

Excellent! I interpret the old saying "you can judge a man by the quality
of his enemies" differently than most do: I'm _overjoyed_ that my enemies
are the scum of the earth, and you, sir [to use the word loosely], look as
if you're fully qualified to join that self-selected company.

Whatever.

/Jon
 

Jock Cooper

Alexander Schmolck said:
I think this is vital point. CL's inaccessibility is painted as a feature of
CL by many c.l.l denizens (keeps the unwashed masses out),

I have never seen this in c.l.l. - most seem to feel the inaccessibility
("ew the parens") is a necessary evil.
but IMO the CL
community stunts and starves itself intellectually big time because CL is (I
strongly suspect) an *extremely* unattractive language for smart people
(unless they happen to be computer geeks).

Well, Hofstadter seems pretty smart to me; I don't think he's a computer geek,
and he's pretty fascinated by Lisp. See G.E.B. and Metamagical Themas.
 

Alexander Schmolck

Suppose I cut just one arm of a conditional. When I paste, it is
unclear whether I intend for the code after the paste to be part of
that arm, part of the else, or simply part of the same block.

Sorry, I have difficulties understanding what exactly you mean again. Would
you mind cutting and pasting something like the THEN/ELSE in the examples
below (say just marking the cut region with {}s and where you'd like to
paste with @)?

(if CONDITION
    THEN
    ELSE)

if CONDITION:
    THEN
else:
    ELSE
`immediate visual feedback' = programmer discipline
Laxness at this point is a source of errors.

You got it backwards.
Not forgetting to press 'M-C-\' = programmer discipline.
Laxness at this point is a source of errors.

And indeed, people *do* have to be educated not to be lax when editing lisp -
newbies frequently get told in c.l.l or c.l.s that they should have reindented
their code because then they would have seen that they got their parens mixed
up.

OTOH, if you make an edit in python the result of this edit is immediately
obvious -- no mismatch between what you think it means and what your computer
thinks it means and thus no (extra) programmer discipline required.

Of course you need *some* basic level of discipline to not screw up your
source code when making edits -- but for all I can see at the moment (and know
from personal experience) it is *less* than what's required when you edit lisp
(I have provided a suggested way to edit this particular example in emacs for
python in my previous post -- you haven't provided an analogous editing
operation for lisp with an explanation why it would be less error-prone).

It is simply an illustration that there is no obvious glyph associated
with whitespace, and you wanted a concrete example of something that can't
be displayed.

No, I didn't want just *any* example of something that can't be displayed; I
wanted an example of something that can't be displayed and is *pertinent* to
our discussion (based on the Quinean assumption that you wouldn't have brought
up "things that can't be displayed" if they were completely beside the
point).

me:
[more dialog snipped]
I cannot read Abelson and Sussman's minds, but neither of them are
ignorant of the vast variety of computer languages in the world.
Nonetheless, given the opportunity to choose any of them for
exposition, they have chosen lisp. Sussman went so far as to
introduce lisp syntax into his book on classical mechanics.

Well the version of SICM *I've* seen predominantly seems to use (infixy) math
notation, so maybe Sussman is a little less confident in the perspicuousness
of his brainchild than you (also cf. Iverson)?
Apparently he felt that not only *could* people read ')))))))', but
that it was often *clearer* than the traditional notation.

Uhm, maybe we've got a different interpretation of 'read'?

If by 'read' you mean 'could hypothetically decipher', then yeah, sure with
enough effort and allowable margin of error, people can indeed 'read'
')))))))' and know that it amounts to 7 parens, and with even higher effort
and error margins they should even be able to figure out what each ')'
corresponds to.

I'm equally confident that you'd be in principle capable of 'deciphering' a
printout of my message in rot-13, modulo some errors.

I nonetheless suspect I might hear complaints from you along the lines of "I
couldn't read that" (if you had some reason to expect that its contents would
be of value to you in the first place).

I'm also pretty sure if I gave you a version with each line accompanied by its
rot-13 equivalent (and told you so) you'd just try to read the alphabetical
lines and ignore the rot-13 as noise (even if I told you that the rot-13 is
really the canonical version and the interspersed lines are just there for
visual convenience).

Now it's pretty much exactly the same for lisp code and trailing parens -- any
sane person in a normal setting will just try to ignore them as best as she
can and go by indentation instead -- despite the fact that doing so risks
misinterpreting the code, because the correspondence between parens and
indentation is unenforced and exists purely by convention (and lispers even
tend to have slightly different conventions, e.g. IF in CL/scheme) and C-M-\.
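That unenforced correspondence can be made concrete. Here is a toy Python check that flags lines whose indentation disagrees with the running paren depth; real Lisp indentation conventions are far subtler than "two spaces per level", so this only illustrates the redundancy under discussion:

```python
def indent_mismatches(source, width=2):
    """Report 1-based line numbers whose leading indentation disagrees
    with the running paren depth -- roughly the lines that re-indenting
    the region in an editor would move."""
    depth = 0
    bad = []
    for lineno, line in enumerate(source.splitlines(), 1):
        stripped = line.lstrip()
        # compare actual indent against a naive depth-based indent
        if stripped and len(line) - len(stripped) != depth * width:
            bad.append(lineno)
        depth += line.count('(') - line.count(')')
    return bad
```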

Reading *to me* means extracting the significant information from some written
representation and the ability to read is to do so with a reasonable amount of
effort. So by my definition, if a certain aspect of a written representation is
systematically and universally ignored by readers (at their peril) then surely
this aspect is unlikely to get points of maximum readability and one might
even conclude that people can't read so-and-so?

I don't personally think (properly formatted) lisp reads that badly at all
(compared to say C++ or java) and you sure got the word-separators right. But
to claim that using lisp-style parens are in better conformance with the
dictum above than python-style indentation frankly strikes me as a bit silly
(whatever other strengths and weaknesses these respective syntaxes might
have).

Obviously the indentation.
But I'd notice the mismatch.

(Hmm, you or emacs?)
If I gave you a piece of python code jotted down on paper that (as these
hypothetical examples usually are) for some reason was of vital importance
but I accidentally misplaced the indentation -- how would you know?

Excellent point. But -- wait! Were it Lisp, how would I know that you didn't
intend e.g.

(if (bar watz) foo)

instead of

(if (bar) watz foo)

?

Like in so many fields of human endeavor, XML provides THE solution:

<if><bar/>watz foo</if>

So maybe we should both switch to waterlang, eh?


Moral: I really think your (stereotypical) argument that the possibility of
inconsistency between "user interpretation" and "machine interpretation" of a
certain syntax is a feature (because it introduces redundancy that can be
used for error detection) requires a bit more work.


'as

p.s:

[oh, just to demonstrate that large groups of trailing parens actually do
occur and that, as has been mentioned, even perl has its uses]:

/usr/share/emacs/> perl -ne '$count++ if m/\){7}/; END{print "$count\n";}' **/*el
2008
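The same count is easy to spell in Python; to keep the sketch self-contained it works over text rather than walking a directory tree like the one-liner does:

```python
import re

def count_paren_runs(text, n=7):
    """Count lines containing a run of at least n closing parens --
    the Python spelling of the perl one-liner's m/\){7}/ test."""
    pattern = re.compile(r'\){%d}' % n)
    return sum(1 for line in text.splitlines() if pattern.search(line))
```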
 

Pekka P. Pirinen

Alex Martelli said:
I have given this repeatedly: they can (and in fact have) tempt programmers
using a language which offers macros (various versions of lisp) to, e.g.,
"embed a domain specific language" right into the general purpose language.
I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I
have seen computer scientists with modest grasp of integrated circuit design
embed half-baked hardware-description languages (_at least_ one different
incompatible such sublanguage per lab) right into the general-purpose
language, and tout it at conferences as the holy grail

But this type of domain-specific language is not the advantage that
people mean. (After all, this type of task is rare.) They don't mean
a language for end-users, they mean a language for the programmers
themselves.

Any large software system builds up a domain-specific vocabulary;
e.g., a reactor-control system would have a function called
SHUTDOWN-REACTOR. In other languages, this vocabulary is usually
limited to constants, variables and functions, sometimes extending to
iterators (represented as a collection of the above); whereas, in
Lisp, it can include essentially any kind of language construct,
including ones that don't exist in the base language. E.g., a
reactor-control system can have a WITH-MAINTAINED-CONDITION context
(for the lack of a better word).

Whether or not the expansion of WITH-MAINTAINED-CONDITION is
particularly complex, the ability to indicate the scope of the
construct by enclosing a code block in a "body" is one of the most
useful aspects of this style of program construction. (This can be
done with HO functions in a fairly nice way, I know, assuming the
implementation of the construct does not need to examine or modify the
body code.)
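A Python approximation of such a body-scoped construct is a context manager. WITH-MAINTAINED-CONDITION itself is hypothetical, so this only sketches the shape (a real reactor-control version would monitor continuously rather than checking at entry and exit):

```python
from contextlib import contextmanager

@contextmanager
def maintained_condition(check, on_violation):
    """Body-scoped construct in the WITH-MAINTAINED-CONDITION style:
    verify an invariant at entry to and exit from the enclosed block,
    reporting where it was violated."""
    if not check():
        on_violation('entry')
    try:
        yield
    finally:
        if not check():
            on_violation('exit')
```

Usage: `with maintained_condition(lambda: temp < 100, alarm): ...` wraps the body exactly the way the macro's "body" argument would.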

Another very common language feature in these domain-specific
languages is a definer macro. When a number of similar entities need
to be described, a Lisp programmer would usually write a
DEFINE-<entity> macro which would generate all the boilerplate code
for initialization, registration, serialization, and whatever else
might be needed. This is also the way high-level interfaces to many
Lisp packages work: e.g., to use a Lisp GUI package you would
typically write something like
(define-window my-window (bordered-window)
  :title "My Application"
  :initial-width (/ (screen-width) 2)
  ...
  :panes (sub-window ...)
  )
and that would be 80% of the functionality. (Then you'd have to write
methods for the last 20%, which would be the hard bit.) Matthew
Danish's DEFINSTRUCTION macro in another subthread is a good example as
well.
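The non-macro analogue of such a definer is a registry function that fills in the boilerplate from a declarative description; everything below (names, fields, defaults) is invented purely to show the shape:

```python
REGISTRY = {}

def define_entity(name, **options):
    """What a Lisp DEFINE-<entity> macro automates: build the entity
    from a declarative description, apply defaults, and register it so
    the rest of the system can find it."""
    entity = {'name': name, 'title': name}  # defaults
    entity.update(options)                  # initialization
    REGISTRY[name] = entity                 # registration boilerplate
    return entity
```

What the function version cannot do, and the macro can, is introduce new binding forms or generate code at compile time from the description.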
 

Terry Reedy

Sean Ross said:
My idea would be to define imap as follows:

def imap(&function, *iterables):
    iterables = map(iter, iterables)
    while True:
        args = [i.next() for i in iterables]
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)

and the use could be more like this:

mapped = imap(sequence) with x:
    print x # or whatever
    return x*x

with x: ... creates a thunk, or anonymous function, which will be fed as an
argument to the imap function in place of the &function parameter.
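For comparison, without the proposed `&function`/`with` syntax the same generator is already expressible in ordinary Python; the thunk just has to be passed explicitly (spelled here with modern `next()` rather than the `i.next()` of the proposal):

```python
def imap(function, *iterables):
    """Plain-Python version of the proposed imap: yield tuples when
    function is None, otherwise apply function to each batch of items,
    stopping when any iterable is exhausted."""
    iterators = [iter(it) for it in iterables]
    while True:
        try:
            args = [next(i) for i in iterators]
        except StopIteration:
            return
        yield tuple(args) if function is None else function(*args)
```

For example, `list(imap(None, 'ab', 'cd'))` yields `[('a', 'c'), ('b', 'd')]`.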

I find it counter-intuitive both that 'with xx' tacked on to the end
of an assignment statement should act +- like an lambda and even more
that the result should be fed back and up as an invisible arg in the
expression. For someone not 'accustomed' to this by Ruby, it seems
rather weird. I don't see any personal advantage over short lambdas
('func' would have been better) and defs, using 'f' as a default
fname.

....
or, if we want to be explicit:

foobar = map(&thunk, list1, list2, list3) with x, y, z:
    astatement
    anotherstatement

One could intuitively expect the invisible arg calculated lexically
after the call to be the last rather than first.
so that we know where the thunk is being fed to as an argument. But, this
would probably limit the number of blocks you could pass to a function.

Anyway, as I've said, these are just some fuzzy little notions I've had in
passing. I'm not advocating their inclusion in the language or anything like
that. I just thought I'd mention them in case they're of some use, even if
they're just something to point to and say, "we definitely don't
want that".

That is currently my opinion ;-)

Terry J. Reedy
 

Kaz Kylheku

:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot

``Turing completeness and all that'' is an argument invoked by the
clueless.

Turing completeness doesn't say anything about how long something
takes to compute, or how easy it is to express some interesting
computation.

In the worst case, for you to get the behavior in some program I wrote
in language A in your less powerful (but Turing complete!) language B,
you might have to write an A interpreter or compiler, and then just
run my original program written in A! The problem is that you did not
actually find a way to *express* the A program in language B, only a
way to make its behavior unfold.

Moreover, you may have other problems, like difficulties in
communicating between the embedded A program and the surrounding B
code! These difficulties could be alleviated if you could write an A
*compiler* in B, a compiler which is integrated into your B compiler,
so that a mixture of A and B code is processed as one unit!

This is precisely what Lisp macros allow us to do: write compilers for
embedded languages, which operate together, all in the same pass.

So for instance an utterance in the embedded language can refer
directly to a local variable defined in a lexically surrounding
construct of the host language.
: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?

Sheesh. A function is an object containing a program, and an
environment that establishes the meaning of entities like variables
for that program.

Code, in this context, means source code: a raw data structure
representing syntax.

Code can be analyzed, subject to transformations, and interpreted to
have arbitrary semantics.

A function can merely be invoked with arguments.

A compiler or interpreter for a functional language like Haskell still
has to deal with the representation of the program at some point: it
has to parse the source characters, recognize the syntax and translate
it into some meaning.

Lisp macros are part of the toolset that allow this translation itself
to be programmable. Thus you are not stuck with a fixed phrase
structure grammar with fixed semantics.

Nearly every programming language has macros, it's just that most of
them have a hard-coded set of ``factory defined'' macros in the form
of a fixed set of production rules with rigidly defined semantics.

What is a macro? It's a recognizer for syntax that implements some
kind of syntax-directed translation. I would argue that while (expr)
statement in the C language is a macro: it's a pattern that matches
a parse subtree that is tagged with the ``while'' token and translates
it into looping code, whose semantics call for the repeated testing of
the guarding expression, followed by execution of the statement if
that expression is true.
 

Kaz Kylheku

Matthias Blume said:
Well, no, not really. You can define new syntactic forms in terms of
old ones, and the evaluation rules end up being determined by those of
the old ones. Again, with HOFs you can always get the same
effect -- at the expense of an extra lambda here and there in your
source code.

A macro can control optimization: whether or not something is achieved
by that extra lambda, or by some open coding.

In the worst cases, the HOF solution would require the user to
completely obfuscate the code with explicitly-coded lambdas. The code
would be unmaintainable.

Secondly, it would be unoptimizable. The result of evaluating a
lambda expression is an opaque function object. It can only be called.

Consider the task of embedding one programming language into another
in a seamless way. I want to be able to write utterances in one
programming language in the middle of another. At the same time, I
want seamless integration between them right down to the lexical
level. For example, the embedded language should be able to refer to
an outer variable defined in the host language.

HOF's are okay if the embedded language is just some simple construct
that controls the evaluation of coarse-grained chunks of the host
language. It's not too much of an inconvenience to turn a few
coarse-grained chunks into lambdas.

But what if the parameters to the macro are not at all chunks of the
source language but completely new syntax? What if that syntax
contains only microscopic utterances of the host language, such as
mentions of the names of variables bound in the surrounding host language?

You can't put a lambda around the big construct, because it's not even
written in the host language! So what do you do? You can use an escape
hatch to code all the individual little references as host-language
lambdas, and pepper these into the embedded language utterance. For
variables that are both read and written, you need a reader and writer
lambda. Now you have a tossed salad. And what's worse, you have poor
optimization. The compiler for the embedded language has to work with
these lambdas which it cannot crack open. It can't just spit out code
that is integrated into the host language compilation, where references
can be resolved directly.
This is false. Writing your own macro expander is not necessary for
getting the effect. The only thing that macros give you in this
regard is the ability to hide the lambda-suspensions.

That's like saying that a higher level language gives you the ability
to hide machine instructions. But there is no single unique
instruction sequence that corresponds to the higher level utterance.

Macros not only hide lambdas, but they hide the implementation choice
whether or not lambdas are used, and how! It may be possible to
compile the program in different ways, with different choices.

Moreover, there might be so many lambda closures involved that writing
them by hand may destroy the clarity of expression and maintainability
of the code.
To some people
this is more of a disadvantage than an advantage because, when not
done in a very carefully controlled manner, it ends up obscuring the
logic of the code. (Yes, yes, yes, now someone will jump in and tell
me that it can make code less obscure by "canning" certain common
idioms. True, but only when not overdone.)

Functions can obscure in the same ways as macros. You have no choice.
Large programs are written by delegating details elsewhere so that a
concise expression can be obtained.

You can no more readily understand some terse code that consists
mostly of calls to unfamiliar functions than you can understand some
terse code written in an embedded language built on unfamiliar macros.

All languages ultimately depend on macros, even those functional
languages that don't have user-defined macros. They still have a whole
bunch of syntax. You can't define a higher order function if you don't
have a compiler which recognizes the higher-order-function-defining
syntax, and that syntax is nothing more than a macro that is built
into the compiler which captures the idioms of programming with higher
order functions!

All higher level languages are based on syntax which captures idioms,
and this is nothing more than macro processing.
 

Kaz Kylheku

: For example, imagine you want to be able to traverse a binary tree and do
: an operation on all of its leaves. In Lisp you can write a macro that
: lets you write:
: (doleaves (leaf tree) ...)
: You can't do that in Python (or any other language).

My Lisp isn't good enough to answer this question from your code,
but isn't that equivalent to the Haskell snippet: (I'm sure
someone here is handy in both languages)

doleaves f (Leaf x) = Leaf (f x)
doleaves f (Branch l r) = Branch (doleaves f l) (doleaves f r)

You appear to be using macros here to define some entities. What if we
took away the syntax which lets you write the above combination of
symbols to achieve the associated meaning? By what means would you
give meaning to the = symbol or the syntax (Leaf x)?

Or give me a plausible argument to support the assertion that the =
operator is not a macro. If it's not a macro, then what is it, and how
can I make my own thing that resembles it?
 

Category 5

Lisp macros are part of the toolset that allow this translation itself
to be programmable. Thus you are not stuck with a fixed phrase
structure grammar with fixed semantics.

Nearly every programming language has macros, it's just that most of
them have a hard-coded set of ``factory defined'' macros in the form
of a fixed set of production rules with rigidly defined semantics.

Exactly so. But the average human mind clings viciously to rigid schema
of all kinds in reflexive defence against the terrible uncertainties of
freedom.

To get someone with this neurological ailment to give up their preferred
codification for another is very difficult. To get them to see beyond
the limits of particular hardcoded schema altogether is practically
impossible.

This observation applies uniformly to programming and religion, but is
not limited to them.

--
 

Marcin 'Qrczak' Kowalczyk

Secondly, it would be unoptimizable. The result of evaluating a
lambda expression is an opaque function object. It can only be called.

This is not true. When the compiler sees the application of a lambda,
it can inline it and perform further optimizations, fusing together
its arguments, its body and its context.
 

Raffael Cavallaro

Marcin 'Qrczak' Kowalczyk said:
Note that Lisp and Scheme have a quite unpleasant anonymous function
syntax, which induces a stronger tension to macros than in e.g. Ruby or
Haskell.

Actually, I think that any anonymous function syntax is undesirable. I
think code is inerently more readable when functions are named,
preferably in a descriptive fashion.

I think it is the mark of functional cleverness that people's code is
filled with anonymous functions. These show you how the code is doing
what it does, not what it is doing.

Macros, and named functions, focus on what, not how. HOFs and anonymous
functions focus on how, not what. How is an implementation detail. What
is a public interface, and a building block of domain specific languages.
 

David C. Ullrich

I am not claiming that it is a counterexample, but I've always met
with some difficulty imagining how the usual proof of Euler's
theorem about the number of corners, sides and faces of a polyhedron
(correct terminology, BTW?) could be formalized. Also, however that
could be done, I have an unsatisfied feeling about how complex it
would be if compared to the conceptual simplicity of the proof itself.

Well it certainly _can_ be formalized. (Have you any experience
with _axiomatic_ Euclidean geometry? Not as in Euclid - no pictures,
nothing that depends on knowing what lines and points really are,
everything follows strictly logically from explicitly stated axioms.
Well, I have no experience with such a thing either, but I know
it exists.)

Whether the formal version would be totally incomprehensible
depends to a large extent on how sophisticated the formal
system being used is - surely if one wrote out a statement
of Euler's theorem in the language of set theory, with no
predicates except "is an element of" it would be totally
incomprehensible. Otoh in a better formal system, for
example allowing definitions, it could be just as comprehensible
as an English version. (Not that I see that this question has
any relevance to the existence of alleged philosophical
inconsistencies that haven't been specified yet...)


************************

David C. Ullrich
 

Greg Ewing (using news.cis.dfn.de)

dewatf said:
'virus' (slime, poison, venom) is a 2nd declension neuter noun and
technically does have a plural 'viri'.
... and also in latin 'viri' is
the nominative for 'men' which you do want to use a lot.

So did Roman feminists use the slogan "All men are slime"?
 

Greg Ewing (using news.cis.dfn.de)

Pascal said:
Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.

I don't think that's a downside of OOP itself, but of statically
typed OO languages that make it awkward and tedious to rearrange
your class hierarchy once you've started on it.

Python's dynamic typing and generally low-syntactic-overhead
OO makes it quite amenable to exploratory OO programming, in my
experience.
 

Matthew Danish

|Come on. Haskell has a nice type system. Python is an application of
|Greenspun's Tenth Rule of programming.
Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
does them much better, for example... and so does Python.

Wow. The language with the limited lambda form, whose Creator regrets
including in the language, is ... better ... at HOFs?

You must be smoking something really good.
 

Greg Ewing (using news.cis.dfn.de)

Bengt said:
The thing is, the current tokenizer doesn't know def from foo, just that they're
names. So either indenting has to be generated all the time, and the job of
ignoring it passed on upwards, or the single keyword 'def' could be recognized
by the parser in a bracketed context, and it would generate a synthetic indent token
in front of the def name token as wide as if all spaces preceded the def, and then
continue doing indent/dedent generation like for a normal def, until the def suite closed,
at which point it would resume ordinary expression processing (if it was within brackets --
otherwise is would just be a discarded expression evaluated in statement context, and
in/de/dent processing would be on anyway. (This is speculative until really getting into it ;-)

I think there is a way of handling indentation that would make
changes like this easier to implement, but it would require a
complete re-design of the tokenizing and parsing system.

The basic idea would be to get rid of the indent/dedent tokens
altogether, and have the tokenizer keep track of the indent
level of the line containing the current token, as a separate
state variable.

Then parsing a suite would go something like

starting_level = current_indent_level
expect(':')
expect(NEWLINE)
while current_indent_level > starting_level:
    parse_statement()

The tokenizer would keep track of the current_indent_level
all the time, even inside brackets, but the parser would
choose whether to take notice of it or not, depending on
what it was doing. So switching back into indent-based
parsing in the middle of a bracketed expression wouldn't
be a problem.
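A minimal sketch of that state variable follows; the real tokenizer would have to deal with tabs, continuation lines, comments, and bracket nesting, none of which are handled here:

```python
def logical_lines(source):
    """Yield (indent_level, text) pairs, tracking indentation as a
    running state variable instead of emitting INDENT/DEDENT tokens;
    the consumer (parser) decides whether the level matters."""
    for line in source.splitlines():
        text = line.lstrip(' ')
        if text:  # blank lines carry no indent level of their own
            yield (len(line) - len(text), text)
```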
 

David Mertz

|On Wed, Oct 08, 2003 at 03:59:19PM -0400, David Mertz wrote:
|> |Come on. Haskell has a nice type system. Python is an application of
|> |Greenspun's Tenth Rule of programming.
|> Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
|> does them much better, for example... and so does Python.

|Wow. The language with the limited lambda form, whose Creator regrets
|including in the language, is ... better ... at HOFs?
|You must be smoking something really good.

I guess a much better saying than Greenspun's would be something like:
"Those who know only Lisp are doomed to repeat it (whenever they look at
another language)." It does a better job of getting at the actual
dynamic.

People who know something about languages that are NOT Lisp know that
there is EXACTLY ZERO relation between lambda forms and HOFs. Well, OK,
I guess you couldn't have playful Y combinators if every function has a
name... but there's little loss there.

In point of fact, Python could completely eliminate the operator
'lambda', and remain exactly as useful for HOFs. Some Pythonistas seem
to want this, and it might well happen in Python3000. It makes no
difference... the alpha and omega of HOFs is that functions are first
class objects that can be passed and returned. Whether they happen to
have names is utterly irrelevant, anonymity is nothing special.
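The point is easy to demonstrate: higher-order code in Python with no lambda anywhere, only named first-class functions being passed and returned:

```python
def compose(f, g):
    """Return a new function applying g then f -- a function is both
    accepted and returned, with no anonymity required."""
    def composed(x):
        return f(g(x))
    return composed

def increment(x):
    return x + 1

def double(x):
    return x * 2
```

So `compose(increment, double)(3)` gives `7`, exactly as a lambda-based version would.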

Haskell could probably get rid of anonymous functions even more easily.
It won't, there's no sentiment for that among Haskell programmers. But
there is no conceptual problem in Haskell with replacing every lambda
(more nicely spelled "\" in that language) with a 'let' or 'where'.

Yours, David...
 
M

Matthew Kennedy

[...]
Without macros, when you see you want to design a special-purpose
language you are motivated to put it OUTSIDE your primary language,
and design it WITH its intended users, FOR its intended purposes, which
may well have nothing at all to do with programming. You parse it with a
parser (trivial these days, trivial a quarter of a century ago), and off you
go.

....and off I go. A parser for our new DSL syntax is one thing, but
now I'll need a compiler as well, a symbolic debugger understanding
the new syntax would be nice, and perhaps an interactive environment
(interpreter) would be helpful. If we get time (ha!), let's create
tools to edit the new syntax.

Seems like a lot of work.

I think I'll stay within Lisp and build the language up to the problem,
just as Paul Graham describes in On Lisp[1]. That way I get all of
the above for free and in much less time.

Footnotes:
[1] http://www.paulgraham.com/onlisp.html
 
P

prunesquallor

Alexander Schmolck said:
Sorry, I have difficulties understanding what exactly you mean again.

Let me back up here. I originally said:

To which you replied:
I really don't understand why this is a problem, since its trivial to
transform python's 'globally context' dependent indentation block structure
markup into C/Pascal-style delimiter pair block structure markup.
Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation).

Consider this python code (lines numbered for exposition):

 1 def dump(st):
 2     mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime = st
 3     print "- size:", size, "bytes"
 4     print "- owner:", uid, gid
 5     print "- created:", time.ctime(ctime)
 6     print "- last accessed:", time.ctime(atime)
 7     print "- last modified:", time.ctime(mtime)
 8     print "- mode:", oct(mode)
 9     print "- inode/dev:", ino, dev
10
11 def index(directory):
12     # like os.listdir, but traverses directory trees
13     stack = [directory]
14     files = []
15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
22     return files

This code is to provide verisimilitude, not to actually run. I wish
to show that local information is insufficient for cutting and pasting
under some circumstances.

If we were to cut lines 18 and 19 and to insert them between lines
4 and 5, we'd have this result:

 3     print "- size:", size, "bytes"
 4     print "- owner:", uid, gid
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
 5     print "- created:", time.ctime(ctime)
 6     print "- last accessed:", time.ctime(atime)

Where we can clearly see that the pasted code is at the wrong
indentation level. It is also clear that in this case, the
editor could easily have determined the correct indentation.

But let us consider cutting lines 6 and 7 and putting them
between lines 21 and 22. We get this:

15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
 6     print "- last accessed:", time.ctime(atime)
 7     print "- last modified:", time.ctime(mtime)
22     return files

But it is unclear whether the intent was to be outside the while,
or outside the for, or part of the if. All of these are valid:

15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
 6     print "- last accessed:", time.ctime(atime)
 7     print "- last modified:", time.ctime(mtime)
22     return files

15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
 6         print "- last accessed:", time.ctime(atime)
 7         print "- last modified:", time.ctime(mtime)
22     return files

15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
 6                 print "- last accessed:", time.ctime(atime)
 7                 print "- last modified:", time.ctime(mtime)
22     return files

Now consider this `pseudo-equivalent' parenthesized code:

 1 (def dump (st)
 2   (destructuring-bind (mode ino dev nlink uid gid size atime mtime ctime) st
 3     (print "- size:" size "bytes")
 4     (print "- owner:" uid gid)
 5     (print "- created:" (time.ctime ctime))
 6     (print "- last accessed:" (time.ctime atime))
 7     (print "- last modified:" (time.ctime mtime))
 8     (print "- mode:" (oct mode))
 9     (print "- inode/dev:" ino dev)))
10
11 (def index (directory)
12   ;; like os.listdir, but traverses directory trees
13   (let ((stack directory)
14         (files '()))
15     (while stack
16       (setq directory (stack-pop))
17       (dolist (file (os-listdir directory))
18         (let ((fullname (os-path-join directory file)))
19           (push fullname files)
20           (if (and (os-path-isdir fullname) (not (os-path-islink fullname)))
21               (push fullname stack)))))
22     files))

If we cut lines 6 and 7 with the intent of inserting them
in the vicinity of line 21, we have several options (as in python),
but rather than insert them incorrectly and then fix them, we have
the option of inserting them into the correct place to begin with.
In the line `(push fullname stack)))))', the successive close parens
indicate the closing (innermost first) of the IF, LET, DOLIST, and
WHILE. Assuming we wanted to include the lines in the DOLIST, but not
in the LET or IF, we'd insert here:
                                        V
21               (push fullname stack))) ))

The resulting code is ugly:

11 (def index (directory)
12   ;; like os.listdir, but traverses directory trees
13   (let ((stack directory)
14         (files '()))
15     (while stack
16       (setq directory (stack-pop))
17       (dolist (file (os-listdir directory))
18         (let ((fullname (os-path-join directory file)))
19           (push fullname files)
20           (if (and (os-path-isdir fullname) (not (os-path-islink fullname)))
21               (push fullname stack)))
 6         (print "- last accessed:" (time.ctime atime))
 7         (print "- last modified:" (time.ctime mtime))))
22     files))

But it is correct.

(Incidentally inserting at that point is easy: you move the cursor over
the parens until the matching one at the beginning of the DOLIST begins
to blink. At this point, you know that you are at the same syntactic level
as the dolist.)
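The blinking-paren trick is mechanical, and that is easy to show. The depth scanner below is a hypothetical illustration (names mine), not part of any editor:

```python
# Sketch of what paren-matching computes: the nesting depth in effect
# after each character of a line.  An editor can place the cursor at
# any depth it likes; sitting at depth d means you are a sibling of the
# forms whose bodies live at depth d + 1.

def depths_after(text, start_depth):
    """Return the nesting depth after each character of text."""
    result, depth = [], start_depth
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        result.append(depth)
    return result

# Line 21 sits inside DEF > LET > WHILE > DOLIST > LET > IF, i.e. at
# depth 6 before its opening paren.  The DOLIST body is depth 4, so the
# first position where the depth drops to 4 (just after the third
# trailing paren) is exactly the insertion point described above.
line21 = "(push fullname stack)))))"
ds = depths_after(line21, 6)
insert_at = ds.index(4) + 1
```

This is all the information the blinking-paren display is surfacing: the editor knows the syntactic level of every position on the line.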

Let me expand on this point. The lines I cut are very similar to each
other, and very different from the lines where I placed them. But
suppose they were not, and I had ended up with this:

19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
 6     print "- last accessed:", time.ctime(atime)
 7     print "- last modified:", time.ctime(mtime)
22     print "- copacetic"
23     return files

Now you can see that lines 6 and 7 ought to be re-indented, but line 22 should
not. It would be rather easy to accidentally group line 7 with
line 22, or conversely line 22 with line 7.
You got it backwards.
Not forgetting to press 'M-C-\' = programmer discipline.
Laxness at this point is a source of errors.

Forgetting to indent properly in a lisp program does not yield
erroneous code.
And indeed, people *do* have to be educated not to be lax when editing lisp -
newbies frequently get told in c.l.l or c.l.s that they should have reindented
their code because then they would have seen that they got their parens mixed
up.

This is correct. But what is recommended here is to use a simple tool to
enhance readability and do a trivial syntactic check.
OTOH, if you make an edit in python the result of this edit is immediately
obvious -- no mismatch between what you think it means and what your computer
thinks it means and thus no (extra) programmer discipline required.

Would that this were the case. Lisp code that is poorly indented will still
run. Python code that is poorly indented will not. I have seen people write
lisp code like this:

(defun factorial (x)
(if (> x 0)
x
(*
(factorial (- x 1))
x
)))

I still tell them to re-indent it. A beginner writing python in this manner
would be unable to make the code run.
Of course you need *some* basic level of discipline to not screw up your
source code when making edits -- but for all I can see at the moment (and know
from personal experience) it is *less* than what's required when you edit lisp
(I have provided a suggested way to edit this particular example in emacs for
python in my previous post -- you haven't provided an analogous editing
operation for lisp with an explanation why it would be less error-prone).

Ok. For any sort of semantic error (one in which a statement is
associated with an incorrect group) one could make in python, there is
an analogous one in lisp, and vice versa. This is simply because both
have unambiguous parse trees.

However, there is a class of *syntactic* error that is possible in
python, but is not possible in lisp (or C or any language with
balanced delimiters). Moreover, this class of error is common,
frequently encountered during editing, and it cannot be detected
mechanically.

Consider this thought experiment: pick a character (a parenthesis, for
example), go to a random line in a lisp file and insert four of them.
Is the result syntactically correct? No. Could a naive user find them?
Trivially. Could I program Emacs to find them? Sure.

Now go to a random line in a python file and insert four spaces. Is
the result syntactically correct? Likely. Could a naive user find
them? Unlikely. Could you write a program to find them? No.

Delete four adjacent parens in a Lisp file. Will it still compile? No.
Will it even be parsable? No.

Delete four adjacent spaces in a Python file. Will it still compile?
Likely.
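The asymmetry is easy to check mechanically. This sketch (mine, using the standard `ast` module) runs the experiment on a small program:

```python
# Deleting parens is always caught by the parser; deleting leading
# spaces usually is not -- both versions below parse cleanly, yet they
# compute different things.
import ast

inside = (
    "total = 0\n"
    "hits = 0\n"
    "for n in [1, 2, 3]:\n"
    "    total += n\n"
    "    hits += 1\n"
)
# The same program with four spaces deleted before the last line,
# moving it out of the loop body:
outside = inside.replace("\n    hits", "\nhits")

ast.parse(inside)    # parses
ast.parse(outside)   # also parses -- the damage is invisible to the parser

env_in, env_out = {}, {}
exec(inside, env_in)    # hits counted once per iteration: 3
exec(outside, env_out)  # hits incremented once, after the loop: 1

# By contrast, an unbalanced delimiter is always a syntax error:
try:
    ast.parse("f(1, 2")
    balanced_ok = True
except SyntaxError:
    balanced_ok = False
```

Both indentation variants are legal programs, so no tool can flag the edit as an error; the missing parenthesis is rejected before the program ever runs.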
No, I didn't want just *any* example of something that can't be displayed; I
wanted an example of something that can't be displayed and is *pertinent* to
our discussion (based on the Quinean assumption that you wouldn't have brought
up "things that can't be displayed" if they were completely beside the
point).

I thought that whitespace was significant to Python.

My computer does not display whitespace. I understand that most
computers do not. There are few fonts that have glyphs at the space
character.

Since having the correct amount of whitespace is *vital* to the
correct operation of a Python program, it seems that the task of
maintaining it is made that much more difficult because it is only
conspicuous by its absence.
me:
People can't "read" '))))))))'.
[more dialog snipped]
I cannot read Abelson and Sussman's minds, but neither of them are
ignorant of the vast variety of computer languages in the world.
Nonetheless, given the opportunity to choose any of them for
exposition, they have chosen lisp. Sussman went so far as to
introduce lisp syntax into his book on classical mechanics.

Well the version of SICM *I've* seen predominantly seems to use (infixy) math
notation, so maybe Sussman is a little less confident in the perspicuousness
of his brainchild than you (also cf. Iverson)?

Perhaps you are looking at the wrong book. The full title is
`Structure and Interpretation of Classical Mechanics' by Gerald Jay
Sussman and Jack Wisdom with Meinhard E. Mayer, and it is published by
MIT Press. Every computational example in the book, and there are
many, is written in Scheme.

Sussman is careful to separate the equations of classical mechanics
from the *implementation* of those equations in the computer, the
former are written using a functional mathematical notation similar to
that used by Spivak, the latter in Scheme. The two appendixes give
the details. Sussman, however, notes ``For very complicated
expressions the prefix notation of Scheme is often better.''

I don't personally think (properly formatted) lisp reads that badly at all
(compared to say C++ or java) and you sure got the word-separators right. But
to claim that using lisp-style parens are in better conformance with the
dictum above than python-style indentation frankly strikes me as a bit silly
(whatever other strengths and weaknesses these respective syntaxes might
have).

And where did I claim that? You originally stated:
Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):

"[P]rograms must be written for people to read, and only incidentally for
machines to execute."

People can't "read" '))))))))'.

Quoting Sussman and Abelson as a prelude to stating that parentheses are
unreadable is hardly going to be convincing to anyone.
(Hmm, you or emacs?)

Does it matter?
Excellent point. But -- wait! Were it Lisp, how would I know that you didn't
intend e.g.

(if (bar watz) foo)

instead of

(if (bar) watz foo)

You are presupposing *two* errors of two different kinds here: the
accidental inclusion of an extra parenthesis after bar *and* the
accidental omission of a parenthesis after watz.

The kind of error I am talking about with Python code is a single
error of either omission or inclusion.
Moral: I really think your (stereotypical) argument that the possibility of
inconsistency between "user interpretation" and "machine interpretation" of a
certain syntax is a feature (because it introduces redundancy that can be
used for error detection) requires a bit more work.

I could hardly care less.
 
