Python syntax in Lisp and Scheme

David Rush

Actually, I think I have just achieved clarity: the one S-expression
syntax is used for at least two different evaluation protocols -- normal
functions and special forms, which Lispers have also called FSUBR and
FEXPR *functions*.

Well, I got it, and you raised a good point but used the wrong terminology.
To know how any particular sexp is going to be evaluated you must know
whether the head symbol is bound to a macro or to some other value
(preferably a function). The big difference between the CL and Scheme
communities in this respect is that Scheme requires far fewer macros because
it has first-class functions (no need for the context-sensitive funcall). So
while you have a valid point, and indeed a good reason for minimizing the
number of macros in a program, in practice this is *much* less of a problem
in Scheme.
That depends on who defines 'function'.

In Scheme (and Lisp generally, I suspect) function != macro for any values
of the above. Both are represented as s-expression 'forms' (which is the
correct local terminology).
"If anything I write below about Lisp does not apply to Scheme
specifically, my apologies in advance."

No bother...

david rush
 
Matthias

:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.
(Unless you mean cannot be added _to_Lisp_ as functions, because I don't
know as much as I'd like to about Lisp's capabilities and limitations.)

IMHO, these discussions are less useful when not accompanied by
specific examples. What are these macros good for? Some examples
where you might have difficulties with using ordinary functions:

1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.

2.) Serve as abbreviation of repeating code. Ever used a code
generator? Discovered there was a bug in the generated code? Had
to fix it at a zillion places?
=> Macros serve as extremely flexible code generators, and there
is only one place to fix a bug.
=> Many Design Patterns can be implemented as macros, allowing you
to have them explicitly in your code. This makes for better
documentation and maintainability.

3.) Invent pleasant syntax in limited domains.
=> Some people don't like Lisp's prefix syntax. It's changeable if you
have macros.
=> This feature can also be misused.

4.) Do computations at compile time instead of at runtime.
=> Have you heard about template metaprogramming in the C++ world?
People do a lot to get fast performance by shifting computation
to compile time. Macros do this effortlessly.

These are four specific examples which are not easy to do without
macros. In all cases, implementing them classically will lead to code
duplication, with all the known maintainability issues. In some cases
misuse will lead to unreadable or buggy code. Thus, macros are
powerful tools in the hands of professionals. You have to know whether you
want a sharp knife (which may hurt you when misused) or a less sharp one
(which takes more effort to cut with). A small sketch of point 1 follows
below.
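
Here is a minimal sketch of point 1 (none of this code is from the original
post): a DELAY/FORCE pair that adds lazy evaluation as a new control
construct. No ordinary function can do DELAY's job, because a function's
arguments are evaluated before the call; the macro's whole purpose is to
wrap the expression in a lambda for you. EXPENSIVE-COMPUTATION in the usage
comment is a hypothetical placeholder.

(defmacro delay (expression)
  ;; Wrap EXPRESSION so that it is evaluated at most once, on demand.
  (let ((value (gensym "VALUE"))
        (done  (gensym "DONE")))
    `(let ((,value nil) (,done nil))
       (lambda ()
         (unless ,done
           (setf ,value ,expression
                 ,done  t))
         ,value))))

(defun force (promise)
  ;; Run a promise created by DELAY and return its (cached) value.
  (funcall promise))

;; (force (delay (expensive-computation)))  ; runs only when FORCEd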
 
Pascal Bourguignon

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?

The difference is that you can declare it (at compilation time), along with
associated variables or functions.

For example, I recently defined this macro, to declare at the same
time a class and a structure, and to define a couple of methods to
copy the objects to and from structures.

That's so useful that even cpp provides us with a ## operator to build
new symbols.

(DEFMACRO DEFCLASS-AND-STRUCT (NAME SUPER-CLASSES ATTRIBUTES OPTIONS)
  (LET ((STRUCT-NAME (INTERN (FORMAT NIL "~A-STRUCT" NAME))))
    `(PROG1
         (DEFCLASS ,NAME ,SUPER-CLASSES ,ATTRIBUTES ,OPTIONS)
       (DEFSTRUCT ,STRUCT-NAME
         ,@(MAPCAR (LAMBDA (ATTRIBUTE)
                     (CONS
                      (CAR ATTRIBUTE)
                      (CONS (GETF (CDR ATTRIBUTE) :INITFORM NIL)
                            (IF (GETF (CDR ATTRIBUTE) :TYPE NIL)
                                (LIST :TYPE (GETF (CDR ATTRIBUTE) :TYPE))
                                NIL))))
                   ATTRIBUTES))
       (DEFMETHOD COPY-TO-STRUCT ((SELF ,NAME))
         (,(INTERN (FORMAT NIL "MAKE-~A" STRUCT-NAME))
          ,@(MAPCAN (LAMBDA (ATTRIBUTE)
                      `(,(INTERN (STRING (CAR ATTRIBUTE)) "KEYWORD")
                        (COPY-TO-STRUCT (SLOT-VALUE SELF ',(CAR ATTRIBUTE)))))
                    ATTRIBUTES)))
       (DEFMETHOD COPY-FROM-STRUCT ((SELF ,NAME) (STRUCT ,STRUCT-NAME))
         ,@(MAPCAR
            (LAMBDA (ATTRIBUTE)
              `(SETF (SLOT-VALUE SELF ',(CAR ATTRIBUTE))
                     (,(INTERN (FORMAT NIL "~A-~A"
                                       STRUCT-NAME (CAR ATTRIBUTE)))
                      STRUCT)))
            ATTRIBUTES)
         SELF))))                       ;;DEFCLASS-AND-STRUCT
 
Raymond Wiker

Matthias said:
1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.

This is, IMHO, wrong. One particular example is creating
macros (or read macros) for giving values to application-specific data
structures.
You have to know whether you want a sharp knife (which may hurt you when
misused) or a less sharp one (which takes more effort to cut with).

It is easier to hurt yourself with a blunt knife than a sharp
one.

 
Bengt Richter

I think I agree somewhat. The problem is detecting end-of-chunk from the user input.
I think it would be possible to write a different listener that would accept
chunks terminated with an EOF from the user using some key binding, like a function key
or Ctl-z or Ctl-d. Then you could type away until you wanted it interpreted.
A zero length chunk followed by EOF would terminate the overall listener.

The source is open if you want to try it ;-) Let us know how it feels to use.
Alternatively, maybe interactively postponing blank-line dedent processing until the
next line would be better. Two blank lines at the end of an indented series of chunks
separated by single blank lines would be fairly natural. If no indent were legal
on the next line, you'd process immediately instead of going to the ... prompt. But
this is tricky to get right.
Blank lines are ignored by Python.
You are right re the language, but the interactive command line interface (listener)
doesn't ignore them. (Its syntactical requirements are a somewhat separate issue from
Python the language, but they are part of the overall Python user experience):
... def foo(): print 'foo'
... foo()
...
foo

Regards,
Bengt Richter
 
Joe Marshall

David Rush said:
But it may also be a mistake to use macros for the boilerplate code when
what you really need is a higher-order function...

Certainly.

One should be willing to use the appropriate tools: higher-order
functions, syntactic abstraction, and meta-linguistic abstraction
(embedding a domain-specific `tiny language' within the host
language). Macros come in handy for the latter two.
 
Neelakantan Krishnaswami

In comp.lang.functional Erann Gat <[email protected]>
wrote:
:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean
"cannot so easily be added as functions", but even that would
surprise me. (Unless you mean cannot be added _to_Lisp_ as
functions, because I don't know as much as I'd like to about Lisp's
capabilities and limitations.)

You know Haskell. Think about the do-notation for monads: it takes
what would be awkward, error-prone code (using >> and >>= manually)
and makes it pleasant and readable. Do-notation is basically a macro
(and can easily be expressed as such in Scheme or Lisp). Syntactic
convenience is very important; consider how many fewer programmers in
ML are willing to reach for a monadic solution, even when it would be
appropriate. Or for that matter, think how many fewer Java programmers
are willing to write a fold than in ML or Haskell, even when it would
be appropriate.
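
A hedged sketch of what such a macro could look like in Common Lisp (not
from the original post): LET-M chains a user-supplied monadic bind function,
roughly what Haskell's do-notation does. MAYBE-BIND, LOOKUP and JUST in the
usage comment are hypothetical names.

(defmacro let-m (bind bindings &body body)
  ;; Expand (let-m b ((x e1) (y e2)) body) into nested calls of the
  ;; bind function B, one call per binding.
  (if (null bindings)
      `(progn ,@body)
      (destructuring-bind ((var form) . rest) bindings
        `(funcall ,bind ,form
                  (lambda (,var)
                    (let-m ,bind ,rest ,@body))))))

;; (let-m #'maybe-bind ((x (lookup key table))
;;                      (y (lookup x other-table)))
;;   (just (+ x y)))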
 
Matthew Danish

: (e-mail address removed) writes:
:> Really? Turing-completeness and all that... I presume you mean "cannot
:> so easily be added as functions", but even that would surprise me.

: well you can pass around code full of lambdas so most macros (except
: the ones which perform hairy source transformations) can be rewritten
: as functions, but that isn't the point. Macros are about saying what
: you mean in terms that make sense for your particular app.

OK, so in some other application, they might allow you to extend the
syntax of the language to encode some problem domain more naturally?
Right.


:> OK, that's _definitely_ just a filter:
: no it's not, and the proof is that it wasn't written as a filter.

He was saying that this could not be done in Python, but Python has
a filter function, AFAIK.

He meant the way it was expressed. Java can ``do'' it too, but it's not
going to look as simple.
: For whatever reason the author of that snippet decided that the code
: should be written with WITH-COLLECTOR and not as a filter, some
: languages give you this option, some don't, some people think this is
: a good thing, some don't.

Naturally. I'm against extra language features unless they increase
the expressive power, but others care more for ease-of-writing and
less for ease-of-reading and -maintaining than I do.

Then you should like macros, because ease-of-reading and -maintaining is
precisely why I use them. Like with functions, being able to label
common abstractions is a great maintainability boost.

You don't write ((lambda (x) ...) 1) instead of (let ((x 1)) ...), right?
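
In fact LET itself is definable as exactly that kind of labelling macro over
LAMBDA; a minimal sketch (MY-LET is an illustrative name, not part of any
library):

(defmacro my-let (bindings &body body)
  ;; (my-let ((x 1) (y 2)) ...) expands into ((lambda (x y) ...) 1 2).
  `((lambda ,(mapcar #'first bindings) ,@body)
    ,@(mapcar #'second bindings)))
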
:> : DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
:> : any other way because they take variable names and code as arguments.
:> What does it mean to take a variable-name as an argument? How is that
:> different to taking a pointer? What does it mean to take "code" as an
:> argument? Is that different to taking a function as an argument?
: You are confusing the times at which things happen. A macro is
: expanded at compile time,

OK, yep. It should have occurred to me that that was the difference.
So now the question is "what does that give you that higher-order
functions don't?".
: Another trivial example:
: <IF-BIND>

: Macros allow me to say what I _mean_, not what the compiler wants.

Interesting. It would be interesting to see an example where it allows
you to write the code in a less convoluted way, rather than the three
obfuscating (or would the non-macro Lisp versions be just as obfuscated?
I know Lisp is fine for higher-order functions, but I guess the IF-BIND
stuff might be hard without pattern-matching.) examples I've seen so far.

Here's an example that I am currently using:

(definstruction move ((s register) (d register))
  :sources (s)
  :destinations (d)
  :template "movl `s0, `d0"
  :class-name move-instruction)

1. This expands into a
(defmethod make-instruction-move ((s register) (d register)) ...)
which itself is called indirectly, but most importantly it allows the
compiler to compile a multiple-dispatch method statically rather than
trying to replicate that functionality at runtime (which would require
parsing a list of parameters supplied by the &rest lambda-list keyword,
not to mention implementing multiple-dispatch).

2. Sources and destinations can talk about variable names rather than indices
into a sequence. (Templates cannot because they need the extra layer of
indirection--the source and destination lists are subject to change in
this system currently. Well, I suppose it could be worked out anyway,
perhaps if I have time I will try it).

3. Originally I processed the templates at run-time, and upon profiling
discovered that it was the most time-consuming function by far. I modified
the macro to process the template strings statically and produce a
function which could be compiled with the rest of the code, and the
overhead completely disappeared. I can imagine a way to do this with
functions: collect a list of functions which take the relevant values
as arguments, then map over them and apply the results to format.
This is less efficient because
(a) you need to do some extra steps, which the macro side-steps by
directly pasting the code into the proper place, and
(b) FORMATTER is a macro which lets you compile a format string into
a function, and this cannot be used in the functional version,
since you cannot say (FORMATTER my-control-string) but must supply
a string statically, as in (FORMATTER "my control string").
Could FORMATTER be implemented functionally? Probably, but either
you require the use of the Lisp compiler at run-time, which is
certainly possible though usually heavyweight, or you write a
custom compiler for that function. If you haven't figured it out
yet, Lispers like to leverage existing resources =) (A small
FORMATTER sketch follows after point 4.)

4. The macro arranges all the relevant information about a machine instruction
in a simple way that is easy to write even if you don't understand
the underlying system. If you know anything about assembly language,
it is probably pretty easy to figure out what information is being encoded.
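
A small sketch of the FORMATTER point from item 3 above (not from the
original post): FORMATTER compiles a *literal* control string into a
function at macro-expansion time, and that function can then be passed to
FORMAT in place of a string.

(defparameter *print-pair* (formatter "~A -> ~A~%"))

(format t *print-pair* 'source 'dest)   ; uses the precompiled function
;; (formatter my-control-string)        ; not allowed: the argument must be
;;                                      ; a literal string, known statically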


Here's another fun macro which I've been using as of yesterday afternoon,
courtesy of Faré Rideau:

(match s
  ...
  ((ir move (as temp (ir temp _)) b)
   (reorder-stm (list b)
                (mlambda ((list b)             ;; match + lambda
                          (make-ir-move temp b)))))
  ...)

MATCH performs ML/Erlang-style pattern matching with a Lispy twist: patterns
are of the form: literal, variable, or (designator ...) where designator is a
symbol specified by some defining construct.

I wrote this to act like the ML `as' [meta-?]pattern:

(define-macro-matcher as
  ;; I think this lambda should be folded into the macro, but whatever
  #'(lambda (var pat)
      (multiple-value-bind (matcher vars)
          (pattern-matcher pat)
        (values `#'(lambda (form)
                     (m%and (funcall ,matcher form)
                            (setf ,var form)))
                (merge-matcher-variables (list vars (list var)))))))

for example, which at macro-expansion time computes the pattern-matching code
of the pat argument, adds var to the list of variables (used by MATCH), and
creates a function which first checks the pattern and then sets the supplied
(lexical) variable var to the value of the form at this point. Calling
PATTERN-MATCHER yourself is quite enlightening on this:

* (pattern-matcher '(as x 1))
#'(LAMBDA (FORM)
    (M%AND (FUNCALL #'(LAMBDA (#:FORM) (M%WHEN (EQL #:FORM '1))) FORM)
           (SETF X FORM)))
(X)

MATCH (really implemented in terms of IFMATCH) computes this at macro-expansion
and the Lisp statically compiles it afterwards. Of course, MATCH could be
implemented functionally, but consider the IR matcher that
(a) looks up the first parameter in a table (created by another macro) to
see if it is a valid IR type and get the slot names
(b) which are used to create slot-accessing forms that can be optimized
by a decent CLOS implementation when the slot-name is a literal value
(as when constructed by the macro, something a functional version
could not do).

Not to mention that a functional version would have to look something like:

(match value
'(pattern involving x, y, and z) #'(lambda (x y z) ...)
... ...)

Rather annoying, don't you think? The variables need to be repeated.

The functional version would have to create some kind of structure to hold the
bound variable values and construct a list to apply the consequent function
with. The macro version can get away with modifying lexical variables.

Also the macro version can be extended to support multiple value forms, which
in Lisp are not first-class (but more efficient than returning lists).


A third quick example:

(ir-sequence
  (make-ir-move ...)
  (make-ir-jump ...)
  ...)

Which transforms a list of values into a list-like data structure. I wrote
this originally as a macro, because in my mind it was a static transformation.
I realized later that it could be implemented as a function, without changing
any uses, but I didn't because
(a) I wasn't using it with higher-order functions, or situations demanding
them.
(b) It would now have to cons a list every call and do the transformation;
added overhead and the only gain being that it was now a function which
I never even used in a functional way. Rather questionable.
(c) I can always write a separate functional version if I need it.


Basically, this boils down to:

* Macros can do ``compile-time meta-programming'' or whatever the buzzword
is these days, and those above are some real-life examples.
This allows for compiler optimization and static analysis where desired.
* Macros make syntax much more convenient and less cluttered. I really
don't understand the people who think that macros make things harder to read.
It is far better to have clear labelled markers in the source code rather
than having to sort through boilerplate to figure out the intended meaning
of code. Just because I understand lambda calculus doesn't mean I want to
sort through nested lambdas just to use some functionality, every time.
If you are afraid because you are unsure of what the macro does, or its
complete syntax, MACROEXPAND-1 is your friend. That, and an editor with
some hot-keys to find source/docs/expansion/etc.
 
Raymond Wiker

Another example:

Given a set of files containing database data, I want to create

- classes that represent (the interesting bits of) each table

- functions that parse lines from the files, create instances
of a given class, and return the instance along with the "primary
key" of the instance.

The interface to this functionality is the macro

(defmacro define-record (name key args &body body)
...)

which I use like this:

(define-record category oid ((oid 0 integer)
                             (name 1 string)
                             (status 4 integer)
                             (deleted 5 integer)
                             (parent 9 integer)
                             active)
  (unless (zerop deleted)
    (return t)))

which expands into

(PROGN
 (DEFCLASS CATEGORY-CLASS
           NIL
           ((OID :INITARG :OID) (NAME :INITARG :NAME) (STATUS :INITARG :STATUS)
            (DELETED :INITARG :DELETED) (PARENT :INITARG :PARENT)
            (ACTIVE :INITARG :ACTIVE)))
 (DEFUN HANDLE-CATEGORY (#:LINE-2524)
   (WHEN #:LINE-2524
     (LET ((#:FIELDS-2525
            (SPLIT-LINE-COLLECT #:LINE-2524
                                '((0 . INTEGER) (1 . STRING) (4 . INTEGER)
                                  (5 . INTEGER) (9 . INTEGER)))))
       (WHEN #:FIELDS-2525
         (PROGV
             '(OID NAME STATUS DELETED PARENT)
             #:FIELDS-2525
           (BLOCK NIL
             (LET (ACTIVE)
               (UNLESS (ZEROP DELETED) (RETURN T))
               (VALUES OID
                       (MAKE-INSTANCE 'CATEGORY-CLASS
                                      :OID OID
                                      :NAME NAME
                                      :STATUS STATUS
                                      :DELETED DELETED
                                      :PARENT PARENT
                                      :ACTIVE ACTIVE))))))))))

The implementation of this macro is probably not perfect (I've
learnt more about Common Lisp since I wrote it). This is OK, since I
can go back and change the innards of the macro whenever I want to :)
Actually, this is probably something that calls for the use of MOP.

 
Erann Gat

In comp.lang.functional Erann Gat
:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.

No, I meant what I wrote. Turing-completeness is a red herring with
respect to a discussion of programming language features. If it were not
then there would be no reason to program in anything other than machine
language.

: For example, imagine you want to be able to traverse a binary tree and do
: an operation on all of its leaves. In Lisp you can write a macro that
: lets you write:
: (doleaves (leaf tree) ...)
: You can't do that in Python (or any other language).

My Lisp isn't good enough to answer this question from your code,
but isn't that equivalent to the Haskell snippet: (I'm sure
someone here is handy in both languages)

doleaves f (Leaf x) = Leaf (f x)
doleaves f (Branch l r) = Branch (doleaves f l) (doleaves f r)

I'd be surprised if Python couldn't do the above, so maybe doleaves
is doing something more complex than it looks to me to be doing.

You need to change your mode of thinking. It is not that other languages
cannot do what doleaves does. It is that other languages cannot do what
doleaves does in the way that doleaves does it, specifically allowing you
to put the code of the body in-line rather than forcing you to construct a
function.
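
Since the thread never shows DOLEAVES itself, here is a hedged sketch of the
kind of definition being described, assuming a tree is either a leaf (an
atom) or a list of subtrees; MAP-LEAVES is an ordinary higher-order walker
introduced here only for illustration:

(defmacro doleaves ((var tree) &body body)
  ;; Wrap the in-line body in a closure and hand it to the walker.
  `(map-leaves (lambda (,var) ,@body) ,tree))

(defun map-leaves (function tree)
  ;; Call FUNCTION on every leaf of TREE.
  (if (consp tree)
      (dolist (subtree tree)
        (map-leaves function subtree))
      (funcall function tree)))

;; (doleaves (leaf my-tree) (print leaf))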

Keep in mind also that this is just a trivial example. More sophisticated
examples don't fit well in newsgroup postings. Come see my ILC talk for
an example of what you can do with macros in Lisp that you will find more
convincing.
: Here's another example of what you can do with macros in Lisp:

: (with-collector collect
: (do-file-lines (l some-file-name)
: (if (some-property l) (collect l))))

: This returns a list of all the lines in a file that have some property.

OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".

The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?

These questions are answered in various books. Go seek them out and read
them. Paul Graham's "On Lisp" is a good place to start.
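
The thread never shows these two macros either; a hedged sketch of plausible
definitions (not Erann's actual code) makes the "variable names and code as
arguments" point concrete -- COLLECT below becomes a local macro bound to
whatever name the caller chose:

(defmacro with-collector ((collect) &body body)
  (let ((result (gensym "RESULT")))
    `(let ((,result '()))
       (macrolet ((,collect (item) `(push ,item ,',result)))
         ,@body
         (nreverse ,result)))))

(defmacro do-file-lines ((line filename) &body body)
  (let ((stream (gensym "STREAM")))
    `(with-open-file (,stream ,filename)
       (loop for ,line = (read-line ,stream nil nil)
             while ,line
             do (progn ,@body)))))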

E.
 
Alex Martelli

Erann said:
: I presume you meant to say that xrange is *not* a hack. Well, hackiness

Please don't put words in my mouth, thanks. xrange _IS_ a hack, because
it was introduced in Python back in the dark ages before Python had the
iterator protocol. If Python could be redesigned from scratch, without
needing to ensure backwards compatibility, the 'range' builtin (which
might perhaps be better named 'irange', arguably) would no doubt be a
generator returning suitable iterators depending on its parameters.

[[ It is possible, though far from certain, that the next major release
of Python (3.0, presumably coming in a few years), which by definition of
"major" IS entitled to introduce some backwards incompatibilities, will
solve this legacy problem. 3.0's main theme will be to simplify Python
by removing redundancies, "more than one way to do it"'s, accreted over
the years; in my personal and humble opinion, range and xrange are just
such mtowdtis -- e.g. when one does need a list containing an arithmetic
progression, list(irange( ... -- the normal way to build a list from
any finite iterator, applied to a bounded arithmetic-progression
iterator -- is "the obvious way to do it", again IMPAHO. ]]
: is to some extent a matter of taste. The only reason xrange exists is
: because range uses more memory than it needs to in the context where it is
: most often used. IMO, any construct that exists solely to work around an
: inefficiency which should never have been there in the first place (as
: evidenced by the fact that Python is unique among programming languages in
: having this particular inefficiency) is a hack.

We agree in broad terms on the definition of "hack" -- although any
discourse using "should" is obviously going to be debatable, e.g., I
consider cuts in Prolog, and strictness annotations in Haskell, as
generally being hacks, but (never having implemented compilers for
either) I can't discuss whether the inefficiencies they work around
"should" be there, or not. Generally, whenever release N+1 of a
language adds a construct or concept that is really useful, one
could argue that the new addition "should" have been there before
(except, perhaps, in those exceedingly rare cases where the new
feechur deals with things that just didn't exist in the past).

: That seems to be the crux of the difference between Python and Lisp. For

I agree with you on this point, too.
: some reason that I don't quite fathom, Pythonistas seem to think that this
: limitation in their language is actually a feature. It boggles my mind.

Imagine a group of, say, a dozen programmers, working together by
typical Agile methods to develop a typical application program of
a few tens of thousands of function points -- developing about
100,000 new lines of delivered code plus about as much unit tests,
and reusing roughly the same amount of code from various libraries,
frameworks, packages and modules obtained from the net and/or from
commercial suppliers. Nothing mind-boggling about this scenario,
surely -- it seems to describe a rather run-of-the-mill case.

Now, clearly, _uniformity_ in the code will be to the advantage
of the team and of the project it develops. Extreme Programming
makes a Principle out of this (no "code ownership"), but even if
you don't rate it quite that highly, it's still clearly a good
thing. Now, you can impose _some_ coding uniformity (within laxer
bounds set by the language) _for code originally developed by the
team itself_ by adopting and adhering to team-specific coding
guidelines; but when you're reusing code obtained from outside,
and need to adopt and maintain that code, the situation is harder.
Either having that code remain "alien", by allowing it to break
all of your coding guidelines; or "adopting" it thoroughly by,
in practice, rewriting it to fit your guidelines; is a serious
negative impact on the team's productivity.

In any case, to obtain any given level of uniformity, even just
in the code newly produced by the team, you need more and more
coding guidelines as the language becomes laxer. For example:
in Ruby, class names MUST start with an uppercase letter, no
ifs, no buts; languages where such a language-imposed rule does
not exist (such as Python) may choose to adopt it as a coding
convention -- but it's one more atom of effort in writing those
conventions and in enforcing them. The language that makes this
a rule makes life marginally _simpler_ for you; the language
that gives you more freedom makes life _harder_ for you. I hope
that by choosing an example where Python is the "freer", and thus
LESS simple, language, I can show that this line of reasoning is
not a language-induced bias; I appreciate such uniformity, and I
think it simplifies the life of teams engaged in application
development, quite apart from Python vs other-languages issues.
(Of course, I could have chosen many examples where Python is
the more uniform/restrictive language: e.g., after a "def x()",
Python _mandates_ a colon, where Ruby 1.8 makes it _optional_ in
the same syntactic position -- here, Ruby is "freer", and thus
Python makes your life simpler).

The issue is by no means limited to lexical-level minutiae (though
in practice many people, and thus many teams, are quite prone to
waste inordinate amount of debate on exactly such minutiae -- one
of the "laws" first humorously identified by Cyril Northcote Parkinson
in his hilarious columns for "The Economist" later collected in book
form). A language which allows dozens of forms of loops gives you
much more freedom of expression -- and thereby gives you headaches
when you try to write coding guidelines making the team's code more
uniform (and worse ones if you need to adopt and maintain reused
code of alien origin). One that lets (e.g.) the definition of a
single module, or class, be spread over many separate code parts
over many files, again, gives you much more freedom -- and more
headaches -- than one which mandates classes and modules be
defined in textually-contiguous blocks of code. Any increase in
freedom of expression is thus not an unalloyed good: it carries
costs as well as benefits. Therefore, just like any other aspect
of an engineering design, it is subject to trade-offs.

It should be clear by now that the "quantum jump" (using this phrase
in the quaint popular sense of "huge leap", rather than in the correct
one of "tiny variation":) in expressiveness that is afforded by a
powerful macro system, one letting you enrich and enhance, and thus
CHANGE, the very language you're using, can be viewed as TOO MUCH
(for the context of application development by middle-sized teams
reusing substantial amounts of alien code) without needing to boggle
anybody's mind. You may perfectly well disagree with this and
counter-claim that having macros available will make everybody into
wonderful language designers: I will still prefer to stick to what
my experience has taught me, even within the context of my generally
optimistic stance on human nature, which is that it just ain't so.

It's optimistic enough to believe that average practitioners WILL be
able to design good interfaces (functions, procedures, classes and
hierarchies thereof, ...) suitable for the task at hand and with some
future potential for reuse in similar but not identical contexts; I
believe that there are plenty of problems even within these limited
confines, and giving more powerful tools to the same practitioners,
more degrees of freedom yet, is IMHO anything but conducive to optimal
performance in the areas I'm most interested in (chiefly application
development by mid-sized teams with very substantial reuse).

I hope this presents my opinions (shared by many, though far from all,
in the Python community) clearly enough that we can "agree to
disagree" without offending each other with such quips as "boggling
the mind".

: By using Lisp you *become* a language designer in the normal course of
: learning to program in it. After a while you even become a good language
: designer.

Here is the crux of our disagreement. If you believe everybody can
become a good language designer, I think the onus is on you to explain
why most languages are not designed well. Do remember that a vast
majority of the people who do design languages HAVE had some -- in
certain cases, VAST -- experience with Lisp, e.g., check out the
roster of designers for the Java language. My thesis is that the
ability to design languages well is far rarer than the ability to
design well within more restricted confines, and good design is taught
and learned much more easily in restricted realms, much less easily
the broader the degrees of freedom.

: If you prefer to remain forever shackled by the limitations imposed by
: someone else, someone who may not have known what they were doing, or
: someone who made tacit assumptions that are not a good fit to your
: particular problem domain, then by all means go for Python. (Oh, you also

If I thought Python's design was badly done, and a bad fit for my
problem domain, then, obviously, I would not have chosen Python (I
hardly lack vast experience in many other programming languages,
after all). Isn't this totally obvious?
: have to be willing to give up all hope of ever having efficient
: native-code compilation.)

This assertion is false. The psyco specializing-compiler already
shows what can be done in these terms within the restrictions of the
existing classic-python implementation. The pypy project (among
whose participants Armin Rigo, the author of psyco, is counted) aims
(among other things) to apply the same techniques, without those very
confining restrictions, to provide "efficient native-code compilation"
to vastly greater extents. Therefore, clearly, your assertion that
(to adopt Python) one has "to give up all hope" of such goals is not
at all well-founded. There is nothing intrinsic to Python that can
justify it. You may want to rephrase it in terms of having production
quality compilers to native code available _today_ -- but surely not in
terms of hopes, or in fact realistic possibilities, for the future.

: Similar, but not quite the same. But there are other reasons to use
: S-expressions besides macros. For example, suppose you are writing code
: for a mission-critical application and you want to write a static analyzer
: to check that the code has a particular property. That's vastly easier to
: do if your code is in S-expressions because you don't have to write a
: parser.

If your toolset includes a parser, you don't have to write one -- it's
there, ready for reuse. The difficulty of writing a parser may give some
theoretical pause in a "greenfield development" idealized situation, but
given that parsers ARE in fact easily available it's not a compelling
argument in practice. Still, we're not debating S-expressions, but,
rather, macros: from that POV, it seems to me that Dylan is quite a bit
more similar to Lisp than to Python -- even though, in terms of many
aspects of surface syntax, the reverse may appear true. (Just to show
that I'm _NOT_ uncritically accepting of anything Python and critical of
anything nonPython: I _do_ envy Dylan, and Lisp, the built-in generic
function / multimethod approach -- I think it's superior to the single
dispatch of Smalltalk / Ruby / Python / C++ / Java, with more advantages
than disadvantages, and am overjoyed that in pypy we have based the
whole architecture on a reimplementation of such multiple dispatch).

: True. For any single example (especially simple ones) I can give you can
: almost certainly find some language somewhere that can do that one thing
: with a specialized construct in that language. The point is, macros let
: you do *all* these things with a single mechanism.

And nuclear warheads let you dispatch any enemy with a single kind of
weapon. Despite which, some of us are QUITE happy that other weapon
systems still exist, and that our countries have abjured nukes...;-).

: Forcing you to either waste a lot of memory or write some very awkward
: code.

I _BEG_ your pardon...? Assuming for definiteness that a tree is a
sequence of leaves and subtrees, and some predicate leafp tells me
whether something IS a leaf:

def doleaves(tree):
    for item in tree:
        if leafp(item):
            yield item
        else:
            for leaf in doleaves(item):
                yield leaf

where do I "waste a lot of memory"? What is "very awkward" in
the above code? I really don't understand.

: Here's another example: suppose you're writing embedded code and you need
: to write a critical section, that is, code that runs with no interrupts
: enabled. In Lisp you can use macros to add a macro that lets you write:
:
: (critical-section
:   Code:
:   )
:
: Getting this macro right is non-trivial because you have to make sure that
: it works properly if critical sections are nested, and if there are
: non-local exits from the code.
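
A minimal sketch of the kind of macro being described here (not from either
post; DISABLE-INTERRUPTS and ENABLE-INTERRUPTS are hypothetical platform
calls): UNWIND-PROTECT covers the non-local exits, and a special variable
covers nesting.

(defvar *in-critical-section* nil)

(defmacro critical-section (&body body)
  `(if *in-critical-section*
       (progn ,@body)                    ; already inside one: just run the body
       (let ((*in-critical-section* t))
         (disable-interrupts)            ; hypothetical platform call
         (unwind-protect (progn ,@body)
           (enable-interrupts)))))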

In Ruby and Smalltalk, you can clearly pass the code block to the critical-
section method just as you would pass it to any other iterator (i.e.,
same non-syntax-altering mechanism does cover this kind of needs just
as well as it covers looping).  In Python, the cultural preference is
for explicitness, thus try/finally (which IS designed specifically to
ensure handling of "nonlocal exits", and has no problem being nested)
enjoys strong preference over the "use of the same construct for widely
different purposes" which WOULD be currently allowed by iterators:
....   def __init__(self): print 'Entering'
....   def __del__(self): print 'Exiting'
....   def __iter__(self): return self
....   def next(self): return None
........   print "about to nonlocal-exit"
....   raise RuntimeError, "non-local exit right here"
....
Entering
about to nonlocal-exit
Exiting
Traceback (most recent call last):

Such reliance on the __del__ ("destructor") is not a well-received
idiom in Python, and the overall cultural preference is strongly for
NOT stretching a construct to perform widely different tasks (looping
vs before/after methods), even though technically it would be just
as feasible with Python's iterators as with Smalltalk's and Ruby's.

So, it IS quite possible that Python will grow a more specific way
to ensure the same semantics as try/finally in a more abstract way --
not because Python's iterators aren't technically capable of it, but
because using them that way would hit against such cultural issues
as the dislike for stretching a single tool to do different things.
(Clearly, macros would not help fight such a cultural attitude:-).

: Another example: suppose you want to write some code that insures that a
: particular condition is maintained while a code block executes.  In Lisp
: you can render that as:
:
: (with-maintained-condition [condition] [code])
:
: e.g.:
:
: (with-maintained-condition (< minval (reactor-temp) maxval)
:   (start-reactor)
:   (operate-reactor)
:   (shutdown-reactor))
:
: What's more, WITH-MAINTAINED-CONDITION can actually look at the code in
: the CONDITION part and analyze it to figure out how to take action to
: avoid having the condition violated, rather than just treating the
: condition as a predicate.

I have no idea of how with-maintained-condition would find and
examine each of the steps in the body in this example; isn't
the general issue quite equivalent to the halting problem, and
thus presumably insoluble?  If with-maintained-condition is, as
it would appear here, written by somebody who's not a chemical
engineer and has no notion about control of temperatures in
chemical reactors (or, other specialized engineers for completely
different types of reactors), HOW does it figure out (e.g.) the
physical model of the outside world that is presumably being
controlled here?  It seems to me that, compared to these huge
semantical issues, the minor ones connected to allowing such
"nifty syntax" pale into utter insignificance.

: Yes, I'm only focusing on that because I wanted to come up with simple
: examples.  The problem is that the real power of macros can't really be
: conveyed with a simple example.  By definition, any simple macro can
: easily be replaced with simple code.  WITH-MAINTAINED-CONDITION starts to
: come closer to indicating what macros are capable of.

If your claim is that macros are only worthwhile for "artificial
intelligence" code that is able, by perusing other code, to infer
(and perhaps critique?) the physical world model it is trying to
control, and modify the other code accordingly, I will not dispute
that claim.  Should I ever go back to the field of Artificial
Intelligence (seems unlikely, as it's rather out of fashion right
now, but, who knows) I will probably ask you for more guidance
(the Prolog that I was using 15/20 years ago for the purpose was
clearly nowhere near up to it... it lacked macros...!-).  But as
long as my interests suggest _eschewing_ "self-modifying code" as
the plague, it seems to me I'm at least as well off w/o macros!-)

: So which is the one obvious way to do it, range or xrange?

An iterator.  Unfortunately, iterators did not exist when range and
xrange were invented, and thus the 'preferably' is violated.  By dint
of such issues of maintaining backwards compatibility, and not having
been born "perfect like Athena from Zeus's head" from day one, Python
is not perfect: it's just, among all imperfect languages, the one that,
in my judgment, best fits my current needs and interests.

: Nice.  I particularly like the following:
:
: "Errors should never pass silently."
:
: and
:
: "In the face of ambiguity, refuse the temptation to guess."
:
: I predict that if Python is ever used in mission-critical applications

Hmmm, "if"?  Google for python "air traffic control", or python
success stories, depending on your definition of "mission-critical".
Either way, it IS being so used.
: that it is only a matter of time before a major disaster is caused by
: someone cutting and pasting this:
:
: def foo():
:   if baz():
:     f()
:   g()
:
: and getting this:
:
: def foo():
:   if baz():
:     f()
:     g()

Ah, must be a mutation of the whitespace-eating nanovirus that was
identified and neutralized years ago -- a whitespace-*adding*
nanovirus, and moreover one which natural selection has carefully
honed to add just the right number of spaces (two, in this weird
indentation style).  I would estimate the chance of such a nanovirus
attacking as comparable to that of the attack of other mutant
strains, such as the dreaded balanced-parentheses-eaters which might 
prove particularly virulent in the rich culture-broth of an
S-expressions environment.  Fortunately, in either case, the rich
and strong suite of unit tests that surely accompanies such a
"mission-critical application" will easily detect the nefarious
deed and thus easily defang the nanoviruses' (nanovirii's? nanovirorum...?)
menace, even more easily than it detects and defangs the "type errors"
so utterly dreaded by all those who claim that only strictly statically
typed languages could ever possibly be any use in mission-critical
applications.

: If you are content to forever be a second-class citizen in the programming
: world, to blindly accept the judgements of the exalted language designers
: as if they are gospel, even when the language designers obviously don't
: know what they're doing as Guido clearly didn't in early versions of Python
: as evidenced by the fact that proper lexical scoping wasn't added until
: version 2, even when the language designers come up with horrible messes
: like C++, if you are willing to take on faith that the language designers
: anticipated every need you might ever have in any programming domain you
: might ever choose to explore, then indeed macros are of no use to you.

Whoa there.  I detect in this tirade a crucial unspoken assumption: that
One Language is necessarily going to be all I ever learn, all I ever use,
for "any programming domain I might ever choose to explore".  This is,
most obviously, a patently silly crucial unspoken assumption, and this
obvious and patent silliness undermines the whole strength of the
argument, no matter how forcefully you choose to present it.

There being no open-source, generally useful operating system kernels in
any language but C, if one "programming domain I choose to explore" is to
modify, enrich and adapt the kernel of the operating system I'm using, with
the ambition of seeing my changes make it into the official release -- I
had better learn and use C pretty well, for example.  Does this mean, by
the (silly, unspoken) "one-language rule", that I am condemned to do _ALL_
of my programming in C...?  By no means!  I can, and do, learn and use more
than one programming language, depending on the context -- who must I be
cooperating with, in what "programming domain", on what sets of platforms,
and so on.  Since I know (and, at need, use) several different programming
languages, I have no need to BLINDLY accept anything whatsoever -- my
(metaphorical) eyes are quite open, and quite able to discern both the
overall picture, and the minutest details -- in point of fact far better
than my "actual" eyes are, since my actual, physical eyesight is far from
good.  With Python, I know by actual experience as well as supporting
reflection and analytical thought, I am quite able to cooperate fruitfully
in mid-sized teams of programmers of varying levels of ability working
rapidly and productively to implement application programs (and frameworks
therefor) of reasonable richness and complexity: the language's clarity,
simplicity, and uniformity (and the cultural biases reinforcing the same
underlying values) help quite powerfully in such collaboration.  If and
when specialized languages are opportune, they can be and are designed
separately (often subject to external constraints, e.g., XML for purposes
of cooperation with other -- separately designed and implemented -- "alien"
applications) and _implemented_ with Python.

: If on the other hand you dream of obtaining a deep understanding of what
: programming is and how languages work, if you dream of some day writing
: your own compilers rather than waiting for someone else to write them for

Beg pardon: I *HAVE* "written my own compilers", several times in the
course of my professional career.  I don't particularly "dream" of doing
it again, I just suspect it's quite possible that it may happen again,
e.g., next time somebody hires me to help write an application that
must suck in some alien-application-produced data (presumably in XML
with some given schema, these days) and produce something else as a
result.  I neither dread nor yearn for such occasions, any more than I
do wrt writing my own network protocols, GUI frameworks, device drivers,
schedulers, and so on -- all tasks I have had to perform, and which
(if I am somewhat unlucky, but within the reasonable span of possibilities)
may well happen to fall on my shoulders again.  If I'm lucky, I will instead
find and be able to re-use *existing* device drivers, compilers, network
protocols, etc, etc -- I would feel luckier, then, because I could devote
more of my effort to building application programs that are going to be
directly useful to other human beings and thus enhance their lives.

I think I have a reasonably deep understanding of "what programming is" --
an activity performed by human beings, more often than not in actual or
virtual teams, and thus first and foremost an issue of collaboration
and cooperation.  "How languages work", from this POV, is first and
foremost by facilitating (or not...;-) the precise, unambiguous
communication that in turn underlies and supports the cooperation and
collaboration among these human beings.  Writing device drivers to
learn what hardware is and how interfaces to it work is one thing; in
most cases, if you find me writing a device driver it will be because,
after searching to the best of my ability, I have not located a device
driver I could simply and productively reuse (and couldn't "wait for
someone else to write" one for me).  And quite similarly, if you find
me writing a network protocol, a compiler, a GUI framework, etc, etc.
I'm not particularly _motivated_ in the abstract to such pursuits, and
to say I "dream" of spending my time building plumbing (i.e., building
infrastructure) would be laughable; I have often had to, and likely
will again in the future, e.g. because a wonderful new piece of HW I
really truly want to use doesn't come with a device driver for the
operating system I need to use it with, etc, etc.

: you, if you want to write code that transcends the ordinary and explores
: ideas and modes of thought that no one has explored before, and you want

Pure research?  Been there, done that (at IBM Research, in the 80's),
eventually moved to an application-development shop because I realized
that such "transcending and exploring" wasn't what I really wanted to
spend my whole life doing -- I wanted to make applications directly
useful to other human beings; I'm an engineer, not a pure scientist.
In any case, even in such pure research I didn't particularly miss
macros.  I was first hired by Texas Instruments in 1980 specifically
because of my knowledge of Lisp -- but was quite glad in 1981 to move
to IBM, and back to APL, which I had used at the start of my "tesi di
laurea" before being forced to recode it all in Fortran, Pascal and
assembly language on a then-newfangled machine called VAX-11 780.  For
the kind of purely numerical processing that constituted my "transcending
and exploring" back then, reusing the existing array-computation
infrastructure in APL was _so_ much better than having to roll my
own in Lisp -- let alone the half dozen different languages all called
"Lisp" (or in a couple of cases "Scheme") that different labs and factions
within labs inside TI had cobbled together with their own homebrew sets
of macros.  Factions and "NIH" are part of the way human beings are
made, of course, but at least, with APL, everybody WAS using the same
language -- the lack of macros constrained the amount of divergence --
so that collaborating with different groups, and sharing and reusing
code, was much more feasible that way.  (Of course, APL had its own
defects -- and how... -- but for the specific task of array computations
it was OK).  Of course, by that time it was already abundantly clear
to me that "horses for courses" was a much better idea in programming
than "one ring to bind them all".  Many people will never agree with
that (and thus almost all languages keep growing to try and be all
things to all people), but since my background (in theory) was mostly HW
(even though I kept having to do SW instead) the idea of having to use
one programming language for everything struck me as silly as having to
use one hardware tool for everything -- I'd rather have a toolbox with
a hammer for when I need to pound nails, AND a screwdriver for when I
need to drive screws, than a "superduper combined hammer+screwdriver+
scissors+pliers+soldering iron+..." which is apparently what most
programming languages aim to be...
: to do all this without having to overcome arbitrary obstacles placed in
: your way by people who think they know more than you do but really don't,

Been there, done that; I can easily say that most of the technologies
I've used in the course of about a quarter century do indeed respond quite
well to this description -- not just programming languages, mind you.  The
fundamental reason I've moved to using Python more than any other language,
these days, is that it doesn't.  It is not perfect -- but then, I have
never used any _perfect_ human-made artefact; it IS simple enough that I
can comfortably grasp it, explain it, understand its defects as well as
its strengths [and where both kinds of characteristics come from] and easily
see how to use it in reasonably good ways for tasks I care a lot about; it
promotes the cultural values that I see as most important for programming
collaboration in typical mid-sized teams -- simplicity, clarity, uniformity.

When it comes to programming language design, my experience learning, using
and teaching Python tells me one thing: Guido thinks he knows more than I
do about programming language design... and _he is right_, he does.  In an
egoless, objective mindset, I'm happier and more productive re-using the
fruits of his design, than I've ever been using languages of my own design
(and those designed by others yet).  I squabble with him as much as anybody,
mind you -- the flamewars you can find in the archives of python-dev and
the main Python list/newsgroup can only testify to a part of that, and
will never capture the expression on his face when he heard somebody else
at PythonUK presenting inter alia the Borg nonpattern I had designed (he
did not know that I, sitting next to him in the audience, was the designer,
so his disgusted diatribe was quite unconstrained -- it probably would have
been anyway, as he's quite an outspoken fellow:-), nor mine after my N-th
fruitless attempt to "sell" him on the "Protocol Adaptation metaprotocol",
the huge benefits of case-insensitivity, or some other of my
hobby-horses;-).  But, you know -- macros wouldn't help me on ANY of
these.  The PAm needs no syntax changes -- it's strictly a semantic issue
and I could easily implement it today (the problem is, due to what in
economics is called a "network effect", the PAm is worth little UNLESS
it's widely adopted, which means making it into the RELEASED language...).
As for changing a language from case-sensitive to case-insensitive, or
viceversa -- well, how WOULD you do it with macros -- without instantly
breaking a zillion lines of good, reusable existing code that depend on
the case-sensitivity rules defined in the official language definition?
And I care more about that wonderful panoply of reusable code, than I do
about making life easier for (e.g.) user of screen-reading software; so
I don't really think Python ever will or should become case-insensitive,
I only DREAM about it, to quote you (but my dream includes a time machine
to go back to 1990 and make it case-insensitive *from the start*, which
is about as likely as equipping it with iterators or PAm from then...:-).
: then macros - and Lisp macros in particular - are a huge win.

I think macros (Lisp ones in particular) are a huge win in situations
in which the ability to enrich / improve / change the language has more
advantages than disadvantages.  So, I think they would be a great fit
for languages which target just such situations, such as, definitely,
Perl, and perhaps also Ruby; and a net loss for languages which rely on
simplicity and uniformity, such as, definitely, Python.  If and when I
find myself tackling projects where macros "are a huge win", I may use a
Lisp of some sort, or Dylan, or maybe check out if Glasgow Haskell's
newest addition is usable within the limited confines of my brain -- I
just dearly and earnestly hope that Python remains true to its own
self -- simple, clear, uniform -- and NOT grow any macros itself...!!!


Alex
 
Frode Vatvedt Fjeld

Alex Martelli said:
[..] If you believe everybody can become a good language designer, I
think the onus is on you to explain why most languages are not
designed well. Do remember that a vast majority of the people who
do design languages HAVE had some -- in certain cases, VAST --
experience with Lisp, e.g., check out the roster of designers for
the Java language.

This argument is in my opinion invalid, for at least three reasons:

- Designing something like the Java language is very, very different
from designing a lisp macro.

- Designing good functional abstractions (libraries) that are
suitable for sharing with other programmers is also quite
difficult. This in no way implies that the defun or equivalent
operator should be removed.

- That some programming language feature might be beyond some
people's ability to use well, is not a good reason to dismiss that
feature, at least not in my book. Of course, languages are
designed to different ends, and I do seem to recall that one of
Python's explicit goals is to be "the language for everybody", so
this argument might make sense from a Python point of view.
My thesis is that the ability to design languages well is far rarer
than the ability to design well within more restricted confines, and
good design is taught and learned much more easily in restricted
realms, much less easily the broader the degrees of freedom.

This is obviously true, but in my opinion completely irrelevant to the
issue of whether macros are helpful or not.
 
David Eppstein

In article <my-first-name.my-last-name-0610030955090001@k-137-79-50-101.jpl.nasa.gov>, Erann Gat wrote:
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.

Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
and do-with-file-lines is expressed in python with a for loop. So:

def lines_with_some_property(some_file_name):
    for l in open(some_file_name):
        if some_property(l):
            yield l

Your only use of macros in this example is to handle the with-collector
syntax, which is handled in a clean macro-free way by Python's "yield".
So this is unconvincing as a demonstration of why macros are a necessary
part of a good programming language.

Of course, with-collector could be embedded in a larger piece of code,
while using yield forces lines_with_some_property to be a separate
function, but modularity is good...

Answer 2: poetic translation into Python. If I think about "how" I want
to express this sort of filtering, I end up with something much less
like the imperative-style code above and much more like:

[l for l in open(some_file_name) if some_property(l)]

I have no problem with the assertion that macros are an important part
of Lisp, but you seem to be arguing more generally that the lack of
macros makes other languages like Python inferior because the literal
translations of certain macro-based code are impossible or more
cumbersome. For the present example, even that argument fails, but more
generally you'll have to also convince me that even a freer poetic
translation doesn't work.
 
M

Marco Antoniotti

Sander said:
the exceptions SRFI and saying it is there as an extension would imho be a
better answer.


It would also be more correct to point out that most of the SRFI's
address features that are already in the CL standard and reliably
implemented in all CL implementations (which are at least 9).

And the number is likely to continue to increase over the years. Scheme is
very easy to implement, including as an extension language inside the
runtime of something else. The same doesn't really hold for Common Lisp.

Which is one of the reasons why all the time spent on the godzillionth
incompatible Scheme implementation would be better spent on improving
Common Lisp.

Cheers
 
P

Pascal Costanza

David said:



Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
and do-with-file-lines is expressed in python with a for loop. So:

def lines_with_some_property(some_file_name):
    for l in some_file_name:
        if some_property(l):
            yield l

Your only use of macros in this example is to handle the with-collector
syntax, which is handled in a clean macro-free way by Python's "yield".
So this is unconvincing as a demonstration of why macros are a necessary
part of a good programming language.

I don't know a lot about Python, so here is a question. Is something
along the following lines possible in Python?

(with-collectors (collect-pos collect-neg)
  (do-file-lines (l some-file-name)
    (if (some-property l)
        (collect-pos l)
        (collect-neg l))))


I actually needed something like this in some of my code...

Pascal
 
D

David Rush

It is easier to hurt yourself with a blunt knife than a sharp
one.

Actually I've noticed that I usually cut myself when I *switch* from
a dull knife to a sharp one.

david rush
 
E

Erann Gat

Please don't put words in my mouth, thanks. xrange _IS_ a hack,

Perhaps English is not your first language? When one says "Almost right,
except..." the implication is that you are disagreeing with something.
But you didn't disagree, you parroted back exactly what I said, making it
not unreasonable to assume that you inadvertently left out the word "not".
it was introduced in Python back in the dark ages before Python had the
iterator protocol.

But it's always the dark ages. Any non-extensible language is going to be
missing some features, but that is usually not apparent until later. The
difference between Python and Lisp is that when a user identifies a
missing feature in Lisp all they have to do is write a macro to implement
it, whereas in Python they have no choice but to wait for the next version
to come along.
Now, clearly, _uniformity_ in the code will be to the advantage
of the team and of the project it develops.

Yes. But non-extensible languages like Python only enforce the appearance
of uniformity; they do not and cannot enforce true stylistic uniformity.
As a result there are two kinds of code, the kind that fits naturally into
the style of the language, and the kind that doesn't and has to be
shoehorned in. Of course, superficially both kinds of code sort of look
the same. But underneath code of the second sort becomes a horrible mess.
Here is the crux of our disagreement. If you believe everybody can
become a good language designer, I think the onus is on you to explain
why most languages are not designed well.

Because most languages are designed by people who have had very little
practice designing languages. And they've had very little practice
designing languages because designing languages is perceived as a hard
thing to do. And if you try to do it without the right tools it is in
fact a hard thing to do. If you try to do it with the right tool (Lisp)
then it's very easy: you can do lots of iterations in a short period of
time, and gain a lot more experience about what works and what doesn't.
That's why people who use Lisp tend to be good language designers, and
people who don't tend not to be. It's also why every language feature
ever invented was invented in Lisp first.
If I thought Python's design was badly done, and a bad fit for my
problem domain, then, obviously, I would not have chosen Python (I
hardly lack vast experience in many other programming languages,
after all). Isn't this totally obvious?

No, it's not. You have taken a strong position against macros, which
means that if you ever encounter a problem domain that is not a good fit
for any language that you know then you have a problem. I don't know how
you'd go about solving that problem, but I think that a likely outcome is
that you'd try to shoehorn it in to some language that you know (and maybe
not even realize that that is what you are doing).
Therefore, clearly, your assertion that
(to adopt Python) one has "to give up all hope" of such goals is not
at all well-founded. There is nothing intrinsic to Python that can
justify it.

Actually, there is. Python's dynamism is so extreme that efficient
native code compilation is impossible unless you change the semantics of
the language.
nuclear warheads

An inappropriate metaphor.
I _BEG_ your pardon...?

Oh, right, I forgot they added the "yield" thingy. But Python didn't
always have yield, and before it had yield you were stuck.

I could come up with another example that can't be done with yield, but
your response will undoubtedly be, "Oh, that can be handled by feature FOO
which is going to be in Python 3.0" or some such thing. The point is, a
Python programmer is dependent on Guido for these features. Lisp
programmers aren't dependent on anyone.

All this does is demonstrate how destructors can be used to emulate
unwind-protect. If you use this to implement critical-section in the
obvious way you will find that your critical sections do not nest
properly.
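Here is a minimal Python sketch of the nesting problem being described, assuming the "obvious" implementation grabs a single global lock; the names and structure are hypothetical, not from the thread. With a plain threading.Lock the inner section deadlocks when the same thread re-enters, while a reentrant threading.RLock restores proper nesting:

import threading

_plain_lock = threading.Lock()

def critical_section_naive(body):
    _plain_lock.acquire()          # the "obvious" way: grab one global lock
    try:
        return body()
    finally:
        _plain_lock.release()      # always released, like unwind-protect

def nested_naive():
    # Deadlocks if actually run: the inner call blocks forever trying to
    # re-acquire the non-reentrant lock this same thread already holds.
    return critical_section_naive(lambda: critical_section_naive(lambda: 42))

_rlock = threading.RLock()

def critical_section(body):
    with _rlock:                   # reentrant lock: the same thread may nest
        return body()

print(critical_section(lambda: critical_section(lambda: 42)))   # prints 42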
I have no idea of how with-maintained-condition would find and
examine each of the steps in the body in this example; isn't
the general issue quite equivalent to the halting problem, and
thus presumably insoluble?

Only if the conditions you write are unconstrained. But there is no
reason for them to be unconstrained. The WITH-MAINTAINED-CONDITION macro
would presumably generate a compile-time error if you asked it to maintain
a condition that it didn't know how to handle.
If your claim is that macros are only worthwhile for "artificial
intelligence" code that is able, by perusing other code, to infer
(and perhaps critique?) the physical world model it is trying to
control, and modify the other code accordingly, I will not dispute
that claim.
s/only/also/

Ah, must be a mutation of the whitespace-eating nanovirus

No, auto-indent in emacs Python mode will generate indentation bugs.
Whoa there. I detect in this tirade a crucial unspoken assumption: that
One Language is necessarily going to be all I ever learn, all I ever use,
for "any programming domain I might ever choose to explore".

No, the issue is general. If you concede that for any non-extensible
language there are things for which that language is not well suited, then
for any finite number of such languages there will be things for which
none of those languages are well suited. At that point you only have two
choices: use an inappropriate language, or roll your own. And if you
choose to roll your own the easiest way to do that is to start with Lisp.

Of course, the same reasoning that leads you to conclude that Lisp is good
for *something* also leads you inexorably to the conclusion that Lisp is
good for *anything*, since its extensibility extends (pardon the pun) to
everything, not just features that happen not to exist in other languages
at the time.
There being no open-source, generally useful operating system kernels in
any language but C

That is indeed unfortunate. Perhaps some day the Lisp world will produce
its own Linus Torvalds.
I think I have a reasonably deep understanding of "what programming is"

My last remarks weren't addressed to you in particular, but to anyone who
might be reading this dialog. If you want (meaning if one wants) to gain
a deep understanding of how computers work, Lisp provides a better path
IMO. (And Eric Raymond thinks so too.)
I think macros (Lisp ones in particular) are a huge win in situations
in which the ability to enrich / improve / change the language has more
advantages than disadvantages. So, I think they would be a great fit
for languages which target just such situations, such as, definitely,
Perl, and perhaps also Ruby; and a net loss for languages which rely on
simplicity and uniformity, such as, definitely, Python.

That's not an unreasonable position.

E.
 
D

David Eppstein

Pascal Costanza said:
I don't know a lot about Python, so here is a question. Is something
along the following lines possible in Python?

(with-collectors (collect-pos collect-neg)
  (do-file-lines (l some-file-name)
    (if (some-property l)
        (collect-pos l)
        (collect-neg l))))


I actually needed something like this in some of my code...

Not using simple generators afaik. The easiest way would probably be to
append into two lists:

collect_pos = []
collect_neg = []
for l in some_file_name:
    if some_property(l):
        collect_pos.append(l)
    else:
        collect_neg.append(l)

If you needed to do this a lot of times you could encapsulate it into a
function of some sort:

def posneg(filter, iter):
    results = ([], [])
    for x in iter:
        results[not filter(x)].append(x)
    return results

collect_pos,collect_neg = posneg(some_property, some_file_name)
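A quick hypothetical smoke test of posneg (my illustration, not from the post), with a list of numbers standing in for the file's lines:

pos, neg = posneg(lambda x: x > 0, [3, -1, 4, -1, 5])
print(pos)   # [3, 4, 5]
print(neg)   # [-1, -1]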
 
E

Erann Gat

David said:


Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
and do-with-file-lines is expressed in python with a for loop. So:

def lines_with_some_property(some_file_name):
    for l in some_file_name:
        if some_property(l):
            yield l

You left out the with-collector part.

But it's true that my examples are less convincing given the existence of
yield (which I had forgotten about). But the point is that in pre-yield
Python you were stuck until the language designers got around to adding
it.

I'll try to come up with a more convincing short example if I find some
free time today.

E.
 
D

David Rush

Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python.

Simplicity is good. I'm just finding it harder to believe that Guido's
perception of simplicity is accurate.
Anybody who doesn't value simplicity and uniformity is quite
unlikely to be comfortable with Python

I would say that one of the reasons why I program in Scheme is *because* I
value simplicity and uniformity. The way that Python has been described in
this discussion makes me think that I would really
*hate* Python for its unnecessary complications if I went back to it.
And I have spent years admiring Python from afar. The only reason I
didn't adopt it years ago was that it was lagging behind the releases
of Tk which I needed for my cross-platform aspirations. At the time, I
actually enjoyed programming in Python as a cheaper form of Smalltalk
(literally, Unix Smalltalk environments were going for $4000/seat).

Probably the most I can say now is that I think that Python's syntax is
unnecessarily reviled (and there are a *lot* of people who think that
Python's syntax is *horrible* - I am not one of them, mind you), in
much the same way that s-expressions are a stumbling block for programmers
from infix-punctuation language communities.

david rush
 
