Joachim Durchholz
Peter said: Hmmm. If it will make you feel any better, macros are just
functions whose domain and range happen to be Lisp expressions, and
which happen to be run by the compiler. So eventually the compiler is
evaluating constant expressions; it's just that some of them were
automatically derived from the written source.
Hmm... you're right here.
The HOF approach has one advantage over the DEFMACRO approach: code
written using HOFs will automagically adapt if some of the constant
inputs become variable, or vice versa. With DEFMACRO, if a constant
becomes a variable input, the macro becomes inapplicable and the source
code has to change; with HOFs, the compiler can adapt automatically.
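For instance, here's a minimal Haskell sketch of the point (applyN is a
throwaway name of mine, not a library function):

-- applyN composes a function with itself n times. Because it is an
-- ordinary HOF, it works the same whether n is a literal constant or
-- a value that only becomes known at run time.
applyN :: Int -> (a -> a) -> a -> a
applyN 0 _ x = x
applyN n f x = applyN (n - 1) f (f x)

main :: IO ()
main = do
  print (applyN 3 (* 2) 1)   -- constant count: prints 8
  n <- fmap length getLine   -- the count now depends on input...
  print (applyN n (* 2) 1)   -- ...and the same code still applies

A macro that unrolled the application at expansion time would handle the
first call, but the second would force a source change.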
Well, it depends on whether you consider syntax to be "anything". I
think it was you who objected to one of my examples by saying, "that's
just syntactic sugar". Macros can (and many do) do a large amount of
under-the-covers bookkeeping. For instance, here are a few rules from
a grammar for a lexer for Java source code:
(defprod line-terminator () (/ #\newline (#\return (? #\newline))))
(defprod white-space () (/ #\space #\tab #\page line-terminator))
(defprod input () ((* input-element) (? #\Sub)))
(defprod input-element () (/ white-space comment token))
(defprod token () (/ identifier java-keyword literal separator operator))
DEFPROD is a macro that squirrels away the stuff on the right, which
is an s-expy form of BNF. The rest of the grammar is more of the
same. At the bottom of the grammar file, where the productions are
defined, I have this form:
(deflexer java-lexer (input)
  (tokens identifier java-keyword literal separator operator))
That DEFLEXER call (another macro) expands into a single parsing
function built out of all the productions created by DEFPROD calls,
appropriately wired together and embedded in code that takes care of
stepping through the input and gathering up values, etc. And that
function is compiled into extremely efficient code because all the
intercommunication between productions goes through lexical
variables. And the file containing these calls to DEFPROD and
DEFLEXER is legal Lisp source which I can feed to the compiler and
get native machine code back.
So I don't know if that is "anything" or not.
It most definitely is "something".
I don't know how I would write such a thing in Haskell et al., but I
know this is a *lot* cleaner than what *I'd* be able to do in Java,
Perl, Python, or C.
I'm looking at things from a Haskell perspective.
Actually, functional languages do similar things; it's called
"combinator parsing".
The basic approach is this: you have parsing functions that each
recognize a particular language, trivial parsers that each recognize
just one element of the alphabet, and parser combinators that take one,
two, or more subparsers and combine them into a bigger one (for
constructing alternatives, options, repetitions and whatever your
personal flavor of BNF can do).
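Here is a minimal, self-contained Haskell sketch of that approach, in
the classic list-of-successes style (all names are mine, not from any
particular parsing library):

-- A parser maps input to every possible (result, remaining input) pair.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

-- Trivial parser: recognize one element of the alphabet.
char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:rest) | x == c -> [(x, rest)]
  _                 -> []

succeed :: a -> Parser a
succeed x = Parser $ \s -> [(x, s)]

mapP :: (a -> b) -> Parser a -> Parser b
mapP f p = Parser $ \s -> [(f x, s') | (x, s') <- runParser p s]

-- Alternation, like / above; trying both branches is where the
-- backtracking creeps in.
alt :: Parser a -> Parser a -> Parser a
alt p q = Parser $ \s -> runParser p s ++ runParser q s

-- Sequencing: run p, then run q on the leftover input.
seqWith :: (a -> b -> c) -> Parser a -> Parser b -> Parser c
seqWith f p q = Parser $ \s ->
  [(f x y, s'') | (x, s') <- runParser p s, (y, s'') <- runParser q s']

-- Option and repetition, like ? and * above.
opt :: Parser a -> Parser (Maybe a)
opt p = alt (mapP Just p) (succeed Nothing)

many0 :: Parser a -> Parser [a]
many0 p = alt (seqWith (:) p (many0 p)) (succeed [])

-- The line-terminator production from the grammar above:
-- (/ #\newline (#\return (? #\newline)))
lineTerminator :: Parser String
lineTerminator =
  alt (mapP (: []) (char '\n'))
      (seqWith (\r m -> r : maybe [] (: []) m)
               (char '\r')
               (opt (char '\n')))

main :: IO ()
main = print (runParser lineTerminator "\r\nx")
-- [("\r\n","x"),("\r","\nx")] -- both parses; a real lexer would
-- commit to the first (longest) one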
I don't know enough about either approach to do a detailed comparison,
but the rough picture seems to be pretty similar.
Downside of combinator parsing: it's difficult to get bottom-up parsers
done that way. Also, simple-minded combinator parsers tend to do
backtracking, though it's not very difficult to make the combinators
diagnose and report violations of LL(whatever) properties.
Ah - I see one other thing that HOFs cannot do: issue compile-time error
messages.
Unless, of course, there is a data type that, when evaluated at compile
time, causes the compiler to emit an error message... not /that/
difficult to do, but might require some careful thought to make the
mechanism interact well with other properties of the language.
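Something along these lines did, in fact, later appear in GHC as
GHC.TypeLits.TypeError, a facility that postdates this discussion; a
minimal sketch of the idea (the type and message are my own):

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE UndecidableInstances #-}
module NoCompare where

import GHC.TypeLits (ErrorMessage (..), TypeError)

data Opaque = Opaque

-- Declaring this instance is fine; merely *using* it aborts
-- compilation with the given message.
instance TypeError ('Text "Opaque values must not be compared")
      => Eq Opaque where
  (==) = error "unreachable"

-- bad = Opaque == Opaque   -- uncommenting this is a compile-time error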
Actually, changing the syntax--if one thinks one must--is really done
by read-macros, which are quite different. But most Lispers agree with
you--there's just not enough benefit in changing the syntax to be
worth it.
OK.
Except for occasionally making a new syntax for expressing certain
frequently created literal objects that otherwise would require a
much more verbose creation form. (Someone gave a great example the
other day in another thread of an airline reservation system (Orbitz
I think) that has a special kind of object used to represent the
three-letter airport codes. Since they wanted to always have the same
object representing a given airport, they needed to intern the objects
with the TLA as the key. But rather than writing (intern-airport-code
"BOS") everywhere, they wrote a reader macro that let them write:
#!BOS. Since this was an incredibly common operation in their system,
it was worth a tiny bit of new syntax. But note, again, that's not
*changing* the syntax so much as extending it.)
In my book, "extending" isn't much different from "changing".
I agree it's the kind of change that's worthwhile.
In Haskell, one would probably have an "Airport" module that defined
these codes, and write something like
TLA "BOS"
which is more syntax than #!BOS but seems good enough for me. (YMMV.)
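Something like this, perhaps (a minimal sketch; the module and names
are only my guess at what such a thing would look like):

-- Airport.hs: three-letter airport codes as a distinct type. Value
-- equality on the newtype already gives "same code, same airport"
-- semantics, so no interning step is needed here.
module Airport (TLA (..), boston) where

newtype TLA = TLA String
  deriving (Eq, Ord, Show)

-- a use site:
boston :: TLA
boston = TLA "BOS"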
Fair enough. But do you object to the ability to write new functions
on the grounds that that just means you have a lot of new functions
to learn and that complicates things needlessly? That's obviously a
rhetorical question but I am actually curious why you find them
different, if you do.
It's just the KISS principle: why two abstraction facilities (macros and
functions) if one suffices?
Provided that functions suffice, actually.
The funny thing to me is that "two-tier thinking" perfectly describes
how I think about the process of making abstractions. Regardless of
the *kind* of abstraction one is
creating, one has to be facile at switching mental gears between
*building* the abstraction and *using* it. You are probably so used
to doing this when writing functions that you don't even notice the
switch.
Hmm... not consciously, but there is certainly a difference.
It seems to be smaller with functional languages, particularly if you're
working at higher levels.
In an FPL with proper syntactical minimalism, programming enters a
"we're sticking functions together" style, which partially abstracts
away the parameters (at least as entities you're conscious of). I.e.
the "using abstractions" thinking mode diminishes. (I'm not far enough
into that style to say how this works once it has been fully adopted.)
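A trivial Haskell illustration of that style (nothing here is
library-specific):

-- "Sticking functions together": the input parameter is never named.
countWords :: String -> Int
countWords = length . words

-- The pointed style, where the parameter is an entity you handle:
countWords' :: String -> Int
countWords' s = length (words s)

main :: IO ()
main = print (countWords "sticking functions together")   -- prints 3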
But because macros are a bit strange you *notice* the switching and
it annoys you. I suspect that anyone who's capable of building
functional abstractions would--if they actually used macros--quickly
learn to switch gears equally smoothly when writing and using macros.

Possibly... can't tell.
Regards,
Jo