Python syntax in Lisp and Scheme


Alex Martelli

Erann Gat wrote:
...
mindset, that anything that is potentially dangerous ought to be avoided
because it is potentially dangerous, is IMHO a perverse impediment to
progress. There is no reward without risk.

I'm quite happy for you, and all other unreasonable men "without which
there can be no progress", to run all the risks you want, and I'll be happy
to applaud you for your successes -- as long as I don't bear any part of
the costs of your failures.

As for me personally, and for my typical customers, we have lower
appetite for risk: risk is something to be assessed and managed, not
run for its own sake. A risky strategy is one that has higher expected
returns but also higher volatility: and often there's a threshold effect,
where beating the threshold by a lot isn't much more valuable than
beating it by a little, but falling short of the threshold would be a
disaster. Think of delivery dates, for example: delivering a month in
advance is cool, but not really all that important; delivering a month
late may mean the firm has gone bust in the meantime for lack of
the application it was counting on. I prefer software development
strategies that let me be reasonably sure I can deliver within the
deadline, rather than ones that may, with luck, allow spectacularly
earlier delivery... but risk missing the crucial deadline. I'm a lucky
man, but part of the secret of luck is not pushing it;-).

You seem to be saying that the technologies you prefer are suited
only for cases in which, e.g., running substantial risks of missing the
deadline IS to be considered acceptable. Presumably, that must be
because no safer technology stands a good chance of delivering
reliably -- i.e., cases in which you're pushing the envelope, and the
state of the art. Been there, done that, decided that research does
not float my boat as well as working in the trenches, in software
production environments. I like delivering good working programs
that people will actually use, thus making their life a little bit better.

In other words, I'm an engineer, not a scientist. Scientists whose
goals are not the programs they write -- all those for whom the
programs are mere tools, not goals in themselves -- tend to feel
likewise about programming and other technologies that support
their main work but are secondary to it, though they may have
ambitions as burning as you wish in other, quite different areas.

So, maybe, your favourite technologies are best for research in
computer science itself, or to develop "artificial intelligence" programs
(has the term gone out of fashion these days?) and other programs
pushing the envelope of computer science and technology -- and
mine are best for the purpose of delivering normal, useful, working
applications safely and reliably. If this is true, there is surely space
for both in this world.
... If you have no ambitions beyond writing yet-another-standard-web-app then macros are not for you. ...

Not necessarily web, of course; and generally not standard, but rather
customized for specific customers or groups thereof. But yes, I basically
feel there's a huge unfilled demand for perfectly normal applications,
best filled by technologies such as Python, which, I find, maximize the
productivity of professional programmers in developing such apps and
frameworks for such apps, AND empower non-professional programmers
to perform some of their own programming, customizing, and the like.
My only, very modest ambition is to make some people's lives a little
better than they would be without my work -- I think it's a realistic goal,
and that, a little bit at a time, I am in fact achieving some part of it.

So, general macros in a general-purpose language are not for me, nor for all
those who feel like me -- not for _production_ use, at least, though I do
see how they're fun to play with.

... there is no reason it should take multiple work years to write an
operating system. There is no fundamental reason why one could not build
a computational infrastructure that would allow a single person to write
an operating system from scratch in a matter of days, maybe even hours or
minutes. But such a system is going to have to have a fairly deep ...

So why don't you do it? As I recall, the "lisp machines"' operating
systems, written in lisp, were nothing like that -- were the lisp
programmers working for lisp machine companies so clueless as not to
see the possibilities that, to you, are so obvious? And yet I had the
impression those companies hired the very best they could find, the stardom
of lispdom. Well then, there's your chance, or that of any other lisper
who agrees with you -- why don't you guys go for it and *show* us, instead
of making claims which some of us might perhaps think are empty boasts...?


Alex
 

prunesquallor

Pascal Costanza said:
Yes, we disagree in this regard.


I am no expert in quantum computing, so I can't comment on
that. However, you have mentioned that someone has implemented a
quantum extension for Perl

That doesn't sound too hard. Perl is *already* fairly non-deterministic.
Personally, I prefer not to think about syntax anymore. It's
boring. But that's maybe just me.

Agreed. It's a solved problem.
 

Robin Becker

Alex Martelli said:
Jon S. Anthony wrote:
...

Is it a good thing that you can define "bombs waiting to go off"?



One, and preferably only one, of those ways should be the obvious one,
i.e., the best solution. There will always be others -- hopefully they'll
be clearly enough inferior to the best one, that you won't have to waste
too much time considering and rejecting them. But the obvious one
"may not be obvious at first unless you're Dutch".

The worst case for productivity is probably when two _perfectly
equivalent_ ways exist. Buridan's ass notoriously starved to death in
just such a worst-case situation; groups of programmers may not go
quite as far, but are sure to waste lots of time & energy deciding.


Alex
I'm not sure when this concern for the one true solution arose, but even
GvR provides an explicit example of multiple ways to do it in his essay
http://www.python.org/doc/essays/list2str.html

Even in Python there will always be tradeoffs between clarity and
efficiency.
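The essay Robin cites is about exactly this: several ways to turn a list into a string, trading obviousness against speed. A minimal sketch of two of the usual variants (modern Python, not the exact code from the essay; function names are made up for illustration):

```python
def list2str_loop(items):
    # Straightforward loop: arguably the most obvious version, but it
    # does repeated string concatenation, which is O(n^2) in the worst case.
    result = ""
    for i, item in enumerate(items):
        if i:
            result += ", "
        result += str(item)
    return result

def list2str_join(items):
    # Idiomatic join: terser and O(n), but less obvious to a newcomer.
    return ", ".join(str(item) for item in items)

print(list2str_loop([1, 2, 3]))  # 1, 2, 3
print(list2str_join([1, 2, 3]))  # 1, 2, 3
```

Both are "ways to do it"; the point of the thread is that one of them should be the community's obvious default.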
 

Alex Martelli

Kenny Tilton wrote:
...
Stop, you're scaring me. You mean to say there are macros out there whose
output/behavior I cannot predict? And I am using them in a context where
I need to know what the behavior will be? What is wrong with me? And
what sort of non-deterministic macros are these, that go out and make
their own conclusions about what I meant in some way not documented?

Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how does it modify that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.

Do you need to know what the behavior will be, when controlling a reactor?
Well, I sort of suspect you had better. So, unless you believe that Gat's
example was utterly idiotic, I think you can start explaining from right
there.
I think the objection to macros has at this point been painted into a
very small corner.

I drastically disagree. This is just one example, that was given by one of
the most vocal people from your side, and apparently not yet denounced
as idiotic, despite my asking so repeatedly about it, so presumably agreed
with by your side at large. So, I'm focusing on it until its import is
clarified. Once that is done, we can tackle the many other open issues.

For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason of the drastic
disagreements in your camp, or why most of you keep trying to push
macros (and lisp in general) at those of us who are NOT particularly
interested in "living on the edge" and running big risks for their own sake,
according to your answer to the preceding question, thanks.

"Small corner"?! You MUST be kidding. Particularly given that so many
on your side don't read what I write, and that you guys answer the same
identical questions in completely opposite ways (see below for examples
of both), I don't, in fact, see how this stupid debate will ever end, except
by exhaustion. Meanwhile, "the objection to macros" has only grown
larger and larger with each idiocy I've seen spouted in macros' favour,
and with each mutual or self-contradiction among the macros' defenders.

There is one c.l.l. denizen/guru who agrees with you. I believe his ...

....and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit. Would y'all mind coming to some consensus among you
experienced users of macros BEFORE coming to spout your wisdom over
to us poor benighted non-lovers thereof, THANKYOUVERYMUCH...?
Oh. OK, now that you mention it I have been skimming lately.

In this case, I think it was quite rude of you to claim I was not answering
questions, when you knew you were NOT READING what I wrote.


As you claim that macros are just for prettying things up, I will restate
(as you may not have read it) one of the many things I've said over and
over on this thread: I do not believe the minor advantage of prettying
things up is worth the complication, the facilitation of language
divergence between groups, and the deliberate introduction of multiple
equivalent ways to solve the same problem, which I guess you do know
I consider a bad thing, one that impacts productivity negatively.


Alex
 

Andrew Dalke

(e-mail address removed):
The smartest programmers I know all prefer Lisp (in some form or
another). Given that they agree on very little else, that's saying
a lot.

Guess you don't know Knuth.

The smartest programmers I know prefer Python. Except Guido.
He writes a lot of C.

Bias error? On whose side?

The smartest people I know aren't programmers. What does
that say?

Andrew
(e-mail address removed)
 

Andrew Dalke

Me:
My continued response is that [Lisp is] not optimal for all
domains.

Pascal Costanza:
Yes, we disagree in this regard.

*Shrug* The existence of awk/perl (great for 1-liners on the
unix command-line) or PHP (for simple web programming) or
Mathematica (for symbolic math) is strong enough evidence for
me to continue to disagree.

Pascal Costanza:
However, you have mentioned that someone has implemented a quantum
extension for Perl - and if that's possible then you can safely bet that
it's also possible in pure Lisp.

The fact that all computing can be programmed in Turing Machine
Language doesn't mean TML is the optimal programming language.

The fact that there is perl code for emulating *some* quantum
programming means that Lisp can handle that subset. It doesn't mean
that people have fully explored even in Lisp what it means to do all
of quantum computing.
Furthermore, it has been suggested more than once that a valid
working model is that a good Lisp programmer can provide a
domain-specific language for the non-professional programmer. It's very
likely that a DSL matches better the needs of the user than some
restricted general-purpose language.

Another *shrug* And a good C programmer can provide a
domain-specific language for the non-professional programmer.

Any good Python programmer could make an implementation
of a Lisp (slow, and not all of GC Lisp, but a Lisp) in Python, like

import lisp
def spam(distance):
    """(time_to_fall 9.8 distance)"""
spam = lisp.convert(spam)

def time_to_fall(g, distance):
    print "The spam takes", (2.0*distance/g)**(0.5), "seconds to fall"

print spam(10)
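The `lisp` module in that example is hypothetical, but the idea is implementable. Here is a toy sketch of what such a `convert` might look like: parse the docstring's s-expression and evaluate it with the function's parameters bound. All names (`tokenize`, `evaluate`, `globals_env`) are inventions for illustration, and only numbers, symbols, and calls are handled:

```python
import inspect

def tokenize(src):
    # Split "(f a b)" into ["(", "f", "a", "b", ")"].
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Recursively build a nested-list parse tree from the token stream.
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop ")"
        return expr
    try:
        return float(token)
    except ValueError:
        return token  # a symbol

def evaluate(expr, env):
    # Symbols look up in env; numbers are self-evaluating; lists are calls.
    if isinstance(expr, str):
        return env[expr]
    if isinstance(expr, float):
        return expr
    fn = evaluate(expr[0], env)
    args = [evaluate(arg, env) for arg in expr[1:]]
    return fn(*args)

def convert(func):
    # Build a callable that evaluates the docstring s-expression,
    # binding the original function's argument names.
    params = list(inspect.signature(func).parameters)
    expr = parse(tokenize(func.__doc__))
    def wrapper(*args):
        env = dict(globals_env)
        env.update(zip(params, args))
        return evaluate(expr, env)
    return wrapper

# Globals visible to the embedded "Lisp" -- here just time_to_fall.
def time_to_fall(g, distance):
    return (2.0 * distance / g) ** 0.5

globals_env = {"time_to_fall": time_to_fall}

def spam(distance):
    """(time_to_fall 9.8 distance)"""
spam = convert(spam)

print(spam(10.0))  # roughly 1.43 (seconds to fall 10 m)
```

Slow and far from "all of Lisp", exactly as Andrew says, but enough to make the point.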
Ah, but then you need to constantly change the syntax and need to
remember the idiosyncrasies of several languages.

Yup. Just like remembering what macros do for different domains.
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
I am not so sure whether this is a good idea. Personally, I prefer not
to think about syntax anymore. It's boring. But that's maybe just me.

wave equation vs. matrix approach
Newtonian mechanics or Lagrangian
measure theory or non-standard analysis
recursive algorithms vs. iterative ones
travelling salesman vs. maximum clique detection

Each is a pair of different but equivalent ways of viewing the same
problem. Is the difference just syntax?
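Taking just one pair from that list, recursive vs. iterative, a minimal Python illustration of "different but equivalent views" of the same computation:

```python
def fact_recursive(n):
    # Mirrors the mathematical definition directly.
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_iterative(n):
    # The same computation, expressed as accumulation in a loop.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert fact_recursive(10) == fact_iterative(10) == 3628800
```

Same function, two shapes; whether that difference is "just syntax" is the question at issue.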

Thank you for your elaboration. You say the driving force is the
ability to handle unexpected events. I assumed that means you need
new styles of behaviour.
If it's only a syntactical issue, then it's a safe bet that you can add
that to the language. Syntax is boring.

Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.

Still doesn't answer my question on how nicely Lisp handles
the 'unexpected' need of allocating objects from different
memory arenas.
Sounds like a possible application for the CLOS MOP.

Or any other language's MOP.
Seriously, I haven't invented this analogy, I have just tried to answer
a rhetorical question by Alex in a non-obvious way. Basically, I think
the analogy is flawed.

Then the right solution is to claim the analogy is wrong, not
go along with it as you did. ;)

Andrew
(e-mail address removed)
 

Peter Seibel

Alex Martelli said:
Jon S. Anthony wrote:
[snip]
No it isn't, because the mode of _expression_ may be better with
one in context A and better with the other in context B.

I care about "mode of expression" when I write poetry. When I write
programs, I care about simplicity, clarity, directness.

Right on. Then you should dig (Common Lisp-style) macros because they
give programmers tools to *increase* simplicity, clarity, and
directness. That's the point. They are just another tool for creating
useful abstractions--in this case a way to abstract syntax that would
otherwise be repetitive, obscure, or verbose so the abstracted version
is more clear.

If for some reason you believe that macros will have a different
effect--perhaps decreasing simplicity, clarity, and directness--then
I'm not surprised you disapprove of them. But I'm not sure why you'd
think they have that effect.

I don't know if you saw this example when I originally posted it since
it wasn't in c.l.python so I'll take the liberty of quoting myself.
(Readers who've been following this thread in c.l.lisp, c.l.scheme, or
c.l.functional have probably already seen this.)

In response to a request for examples of macros that allow one to write
less-convoluted code, I gave this example:

Okay, here's an example of a couple macros I use all the time. Since I
write a lot of unit tests, I like to have a clean syntax for
expressing the essence of a set of related tests. So I have a macro
DEFTEST which is similar to DEFUN except it defines a "test function"
which has some special characteristics. For one, all test functions
are registered with the test framework so I can run all defined tests.
And each test function binds a dynamic variable to the name of the
test currently being run which is used by the reporting framework when
reporting results. So, to write a new test function, here's what I
write:

(deftest foo-tests ()
  (check
   (= (foo 1 2 3) 42)
   (= (foo 4 5 6) 99)))

Note that this is all about the problem domain, namely testing. Each
form within the body of the CHECK is evaluated as a separate test
case. If a given form doesn't evaluate to true then a failure is
reported like this, which tells me which test function the failure
was in, the literal form of the test case, and then the values of any
non-literal values in the function call (i.e. the arguments to = in
this case):

Test Failure:

Test Name: (FOO-TESTS)
Test Case: (= (FOO 1 2 3) 42)
Values: (FOO 1 2 3): 6


Test Failure:

Test Name: (FOO-TESTS)
Test Case: (= (FOO 4 5 6) 99)
Values: (FOO 4 5 6): 15


So what is the equivalent non-macro code? Well the equivalent code
to the DEFTEST form (i.e. the macro expansion) is not *that* much
more complex--it just has to do the stuff I mentioned; binding the
test name variable and registering the test function. But it's
complex enough that I sure wouldn't want to have to type it over and
over again each time I write a test:

(progn
  (defun foo-tests ()
    (let ((test::*test-name*
           (append test::*test-name* (list 'foo-tests))))
      (check
       (= (foo 1 2 3) 42)
       (= (foo 4 5 6) 99))))
  (eval-when (:compile-toplevel :load-toplevel :execute)
    (test::add-test 'foo-tests)))

But the real payoff comes when we realize that innocent looking CHECK
is also a macro. Thus to see what the *real* benefit of macros is we
need to compare the original four-line DEFTEST form to what it expands
into (i.e. what the compiler actually compiles) when all the
subsidiary macros are also expanded. Which is this:

(progn
  (defun foo-tests ()
    (let ((test::*test-name*
           (append test::*test-name* (list 'foo-tests))))
      (let ((#:end-result356179 t))
        (tagbody
          test::retry
          (multiple-value-bind (#:result356180 #:bindings356181)
              (let ((#:g356240 (foo 1 2 3)) (#:g356241 42))
                (values (= #:g356240 #:g356241)
                        (list (list '(foo 1 2 3) #:g356240))))
            (if #:result356180
                (signal
                 'test::test-passed
                 :test-name test::*test-name*
                 :test-case '(= (foo 1 2 3) 42)
                 :bound-values #:bindings356181)
                (restart-case
                    (signal
                     'test::test-failed
                     :test-name test::*test-name*
                     :test-case '(= (foo 1 2 3) 42)
                     :bound-values #:bindings356181)
                  (test::skip-test-case nil)
                  (test::retry-test-case nil (go test::retry))))
            (setq #:end-result356179
                  (and #:end-result356179 #:result356180))))
        (tagbody
          test::retry
          (multiple-value-bind (#:result356180 #:bindings356181)
              (let ((#:g356242 (foo 4 5 6)) (#:g356243 99))
                (values (= #:g356242 #:g356243)
                        (list (list '(foo 4 5 6) #:g356242))))
            (if #:result356180
                (signal
                 'test::test-passed
                 :test-name test::*test-name*
                 :test-case '(= (foo 4 5 6) 99)
                 :bound-values #:bindings356181)
                (restart-case
                    (signal
                     'test::test-failed
                     :test-name test::*test-name*
                     :test-case '(= (foo 4 5 6) 99)
                     :bound-values #:bindings356181)
                  (test::skip-test-case nil)
                  (test::retry-test-case nil (go test::retry))))
            (setq #:end-result356179
                  (and #:end-result356179 #:result356180))))
        #:end-result356179)))
  (eval-when (:compile-toplevel :load-toplevel :execute)
    (test::add-test 'foo-tests)))


Note that it's the ability, at macro expansion time, to treat the code
as data that allows me to generate test failure messages that contain
the literal code of the test case *and* the value that it evaluated
to. I could certainly write a HOF version of CHECK that accepts a list
of test-case-functions:

(defun check (test-cases)
  (dolist (case test-cases)
    (if (funcall case)
        (report-pass case)
        (report-failure case))))

which might be used like:

(defun foo-tests ()
  (check
   (list
    #'(lambda () (= (foo 1 2 3) 42))
    #'(lambda () (= (foo 4 5 6) 99)))))


But since each test case would be an opaque function object by the
time CHECK sees it, there'd be no good option for nice reporting from
the test framework. (Of course I'm no functional programming wizard, so
maybe there are other ways to do it in other languages (or even Lisp),
but for me the test, no pun intended, is: is the thing I have to
write to define a new test function much more complex than my original
DEFTEST form?)
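Peter's point about opaque function objects translates directly to Python. A rough sketch (all names hypothetical) of a HOF-based CHECK and the reporting it can and cannot do:

```python
def check(test_cases):
    # Each case is an opaque callable: we can run it, but we cannot
    # recover the expression it was built from, so failure reports
    # cannot show the literal test case or its subvalues.
    results = []
    for case in test_cases:
        results.append("PASS" if case() else "FAIL: <opaque function object>")
    return results

def check_with_source(test_cases):
    # A macro-free workaround: pass the source text alongside the
    # callable, at the cost of writing every test case twice.
    results = []
    for src, case in test_cases:
        results.append("PASS" if case() else "FAIL: " + src)
    return results

def foo(a, b, c):
    return a + b + c

print(check([lambda: foo(1, 2, 3) == 42]))
print(check_with_source([("(foo 1 2 3) == 42", lambda: foo(1, 2, 3) == 42)]))
```

The duplicated-source version is exactly the kind of repetition a CHECK macro removes, since a macro sees the test case as data before it becomes a function.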


-Peter
 

Alex Martelli

Doug Tolton wrote:
...
I can understand and respect honest differences of opinions. I too

If so, that makes you an exception in the chorus of lispers who are
screaming against my alleged idiocy and cluelessness (though I
seem to remember you also did, but I may be confusing people "on
your side" -- it's been a LONG and unpleasant thread).
believe that causes of divergence are largely sociological. I differ
though in thinking that features which allow divergence will necessarily
result in divergence.

I don't think it _necessarily_ will, just that it increases probability by
enough to be a concausal factor.

I have this personal theory (used in the non-strict sense here) that
given enough time any homogenous group will split into at least two
competing factions. This "theory" of mine had its roots in a nice

Unless there's a common enemy who's perceived to be threatening
enough to the whole group -- this is probably the single strongest
unifying factor ("united we stand, divided we fall" -- if people are scared
enough to believe this, they may then remain united).

But the amount of time and other external factors needed to eventually
promote divergence varies with internal factors -- and in cases where
technology plays a key role in the group, then the details of that
technology matter. Some technological aspects intrinsically promote
convergence and cooperation and thus may help counteract sociological
factors working in the other direction -- other technological aspects
facilitate divergence. "In the long run we're all dead" (Keynes), so if
the time needed for divergence is not enough, divergence will not
happen within the group's lifetime;-).

However in the opensource world I expect splinters to happen frequently,
simply because there is little to no organizational control. Even

And you would be wrong: forks are much less frequent than your theory
predicts. Read some Eric Raymond to understand this, he's got good
ideas about these issues.
Python hasn't been immune to this phenomenon with both Jython and
Stackless emerging.

Neither of them is a fork, nor a splinter, nor a split. Jython tracks the
Python standard (as much as it can with far too few core developers)
and the Python standard has been modified to allow Jython to conform
on some points (e.g. timing of garbage collection). Stackless has been
rewritten to become a patch as small as possible on standard Python.

And if you look at the people involved, well, I've seen Samuele Pedroni
(Jython's chief maintainer), Christian Tismer (Mr Stackless), Guido van
Rossum (Mr Python) -- and many others, including yours truly -- sitting in
the same room hacking at the same project during a "sprint" this summer.
And happily guzzling beer and/or wine at the barbecue afterwards, of course.

There's just no "splinter" effect in all of this -- just different
technology needs (e.g. a need to cooperate with some Java libraries
seamlessly), requiring different implementations of the same language,
Python.
Some people want power and expressiveness. Some people want control and
uniformity. Others still will sacrifice high-level constructs for raw
pedal to the metal speed, while others wouldn't dream of this sacrifice.

And people who have freely chosen Python appear to share enough core
values (simplicity, one obvious way to do it, etc) to be very far from any
splintering yet. Why does that surprise you?
What I'm getting at is that I can understand why people don't like
Macros. As David Mertz said, some people are just wired in dramatically
different ways.

Well, it's nice that you understand this. I just wish more people on your
side did -- most seem to think I'm crazy, my posts boggle the mind, etc.

What gets me is when people (and I do this sometimes as well) express an
opinion as fact, as if all rational people will agree with them. So,
for what it's worth, for the times I have expressed my opinion as the one
true way of thinking, I'm sorry.

I don't think there's anything worth apologizing for, in believing the
opinion you hold is the true one. Insulting people who hold a different
rational opinion is of course another issue.

... top secret pet project ;) ) every day. I very much respect your
knowledge Alex, because I do believe you have some good insights, and I
do enjoy discussing issues that we disagree on (when we aren't being
"bristly" ;) ) because you have many times helped me to understand my
own point of view better. So even though we don't always agree, I still
appreciate your opinions.

Likewise -- for you and the other reasonable people on your side (right
now Pascal Costanza is the only one who comes to mind, but, as I said,
it's easy to start confusing people after a thread as long and confused
as this one).


I don't think macros are evil, either -- I just don't want them in
Python:). Let me give an example: one thing my main client is working on
is a specialized language that represents information-models (actions in
them are in embedded Python), presentation-data to guide generic clients
for GUI / web / print presentation, etc. I do think we need macros there
sooner or later (and sooner would be better;-) -- simply to help remove
redundancy and duplication, because in this purely declarative language
there are no "functions". (Were it not for language/metalanguage confusion
risks, and the issues of ensuring the clarity of error-diagnostics, python
functions emitting code in the specialized language [which gets compiled
into Python], i.e. "functions that run at compile-time", would be the
obvious solution, and we could hack them in within an afternoon -- but we
DO want to provide very clear and precise error diagnostics, of course,
and the language/metalanguage issue is currently open). You will note
that this use of macros involves none of the issues I have expressed about
them (except for the difficulty of providing good error-diagnostics, but
that's of course solvable).


Alex
 

dewatf

Actually, the last discussion of this that I saw (can't remember where)
came to the conclusion that the word 'virus' didn't *have* a plural
in Latin at all, because its original meaning didn't refer to something
countable.

'virus' (slime, poison, venom) is a 2nd declension neuter noun and
technically does have a plural 'viri'. However, such nouns were usually
only used in the nominative and accusative singular in Latin. You don't
normally want to start a sentence with 'venoms'. As you said, 'virus'
didn't refer to something you usually count, and also in Latin 'viri' is
the nominative for 'men', which you do want to use a lot.

The Latin plural of census is census (with a long u, 4th declension).

So in English use viruses.

dewatf.
 

Daniel P. M. Silva

Alex said:
As for me personally, and for my typical customers, we have lower
appetite for risk:

Right, and your customers use no C apps.
In other words, I'm an engineer, not a scientist. Scientists whose
goals are not the programs they write -- all those for whom the
programs are mere tools, not goals in themselves -- tend to feel
likewise about programming and other technologies that support
their main work but are secondary to it, though they may have
ambitions as burning as you wish in other, quite different areas.

So, maybe, your favourite technologies are best for research in
computer science itself, or to develop "artificial intelligence" programs
(has the term gone out of fashion these days?) and other programs
pushing the envelope of computer science and technology -- and
mine are best for the purpose of delivering normal, useful, working
applications safely and reliably. If this is true, there is surely space
for both in this world.

How depressing. Leave the powerful languages to those with Computer Science
degrees and let the "software engineers" use weaker systems? What happens
when more functionality is needed (eg., web services)?

It's now nearly the end of 2003 and web applications are still created with
old, flawed technologies -- CGI, server pages, and explicit session
objects. Wouldn't it be nice if web applications were written like, well,
applications?

# my web adder
print_result( get_number() + get_number() )

where print_result sends out a web page, and get_number gets a number from a
web request.

He's wrong. "Advanced" technologies like continuations and macros are
exactly what are needed to make web apps work correctly. The web-app
writer doesn't have to know he's using any of it, but even the Apache
people realized they needed to provide more power in their application
framework.
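The straight-line web-adder style Daniel describes can at least be mimicked without full continuations. Here is a toy sketch using a Python generator as a poor man's continuation, with a driver standing in for the framework's request/response cycle (all names hypothetical, and real frameworks in this style, like PLT's web server, do considerably more):

```python
def web_adder():
    # Written as straight-line code: each yield suspends the "app"
    # until the next web request supplies a value.
    a = yield "enter first number"
    b = yield "enter second number"
    yield "result: %d" % (a + b)

def drive(app, inputs):
    # Simulate the request/response cycle: each send() is one request,
    # each yielded string is the page served back to the user.
    pages = []
    gen = app()
    pages.append(next(gen))  # first page
    for value in inputs:
        pages.append(gen.send(value))
    return pages

print(drive(web_adder, [2, 3]))
```

The application author never touches sessions or CGI state; the suspended generator *is* the session.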
[...] I basically
feel there's a huge unfilled demand for perfectly normal applications,
best filled by technologies such as Python, which, I find, maximize the
productivity of professional programmers in developing such apps and
frameworks for such apps, AND empower non-professional programmers
to perform some of their own programming, customizing, and the like.
[...]
So, general macros in a general-purpose language are not for me, nor for
all those who feel like me -- not for _production_ use, at least, though I
do see how they're fun to play with.

What if the non-professional programmer wants to write a program using two or
more languages? Should we restrict him? Or give him the tools...
> (py-eval "x = 1")
> (define py_x (in-python (ns-get 'x)))
> (define scm_x (->scheme py_x))
> scm_x
1
> (in-python (ns-set! 'x (->python (+ 2 scm_x))))
> (py-eval "print x")
3

In this case, a special form is again needed:

(define-syntax (in-python stx)
  (syntax-case stx ()
    [(_ expr) #`(parameterize ([current-namespace pns])
                  expr)]))

- DS
 

Daniel P. M. Silva

Andrew said:
Another *shrug* And a good C programmer can provide a
domain-specific language for the non-professional programmer.

The last thing we need in this world is more non-professional programmers
writing C programs to make software even more unstable. The ATM near where
I work crashes enough as it is.

You can restrict a DSL in lisp/scheme to the point of not allowing it to
call eval, for example. Can you restrict a C programmer's ability to
dereference NULL? Can you hide 'exec' from the Python newbie?
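As a sketch of the contrast Daniel is drawing, Python can at least *narrow* what an embedded snippet sees by handing eval a restricted namespace, though this is notoriously not a real sandbox and must not be treated as a security boundary (the `run_dsl` name is made up for illustration):

```python
def run_dsl(expr, allowed):
    # An empty __builtins__ hides open, exec, __import__, etc. from the
    # evaluated expression; only names in `allowed` are visible.
    # Illustrative only: well-known escapes exist, so this is not secure.
    namespace = {"__builtins__": {}}
    namespace.update(allowed)
    return eval(expr, namespace)

print(run_dsl("add(2, 3)", {"add": lambda a, b: a + b}))  # 5

try:
    run_dsl("__import__('os')", {})
except NameError as exc:
    print("blocked:", exc)
```

A Lisp DSL achieves the same narrowing more robustly, by simply never exposing the dangerous operators in the language it defines.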
Any good Python programmer could make an implementation
of a Lisp (slow, and not all of GC Lisp, but a Lisp) in Python, like

import lisp
def spam(distance):
    """(time_to_fall 9.8 distance)"""
spam = lisp.convert(spam)

def time_to_fall(g, distance):
    print "The spam takes", (2.0*distance/g)**(0.5), "seconds to fall"

print spam(10)

Please let me know if you hear of anyone implementing lisp/scheme in
Python :)
Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.

Still doesn't answer my question on how nicely Lisp handles
the 'unexpected' need of allocating objects from different
memory arenas.

Just like Python does: native modules.

- DS
 

Kenny Tilton

Alex said:
Kenny Tilton wrote:
...



Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it?

No, and Google is not as great as we think it is. :( I did after
extraordinary effort (on this my second try) find the original, but that
was just an application of the macro, not its innards, and I did not get
enough from his description to make out what it was all about. Worse, I
could not find your follow-up objections. I had stopped following this
thread to get some work done (and because I think the horse is dead).

All I know is that you are trying to round up a lynch mob to string up
WITH-CONDITION-MAINTAINED, and that Lisp the language is doomed to
eternal damnation if we on c.l.l. do not denounce it. :)

No, seriously, what is your problem? That the macro would walk the code
of the condition to generate a demon that would not only test the
condition but also do things to maintain the condition, based on its
parsing of the code for the condition?

You got a problem with that? Anyway, this is good, I was going to say
this chit chat would be better if we had some actual macros to fight over.

[Apologies up front: I am guessing left and right at both the macro and
your objections. And ILC2003 starts tomorrow, so I may not get back to
you all for a while.]

kenny

ps. Don't forget to read Paul Graham's Chapters 1 & 8 in On Lisp; from
now on I think it is pointless not to be debating what he said vs. what
we are saying. The whole book is almost dedicated to macros. From the
preface:

"The title [On Lisp] is intended to stress the importance of bottom-up
programming in Lisp. Instead of just writing your program in Lisp, you
can write your own language on Lisp, and write your program in that.

"It is possible to write programs bottom-up in any language, but Lisp is
the most natural vehicle for this style of programming. In Lisp,
bottom-up design is not a special technique reserved for unusually large
or difficult programs. Any substantial program will be written partly in
this style. Lisp was meant from the start to be an extensible language.
The language itself is mostly a collection of Lisp functions, no
different from the ones you define yourself. What’s more, Lisp functions
can be expressed as lists, which are Lisp data structures. This means
you can write Lisp functions which generate Lisp code.

"A good Lisp programmer must know how to take advantage of this
possibility. The usual way to do so is by defining a kind of operator
called a macro. Mastering macros is one of the most important steps in
moving from writing correct Lisp programs to writing beautiful ones.
Introductory Lisp books have room for no more than a quick overview of
macros: an explanation of what macros are, together with a few examples
which hint at the strange and wonderful things you can do with them.

"Those strange and wonderful things will receive special attention here.
One of the aims of this book is to collect in one place all that people
have till now had to learn from experience about macros."

Alex, have you read On Lisp?
 
K

Kenny Tilton

Andrew said:
Guess you don't know Knuth.

Eight years to do TeX? How smart can he be? He should have used Lisp.
The smartest people I know aren't programmers. What does
that say?

Aren't you the scientist who praised a study because statistics showed
the study's statistics had a better than even chance of not being
completely random? Credibility zero, dude. (If you now complain that it
was fully a 75% chance of not being completely random, you lose.)
 
A

Andrew Dalke

Peter Seibel:
So, to write a new test function, here's what I
write:

(deftest foo-tests ()
  (check
    (= (foo 1 2 3) 42)
    (= (foo 4 5 6) 99)))

Python bases its unit tests on introspection. Including the
full scaffolding, the equivalent for Python would be

import unittest
import foo_module  # I'm assuming 'foo' is in some other module

class FooTestCase(unittest.TestCase):
    def testFoo(self):
        self.assertEquals(foo_module.foo(1, 2, 3), 42)
        self.assertEquals(foo_module.foo(4, 5, 6), 99)

if __name__ == '__main__':
    unittest.main()

Here's what it looks like:

...     def testFoo(self):
...         self.assertEquals(foo(1,2,3), 42)
...         self.assertEquals(foo(4,5,6), 99)
...
F
======================================================================
FAIL: testFoo (__main__.FooTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<interactive input>", line 4, in testFoo
  File "E:\Python23\Lib\unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 42 != 99

A different style of test can be done with doctest, which uses Python's
docstrings. I'll define the function and include an invalid example
in the documentation.

def foo(x, y, z):
    """Returns 42

    >>> foo(5,6,7)
    99
    """
    return 42

Here's what I see when I run it.
*****************************************************************
Failure in example: foo(5,6,7)
from line #3 of __main__.foo
Expected: 99
Got: 42
*****************************************************************

Doctests are fun. ;)
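For completeness, here is a self-contained, runnable version of that doctest example (in modern Python 3 syntax). The docstring example deliberately expects 99, so doctest reports a failure like the one shown above.

```python
# Complete doctest example: the expected value 99 is wrong on purpose,
# so running this reports one failing example.
import doctest

def foo(x, y, z):
    """Returns 42

    >>> foo(5, 6, 7)
    99
    """
    return 42

if __name__ == "__main__":
    # Runs the examples found in foo's docstring and prints any failures.
    doctest.run_docstring_examples(foo, {"foo": foo}, verbose=False)
```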
Note that this is all about the problem domain, namely testing. Each
form within the body of the CHECK is evaluated as a separate test
case.

The unittest example I have makes them all part of the same test case.
To be a different test case it needs a name. If it has a name, it can
be tested independently of the other tests, e.g., if you want to tell the
regression framework to run only one of the tests, as when debugging.
If you want that functionality without names, you'd have to specify the
test by number.
If a given form doesn't evaluate to true then a failure is
reported like this, which tells me which test function the failure
was in, the literal form of the test case, and then the values of any
non-literal arguments in the function call (i.e. the arguments to = in
this case.)

The Python code is more verbose in that regard because ==
isn't a way to write a function. I assume you also have tests for
things like "should throw exception of type X" and "should not
throw an exception" and "floating point within epsilon of expected value"?
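For reference, unittest does ship assertions for all three of those cases. A minimal sketch (the `div` function is just a stand-in defined here for illustration):

```python
# The three test styles mentioned above, using unittest's built-in
# assertions: expected exception, no exception, and approximate equality.
import unittest

def div(a, b):
    return a / b

class EdgeCaseTests(unittest.TestCase):
    def test_raises(self):
        # "should throw exception of type X"
        self.assertRaises(ZeroDivisionError, div, 1, 0)

    def test_no_raise(self):
        # "should not throw an exception": just call it;
        # any exception automatically fails the test
        div(4, 2)

    def test_almost_equal(self):
        # "floating point within epsilon of expected value"
        self.assertAlmostEqual(div(1, 3), 0.333333, places=5)

if __name__ == "__main__":
    unittest.TextTestRunner().run(
        unittest.defaultTestLoader.loadTestsFromTestCase(EdgeCaseTests))
```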
Test Failure:

Test Name: (FOO-TESTS)
Test Case: (= (FOO 1 2 3) 42)
Values: (FOO 1 2 3): 6

Feel free to compare with the above. The main difference, as you
point out below, is that you get to see the full expression. Python
keeps track of the source line number, which you can see in the
traceback. If the text was in a file it would also show the contents
of that line in the traceback, which would provide equivalent output
to what you have. In this case the input was from a string and it
doesn't keep strings around for use in tracebacks.

(And the 'doctest' output includes the part of the text used to
generate the test; the previous paragraph only applies to unittest.)

I expect a decent IDE would make it easy to get to an
error line given the unittest output. I really should try one of
the Python IDEs, or even just experiment with python-mode.

I expect the usefulness of showing the full expression to be
smaller when the expression is large, because it could be
an intermediate in the expression which has the problem, and
you don't display those intermediates.
So what is the equivalent non-macro code? Well the equivalent code
to the DEFTEST form (i.e. the macro expansion) is not *that* much
more complex--it just has to do the stuff I mentioned; binding the
test name variable and registering the test function. But it's
complex enough that I sure wouldn't want to have to type it over and
over again each time I write a test:

Python's introspection approach works by looking for classes of a
given type (yes, classes, not instances), then looking for methods
in that class which have a given prefix. These methods become the
test cases. I imagine Lisp could work the same way, except that
because other solutions exist (like macros), there's a preference
for another style.

The Python code is the same number of lines as your code, except
that it is more verbose. It does include the ability for tests to have
a setup and teardown stage, which appears to be harder for your
code to handle.
Note that it's the ability, at macro expansion time, to treat the code
as data that allows me to generate test failure messages that contain
the literal code of the test case *and* the value that it evaluated
to. I could certainly write a HOF version of CHECK that accepts a list
of test-case-functions:
But since each test case would be an opaque function object by the
time CHECK sees it, there'd be no good option for nice reporting from
the test framework.

You are correct that Python's way of handling the output doesn't
include the expression which failed. Instead, it includes the location
(source + line number) in the stack trace, and if that source is a file
which still exists it shows the line which failed.

A solution which would get what you want without macros is
the addition of more parse tree information, like the start/end positions
of each expression. In that way the function could look up the
stack, find the context from which it was called, then get the full
text of the call. This gets at the code "from the other direction",
that is, from looking at the code after it was parsed rather than
before.

Or as I said, let the IDE help you find the error location and
full context.
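The stack-inspection idea sketched above can be demonstrated with the stdlib traceback module; `check` here is a hypothetical helper, not part of any framework:

```python
# Sketch of the "look up the stack" idea: on failure, check() reports
# the source line of its own call site. Note that when the code comes
# from a string rather than a file, frame.line is None -- exactly the
# limitation described in the text above.
import traceback

def check(ok):
    if not ok:
        # The second-to-last stack entry is the caller of check().
        frame = traceback.extract_stack()[-2]
        print("FAIL at %s:%s: %s" % (frame.filename, frame.lineno, frame.line))
    return ok

def foo(a, b, c):
    return a + b + c

check(foo(1, 2, 3) == 6)   # passes silently
check(foo(1, 2, 3) == 42)  # prints the failing call site, if source is on disk
```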
but for me, the test, no pun intended, is: is the thing I have to
write to define a new test function much more complex than my original
DEFTEST form?

I'll let you decide if Lisp's introspection abilities provide an alternate
non-macro way to handle building test cases which is just as short.

Knowing roughly no Lisp and doing just pattern matching, here's
a related solution, which doesn't use classes.

(defun utest-foo ()
  (= (foo 1 2 3) 42)
  (= (foo 4 5 6) 99))

...
(run-unit-tests)

where run-unit-tests looks at all the defined symbols, finds
those which start with 'utest-', wraps the body of each one
inside a 'check', then runs

(eval-when (:compile-toplevel :load-toplevel :execute)
  ...)

on the body.

If that works, it appears to make the unit test code slightly
easier because the 'check' macro is no longer needed in each
of the test cases; it's been moved to 'run-unit-tests' and can
therefore work as a standard function.
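A Python analog of that prefix-scan sketch is straightforward; the `utest_` naming convention below is assumed purely for illustration:

```python
# Python analog of the prefix-scan idea: find every callable in a
# namespace whose name starts with "utest_", run it, and count failures.

def foo(x, y, z):
    return x + y + z

def utest_foo():
    return foo(1, 2, 3) == 6

def utest_foo_wrong():
    return foo(4, 5, 6) == 99  # deliberately failing case

def run_unit_tests(namespace):
    failures = 0
    for name, obj in sorted(namespace.items()):
        if name.startswith("utest_") and callable(obj):
            if not obj():
                print("FAIL:", name)
                failures += 1
    return failures

print(run_unit_tests(globals()))  # reports utest_foo_wrong, prints 1
```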

Andrew
(e-mail address removed)
 
P

Pascal Costanza

Andrew said:
Me:
My continued response is that [Lisp is] not optimal for all
domains.


Pascal Costanza:
Yes, we disagree in this regard.


*Shrug* The existence of awk/perl (great for 1-liners on the
unix command-line) or PHP (for simple web programming) or
Mathematica (for symbolic math) is strong enough evidence for
me to continue to disagree.

Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
Pascal Costanza:



The fact that all computing can be programmed in Turing Machine
Language doesn't mean TML is the optimal programming language.
Right.

The fact that there is perl code for emulating *some* quantum
programming means that Lisp can handle that subset. It doesn't mean
that people have fully explored even in Lisp what it means to do all
of quantum computing.

....and you expect me to have fully explored it? If this topic is so
interesting to you, why don't you just grab a Common Lisp environment
and start working on it? ;)

I am pretty sure you can get very far.
Another *shrug* And a good C programmer can provide a
domain-specific language for the non-professional programmer.

Sure, but it's much more work.
Any good Python programmer could make an implementation
of a Lisp (slow, and not all of GC Lisp, but a Lisp) in Python, like

import lisp
def spam(distance):
    """(time_to_fall 9.8 distance)"""
spam = lisp.convert(spam)

def time_to_fall(g, distance):
    print "The spam takes", (2.0*distance/g)**(0.5), "seconds to fall"

print spam(10)

Sure, but inconvenient.
Yup. Just like remembering what macros do for different domains.

Sure, there is no way around that. But you can reduce the tediousness in
the long run.

I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.

That's pretty much the same when you program in Lisp. It takes some time
to get used to s-expressions, but afterwards you forget about syntax and
focus on the real problems.
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.

Sure, but is it worth it?
wave equation vs. matrix approach
Newtonian mechanics or Lagrangian
measure theory or non-standard analysis
recursive algorithms vs. iterative ones
travelling salesman vs. maximum clique detection

Each is a pair of different but equivalent ways of viewing the same
problem. Is the difference just syntax?

Probably not. This question is too general though for my taste.
Thank you for your elaboration. You say the driving force is the
ability to handle unexpected events. I assumed that means you need
new styles of behaviour.

Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
matter much how things behave in the background, only at first.

Or do you really still care about how sorting algorithms work? No, you
look for an API that has some useful sorting functions, and then you
just use them.

Macros are just another tool to create new abstractions that allow you
to conveniently express solutions for hard problems.

It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?

That's why lambda expressions are sometimes also not quite right. When I
want to execute some code in some context, why should I care about it
being wrapped in a lambda expression to make it work? How does that
contribute to the problem I am trying to tackle?

I want to think in terms of the problem I need to solve. Lisp is one of
the very rare languages that doesn't force me to think in terms of its
native language constructs.
Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.

That's computational equivalence, and that's not interesting.
Still doesn't answer my question on how nicely Lisp handles
the 'unexpected' need of allocating objects from different
memory arenas.

If it's a good Lisp library I would expect it to work like this:

(with-allocation-from :shared-memory
...)

;)

Any more questions?
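In modern Python the same shape would likely be spelled as a context manager. A hypothetical sketch mirroring the Lisp one above; the arena itself is imaginary, only the bookkeeping is shown:

```python
# Hypothetical Python counterpart of (with-allocation-from :shared-memory ...):
# a context manager that pushes an arena name onto a stack, so allocating
# code can ask which arena is current. No real memory arena is involved.
from contextlib import contextmanager

_arena_stack = ["default"]

@contextmanager
def allocation_from(arena):
    _arena_stack.append(arena)
    try:
        yield
    finally:
        _arena_stack.pop()  # restore on exit, even if the body raises

def current_arena():
    return _arena_stack[-1]

with allocation_from("shared-memory"):
    print(current_arena())  # -> shared-memory
print(current_arena())      # -> default
```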
Or any other language's MOP.
Sure.



Then the right solution is to claim the analogy is wrong, not
go along with it as you did. ;)

Thanks for your kind words. ;)


Pascal
 
A

Andrew Dalke

Kenny Tilton:
Aren't you the scientist who praised a study because statistics showed
the study's statistics had a better than even chance of not being
completely random? Credibility zero, dude. (If you now complain that it
was fully a 75% chance of not being completely random, you lose.)

Wow! You continue to be wrong in your summaries:

- I am not a scientist and haven't claimed to be one in about 8 years

- I didn't 'praise' the study; I pointed out that it exists and that it
offers some interesting points to consider. At the very least it makes
a testable prediction.

- After someone asserted that that study had been "debunked" I asked for
more information on the debunking, and pointed out the results of
one experiment suggest that the language mapping was not complete
bunkum. (Note that since 100% correlation is also a 'better than even
chance' your statement above is meaningless. What is your
threshold?)

I would be *delighted* to see more studies on this topic,
even ones which state that COBOL is easier to use than Python.

- When I make statements of belief, I present where possible the
sources and the analyses used to justify the belief and, in an
attempt at rigour, the weaknesses of those arguments. As such,
I find it dubious that my credibility can be lower than someone
making claims based solely on gut feelings and illogical thought.
I take that back; a -1.0 credibility makes a wonderful oracle.

Given how imprecise you are in your use of language (where your
thoughtless turns of phrase gracelessly demean those who don't believe
that programming is the be-all and end-all of ambitions), your inability
to summarize matters correctly, and your insistence on ad hominem attacks
(dude!) over logical counter-argument and rational discourse, I'm surprised
you can make a living as a programmer or in any other field which
requires mental aptitude and the ability to communicate.

Andrew
(e-mail address removed)
 
