Python syntax in Lisp and Scheme

R

Raffael Cavallaro

Pascal Costanza said:
Lispniks are driven by the assumption that there is always the
unexpected. No matter what happens, it's a safe bet that you can make
Lisp behave the way you want it to behave, even in the unlikely event
that something happens that no language designer has ever thought of
before. And even if you cannot find a perfect solution in some cases,
you will at least be able to find a good approximation for hard
problems.

This I believe is the very crux of the matter. The problem domain to
which lisp has historically been applied, artificial intelligence,
more or less guaranteed that lisp hackers would run up against the
sorts of problems that no one had ever seen before. The language
therefore evolved into a "programmable programming language," to quote
John Foderaro (or whoever first said or wrote this now famous line).

Lisp gives the programmer who knows he will be working in a domain
that is not completely cut and dried the assurance that his language
will not prevent him from doing something that has never been done
before. Python gives me the distinct impression that I might very well
run up against the limitations of the language when dealing with very
complex problems.

For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.
 
P

Pascal Costanza

Alex said:
Pascal Costanza wrote:
...



Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently,
the --shadow option of PyChecker only warns about shadowing of
_variables_, not shadowing of _functions_, but there's really no
reason why it shouldn't warn about both. Logilab's pylint does
diagnose "redefining built-in" with a warning (I think they mean
_shadowing_, not actually _redefining_, but this may be an issue
of preferred usage of terms).
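
For readers who haven't hit it, a minimal sketch of that particular
mis-use (hypothetical snippet, Python 2 syntax like the rest of the
thread):

list = ['a', 'b', 'c']    # rebinds the module-level name 'list',
                          # shadowing the built-in type
print list('abc')         # TypeError: 'list' object is not callable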

"Nailing down" built-ins (at first with a built-in warning for overriding
them, later in stronger ways -- slowly and gradually, like always, to
maintain backwards compatibility and allow slow, gradual migration of the
large existing codebase) is under active consideration for the next version
of Python, expected (roughly -- no firm plans yet) in early 2005.

OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.
Note that SOME built-ins exist SPECIFICALLY for the purpose of
letting you override them. Consider, for example, __import__ -- this
built-in function just exposes the inner mechanics of the import
statement (and friends) to let you get modules from some other
place (e.g., when your program must run off a relational database
rather than off a filesystem). In other words, it's a rudimentary hook
in a "Template Method" design pattern (it's also occasionally handy
to let you import a module whose name is in a string, without
going to the bother of an 'exec', so it will surely stay for that purpose
even though we now have a shiny brand-new architecture for
import hooks -- but that's another story).
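
For instance, the import-by-string use is just this (a minimal sketch;
the module name is an arbitrary example):

modname = "os"              # module name held in a string,
                            # e.g. read from a config file
mod = __import__(modname)   # no 'exec' needed to import it
print mod.getcwd()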

Ah, you want something like final methods in Java, or better probably
final implicitly as the default and means to make select methods
non-final, right?
Anyway, back to your contention: I do not think that the fact that
the user can, within his functions, choose very debatable names,
such as those which shadow built-ins, is anywhere as powerful,
and therefore as dangerous, as macros. My own functions using
'sum' will get the built-in one even if yours do weird things with
that same name as a local variable of their own. The downsides
of shadowing are essentially as follows...
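
That locality, in a minimal sketch (hypothetical functions):

def mine(seq):
    return sum(seq)    # resolves to the built-in 'sum'

def yours(seq):
    sum = 0            # local name; shadows the built-in only inside 'yours'
    for x in seq:
        sum += x
    return sum

print mine([1, 2, 3]), yours([1, 2, 3])   # 6 6 -- 'mine' is unaffected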

What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.

Overriding methods can also be problematic when they break contracts.
(Are you also considering adding DBC to Python? I would expect that by
now, given your reply above.)

Can you give an example for the presumably dangerous things macros
supposedly can do that you have in mind?


Pascal
 
A

Alex Martelli

Pascal Costanza wrote:
...
Well, maybe I am wrong. However, in a recent example, a unit test
expressed in Python apparently needed to say something like
"self.assertEqual ...". Who is this "self", and what does it have to do
with testing? ;)

Here self is 'the current test case' (instance of a class specializing
TestCase), and what it has to do with the organization of unit-testing
as depicted in Kent Beck's framework (originally for Smalltalk, adapted
into a lot of different languages as it, and test-driven design, became
deservedly popular) is "just about everything". I think you might like
reading Kent's book on TDD -- and to entice you, you'll even find him
criticizing the fact that 'self' is explicit in Python (it's kept implicit
in Smalltalk). If you don't like O-O architecture, you doubtlessly won't
like Kent's framework -- he IS a Smalltalk-thinking OO-centered guy.
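
The shape of the framework, in a minimal (hypothetical) example using
the standard unittest module:

import unittest

class ArithmeticTest(unittest.TestCase):
    # 'self' is the current test case, an instance of this class
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

if __name__ == '__main__':
    unittest.main()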


Alex
 
P

Pascal Costanza

Raffael said:
For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.

I would like to add to that by pointing out that it is even a good idea
to use Lisp for problem domains that others have solved before but that
_I_ (or "you") don't understand completely yet.

To me, programming languages are tools that help me to explore domains
by writing models for them and interactively testing how they react to
commands. In a certain sense, they are an extension of my brain that
help me to manage the tedious and repetitive tasks, and let me focus on
the essential problems.

Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.

What you actually do when you build a model of something is that you
start _somewhere_, see how far you can get, take some other route, and
so on, until you have a promising conceptualization. The cool thing
about Lisp is that I can immediately sketch my thoughts as little
functions and (potential) macros from the very beginning, and see how
far I can get, exactly as I would if I were restricted to working on
paper. Except that in such an exploratory programming mode, I can get
immediate feedback by trying to run the functions and expanding the
macros and seeing what they do.

I know that OOP languages have caught up in this regard by providing
refactoring tools and other IDE features. And I definitely wouldn't want
to get rid of OOP in my toolbox because of its clear advantages in
certain scenarios.

But I haven't yet seen a programming language that supports exploratory
thinking as well as Lisp does. It's like that exactly because of
s-expressions, or more specifically because of the fact that programs
and data are the same in Lisp.

Computers are there to make my tasks easier. Not using them from the
very beginning to help me solve programming tasks is a waste of
computing resources.

In one particular case, I used the CLOS MOP to implement a special
case of a method combination. At a certain stage, I realized that I had
made a conceptual mistake - I had tried to resolve the particular method
combination at the wrong stage. Instead of doing it inside the method
combination, it had to be done at the call site. It was literally just a
matter of placing a quote character at the right place - in front of the
code to be executed - that allowed me to pass it to the right place as
data, and then expand it at the call site. I can't describe in words
what an enlightening experience this was. In any other language I had
known until then, this change would have required a complete
restructuring of the source code, of the phases in which to execute
different parts of the code, of the representation for that code, and so
on. In Lisp, it was just one keystroke!

It's because of such experiences that Lispniks don't want to switch to
lesser languages anymore. ;-)


(More seriously, there are probably very different ways to think about
problems. So Lisp might not be the right language for everyone, because
other people might find completely different things helpful when they
try to tackle a problem. It would be interesting to do some research on
this topic. As much as I don't think that there is a single programming
paradigm that is best suited for all possible problems, I also don't
think that there is a single programming style that is best suited for
all programmers.)


Pascal
 
T

Terry Reedy

...

Python's eval--as I understand it--handles this differently. Common
Lisp's EVAL may be the way it is partially because it is not needed
for things like this, given the existence of macros. There are also
some semantic difficulties about what lexical environment the EVAL
should occur in.

Now that you mention it, it seems sensible that a concept like 'eval
expression' might have slightly but significantly different
implementations in different environments with different name-space
systems and different alternatives for doing similar jobs. Thanks for
the answer.
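
For concreteness: Python's eval takes explicit namespace dictionaries,
so the environment is whatever mapping you pass in. A minimal sketch:

env = {'x': 21}
print eval('x * 2', env)   # 42 -- evaluated in the namespace we supplied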

Terry J. Reedy
 
T

Terry Reedy

> Well, this proves that Python has a language feature that is as
> dangerous as many people seem to think macros are.

There are two reasons to not generally prohibit overriding builtins.

1. Every time a new builtin is added with a nice name like 'sum',
there is existing code that uses the same nice name. For 2.3 to have
broken every program with a 'sum' variable would have been nasty and
unpopular.

2. There are sometimes good reasons to add additional or alternative
behavior. This is little different from a subclass redefining a
method in one of its base classes, and perhaps calling the base class
method as part of the subclass method.
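
e.g., the familiar idiom (a hypothetical sketch):

class Base:
    def describe(self):
        return 'base'

class Derived(Base):
    def describe(self):
        # redefine the method, but call the base class method as part of it
        return Base.describe(self) + ' + extra'

print Derived().describe()   # base + extra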

The most dangerous and least sensible overriding action, anything like
import something; something.len = 'haha'
will probably become illegal.

Terry J. Reedy
 
A

Alex Martelli

Pascal Costanza wrote:
...
OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.

As in, no lisper will ever admit that a currently existing feature is
considered a misfeature?-)

Ah, you want something like final methods in Java, or better probably
final implicitly as the default and means to make select methods
non-final, right?

Not really, the issue I was discussing was specifically with importing.

Normally, an import statement "looks" for a module [a] among those
already loaded, [b] among the ones built in to the runtime, [c] on
the filesystem (files in directories listed in sys.path). "import hooks"
can be used to let you get modules from other places yet (a database,
a server over the network, an encrypted version, ...). The new architecture
I mentioned lets many import hooks coexist and cooperate, while the
old single-hook architecture made that MUCH more difficult, that's all.
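
A minimal sketch of one such hook under the new (PEP 302 style)
architecture -- the 'database' here is just a dict standing in for
wherever your modules really live:

import sys, imp

FAKE_DB = {'dbmodule': 'GREETING = "hello from the database"'}

class DbImporter:
    def find_module(self, name, path=None):
        # claim only the modules we actually have
        if name in FAKE_DB:
            return self
        return None
    def load_module(self, name):
        mod = sys.modules.setdefault(name, imp.new_module(name))
        exec FAKE_DB[name] in mod.__dict__   # "fetch" the source and run it
        return mod

sys.meta_path.append(DbImporter())
import dbmodule
print dbmodule.GREETING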

"final implicitly as the default and means to make select methods
non-final" is roughly what C++ has -- the "means" being the "virtual"
attribute of methods. Experience proves that's not what we want.
Rather, builtin (free, aka toplevel) _names_ should be locked down
just as today names of _attributes_ of builtin types are mostly
locked down (with specific, deliberate exceptions, yes). But I think
I'm in a minority in wanting similar mechanisms for non-built-ins,
akin to the 'freeze' mechanism of Ruby (and I'm dismayed by reading
that experienced Rubystas say that freeze LOOKS like a cool idea
but in practice it's almost never useful -- they have the relevant
experience, I don't, so I have to respect their evaluation).

What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.

Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document and
reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new one, most
particularly when there are also implicit models of the physical world being
inferred and reasoned about. Given that I've seen nobody say, for days!,
that Gat's example was idiotic, as I had first thought it might be, and
on the contrary I've seen many endorse it, I use it now as the simplest
way to show why macros are obviously claimed by their proponents to
be _scarily_ more powerful than functions. (And if a few voices out of
the many from the macro-lovers camp should suddenly appear to claim
that the example was in fact idiotic, while most others keep concurring
with it, that will scale down "their proponents" to "most of their
proponents", not a major difference after all.)

Overriding methods can also be problematic when they break contracts.

That typically only means an exception ends up being raised when
the method is used "inappropriately" - i.e. in ways depending on the
contract the override violates. The only issue is ensuring that the
diagnostics of the error are clear and complete (and giving clear and
complete error diagnostics is often not trivial, but that is common to
just about _any_ class of errors that programmers make).
(Are you also considering adding DBC to Python? I would expect that by
now, given your reply above.)

Several different implementations of DBC for Python are around, just
like several different architectures for interfaces (or, as I hope,
Haskell-like typeclasses, a more powerful concept). [Note that the
lack of macros stops nobody from playing around with concepts they
would like to see in Python: they just don't get to make new syntax
to go with them, and, thus, to fragment the language thereby:)].
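
As a toy illustration of the kind of thing such implementations provide
(one hypothetical approach among many, nothing blessed):

def requires(precondition, func):
    # wrap 'func' so the precondition is checked on every call
    def checked(*args):
        assert precondition(*args), 'precondition violated'
        return func(*args)
    return checked

def halve(x):
    return x / 2.0
halve = requires(lambda x: x >= 0, halve)

print halve(4)    # 2.0
halve(-1)         # AssertionError: precondition violated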

Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell. I would be surprised if whichever
version does make it into Python doesn't let you express contracts.
A contract violation will doubtlessly only mean a clear and early error
diagnostic, surely a good thing but not any real change in the
power of the language.

Can you give an example for the presumably dangerous things macros
supposedly can do that you have in mind?

I have given this repeatedly: they can (and in fact have) tempt programmers
using a language which offers macros (various versions of lisp) to, e.g.,
"embed a domain specific language" right into the general purpose language.
I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I
have seen computer scientists with modest grasp of integrated circuit design
embed half-baked hardware-description languages (_at least_ one different
incompatible such sublanguage per lab) right into the general-purpose
language, and tout it at conferences as the holy grail -- while competitors
were designing declarative languages intended specifically for the purpose
of describing hardware, with syntax and specifically limited semantics that
seemed to be designed in concert with the hardware guys who'd later be
USING the gd thing (and were NOT particularly interested in programming
except in as much it made designing hardware faster and cheaper). The
effort of parsing those special-purpose language was of course trivial (even
at the time -- a quarter of a century ago -- yacc and flex WERE already
around...!), there was no language/metalanguage confusion, specialists in
the domain actually did a large part of the domain-specific language design
(without needing macro smarts for the purpose) and ended up eating our
lunch (well, except that I jumped ship before then...;-).

Without macros, when you see you want to design a special-purpose
language you are motivated to put it OUTSIDE your primary language,
and design it WITH its intended users, FOR its intended purposes, which
may well have nothing at all to do with programming. You parse it with a
parser (trivial these days, trivial a quarter of a century ago), and off you
go. With macros, you're encouraged to do all the wrong things -- or, to
be more precise, encouraged to do just the things I saw causing the
many costly failures (one or more per lab, thanks to the divergence:)
back in my early formative experience.

I have no problem with macros _in a special-purpose language_ where
they won't tempt you to embed what you _should_ be "out-bedding",
so to speak -- if the problem of giving clear diagnostics of errors can be
mastered, denoting that some functions are to be called at compile
time to produce code in the SPL has no conceptual, nor, I think,
particular "sociological" problem. It's only an issue of weighing their
costs and usefulness -- does the SPL embody other ways to remove
duplication and encourage refactoring thereby, is there overlap
among various such ways, etc, etc. E.g., a purely declarative SPL,
with the purpose of describing some intricate structure, may have no
'functions' and thus no other real way to remove duplication than
macros (well, it depends on whether there may be other domain
specific abstractions that would be more useful than mere textual
expansions, of course -- e.g. inheritance/specialization, composition
of parts, and the like -- but they need not be optimal to capture some
inevitable "quasi-accidental duplications of SPL code" where a macro
might well be).


Alex
 
A

Alex Martelli

Andrew said:
Alex:

Pshaw. My hypothetical house of the 2050s or so will know
that "could" in this context is a command. :)

Good luck the first time you want to ask it about its capabilities,
and my best wishes that you'll remember to use VERY precise
phrasing then.

But what if computers someday become equally capable
as humans at understanding unconstrained speech? It
can be a dream, yes?

If computers become as complicated as human beings, and
I think that IS necessary for the understanding you mention,
I'll treat them as human beings. I also think we have enough
human beings, and very fun ways to make new ones, as is,
so I don't see it as a dream to have yet more but made of
(silicon or whatever material is then fashionable).

An unfortunate typo. I meant "speech" instead of "speed" but
my fingers are too used to typing the latter. Here I would like
a computer to ask "um, did you really mean that?" -- so long as
the false positive rate was low enough.

Well, I and other humans didn't even think that you might have made
a simple 'fingero' (not quite a typo but equivalent)...!-)

The conjecture was that computer programming languages are
constrained by the form of I/O, and that other languages, based
on speech, free-form 2D writing, or other forms of input, may
be more appropriate, at least for some domains.

This was in response to the idea that Lisp is the most appropriate
language for all forms of programming.

The syntax of Python would surely have to be changed drastically
if speech were the primary means of I/O for it, yes. As for lisp, that's
less certain to me (good ways to pronounce open and closed
parens look easier to find than good ways to pronounce whitespace
AND the [expletivedeleted] case-sensitive identifiers...:).


Alex
 
D

dewatf

Doesn't it belong to the group that includes 'fructus'? Of course this has
nothing to do with the plural used in English, but still... :)

It sort of does. For nouns with a Latin plural, that plural is often
brought into English too.
This page, which has a lot of info on this issue, seems to think so:

http://www.perl.com/language/misc/virus.html

Thanks for the link - interesting reading.

The Oxford Latin Dictionary and the Perseus project classify 'virus' as
an irregular second declension noun (of which there are a few, like
virus). Betts and others argue it is a 4th declension noun like 'census'
and 'fructus' (though Betts still lists it as 2nd declension irregular
in his Latin textbook, which I was using). The matter turns on a
couple of surviving references to the genitive singular.

And that argument doesn't affect the plural not being used. After
reading the page, I will change my 'such nouns were usually only used
in the nominative and accusative singular in Latin' to 'were only used
in the singular'.

dewatf.
 
H

Henrik Motakef

Alex Martelli said:
As in, no lisper will ever admit that a currently existing feature is
considered a misfeature?-)

You might want to search google groups for threads about "logical
pathnames" in cll :)
Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell.

A propos interfaces in Python: The way they were done in earlier Zope
(with "magic" docstrings IIRC) was one of the things that led me to
believe language extensibility was a must, together with the plethora
of SPLs the Java community came up with, either in comments (like
JavaDoc and XDoclet) or ad-hoc XML "configuration" files that grow and
grow until they are at least Turing-complete at some point. (blech)

People /will/ extend the base language if it's not powerful enough
for everything they want to do, macros or not. You can either give
them a powerful, documented and portable standard way to do so, or
ignore it, hoping that the benevolent dictator will someday change the
core language in a way that blesses one of the extensions (most likely
a polished variant of an existing one) as the "obvious", official one.

At the end of the day, it is the difference between implementing a
proper type system and extending lint to check for consistent use of
Hungarian notation.
 
P

Pascal Costanza

Alex said:
Pascal Costanza wrote:


Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained.

....but such a macro _exclusively_ reasons about the code that is passed
to it. So the effects are completely localized. You don't do any damage
to the rest of the language because of such a macro.

Of course, such a macro needs to be well-defined and well-documented.
But that's the case for any code, isn't it?
(Are you also considering adding DBC to Python? I would expect that by
now, given your reply above.)
[...]

Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell. I would be surprised if whichever
version does make it into Python doesn't let you express contracts.

OK, I think I understand the Python mindset a little bit better. Thanks.
I have given this repeatedly: they can (and in fact have) tempt programmers
using a language which offers macros (various versions of lisp) to, e.g.,
"embed a domain specific language" right into the general purpose language.
I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I
have seen computer scientists with modest grasp of integrated circuit design
embed half-baked hardware-description languages (_at least_ one different
incompatible such sublanguage per lab) right into the general-purpose
language, and tout it at conferences as the holy grail

That's all? And you really think this has anything to do with macros?

Yes, macros allow you to write bad programs - but this is true for any
language construct.

Your proposed model means that for each DSL you might need, you also
need to implement it as a separate language. Well, this has also been
done over and over again, with varying degrees of success. You can
probably name several badly designed "out-bedded" little languages. Does
this mean that your proposal sucks as well? It doesn't seem to guarantee
good little languages, does it?

In reality, in both approaches we can just find both badly and well
designed DSLs.

Bad languages, no matter whether embedded or "out-bedded", exist not
because of the technology that is being used to implement them but
because of the fact that humans can fail when they undertake something.

You surely can name some badly designed libraries, even for Python. Does
this mean that the respective languages suck? That the very concept of
libraries sucks?

Here is one of my favorite quotes, by Guy Steele and Gerald Sussman:
"No amount of language design can _force_ a programmer to write clear
programs. If the programmer's conception of the problem is badly
organized, then his program will also be badly organized. The extent to
which a programming language can help a programmer to organize his
problem is precisely the extent to which it provides features
appropriate to his problem domain. The emphasis should not be on
eliminating “bad” language constructs, but on discovering or inventing
helpful ones." (from "Lambda - The Ultimate Imperative")


Pythonistas seem to think otherwise wrt language design that can force
programmers to write clear programs. If you think that this is a good
summary of the Python mindset then we can stop the discussion. I simply
don't buy into such a mindset.


Pascal
 
R

Raffael Cavallaro

Pascal Costanza said:
Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.

This parallels Paul Graham's critique of the whole idea of program
"specifications." To paraphrase Graham,for any non-trivial software,
there is no such thing as a specification. For a specification to be
precise enough that programmers can convert it directly into code, it
must already be a working program! What specifications are in reality is
a direction in which programmers must explore, finding in the process
what doesn't work and what does, and how, precisely, to implement that.

Once you've realized that there is really no such thing as the waterfall
method, it follows inevitably that you'll prefer bottom up program
development by exploratory methods. Once you realize that programs are
discovered, not constructed from a blueprint, you'll inevitably prefer a
language that gives you freedom of movement in all directions, a
language that makes it difficult to paint yourself into a corner.
 
R

Raffael Cavallaro

Alex Martelli said:
And you would be wrong: forks are much less frequent than your theory
predicts. Read some Eric Raymond to understand this, he's got good
ideas about these issues.

Because the forks happen at a much higher level. People know that
they'll catch hell if they try to fork Perl, so they design Python or
Ruby instead.
 
D

Daniel P. M. Silva

Alex said:
Good luck the first time you want to ask it about its capabilities,
and my best wishes that you'll remember to use VERY precise
phrasing then.

Hehe, I hope I never scream "NAMESPACE-MAPPED-SYMBOLS" at my house :p

- DS
 
E

Erann Gat

Alex Martelli said:
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how does it modify that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.

It is not necessary to exhibit a theory of how WITH-CONDITION-MAINTAINED
actually works to understand that if one had such a theory one can package
that theory for use more attractively as a macro than as a function. It
is not impossible to package up this functionality as a function, but it's
very awkward. Control constructs exist in programming languages for a
reason, despite the fact that none of them are really "necessary". For
example, we can dispense with IF statements and replace them with a purely
functional IF construct that takes closures as arguments. Or we can do
things the Java way and create a new Conditional object or some such
thing. But it's more convenient to write an IF statement.
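
Rendered in Python for concreteness (a sketch of that purely functional
IF, not a proposal):

def functional_if(test, then_thunk, else_thunk):
    # both branches are closures; only the chosen one ever runs
    if test:
        return then_thunk()
    return else_thunk()

print functional_if(2 + 2 == 4, lambda: 'yes', lambda: 'no')   # yes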

The claim that macros are useful is nothing more and nothing less than the
claim that the set of useful control constructs is not closed. You can
believe that or not. To me it is self-evidently true, but I don't know
how to convince someone that it's true who doesn't already believe it.
It's rather like arguing over whether the Standard Model of Physics covers
all the useful cases. There's no way to know until someone stumbles
across a useful case that the Standard Model doesn't cover.
For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason for the
drastic disagreements in your camp, or why most of you keep trying to
push macros (and lisp in general) at those of us who are NOT
particularly interested in "living on the edge" and running big risks
for their own sake, according to your answer to the preceding question,
thanks.

I can't speak for anyone but myself of course, but IMO nothing worthwhile
is free of risks. I also think you overstate the magnitude of the risk.
You paint nightmare scenarios of people "changing the language"
willy-nilly in all sorts of divergent ways, but 1) in practice on a large
project people tend not to do that and 2) Lisp provides mechanisms for
isolating changes to the language and limiting the scope of their effect.
So while the possibility exists that someone will change the language in a
radical way, in practice this is not really a large risk. The risk of
memory corruption in C is vastly larger than the risk of "language
corruption" in Lisp, and most people seem to take that in stride.
...and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit.

But all any high level language does is "pretty things up a bit". There's
nothing you can do in any language that can't be done in machine
language. "Prettying things up a bit" is the whole point. Denigrating
"prettying things up a bit" is like denigrating cars because you can get
from here to there just as well by walking, and all the car does is "speed
things up a bit".

E.
 
G

Greg Ewing (using news.cis.dfn.de)

Dave said:
Here's my non-PEP for such a feature:

return { |x, y|
    print x
    print y
}

This is scary! Some years ago I devised a language called "P" that
was translated into Postscript. Its parameterised code blocks looked
EXACTLY like that!

I wouldn't like to see this in Python, though -- it doesn't quite
look Pythonic enough, somehow.

Maybe instead of trying to find a way of shoehorning a compound
statement into an expression, we should be trying to find a way
of writing a procedure call, which would normally be an expression,
as a statement... maybe something like

with_directory "/foo/blarg" do:
    print os.listdir(".")

which would be equivalent to

def _somefunc():
    print os.listdir(".")
with_directory("/foo/blarg", do=_somefunc)
 
E

Erann Gat

Alex Martelli said:
Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document and
reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new one, most
particularly when there are also implicit models of the physical world being
inferred and reasoned about. Given that I've seen nobody say, for days!,
that Gat's example was idiotic, as I had first thought it might be, and
on the contrary I've seen many endorse it, I use it now as the simplest
way to show why macros are obviously claimed by their proponents to
be _scarily_ more powerful than functions.

Why "scarily"?

E.
 
G

Greg Ewing (using news.cis.dfn.de)

Andrew said:
It has sometimes been said that Lisp should use first and
rest instead of car and cdr

I used to think something like that would be more logical, too.
Until one day it occurred to me that building lists is only
one possible, albeit common, use for cons cells. A cons cell
is actually a completely general-purpose two-element data
structure, and as such its accessors should have names that
don't come with any preconceived semantic connotations.

From that point of view, "car" and "cdr" are as good
as anything!
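
For the non-Lispers, that "general-purpose two-element structure" view,
sketched in Python:

def cons(a, d): return (a, d)   # any two things; nothing says 'list'
def car(c): return c[0]
def cdr(c): return c[1]

point = cons(3, 4)                      # a pair that isn't a list at all
lst = cons(1, cons(2, cons(3, None)))   # the same cell, chained, makes a list
print car(point), car(cdr(lst))         # 3 2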
 
G

Greg Ewing (using news.cis.dfn.de)

Dave said:
In that case, why do we eschew code blocks, yet have no problem with the
implicit invocation of an iterator,

I don't think code blocks per se are regarded as a bad thing.
The problem is that so far nobody has come up with an entirely
satisfactory way of fitting them into the Python syntax as
expressions.
What if I wrote:

for byte in file('input.dat'):
    do_something_with(byte)

That would be a bit misleading, no?

It certainly would! But I think the implicitness which is
tripping the reader up here lies in the semantics of the
file object when regarded as a sequence, not in the for-loop
construct. There is more than one plausible way that a file
could be iterated over -- it could be a sequence of bytes
or a sequence of lines. An arbitrary choice has been made
that it will (implicitly) be a sequence of lines. If this
example shows anything, it's that this was perhaps a bad
idea, and that it might have been better to make it explicit
by requiring, e.g.

for line in f.iterlines():
    ...

or

for byte in f.iterbytes():
    ...

then if you wrote

for byte in f.iterlines():
    ...

the mistake would stick out just as much as it does in
Ruby.
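
iterbytes is hypothetical, but its behavior is easy to sketch as a
plain generator:

def iterbytes(f):
    # yield a file's contents one byte at a time
    while True:
        byte = f.read(1)
        if not byte:
            break
        yield byte

open('input.dat', 'wb').write('abc')   # a tiny sample file
for byte in iterbytes(open('input.dat', 'rb')):
    print repr(byte)                   # 'a', 'b', 'c', one per line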
 
