Python syntax in Lisp and Scheme


Kenny Tilton

Kenny said:
Andrew Dalke wrote:

Speaking of non-pros:

"Lisp is easy to learn

Lisp's syntax is simple, compact and spare. Only a handful of “rules”
are needed. This is why Lisp is sometimes taught as the first
programming language in university-level computer science courses. For
the composer it means that useful work can begin almost immediately,
before the composer understands much about the underlying mechanics of
Lisp or the art of programming in general. In Lisp one learns by doing
and experimenting, just as in music composition. "

From: http://pinhead.music.uiuc.edu/~hkt/nm/02/lisp.html

No studies, tho.

kenny
 

Raymond Wiker

Andreas Rossberg said:
I'm not terribly familiar with the details of Lisp macros but since
recursion can easily lead to non-termination you certainly need tight
restrictions on recursion among macros in order to ensure termination
of macro substitution, don't you? Or at least some ad-hoc depth
limitation.

Same as with function calls, you mean?

--
Raymond Wiker                 Mail:  (e-mail address removed)
Senior Software Engineer      Web:   http://www.fast.no/
Fast Search & Transfer ASA    Phone: +47 23 01 11 60
P.O. Box 1677 Vika            Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY          Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
 

Hannu Kankaanpää

Dave Benjamin said:
For instance, I always thought this was a cooler alternative to the
try/finally block to ensure that a file gets closed (I'll try not to
mess up this time... ;) :

open('input.txt', { |f|
    do_something_with(f)
    do_something_else_with(f)
})

But being a function, it'd have the nasty property of a
separate scope (yes, that can be nasty sometimes). I'd perhaps
want to do

open('input.txt', { |f| data = f.read() })

But alas, 'data' would be local to the anonymous function and
not usable outside. Perhaps I'd want to do this:

open('input.txt', { |f|
    data = f.read()
    return data.startswith('bar')
})

Well, unfortunately that return would only return from the
anonymous function. 'open' could return this result,
so the above could be done awkwardly:

return open('input.txt', { |f|
    data = f.read()
    return data.startswith('bar')
})

But instead for this, and for all the other objects that
require a resource to be freed after its use, I think a separate
syntax would be preferable (or macros, but we'll never get those).
IIRC, this has been suggested several times before, with
varying syntax:

with f=file('input.txt'):
    data = f.read()
    print data[0:3]

That's it. Or for opening multiple files, here's a fabricated example:

with f1=file('1.txt'), f2=file('2.txt', 'w'):
    data = f1.read()
    if data.startswith('foo'):
        break #break breaks out of 'with'
    f2.write(data)
    return True
print 'bleh'
return False


'with' can't be said to be non-explicit either. It'd only be
used with variables that do have resources to be released, so
what really happens is stated clearly. C++ RAII, on the other
hand, could be considered implicit.

Hmm.. With those 'break' semantics, some might be tempted to use
'with' without any variables as well:

with:
    if x == y:
        x = 1
        break
    x = 0
    if y == z:
        y = 1
        break
    y = 0

In current Python, that'd have to be done like this (or with
a single-element for loop)

if x == y:
    x = 1
else:
    x = 0
    if y == z:
        y = 1
    else:
        y = 0

Hmm, that would lead to two different approaches, which
some might not like. Former is flatter though, at least
if you continue with similar condition/breaks
("Flat is better than nested.") ;)

On second thought, maybe the break suggestion was bad
after all. With such a break, breaking out of an enclosing loop
from within 'with' wouldn't be so easy. And since 'continue' inside
'with' doesn't make sense, the following would be strange:

for x in range(5):
    with f=file('data%d.txt' % x):
        continue # would continue loop
        break # would break out of 'with'


Ok, never mind 60% of this message then. Just consider
'with' without break (but with possibility to handle
multiple variables).
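
For concreteness, such a 'with' would presumably just be sugar for
the try/finally emulation we have to write today (a sketch only,
using the 2.x 'file' type as above):

f = file('input.txt')
try:
    data = f.read()
    print data[0:3]
finally:
    f.close()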
 

Andreas Rossberg

Raymond said:
Same as with function calls, you mean?

In functional languages you at least have no limitation whatsoever on
the depth of tail calls. Is the same true for macros?

Apart from that, can you have higher-order macros? With mutual recursion
between a macro and its argument? That is, can you write a fixpoint
operator on macros?

I'm not saying that any of this would be overly useful. Just trying to
refute Dirk's rather general statement about macros subsuming HOFs.

- Andreas

--
Andreas Rossberg, (e-mail address removed)-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
 

Pascal Costanza

Kenny said:
Speaking of non-pros:

"Lisp is easy to learn

Lisp's syntax is simple, compact and spare. Only a handful of “rules”
are needed. This is why Lisp is sometimes taught as the first
programming language in university-level computer science courses. For
the composer it means that useful work can begin almost immediately,
before the composer understands much about the underlying mechanics of
Lisp or the art of programming in general. In Lisp one learns by doing
and experimenting, just as in music composition. "

From: http://pinhead.music.uiuc.edu/~hkt/nm/02/lisp.html

No studies, tho.

Here they are: http://home.adelphi.edu/sbloch/class/hs/testimonials/

(This is about Scheme.)


Pascal
 

Peter Seibel

Pascal Costanza said:
As you have already noted in another note, car and cdr can be
composed. cadr is the second element, caddr is the third, cadddr is
the fourth, and so on. cddr is the rest after the second element,
cdddr is the rest after the third element, and so on. Other
abbreviations I have used relatively often are caar, cdar, cadar.

These abbreviations seem strange to a Lisp outsider, but they are
very convenient, and they are easy to read once you have gotten used
to them. You don't actually "count" the elements in your head every
time you see these operators, but they rather become patterns that
you recognize in one go.

As a follow-on to Pascal's point: It might seem, if one just thinks
about function calls, that the benefit of the composed C[AD]*R
operations is fairly small, and perhaps not worth the "cost" of being
more cryptic: i.e. Is the savings of a few characters in (cddr foo) vs
(rest (rest foo)) that big a benefit? But since Lisp supports higher
order functions, having single name for those composite functions
saves the clutter of having to create a lambda expression. For
instance, compare:

(loop for (x y) on list by #'cddr do (foo x y))

vs

(loop for (x y) on list by #'(lambda (l) (rest (rest l))) do (foo x y))


I figured this out by deciding--as a matter of style--that I was just
going to use FIRST/REST all the time and then noticing that in
situations like this, CDDR is much more convenient. The point being,
it's hard to foresee all the subtle ways different features interact.
So it can be simultaneously true that CAR and CDR were originally
chosen as names for pretty much arbitrary historical reasons *and*
that they have persisted for a lot of "hard-headed" but subtle
engineering reasons. (Or maybe soft-headed, aesthetic reasons, if you
care to draw the distinction when talking about programming language
design which I don't.)
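
(A rough Python analogue of the same effect, for the c.l.python
readers -- my own sketch, not anything from the thread: naming a
composite accessor once keeps higher-order call sites terse.

def second(pair):
    return pair[1]

pairs = [(1, 'a'), (2, 'b')]
print map(second, pairs)           # clean at every call site
print map(lambda p: p[1], pairs)   # the inline-lambda clutter, each time
)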

-Peter
 

Pascal Costanza

Andreas said:
I'm not terribly familiar with the details of Lisp macros but since
recursion can easily lead to non-termination you certainly need tight
restrictions on recursion among macros in order to ensure termination of
macro substitution, don't you? Or at least some ad-hoc depth limitation.

The Lisp mindset is not to solve problems that you don't have.

If your code has a bug then you need to debug it. Lisp development
environments provide excellent debugging capabilities out of the box.
Don't guess how hard it is when you don't have the experience yet.


Pascal
 

Marco Antoniotti

David said:
You know, I think that this thread has so far set a comp.lang.* record
for civility in the face of a massively cross-posted language
comparison thread. I was even wondering if it was going to die a quiet
death, too.

Ah well, we all knew it was too good to last. Have at it, lads!

Common Lisp is an ugly language that is impossible to understand with
crufty semantics

Scheme is only used by ivory-tower academics and is irrelevant to real
world programming

Python is a religion that worships at the feet of Guido van Rossum
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself

There. Is everyone pissed off now?

You forgot the INTERCAL crowd :)

Cheers
 

Alex Martelli

Dave Benjamin wrote (answering Mike Rovner):
...
In that case, why do we eschew code blocks, yet have no problem with the
implicit invocation of an iterator, as in:

for line in file('input.txt'):
    do_something_with(line)

I don't see that there's anything "implicit" in the concept that a
special operation works as indicated by its syntax. I.e., I do not
find this construct any more "implicit" in the first line than in
its second one, which is the juxtaposition of a name and a pair of
parentheses to indicate calling-with-arguments -- and alternatives
such as:

do_something_with.call_with_arguments(line)

aren't "more explicit", just more verbose.

Similarly, the fact that
file('input.txt')
(a call to a type object) creates and returns an object is not any
more "implicit" than it would be with a factory classmethod a la:
file.open_for_reading_an_existing_textfile_named('input.txt')
-- the latter would not be "more explicit", just more verbose.

Simply juxtaposing parentheses right after a callable CALLS it,
because that syntax is defined to be THE syntax for such calls in
Python. Similarly, simply prepending "for xx in" before an iterable
ITERATES ON it, because that syntax is defined to be THE syntax for
such iteration in Python. Neither is "less explicit" than verbose
alternatives requiring (e.g.) access to attributes on the callable
or iterable object. Such access to attributes could not (by first
class objects rule -- "everything's an object") produce anything
BUT objects -- so where does one stop...? x.call.call.call.call...???

This has nothing to do with "eschewing code blocks", btw; code blocks
are not "eschewed" -- they are simply syntactically allowed, as
"suites", only in specific positions. If Python's syntax defined
other forms of suites, e.g. hypothetically:

with <object>:
    <suite>

meaning to call the object (or some given method in it, whatever)
with the suite as its argument, it would be just as explicit as, e.g.:

for <name> in <object>:
    <suite>

or

<object>(<object>)

are today. Whether it would be wise, useful, etc, etc, is a different
set of issues, but I disagree that there is relevance here of the
implicit vs explicit principle (I know the poster you were replying to
did first claim that principle mattered here, but I just disagree with
him as well:). Of course, if we did adopt that 'with' or similar
syntax we'd also have to decide on the TYPE of a 'suite' thus literally
expressed, an issue which current syntax constructs using suites do
not have -- perhaps a code object, perhaps a callable (but in the
latter case you'd probably also want 'arguments' -- so the syntax might
have to be slightly more extensive, e.g. "with <object>, <name1>, <name2>:"
instead). But that seems a secondary issue to me.

This is not to say that I dislike that behavior; in fact, I find it
*beneficial* that the manner of looping is *implicit* because you can
substitute a generator for a sequence without changing the usage. But

You could do so even if you HAD to say iter(<object>) instead of
just <object> after every "for <name> in" -- it wouldn't be any
more "explicit", just more verbose (redundant, boiler-platey). So
I do not agree with your motivation for liking "for x in y:" either;-).
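
(Concretely, the redundant spelling would be, e.g.:

for line in iter(file('input.txt')):
    do_something_with(line)

where the explicit iter call adds nothing, since the for statement
already performs it.)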
there's little readability difference, IMHO, between that and:

file('input.txt').each_line({ |line|
    do_something_with(line)
})

Not huge, but the abundance of ({ | &c here hurts a little bit.

Plus, the first example is only obvious because I called my iteration
variable "line", and because this behavior is already widely known. What
if I wrote:

for byte in file('input.dat'):
    do_something_with(byte)

That would be a bit misleading, no? But the mistake isn't obvious. OTOH,
in the more explicit (in this case) Ruby language, it would look silly:

open('input.txt').each_line { |byte|
    # huh? why a byte? we said each_line!
}

Here, you're arguing for redundance, not for explicitness: you are claiming
that IF you had to say the same thing more than once, redundantly, then
mistakes might be more easily caught. I.e., the analogy is with:

file('foo.txt').write('wot?')

where the error is not at all obvious (until runtime when you get an
exception): file(name) returns an object *open for reading only* -- so
if you could not call file directly but rather had to say, e.g.:

file.open_for_reading_only('foo.txt').write('wot?')

the contrast induced by the mandated redundance might (one hopes)
make the error "look silly". Many languages do rather enthusiastically
embrace this, making you write more redundant boilerplate than useful
code connected with your program's task, or so it seems at times --
neither Ruby nor Python, however, go in for such systematic use of
redundance in general. In any case, redundance and explicitness are
separate concepts: if you have to express something more than once,
that is redundance -- if you have to express it (once or more) rather
than having the language guess on your behalf, that is explicitness.

Having sensible defaults does not necessarily violate explicitness,
btw. E.g., the reason we have to say "class x(object):" today is NOT
"just to be explicit" -- it's an unfortunate consequence of the need
to continue having old-style classes (and the prudent choice to keep
them as the default to ensure slow, smooth migration); an issue of
legacy, backwards compatibility, and concern for the existing body of
code, in other words, rather than of "implicit vs explicit". Once we
proceed on the slow process of burying classic classes, we can make
object the default base _without_ damaging anything. Of course, one
COULD puristically disagree -- but, practicality beats purity...;-).

I think this is important to point out, because the implicit/explicit
rule comes up all the time, yet Python is implicit about lots of things!
To name a few:

- for loops and iterators

Already addressed above: nothing implicit there.
- types of variables

There are none, so how could such a nonexisting thing be EITHER implicit
OR explicit? Variables don't HAVE types -- OBJECTS do.

Etc, etc -- can't spend another 1000 lines to explain why your "lots of
things" do not indicate violations of "explicit is better than implicit".

If all you're saying is that naming something is better than not naming
something because explicit is better than implicit, I'd have to ask why:

Sometimes it is (to avoid perilous nesting), sometimes it isn't (to
avoid wanton naming). I generally don't mind naming things, but it IS
surely possible to overdo it -- without going to the extreme below,
just imagine a language where ONLY named argument passing, and no use
of positional arguments, was allowed (instead of naming arguments being
optional, as it is today in Python).


I don't agree with Mike that try/finally is particularly readable. Yes,
it IS understandable, but its lack of support for an explicit
initialization phase and, when needed, for a distinction between normal
and abnormal exits, does lead to frequent problems -- e.g.:

try:
    f = open('goo.gah')
    process_file(f)
finally:
    f.close()

this is a FREQUENT bug -- if open fails, and thus f remains unbound,
the finally clause will STILL try to call close on it. Psychologically
it comes naturally to write the initialization INSIDE the try/finally,
but technically it should be outside. Also, when the actual actions
require more than one object, try/finally leads to deep nesting, and
flat is better than nested, e.g.:

fs = open(xx, 'r')
try:
    f1 = open(x1, 'w')
    try:
        f2 = open(x2, 'w')
        try:
            skip_prefix(fs)
            split_from_to(pred, fs, f1, f2)
            add_postfix(f1)
            add_postfix(f2)
        finally:
            f2.close()
    finally:
        f1.close()
finally:
    fs.close()

In a word, "yeurgh".

Not that the introduction of "block syntax" would be a panacea here,
necessarily. But claiming there is no problem with try/finally is,
IMHO, kidding ourselves.

Readability is a moving target. I think that the code block syntax
strikes a nice balance between readability and expressiveness. As far as

Maybe. I'm still not sold, though I think I do understand well why
one would WANT a literal form for code blocks. But some of the
use cases you give for 'not naming' -- e.g, a return statement --
just don't sit right with the kind of syntax that I think might help
with many of them (e.g. the 'with' keyword or something similar, to
pass blocks as arguments to callables, ONLY -- that's all Ruby allows,
too, so your 'return' use case above-mentioned wouldn't work all that
well there, either, though its lack of expression/statement split may
perhaps help a little).

If a Pythonic syntax can't be found to solve ALL use cases you've
raised, then the "balance" may be considered not nice enough to
compensate for the obvious problem -- a serious case of MTOWTDI.
what the majority of Python developers consider evil, I don't think
we've got the stats back on that one.

I don't think anybody has "stats", but following python-dev
regularly does give you a pretty good sense of what the consensus
is on what issues (it matters up to a point, since in the end Guido
decides, but it does matter somewhat). MTOWTDI, for example, is
a dead cert for a chorus of boos -- even when the existing WTDI is
anything but "obvious", e.g. reduce(operator.add, somelist) in 2.2
and before, proposing an obvious alternative, e.g. sum(somelist) in
2.3, is SURE to draw some disagreement (good thing, in this case,
that Guido overruled the disagreement and adopted the 'sum' builtin).
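
(Side by side, for concreteness:

import operator
somelist = [1, 2, 3]
print reduce(operator.add, somelist)   # the 2.2-and-before WTDI -> 6
print sum(somelist)                    # the 2.3 builtin -> 6
)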

So, the emergence of a way to write, e.g.:

"""
def loop_on_each_byte(filename):
def looping_callable(block):
...block(byte)...
return looping_callable

with loop_on_each_byte(filename), byte:
process_byte(byte)
"""

as an OWTDI from

"""
def each_byte(filename):
...yield byte...

for byte in each_byte(filename):
process_byte(byte)
"""

would SURELY draw a well-justified roar. The benefits would have
to be very overwhelming indeed to overcome this issue. But if we
were to support "return this literal code block", for example,
then the code block literal syntax would have to be an expression
rather than a suite -- and I just can't find a good way to do
THAT. And then there are the use cases which advocates of such
a new construct, like you, quote fondly -- and, I repeat, the
"why should I have to name something I am just going to return"
WAS one of the few use cases for literal code blocks you brought,
even though it's not directly supported in Ruby (or Smalltalk, as
far as I know). So, I suspect there may be no good solution.

This is nothing like APL... if anything, it's like Smalltalk, a language
designed to be readable by children!

Cite pls? I knew that Logo and ABC had been specifically designed
with children in mind, but didn't know that of Smalltalk.
I realize that APL sacrificed
readability for expressiveness to an uncomfortable extreme, but I really
think you're comparing apples and oranges here. List comprehensions are
closer to APL than code blocks.

As an ex-user of APL (and APL2) from way back when, I think you're
both talking through your respective hats: neither list comprehensions
(particularly in the Python variation on a Haskell theme, with
keywords rather than punctuation) nor code blocks resemble APL in the least.


Alex
 

Alex Martelli

Paul said:
That's silly. Something being successful means people want to use it
to get things done in the real world. At that point they start
needing the tools that other languages provide for dealing with the
real world. The real world is not a small and simple place, and small
simple systems are not always enough to cope with it. If GVR had kept
his gem small and simple, it would have remained an academic toy, and
I think he had wider-reaching ambitions than that.

I disagree, somewhat. Simplicity is not just history: it's still a
principle Pythonistas cherish -- and 3.0 will be about restoring some
of that, not by removing "tools you NEED for dealing with the real
world", but by removing at least some of the MTOWTDI that HAS crept
in through the addition of a few too many "tools that *other
languages* provide". Sure,
practicality beats purity -- and while simple is better than complex,
complex is better than complicated. I further argue that GvR *HAS*
"kept his [gem? what gem? it's Python, not Ruby!] small and simple" --
not QUITE as small and simple as it might have been kept in the best
of all possible worlds, but still outstandingly so compared with other
languages of comparable power and ease.

And Kenny's suggestion to "chase wish-listers away" is excellent --
one can use Dylan or C# or O'CAML or whatever else as an alternative
to Lisp, if that's what will best get them to stop bleating. Besides,
"if you want PL/I you know where to find it" has nice precedents (in
the only other language which was widely successful in the real world
while adhering to "provide only one way to perform an operation" as
one of its guiding principles -- not perfectly, but, close enough:).


Alex
 

Jon S. Anthony

Andrew Dalke said:
The tricky thing about using McConnell's book is the implications
of table 31-2 in the section "Using Rapid Development Languages",

This thing has been debunked for years. No one with a clue takes it
seriously. Even the author(s) indicate that much of it is based on
subjective guesses.

/Jon
 

Doug Tolton

Andrew said:
Doug Tolton:



I disagree with your summary. Compare:

The argument is that expressive power for a single developer can, for
a group of developers and especially those comprised of people with
different skill sets and mixed expertise, reduce the overall effectiveness
of the group.

Notice the "can". Now your summary is:

...allowing expressiveness via high level constructs detracts
from the effectiveness of the group

That implies that at least I assert that *all* high level constructs
detract from group effectiveness, when clearly I am not saying
that.





Nor can you, because I did not say that. I said that the arguments you
use to justify your assertions could be stronger if you were to include
cases in your history and experience which show that you understand
the impacts of a language feature on both improving and detracting from
a group effort. Since you do have that experience, bring it up. But
since your arguments are usually along the lines of "taking tools out
of your hands", they carry less weight for this topic.

(Ambiguity clarification: "your hands" is meant as 2nd person singular
possessive and not 2nd person plural. :)




McConnell's book has the same study, with outliers for assembly
and APL. Indeed, I mentioned this in my reply:



I assume you refer to "Succinctness is Power" at
http://www.paulgraham.com/power.html

It does not make as strong a case as you state here. It argues
that "succintness == power" but doesn't make any statement
about how much more succinct Lisp is over Python. He doesn't
like Paul Prescod's statement, but there's nothing to say that
Python can't be both easier to read and more succinct. (I am
not making that claim, only pointing out that that essay is pure
commentary.)

Note also that it says nothing about group productivity.
If it takes me 5% longer to write a program in language X
than in language Y, but where I can more easily use code and
libraries developed by others then it might be a good choice
for me to use a slightly less succinct language.

Why don't people use APL/J/K with its succinctness?

I also disagree with Graham's statement:



I develop software for computational life sciences. I would
do so in Perl, C++, Java, even Javascript because I find
the domain to be very interesting. I would need to be very
low on money to work in, say, accounting software, even if
I had the choice of using Python.





Yes. In this I have a large body of expertise by which to compare
things. Perl dominates bioinformatics sofware development, and the
equivalent Python code is quite comparable in size -- I argue that
Python is easier to understand, but it's still about the same size.




"Can't be located"!?!?! I gave a full reference to the secondary material,
included the full quote (with no trimming to bias the table more my way),
gave the context to describe the headings, and gave you a reference
to the primary source! And I made every reasonable effort to find both
sources online.

Since you can't be suggesting that I tracked down and destroyed
every copy of McConnell's book and of the primary literature (to make
it truly unlocatable) then what's your real complaint? That things exist
in the world which aren't accessible via the web? And how is that my
fault?




If I want some real world numbers on program length, I do it myself:
http://pleac.sourceforge.net/
I wrote most of the Python code there

Still, since you insist, I went to the scorecard page and changed
the weights to give LOC a multiplier of 1 and the others a multiplier
of 0. This is your definition of succinctness, yes? This table
is sorted (I think) by least LOC to most.

SCORES
Language      Implementation   Score   Missing
Ocaml         ocaml              584         0
Ocaml         ocamlb             584         0
Ruby          ruby               582         0
Scheme        guile              578         0
Python        python             559         0
Pike          pike               556         0
Perl          perl               556         0
Common Lisp   cmucl              514         0
Scheme        bigloo             506         1
Lua           lua                492         2
Tcl           tcl                478         3
Java          java               468         0
Awk           mawk               457         6
Awk           gawk               457         6
Forth         gforth             449         2
Icon          icon               437         7
C++           g++                435         0
Lisp          rep                427         3
Haskell       ghc                413         5
Javascript    njs                396         5
Erlang        erlang             369         8
PHP           php                347         9
Emacs Lisp    xemacs             331         9
C             gcc                315         0
SML           mlton              284         0
Mercury       mercury            273         8
Bash          bash               264        14
Forth         bigforth           264        10
SML           smlnj              256         0
Eiffel        se                 193         4
Scheme        stalin             131        17

So:
- Why aren't you using Ocaml?
- Why is Scheme at the top *and* bottom of the list?
- Python is right up there with the Lisp/Scheme languages
- ... and with Perl.

Isn't that conclusion in contradiction to your statements
that 1) "Perl is *far* more compact than Python is" and 2)
the implicit one that Lisp is significantly more succinct than
Python? (As you say, these are small projects .. but you did
point out this site so implied it had some relevance.)




I invite you to dig up the original paper (which wasn't McConnell)
and enlighten us. Until then, I am as free to agree with McConnell --
more so because his book is quite good and comprehensive with
sound arguments comparing and contrasting the different
approaches and with no strong hidden agenda that I can detect.




My lack of knowledge notwithstanding, the question I pose to
you is, in three parts:
- is it possible for a language feature to make a single programmer
more expressive/powerful while hindering group projects?
Yes, I believe this to be the case. However, in my own experience, even
working with languages such as Visual Basic and Java (which are far less
expressive than Python), people give me code that is so obfuscated that
it could compete in the Perl contest.

In my experience, it hasn't been expressiveness per se that caused the
most problems. It has been lack of familiarity with sound software
engineering concepts or, more specifically, lack of experience
building real-world applications.

So the short answer is that *any* operator / feature used incorrectly
can cause massive confusion. I've seen this with simple operators such
as loop (ever seen seven nested loops doing different things at
different levels? It can be ugly)
- can you list three examples of situations where that's occurred?
Hmm, does every time I've read someone else's code count? ;)
In seriousness, I have yet to be on any serious project where someone
doesn't do something that I disagree with. Personally, though, I haven't
run across a problem where a cleanly implemented abstraction (i.e. class,
macro, HOF or metaclass) has caused a loss of productivity. In my
experience it has been quite the opposite.

Most of the development teams that I've worked on have gravitated
towards two groups. Those who write utilities and substrates for the
development framework, and those who consume them. This has happened
even if not specified by management, simply because those with the
ability to write reusable abstractions end up doing it a lot. I have
personally seen on numerous occasions development speed up greatly when
the proper high level constructs were in place.
- can you list one example where the increased flexibility was, in
general, a bad idea? That is, was there a language which would
have been better without a language feature.
I don't necessarily believe that to be the case. Certainly I can list
cases where utilizing a certain feature for a certain problem has been a
bad idea. That doesn't generalize to the language being better without
the feature, though. For that to be the case, IMO, there would have to
be *no* redeeming value to the feature, or its use would have to be so
massively problematic that it nearly always causes problems.

I can't think of any feature off hand where I would say "take it out of
the language, that's just stupid". Perhaps there are some, and I'm just
missing them while I'm thinking about it.

One example of mis-use that caused some serious headaches:
Back in 1999 I was lead on a team building a heavy duty enterprise web
application. Management decided that our best choice was to use Visual
Basic and MTS. The system had to be scalable, it had to be extremely
fault tolerant and it had to be very flexible. The architecture
initially decided upon was to have three web servers, two application
servers and a fully fault tolerant sql server.

Based on the initial reports from MS we decided to test DCOM from the
web servers to the application servers (remember when that was the big
fad?). We quickly found out that our performance was terrible, and
couldn't scale to support our minimum required users. Switching things
around we re-configured and went with five web servers each running the
MTS components locally.

Another problem we ran into was that we decided to test out the XML
hype. All of our messaging between objects and between systems was sent
via XML payloads. This turned out to be extremely slow, and we ended up
ripping out most of the XML messaging guts in order to speed up the system.

We also encountered serious problems with people not knowing how to
efficiently utilize a SQL Server. For instance they would get a
recordset from each table (rather than joining) and then loop through
each recordset comparing the values and constructing their resultset.
Rewriting the queries to properly utilize joins and where clauses
yielded several orders of magnitude performance increases.
Note that I did not at all make reference to macros. Your statements
to date suggest that your answer to the first is "no."
That's not exactly my position; rather, my position is that just about
anything can and will be abused in some way, shape, or fashion. It's a
simple fact of working in teams. However, I would rather err on the side
of abstractability and re-usability than on the side of forced restrictions.
 

Erann Gat

"Vis Mike" said:

Ahh, but overloading only works at compile time:

That's irrelevant. When it happens doesn't change the fact that this
proves it (multiple dispatch with non-congruent arglists) is possible.
Nothing prevents you from using the same algorithm at run time.
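
(To make the run-time version concrete for the Python folks -- a toy
sketch of my own, not anything from Mike's post -- the same double
dispatch can be driven off a table keyed on both argument classes:

class Asteroid: pass
class Ship: pass

def hit_asteroid_ship(a, s):
    return 'ship shredded'

# dispatch table keyed on the classes of BOTH arguments
handlers = {(Asteroid, Ship): hit_asteroid_ship}

def collide(a, b):
    return handlers[a.__class__, b.__class__](a, b)

print collide(Asteroid(), Ship())   # ship shredded

No compile time in sight.)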

E.
 

Alex Martelli

Dave said:
Here's my non-PEP for such a feature:

return { |x, y|
    print x
    print y
}

Which would be the equivalent of:

def anonymous_function(x, y):
    print x
    print y
return anonymous_function

Oh, and what should:

return {
}

MEAN? An empty dictionary, like today, or the equivalent of

return lambda: None

i.e. an empty argument-less function?

This is just reason #1 why this syntax is not satisfactory (I
guess it could be forced by mandating || to mean "takes no
args" -- deviating from Ruby in that sub-issue, though). The
second point is the use of punctuation in a way that no other
Python syntactic context allows -- it really feels alien.

Further, something that is more often than not desired by
people who desire code blocks is that they *don't* start a
new scope. Ruby fudges it with an ad-hoc rule -- use of
variables already existing outside is "as if" you were in
the same scope, use of new variables isn't (creates a new
variable on each re-entry into the block via yield, right?).
Clearly we can't use that fudge in Python. So, this is a
semantic problem to solve for whatever syntax. Can we find
any approach that solves ALL use cases? I don't know, but my
personal inclination would be to try saying that such a block
NEVER defines a new lexical scope, just like list comprehensions
don't -- i.e. in this sense such blocks would NOT be at all
equivalent to the functions produced by a def statement (lots
of implementation work, of course) -- all variables that might
look like locals of such a block would instead be considered
locals of the "enclosing" scope (which isn't enclosing, in a
sense, as there's no other new scope to enclose...;-).
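
(The list comprehension precedent, to be concrete about it:

squares = [x * x for x in range(3)]
print x   # prints 2 -- x is a local of the ENCLOSING scope

-- today's comprehensions already behave as I'm suggesting such
blocks should.)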

SOME explicit termination is no doubt necessary if we want to
allow returning e.g. a tuple of two or more such functions --
which is why we can't try taking another (and less repellent
to Pythonic syntax) leaf from Ruby's book (using do instead
of { -- requires making do a keyword -- and leaving out the
end that Ruby always requires, of course):

return do(x, y):
    print x
    print y

there would be no way to write something more after this
block but within the same expression if the only termination
was dedenting; perhaps:

return ( do(x, y):
    print x
    print y
), ( do(x, y):
    print y
    print x
)

i.e. with parentheses around the do-expression (if you need
to put anything more after it) might help here.
Then, merge map, filter, and reduce into the list type, so we can play

Why? So you can't e.g. reduce an arbitrary iterator (e.g., a generator),
tuple, array.array, ..., any more? We'd be better off without them, IMHO.
I see no advantage, over e.g. itertools, in associating these syntactically
to sequences (any kind or all kinds) or even iterators.


Alex
 

james anderson

Andreas said:
In functional languages you at least have no limitation whatsoever on
the depth of tail calls. Is the same true for macros?

any macro which cannot be implemented as a single quasiquoted form is likely
to be implemented by calling a function which computes the expansion. the only
difference between a macro function and any "normal" defined function is that
the former is not necessarily any symbol's function value. an auxiliary
function will be a function like any other function: anonymous, defined,
available in some given lexical context only. whatever. there are no intrinsic
restrictions on the computation which it performs. it need only admit to the
reality, that the environment is that of the compiler. eg, definitions which
are being compiled in the given unit "exist" if so specified only.

i am curious whether the availability of tail call elimination can have any
effect on the space performance of a function which is, in general, being
called to compute expressions for inclusion in a larger form. my intuition
says it would not matter.
Apart from that, can one have higher-order macros? With mutual recursion
between a macro and its argument?

what would that mean? a macro-proper's argument is generally an s-expression,
and the macro function proper is not bound to a symbol and not necessarily
directly funcallable, but i suppose one could come up with use cases for
mutual recursion among the auxiliary functions.

the generated expressions, on the other hand, often exhibit mutual references.
in this regard, one might want, for example to look at j.schmidt's meta
implementation.[1] perhaps, in some sense, the mutual references which it
generates could be considered "higher-order", but that doesn't feel right.

there's also the issue, that there is nothing which prevents a macro function
from interpreting some aspects of the argument expression as instructions for
operations to be performed at compile-time. eg. constant folding. depending on
how the macro might establish constancy, i'm not sure what "order" that is.
That is, can you write a fixpoint
operator on macros?

why one would ever think of doing that is beyond me, but given the standard y
operator definition [0],

? (DEFUN Y (F)
    ((LAMBDA (G) #'(LAMBDA (H) (FUNCALL (FUNCALL F (FUNCALL G G)) H)))
     #'(LAMBDA (G) #'(LAMBDA (H) (FUNCALL (FUNCALL F (FUNCALL G G)) H)))))

Y

should one feel compelled to do so, one might resort to something like

? (defmacro print* (&rest forms)
    `(progn ,@(funcall (y #'(lambda (fn)
                              #'(lambda (forms)
                                  (unless (null forms)
                                    (cons `(print ,(first forms))
                                          (funcall fn (rest forms)))))))
                       forms)))

PRINT*
? (macroexpand '(print* (list 1 2) "asdf" 'q))
(PROGN (PRINT (LIST 1 2)) (PRINT "asdf") (PRINT 'Q))
T
? (print* (list 1 2) "asdf" 'q)

(1 2)
"asdf"
Q
Q
?
I'm not saying that any of this would be overly useful. Just trying to
refute Dirk's rather general statement about macros subsuming HOFs.

hmm... i never thought of it that way.


[0] http://www.nhplace.com/kent/Papers/Technical-Issues.html
[1] http://www.cliki.net/Meta
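
(for the comp.lang.python side, the same standard y transcribed into
python -- a sketch only, with the usual eta-expansion to survive
applicative order:

def Y(f):
    return (lambda g: lambda *args: f(g(g))(*args))(
            lambda g: lambda *args: f(g(g))(*args))

def almost_factorial(rec):
    def fact(n):
        if n == 0:
            return 1
        return n * rec(n - 1)
    return fact

print Y(almost_factorial)(5)   # 120
)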
 

Alex Martelli

Doug said:
David said:
There's something pathological in my posting untested code. One more
try:

def categorize_jointly(preds, it):
    # NB: 'all' here is presumably a predicate-combining helper defined
    # elsewhere in the thread (mapping x to a bucket index), not a builtin
    results = [[] for _ in preds]
    for x in it:
        results[all(preds)(x)].append(x)
    return results

|Come on. Haskell has a nice type system. Python is an application of
|Greenspun's Tenth Rule of programming.

Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
does them much better, for example... and so does Python.
What is your basis for that statement? I personally like the way Lisp
does it much better, and I program in both Lisp and Python. With Python
it's not immediately apparent if you are passing in a simple variable
or a HOF. Whereas in lisp with #' it's immediately obvious that you are
receiving or sending a HOF that will potentially alter how the call
operates.

IMO, that syntax is far cleaner.

I think it's about a single namespace (Scheme, Python, Haskell, ...) vs
CLisp's dual namespaces. People get used pretty fast to having every
object (whether callable or not) "first-class" -- e.g. sendable as an
argument without any need for stropping or the like. To you, HOFs may
feel like special cases needing special syntax that toots horns and
rings bells; to people used to passing functions as arguments as a way
of living, that's as syntactically obtrusive as, say, O'CAML's mandate
that you use +. and not plain + when summing floats rather than ints
(it's been a couple of years since I last studied O'CAML, so for all I
know they may have changed that now, but, it IS in the book;-).

No doubt they could make a case that float arithmetic has potentially
weird and surprising characteristics and it's a great idea to make it
"immediately obvious" that's it in use -- and/or the case that this
allows stronger type inference and checking than SML's or Haskell's
use of plain + here allows. Rationalization is among the main uses
for the human brain, after all -- whatever feature one likes because
of habit, one can make SOME case or other for;-).
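
To spell out the single-namespace point in Python terms (a trivial
sketch): a function travels like any other object, with no #' or
other stropping at the call site:

def shout(s):
    return s.upper()

print map(shout, ['a', 'b'])   # ['A', 'B'] -- shout passed as a plain value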


Alex
 
P

Pascal Bourguignon

Andrew Dalke said:
That to me is a solid case of post hoc ergo propter hoc. The
words "1st" and "rst" are equally as short and easier to
memorize. And if terseness were very important, then
what about using "." for car and ">" for cdr? No, the reason
is that that's the way it started and it will stay that way
because of network effects -- is that a solid engineering
reason? Well, it depends, but my guess is that he wouldn't
weight strongly the impact of social behaviours as part of
good engineering. I do.

Right, network effect. And attachment to historic heritage. C has B
and "Hello World!". COBOL has real bugs pinned in log books. Lisp has
the 704's CAR and CDR.
 

Pascal Bourguignon

Andrew Dalke said:
Or is there a requirement that it be constrained to display
systems which can only show ASCII? (Just like a good
Lisp editor almost requires the ability to reposition a
cursor to blink on matching open parens. Granted, that
technology is a few decades old now while Unicode isn't,
but why restrict a new language to the display systems
of the past instead of the present?)

Because the present is composed of the past. You have to be
compatible, otherwise you could not debug a Deep Space 1 probe
160 million km away (and this one was only two or three years old).



Indeed. It looks easier to understand to my untrained eye.
I disagree that "+" shouldn't work on strings because that
operation isn't commutative -- commutativity isn't a feature
of + it's a feature of + on a certain type of set.

Mathematicians indeed overload operators while taking into account
their precise properties. But mathematicians are naturally
intelligent. Computers and our programs are not. So it's easier if
you classify operators by their properties; if you map the semantics
to the syntax, this allows you to apply transformations to your
programs based on the syntax alone, without having to recover the meaning.
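
(The canonical example, in Python terms:

print 'ab' + 'cd'   # abcd
print 'cd' + 'ab'   # cdab -- '+' on strings is not commutative

so a transformation that swaps the operands of '+' is only valid when
the syntax guarantees the operands are numbers.)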
 
E

Edi Weitz

[Followup-To ignored because I don't read comp.lang.python]

I think it's about a single namespace (Scheme, Python, Haskell, ...)
vs CLisp's dual namespaces. People get used pretty fast to having
every object (whether callable or not) "first-class" --
e.g. sendable as an argument without any need for stropping or the
like. To you, HOFs may feel like special cases needing special
syntax that toots horns and rings bells; to people used to passing
functions as arguments as a way of living, that's as syntactically
obtrusive as, say, O'CAML's mandate that you use +. and not plain +
when summing floats rather than ints

In Common Lisp (not "CLisp", that's an implementation) functions /are/
first-class and sendable as an argument "without any need for
stropping or the like." What exactly are you talking about?

Edi.
 
