Python syntax in Lisp and Scheme


David Eppstein

Matthias said:
Actually, I meant more lazy-like-lazy-in-Haskell. Infinite data
structures and such. "primes" being a list representing _all_ prime
numbers for instance. You can build this as soon as you have closures
but making the construction easy to use for the application programmer
might be a challenge without macros. But I don't know what
"properties" are in Python, possibly they are built for exactly that.

You mean like as in
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/117119
?
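
(For readers who don't follow the link: here is a minimal, purely
illustrative sketch -- not taken from that recipe -- of the kind of lazy,
unbounded structure Matthias describes, using a plain Python generator;
the trial-division primes are just an example.)

def primes():
    # A lazy, unbounded stream of primes: each prime is computed only
    # when a consumer asks for the next element, so the "list of all
    # primes" never has to exist in memory.
    found = []
    n = 2
    while True:
        for p in found:
            if n % p == 0:
                break
        else:                       # no known prime divides n
            found.append(n)
            yield n
        n += 1

stream = primes()
print [stream.next() for i in range(10)]   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]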
 

Matthias Blume

Two words: code duplication.

Yes, anything that can be done with macros can also be done with
functions, but if you do it with functions, you will end up with more
code, and that code will be duplicated in every single source location
in which that abstraction is utilized.

Three words and a hyphen: Higher-Order Functions.

Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros. (And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)

I say "most" because there are some things that macros can do which
are difficult with HOFs -- but those things have to do with
compile-time calculations. A good example that was given to me during
the course of a similar discussion here on netnews is that of a
parser-generator macro: it could check at compile time that the input
grammar is, e.g., LL(1). With HOFs this check would have to happen at
runtime, or at link-time at the earliest. (Some languages, e.g., SML
can do arbitrary calculations at what we normally call "link time".)

Many static guarantees can be obtained by relying on reasonably
powerful static type systems, but I know of none that is at once
powerful enough, practical, and general enough to check for
LL(1)-ness. That's why macros sometimes "beat" HOFs. For most
things they don't.
With a macro, the abstraction is defined once, and the source code
reflects that abstraction everywhere that abstraction is used
throughout your program. For large projects this could be hundreds of
source locations.

Sure. Same goes for HOFs.
Without a macro, you have multiple points of maintenance. If your
abstraction changes, you have to edit scores or hundreds of source
locations. With a macro, you redefine a single form, in one source
location, and recompile the dependent code.

All the same with HOFs.
Finally, there is one thing that macros can do that ordinary functions
cannot do easily - change the language's rules for functional
evaluation.

Well, no, not really. You can define new syntactic forms in terms of
old ones, and the evaluation rules end up being determined by those of
the old ones. Again, with HOFs you can always get the same
effect -- at the expense of an extra lambda here and there in your
source code.
This can only be accomplished with functions if you're
willing to write a set of functions that defer evaluation by, say,
parsing input, massaging it appropriately, and then passing it to the
compiler. At that point, however, you've just written your own macro
system, and invoked Greenspun's 10th Law.

This is false. Writing your own macro expander is not necessary for
getting the effect. The only thing that macros give you in this
regard is the ability to hide the lambda-suspensions. To some people
this is more of a disadvantage than an advantage because, when not
done in a very carefully controlled manner, it ends up obscuring the
logic of the code. (Yes, yes, yes, now someone will jump in and tell
me that it can make code less obscure by "canning" certain common
idioms. True, but only when not overdone.)
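
To make the "extra lambda here and there" and the hidden lambda-suspensions
concrete, here is a small hypothetical Python sketch (my own example, from
neither poster): a user-defined conditional written as an ordinary
higher-order function, where the delayed evaluation a macro could hide is
spelled out as explicit thunks.

def my_if(condition, then_thunk, else_thunk):
    # An ordinary function cannot stop its arguments from being
    # evaluated, so both branches are passed as zero-argument
    # callables ("thunks") and only the chosen one is ever called.
    if condition:
        return then_thunk()
    else:
        return else_thunk()

def safe_div(a, b):
    # Without the lambdas, the a / b branch would be evaluated eagerly
    # and raise ZeroDivisionError even when b == 0.
    return my_if(b != 0, lambda: a / b, lambda: None)

print safe_div(10, 2)   # 5
print safe_div(10, 0)   # None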

Matthias
 

Alex Martelli

Pascal said:
David said:
Pascal Costanza said:
I don't know a lot about Python, so here is a question. Is something
along the following lines possible in Python?

(with-collectors (collect-pos collect-neg)
  (do-file-lines (l some-file-name)
    (if (some-property l)
        (collect-pos l)
        (collect-neg l))))

I actually needed something like this in some of my code...

Not using simple generators afaik. The easiest way would probably be to
append into two lists:

collect_pos = []
collect_neg = []
for l in some_file_name:
    if some_property(l):
        collect_pos.append(l)
    else:
        collect_neg.append(l)

...but this means that

collect = []
for l in some_file_name:
    if some_property:
        collect.append(l)

...is another solution for the single collector case. Now we have two

Of course, it is; it is by definition equivalent to the list comprehension:

collect = [l for l in some_file_name if some_property(l)]

(you do need to call some_property with l as the argument, which you're
not doing above, but I think that's what you meant to do; also, I suspect
you want to loop on open(some_file_name) -- the lines of the file, as
opposed to its name -- but that's apparently a distraction by David).
ways to do it. Isn't this supposed to be a bad sign in the context of
Python? I am confused...

While in Python it's deemed _preferable_ that there be "one obvious way
to do it" (for any given "it"), it's a _preference_, and there are many
cases where it just can't practically hold (practicality beats purity).

It starts with the fact that both 2+3 and 3+2 are equally-obvious ways
to sum these two numbers -- it would be quite impractical to make Python
addition non-commutative to avoid the issue;-).

If you needed to do this a lot of times you could encapsulate it into a
function of some sort:

def posneg(filter,iter):
    results = ([],[])
    for x in iter:
        results[not filter(x)].append(x)
    return results

collect_pos,collect_neg = posneg(some_property, some_file_name)

What about dealing with an arbitrary number of filters? ...
(predicate-collect '(-5 -4 -3 -2 -1 0 1 2 3 4 5)
                   (function evenp)
                   (lambda (n) (< n 0))
                   (lambda (n) (> n 3)))
(-4 -2 0 2 4)
(-5 -3 -1)
(5)
(1 3)

I think I would code this as follows:

def collect_by_first_predicate(finite_sequence, *predicates):
    # a utility predicate that's always satisfied
    def always(any): return True
    # len(predicates) + 1 lists to collect the various results
    results = [ [] for i in range(len(predicates)+1) ]
    collectors = [ (pred, result.append)
                   for pred, result in zip(predicates+(always,), results) ]
    for item in finite_sequence:
        for pred, collect in collectors:
            if pred(item):
                collect(item)
                break
    return results

print collect_by_first_predicate(range(-5, 6),
                                 lambda n: n%2==0,
                                 lambda n: n<0,
                                 lambda n: n>3)

this does emit, as desired,

[[-4, -2, 0, 2, 4], [-5, -3, -1], [5], [1, 3]]

however, this approach has some very serious limitations for certain
applications: in particular, both the sequence AND the number of
predicates absolutely HAVE to be finite. There isn't even any way
to _express_ an infinite (unbounded) family of predicates as separate
arguments to a function; so if you wanted to use such an approach to
classify the items of finite_sequence by a family of predicates where,
e.g., pred(i) asserts that the item is >10**i, for all i>0, you could not
use this approach (nor, I suspect, could you use your predicate-collect
approach for such purposes -- or am I missing something?). To accept
an infinite family of predicates I would have to change the arguments
(so e.g. the second would be an iterator for said infinite family) to
start with. The first 'sequence' argument COULD syntactically be an
infinite (unbounded) iterator, e.g. itertools.count() [the sequence
of all naturals, 0 upwards] -- but then the "for item in finite_sequence:"
loop inside the function would not terminate.

Pushing aside, for the moment, the issue of infinite numbers of
predicates, let's consider how best to deal with a finite number N
of predicates applied to a potentially unbounded/infinite sequence.
As such sequences, in Python, are represented by iterators, we
will presumably need to return N+1 iterators to represent the
"subsequences" -- one per predicate plus the "none of the above" one.

This is actually still somewhat tricky in terms of definition.
Suppose the predicates were, e.g.:
lambda n: n>0
lambda n: n<0
and the unbounded sequence contained no 0's. Then, if and when
the caller tried to get the "next item" of the "none of the above"
(third) returned iterator -- that would never return, forevermore
looping on the input sequence and looking uselessly for the 0
that would escape both of the predicates. Alas, lazy computation
on unbounded sequences DOES do this kind of thing to you, at times.
As we can't solve the halting problem, we can't prevent it either,
in general. So, I think we just need to live with the possibility
that any of the returned iterators may "hang" forever, depending
on the predicates and the (unbounded) input sequence.

Apart from this problem, there's another interesting issue. When
the .next method gets called on any of the iterators we return,
which has no knowledge yet of its next item, we'll go through the
next items of the input sequence -- but that will produce items
to be returned as part of OTHER iterators, in general, before it
yields the next item for "this one". So, SOME mechanism must
remember those for the future (inevitably this may require more
memory than we have, in which case, ka-boom, but this is just a
finite-machine special case of the above paragraph, in a way;-).

The natural way to do this in Python is to use class instances,
one per returned iterator, giving each of them a list to store
the "forthcoming" items of the corresponding iterator. Might as
well make the instances the iterators themselves (just give them
the .next method, and a "def __iter__(self): return self" to
mark them as iterators).

def collect_by_first_predicate(sequence, *predicates):
    seq_iter = iter(sequence)
    def getanother():
        item = seq_iter.next()
        for collector in collectors:
            if collector.pred(item):
                collector.append(item)
                return
    class prediter(object):
        def __iter__(self): return self
        def __init__(self, pred):
            self.memory = []
            self.append = self.memory.append
            self.pred = pred
        def next(self):
            while not self.memory: getanother()
            return self.memory.pop(0)
    def always(any): return True
    collectors = [ prediter(pred) for pred in predicates+(always,) ]
    return collectors

print map(list, collect_by_first_predicate(range(-5, 6),
                                           lambda n: n%2==0,
                                           lambda n: n<0,
                                           lambda n: n>3))

This, too, emits [[-4, -2, 0, 2, 4], [-5, -3, -1], [5], [1, 3]] , of
course (we do have to turn the returned iterators into lists to be
able to print them easily, but, here, we do know they're finite:).


Of course, both of these implementations can easily be turned into
a collect_by_all_predicates (where an item goes into all collectors
corresponding to predicates it satisfies, rather than into just one
of them) by simply omitting the break after the collect(item) call,
or the 'return' cut after collector.append, in the respective for
loop over collectors.
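
For concreteness, here is that variant spelled out against the first
(list-based) version -- a sketch of my own following Alex's description,
with the break simply dropped:

def collect_by_all_predicates(finite_sequence, *predicates):
    # Same shape as collect_by_first_predicate above, minus the break:
    # an item is appended to every list whose predicate it satisfies.
    # Note that the trailing catch-all list therefore receives *every*
    # item, not just the otherwise-unclassified ones.
    def always(any): return True
    results = [ [] for i in range(len(predicates)+1) ]
    collectors = [ (pred, result.append)
                   for pred, result in zip(predicates+(always,), results) ]
    for item in finite_sequence:
        for pred, collect in collectors:
            if pred(item):
                collect(item)
    return results

print collect_by_all_predicates(range(-5, 6),
                                lambda n: n%2==0,
                                lambda n: n<0)
# -> evens, negatives, and (because of the catch-all) all eleven items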


Back to the issue of potentially unbounded families of predicates
(and, thus, collectors), I think that sits badly with the concept of a
'catch-all predicate' -- if the family of predicates IS indeed
unbounded, the catch-all will never come into play. And I'm not
sure I want to _return_ the resulting potentially unbounded family
of collectors as an iterator, necessarily -- I see use cases for
an 'indexable' ("random-access", so to speak, vs the 'sequentially
accessed' nature of iterators) return value. So I've cobbled
together a sequence class that's basically a potentially expandable
list -- you can change the "return collectors" to
"return iter(collectors)" if you disagree with my qualms, of course.
So, here's a first sketch at a possible approach...:


class pseudo_seq_iter(object):
    def __iter__(self): return self
    def __init__(self, seq):
        self.seq = seq
        self.i = -1
    def next(self):
        self.i += 1
        try: return self.seq[self.i]
        except IndexError: raise StopIteration

def collect_by_first_predicate(sequence, predicates):
    seq_iter = iter(sequence)
    pred_iter = iter(predicates)
    def always(any): return True

    class collectors_sequence(list):
        def __getitem__(self, i):
            if i>=0:
                while i>=len(self) and (not self or self[-1].pred is not always):
                    addapred()
                if i>=len(self): raise IndexError, i
            return list.__getitem__(self, i)
        def __iter__(self): return pseudo_seq_iter(self)
    collectors = collectors_sequence()

    def addapred():
        try: pred = pred_iter.next()
        except StopIteration: pred = always
        collectors.append(prediter(pred))

    def getanother():
        item = seq_iter.next()
        for collector in collectors:
            if collector.pred(item):
                collector.append(item)
                return

    class prediter(object):
        def __iter__(self): return self
        def __init__(self, pred):
            self.memory = []
            self.append = self.memory.append
            self.pred = pred
        def next(self):
            while not self.memory: getanother()
            return self.memory.pop(0)

    return collectors

print map(list, collect_by_first_predicate(range(-5, 6),
                                           [ lambda n: n%2==0,
                                             lambda n: n<0,
                                             lambda n: n>3
                                           ]))

class divisible_by(object):
    def __init__(self, maxn=None):
        self.maxn = maxn
    def __getitem__(self, i):
        if i<0 or (self.maxn and i>self.maxn): raise IndexError, i
        return lambda x: (x%(i+2)) == 0
    def __iter__(self): return pseudo_seq_iter(self)

divisibles = collect_by_first_predicate(range(30), divisible_by(100))
for i in range(2,18):
    result = list(divisibles[i-2])
    if result: print i, result




I _have_ put some arbitrary limit on the divisible_by instance I
pass in the example, because, otherwise (as previously indicated)
trying to loop on an empty iterator among the 'divisibles' might
not terminate. In this case, this can be because the "for collector
in collectors" loop in 'getanother' never terminates, if the family of
predicates doesn't terminate and some item never satisfies any predicate; or because
the 'while not self.memory' loop in prediter.next does not terminate,
if 'getanother' can never find an item to add to that collector's
memory, and the sequence does not terminate either; i.e., we have
two possible 'infinities' to cause trouble here. Anyway, this emits:

[alex@lancelot alex]$ python ac.py
[[-4, -2, 0, 2, 4], [-5, -3, -1], [5], [1, 3]]
2 [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28]
3 [3, 9, 15, 21, 27]
5 [5, 25]
7 [7]
11 [11]
13 [13]
17 [17]

which seems correct. Still, I think this could be enhanced to
avoid _some_ of the avoidable nontermination cases -- e.g. by
letting getanother know _what_ generator is hungry for more,
it can avoid making more generators after that one, and just
stash away as-yet-unclassified items for possible future needs.

I don't particularly like the following, because the management
of stashed_away is finicky and error-prone, but, as a first cut,
it does work to refactor getanother into:

def try_collecting(item, for_whom):
    for collector in collectors:
        if collector.pred(item):
            collector.append(item)
            return collector
        elif collector is for_whom:
            return None

stashed_away = []
def getanother(for_whom):
    for item in stashed_away[:]:
        put_where = try_collecting(item, for_whom)
        if put_where is not None:
            stashed_away.remove(item)
            if put_where is for_whom: return
    for item in seq_iter:
        put_where = try_collecting(item, for_whom)
        if put_where is None:
            stashed_away.append(item)
        elif put_where is for_whom: return
    raise StopIteration

and change the statement calling getanother (in prediter.next) to:

while not self.memory: getanother(for_whom=self)

This lets us change the divisibles creation, as intended, to:

divisibles = collect_by_first_predicate(range(30), divisible_by())

i.e., with the family of predicates being unbounded (as long as
the sequence isn't also unbounded at the same time), while still
allowing attempts to loop on empty items of divisibles.

If I were to work on this further, I think I'd do so by wrapping
seq_iter into an "iterator with buffering for 'rejected for now,
show again later' items" class instance; encapsulating the
"stashing away" inside said class would simplify getanother again,
while retaining the essential behavior.
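
A rough sketch of what such a wrapper might look like (the name, the
interface and the details are my own guesses, not anything Alex posted):

class buffered_iter(object):
    # Wraps an iterator and keeps a stash of items that a consumer has
    # seen but could not use yet ("rejected for now, show again later"),
    # so they are re-offered at the start of a later pass.
    def __init__(self, iterable):
        self.it = iter(iterable)
        self.ready = []        # stashed items to re-offer this pass
        self.stash = []        # items rejected during the current pass
    def __iter__(self):
        return self
    def next(self):
        if self.ready:
            return self.ready.pop(0)
        return self.it.next()              # may raise StopIteration
    def put_back(self, item):
        self.stash.append(item)
    def new_pass(self):
        # Make everything rejected so far visible again.
        self.ready.extend(self.stash)
        del self.stash[:]

With this, getanother(for_whom) would call new_pass() once, then pull
items with next(), place each one where it belongs, put_back the ones it
cannot place, and return as soon as for_whom's collector has received
something -- the stashing bookkeeping lives entirely in the wrapper.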


Alex
 

james anderson

Matthias said:
(e-mail address removed) (Raffael Cavallaro) writes:

...
...

Three words and a hyphen: Higher-Order Functions.

Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros. (And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)

is the no advantage to being able to do either - or both - as the occasion dictates?

i'd be interested to read examples of things which are better done with HOF
features which are not available in CL. sort of the flip-side to the example
of compile-time calculation and code generation. taking into account that
generic functions are, at least to some extent, the equivalent of value-domain macro-expansion.

?
 

Marcin 'Qrczak' Kowalczyk

is the no advantage to being able to do either - or both - as the
occasion dictates?

The main disadvantage of macros is that they either force the syntax
to look like Lisp, or the way to present code snippets to macros is
complicated (Template Haskell), or they are as limited as C preprocessor
(which can't examine parameters).

I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like I.

I have yet to find out whether there can be a macro system I would accept,
and for now I prefer to improve the syntax of HOFs. Smalltalk and Ruby use
anonymous functions a lot (despite being imperative) because they are
aesthetic; in particular a nullary function looks just like a piece of
code in [] or {}. Python's lambda is more rare, perhaps because it looks
worse: a scientific word, body limited to a single expression. Named local
functions are not a sufficient building block for control structures either:
they can't assign to local variables of outer functions.
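
Concretely, in the Python of this era (before any way to rebind an
enclosing function's variable was added to the language), a nested
function can read an outer local but not assign to it, which is why the
usual workaround is to mutate a container instead -- a hypothetical
example of my own:

def counter_broken():
    n = 0
    def bump():
        # The assignment makes n local to bump(), so calling bump()
        # raises UnboundLocalError: the inner function cannot rebind
        # the outer function's n.
        n = n + 1
    bump()
    return n

def counter_workaround():
    box = [0]              # close over a mutable container instead
    def bump():
        box[0] = box[0] + 1
    bump()
    return box[0]

print counter_workaround()   # 1   (counter_broken() would raise instead)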
 

David Mertz

|> def posneg(filter,iter):
|>     results = ([],[])
|>     for x in iter:
|>         results[not filter(x)].append(x)
|>     return results
|> collect_pos,collect_neg = posneg(some_property, some_file_name)

|What about dealing with an arbitrary number of filters?

Easy enough:

def categorize_exclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
                break
    return results

Or if you want to let things fall in multiple categories:

def categorize_inclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
    return results

Or if you want something to satisfy ALL the filters:

def categorize_compose(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        results[compose(filters)(x)].append(x)
    return results

The implementation of 'compose()' is left as an exercise to readers :).
Or you can buy my book, and read the first chapter.
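
A quick, hypothetical sanity check of the first two functions (example
filters of my own; categorize_compose is skipped here since compose()
isn't shown):

filters = [lambda n: n % 2 == 0, lambda n: n < 0]
print categorize_exclusive(filters, range(-3, 4))
# ([-2, 0, 2], [-3, -1])        -- each item lands in at most one bucket
print categorize_inclusive(filters, range(-3, 4))
# ([-2, 0, 2], [-3, -2, -1])    -- -2 satisfies both filters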

Yours, David...
 

Dirk Thierbach

james anderson said:
Matthias Blume wrote:
is the no advantage to being able to do either - or both - as the
occasion dictates?

I can't parse this sentence, but of course you can also use HOFs in Lisp
(all flavours). The interesting part is that most Lisp'ers don't seem
to use them, or even to know that you can use them, and use macros instead.

The only real advantage of macros over HOFs is that macros are guaranteed
to be executed at compile time. A good optimizing compiler (like GHC
for Haskell) might actually also evaluate some expressions including
HOFs at compile time, but you have no control over that.
i'd be interested to read examples of things which are better done
with HOF features which are not available in CL.

HOFs can of course be used directly in CL, and you can use macros to
do everything one could use HOFs for (if you really want).

The advantage of HOFs over macros is simplicity: You don't need additional
language constructs (which may be different even for different Lisp
dialects, say), and other tools (like type checking) are available for
free; and the programmer doesn't need to learn an additional concept.

- Dirk
 

David Mertz

My answer sucked in a couple ways.

(1) As Bengt Richter pointed out up-thread, I should have changed David
Eppstein's names 'filter' and 'iter' to something other than the
built-in names.

(2) The function categorize_compose() IS named correctly, but it doesn't
DO what I said it would. If you want to fulfill ALL the filters, you
don't compose them, but... well, 'all()' them:

| def categorize_jointly(preds, it):
|     results = [[] for _ in range(len(preds))]
|     for x in it:
|         results[all(preds)(x)].append(x)
|     return results

Now if you wonder what the function 'all()' does, you could download:

http://gnosis.cx/download/gnosis/util/combinators.py

But the relevant part is:

from operator import mul, add, truth
apply_each = lambda fns, args=[]: map(apply, fns, [args]*len(fns))
bools = lambda lst: map(truth, lst)
bool_each = lambda fns, args=[]: bools(apply_each(fns, args))
conjoin = lambda fns, args=[]: reduce(mul, bool_each(fns, args))
all = lambda fns: lambda arg, fns=fns: conjoin(fns, (arg,))

For 'lazy_all()', look at the link.

See, Python is Haskell in drag.
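
For a quick, hypothetical check of how the all() combinator above behaves
(example predicates of my own):

preds = [lambda n: n > 0, lambda n: n % 2 == 0]
print all(preds)(4)    # 1 -- satisfies both predicates
print all(preds)(3)    # 0 -- positive but odd, fails the second predicate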

Yours, David...
 

Kenny Tilton

Hannu said:
The problem with the example arises from the fact that indentation
is used for human readability, but parens are used by the parser.

And by the editor, meaning the buggy code with the extra parens had not
been written in a parens-aware editor (or the coder had stuck a parens
on without kicking off a re-indentation).
A clash between these two representations can lead to subtle bugs
like this one. But remove one of the representations, and there
can't be clashes.

No need. Better yet, with parens you do not have to do the indentation
yourself, you just have to look at what you are typing. Matching parens
highlight automatically as you close up nested forms (it's kinda fun
actually), and then a single key chord re-indents (if you have been
refactoring and things now belong at diff indentation levels).

I used to spend a /lot/ of time on indentation (in other languages). No
more. That is just one of the advantages of all those parentheses.

kenny
 

Pascal Bourguignon

Marcin 'Qrczak' Kowalczyk said:
The main disadvantage of macros is that they either force the syntax
to look like Lisp, or the way to present code snippets to macros is
complicated (Template Haskell), or they are as limited as C preprocessor
(which can't examine parameters).

I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like I.

I don't understand what you're complaining about.

When you have macros such as loop that allow you to write stuff like:

(loop for color in '(blue white red)
      with crosses = :crosses
      collect (rgb color) into rgb-list
      maximize (blue-component color) into max-blue
      until (color-pleases-user color)
      finally return (values color rgb-list max-blue))

where are the parentheses at EVERY level you're complaining about?
where is the lisp-like syntax?


Lisp is not commie-stuff, nobody forces you to program your macros
following any hypothetical party line.
 

Alexander Schmolck

(I'm ignoring the followup-to because I don't read comp.lang.python)

Well, I supposed this thread has spiralled out of control already anyway:)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context.

I really don't understand why this is a problem, since it's trivial to
transform python's 'globally context' dependent indentation block structure
markup into C/Pascal-style delimiter pair block structure markup.

Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation). Prima facie I don't see how you lose any fine control.
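
To make that claim concrete, here is a toy sketch of my own (assuming
clean, spaces-only indentation and ignoring strings, comments and
continuation lines) of the forward half of such a transformation, turning
indentation into explicit block delimiters:

def braceify(src):
    # Emit '{' when a line is indented deeper than the previous one
    # and '}' whenever indentation drops back, so the block structure
    # becomes explicit, C/Pascal-style.
    out, stack = [], [0]
    for line in src.splitlines():
        if not line.strip():
            out.append(line)
            continue
        depth = len(line) - len(line.lstrip(' '))
        while depth < stack[-1]:          # one or more blocks closed
            stack.pop()
            out.append(' ' * stack[-1] + '}')
        if depth > stack[-1]:             # a new block opened
            out[-1] += ' {'
            stack.append(depth)
        out.append(line)
    while stack[-1] > 0:                  # close any blocks still open
        stack.pop()
        out.append(' ' * stack[-1] + '}')
    return '\n'.join(out)

The inverse direction (drop the braces, keep the indentation) is equally
mechanical, which is the point: an editor can apply either transformation
locally on cut and paste.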
Additionally, each line encodes this information independently of the other
lines that logically belong with it, and we all know that when some data is
encoded in one place may be wrong, but it is never inconsistent.

Sorry, I don't understand this sentence, but maybe you mean that the potential
inconsistency between human and machine interpretation is a *feature* for Lisp,
C, Pascal etc.!? If so I'm really puzzled.
There is yet one more problem. The various levels of indentation encode
different things: the first level might indicate that it is part of a
function definition, the second that it is part of a FOR loop, etc. So on
any line, the leading whitespace may indicate all sorts of context-relevant
information.

I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.

I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed? AFAICT you can have 'sexp'-movement,
markup and highlighting commands all the same with whitespace delimited block
structure.
Is this worse than C, Pascal, etc.? I don't know.

I'm pretty near certain it is better: In Pascal, C etc. by and large block
structure delimitation is regulated in such a way that what has positive
information content for the human reader/programmer (indentation) has zero to
negative information content for the compiler and vice versa. This is a
remarkably bad design (and apart from cognitive overhead obviously also causes
errors).

Python removes this significant problem, at, as far as I'm aware, no real cost
and plenty of additional gain (less visual clutter, no waste of delimiter
characters ('{','}') or introduction of keywords that will be sorely missed as
user-definable names ('begin', 'end')).

In Lisp the situtation isn't quite as bad, because although most of the parens
are of course mere noise to a human reader, not all of them are and because of
lisp's simple but malleable syntactic structure a straightforward replacement
of parens with indentation would obviously result in unreadable code
(fragmented over countless lines and mostly past the 80th column :).

So unlike C and Pascal where a fix would be relatively easy, you would need
some more complicated scheme in the case of Lisp and I'm not at all sure it
would be worth the hassle (especially given that efforts in other areas would
likely yield much higher gains).

Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):

"[P]rograms must be written for people to read, and only incidentally for
machines to execute."

People can't "read" '))))))))'.
Worse than Lisp, Forth, or Smalltalk? Yes.

Possibly, but certainly not due to the use of significant whitespace.


'as
 

Alexander Schmolck

Marco Antoniotti said:
Why do I feel like crying? :{

Could it be because you've actually got some rational argument against
significant whitespace a la python?!

'as
 

Thomas F. Burdick

Marcin 'Qrczak' Kowalczyk said:
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like I.

You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
 

Doug Tolton

Alex Martelli said:
Imagine a group of, say, a dozen programmers, working together by
typical Agile methods to develop a typical application program of
a few tens of thousands of function points -- developing about
100,000 new lines of delivered code plus about as much unit tests,
and reusing roughly the same amount of code from various libraries,
frameworks, packages and modules obtained from the net and/or from
commercial suppliers. Nothing mind-boggling about this scenario,
surely -- it seems to describe a rather run-of-the-mill case.

Now, clearly, _uniformity_ in the code will be to the advantage
of the team and of the project it develops. Extreme Programming
makes a Principle out of this (no "code ownership"), but even if
you don't rate it quite that highly, it's still clearly a good
thing. Now, you can impose _some_ coding uniformity (within laxer
bounds set by the language) _for code originally developed by the
team itself_ by adopting and adhering to team-specific coding
guidelines; but when you're reusing code obtained from outside,
and need to adopt and maintain that code, the situation is harder.
Either having that code remain "alien", by allowing it to break
all of your coding guidelines; or "adopting" it thoroughly by,
in practice, rewriting it to fit your guidelines; is a serious
negative impact on the team's productivity.

Alex, this is pure unmitigated nonsense. Python's metaclasses are
far more dangerous than macros. Metaclasses allow you to globally
change the underlying semantics of a program. Macros only allow you
to locally change the syntax. Your comparison is spurious at best.

Your argument simply shows a serious misunderstanding of macros.
Macros, as has been stated to you *many* times, are similar to
functions. They allow a certain type of abstraction to remove
extraneous code.

Based on your example you should be fully campaigning against
Metaclasses, FP constructs in python and Functions as first class
objects. All of these things add complexity to a given program,
however they also reduce the total number of lines. Reducing program
length is to date the only effective method I have seen of reducing
complexity.

If you truly believe what you are saying, you really should be
programming in Java. Everything is explicit, and most if not all of
these powerful constructs have been eschewed, because programmers are
just too dumb to use them effectively.


Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
 

Marcin 'Qrczak' Kowalczyk

When you have macros such as loop that allow you to write stuff like:

(loop for color in '(blue white red)
[...]

Well, some people say the "loop" syntax is not very lispish - it's unusual
that it uses many words and few parentheses. It still uses only words and
parentheses, no other punctuation, and it introduces one pair of parentheses
for its one nesting level.

A richer alphabet is often more readable. Morse code can't be read as fast
as the Latin alphabet because it uses too few different symbols. Japanese say
they won't abandon Kanji because it's more readable once you know it -
you don't have to compose words from many small pieces which look alike;
each word is distinct. Of course a *too* large alphabet requires long
learning and has technical difficulties, but Lisp expressions are not
distinctive enough for my taste.

I know I can implement infix operators with Lisp macros, but I don't even
know how they feel because nobody uses them (do I have to explicitly open
infix region and explicitly escape from it to regular syntax?), and
arithmetic is not enough. All Lisp code I've read uses lots of parentheses
and they pile up at the end of each large subexpression so it's hard to
match them (an editor is not enough, it won't follow my eyes and won't
work with printed code).

Syntax is the thing I like the least in Lisp & Scheme.
 

Corey Coughlin

I was never very fond of lisp. I guess I mean scheme technically, I
took the Abelson and Sussman course back in college, so that's what I
learned of scheme, lisp in general I've mostly used embedded in other
things. In general, it always seemed to me that a lot of the design
choices in lisp are driven more by elegance and simplicity than
usability. When it comes to programming languages, I really want the
language to be a good tool, and to do as much of the work for me as
possible. Using parentheses and rpn everywhere makes lisp very easy
to parse, but I'd rather have something easy for me to understand and
hard for the computer to parse. (Not to mention car, cdr, cadr, and
so on vs. index notation, sheesh.) That's why I prefer python, you
get a nice algebraic syntax with infix and equal signs, and it's easy to
understand. Taking out ';' at the ends of lines and indenting for
blocks helps me by removing the clutter and letting me see the code.
And yes, I'm sure you can write macros in lisp to interpret infix
operators and indexing and whatever you want, but learning a core
language that's wildly non-intuitive so that I can make it more
intuitive never seemed like a good use of my time. Python is
intuitive to me out of the box, and it just keeps getting better, so I
think I'll stick with it.
 
