Python syntax in Lisp and Scheme


Daniel P. M. Silva

Andrew said:
[...]
What I said was that Python is *not* an application of
Greenspun's Tenth Rule of programming because 1) it isn't
bug-ridden, and 2) because Python explores ideas which
had no influence on Lisp's development -- user
studies of non-professional programmers.

Do you know where I can find those studies? I'm very interested in their
findings :)

By the way, what's a non-professional programmer?
Where are the user studies which suggested () over [], or that
"car" is better than "first"/"1st" or that "cdr" is better than
"rest"/"rst"?

Yes, I know that the early teletypes might not have had
[ or ], and that car and cdr come from register names on
the machine Lisp was first implemented on. If that's
indeed the justification then there may be a Lisp-ish language
which is equally as powerful, equally as elegant, etc *and*
which is slightly easier to learn and type. But it wasn't chosen,
and it won't be used because of good social reasons: a huge
existing code base and people who now have Lisp "in their
fingers" and don't want to retrain for the slight advantage
that others might get.

Well, if you count Scheme as a Lisp...

Welcome to DrScheme, version 205.3-cvs1oct2003.
Language: Pretty Big (includes MrEd and Advanced).
[first [list 1 2 3 '[4 5]]]
1

- Daniel
 

Vis Mike

v...
How could you have both noncongruent argument lists, and multiple
dispatch?

C++ seems to manage it somehow.

#include <stdio.h>

void foo(int x, int y) { printf("1\n"); }
void foo(double x, int y) { printf("2\n"); }
void foo(char* x) { printf("3\n"); }

int main() {
    foo(1, 2);
    foo(1.2, 2);
    foo("foo");
    return 0;
}

compiles and runs without complaint.

E.

Ahh, but overloading only works at compile time:

void foo( SomeBaseObject* object );
void foo( SomeDerivedObject* object );

doesn't work if you're using a base class pointer for all your derived
classes.
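
For contrast: in CLOS, method dispatch happens at run time, on the
actual class of the argument. A minimal sketch, with made-up class
names (note that all methods of one generic function must share a
congruent lambda list, but each required argument can be specialized
independently, which is where multiple dispatch comes from):

(defclass base-object () ())
(defclass derived-object (base-object) ())

(defgeneric foo (x)
  (:method ((x base-object)) (format t "base~%"))
  (:method ((x derived-object)) (format t "derived~%")))

;; Even when the caller only knows it has a base-object,
;; the run-time class picks the method:
(foo (make-instance 'derived-object)) ; prints "derived"

;; Multiple dispatch: specializing on more than one argument.
(defgeneric bar (x y)
  (:method ((x base-object) (y integer)) (format t "object+int~%"))
  (:method ((x base-object) (y string)) (format t "object+string~%")))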

Mike
 

Andrew Dalke

Doug Tolton:
Graham does admit that the reasons for the choice were mostly
historical. However, he uses them because he likes the fact that they
are shorter than first and rest.

Not in that essay I referenced. And I deliberately mentioned
"1st" and "rst" as alternatives to "car" and "cdr" which are exactly
the same length and are easier to remember. The fact that "first"
and "rest" are longer doesn't immediately mean that there are
no other viable alternatives.

BTW, paulgraham.com/arcfaq.html says that car/cdr remain
because they are composable, as in "cadr". Is that the same
as 2nd?

Ahh, the FAQ also says that [ and ] are "less directional"
than ( and ), which I can understand. I don't understand
the objection with < and >; they "lose because they don't
wrap around enough to enclose expressions longer than tokens."
That makes no sense to me. Is it that they aren't tall enough?

Couldn't a good development environment depict the delimiters
as, say, Unicode characters U+27E8 and U+27E9?
http://www.unicode.org/charts/PDF/U27C0.pdf
Those look like large "<" and ">"

Or is there a requirement that it be constrained to display
systems which can only show ASCII? (Just like a good
Lisp editor almost requires the ability to reposition a
cursor to blink on matching open parens. Granted, that
technology is a few decades old now while Unicode isn't,
but why restrict a new language to the display systems
of the past instead of the present?)

Heh-heh. "Why not build Arc on top of Java/Parrot/.NET?"
"We're trying to make something for the long term in Arc,
something that will be useful to people in, say, 100 years."

Then build it on MMIX! :)
If you read his design goals for Arc you will note that he is a big fan
of very terse operators.

Indeed. It looks easier to understand to my untrained eye.
I disagree that "+" shouldn't work on strings because that
operation isn't commutative -- commutativity isn't a feature
of +, it's a feature of + on a certain type of set.

He says that "programmers will be able to declare that
strings should be represented as traditional sequences of bytes."
which leads me to wonder about its Unicode support.

What's unicode support like in general for Lisp? Found an
answer at <http://www.cliki.net/Unicode Support>. Digging
some more, it looks like CLisp uses .. UCS-4 and Unicode 3.2
(from clisp.cons.org). But do regexps work on unicode strings?
How portable are unicode strings? I figure they must be in
order to handle XML well. ... "ACL does not support 4 byte
Unicode scalar values" says franz.com. www.cl-xml.org says
"The processor passes 1749 of the 1812 tests .. when the base
implementation supports sixteen-bit characters." and that
MCL, LispWorks and the Allegro 5.0.1 international version
support 16-bit Unicode while Allegro ascii only supports 8bit.
So some have UCS-2 and some UCS-4.

Is there consensus on the Unicode API?

On the XML path, I found cl-xml. Looking at the bugs section in
http://pws.prserv.net/James.Anderson/XML/documentation/cl-xml.html
It says "the implementation form for HTTP support is determined
at compilation time." Is it really true that even HTTP handling is
different on the different implementations?

And the section under "porting" is .. strange. It looks like to
use the XML API for a given Lisp I need to know enough
about the given implementation to configure various settings,
so if I wanted to learn Lisp by writing, say, a simple XML-RPC
client then I have to learn first what it means "to complete
definitions in the following files" and the details of "defsystem",
"package", "STREAM-READER / -WRITER", etc.

That reminds me of my confusion testing a biolisp package.
I needed to edit the file before it worked; something to do
with commenting/uncommenting the right way to handle
packages. I prefer to start with working code.

Andrew
(e-mail address removed)
 

Andrew Dalke

Daniel P. M. Silva:
Do you know where I can find those studies? I'm very intested in their
findings :)

Sure. The research was done for ABC. ABC's home page is
http://homepages.cwi.nl/~steven/abc/
ABC is an interactive programming language and environment for
personal computing, originally intended as a good replacement for
BASIC. It was designed by first doing a task analysis of the
programming task.

There's a publication list at
http://homepages.cwi.nl/~steven/abc/publications.html

Guido, the author of Python, was involved in that project. For his
commentary on ABC's influence on Python see:
http://www.python.org/doc/essays/foreword.html
By the way, what's a non-professional programmer?

The people I generally work for. Research scientists,
usually computational chemists and computational biologists,
who need to write code but don't consider themselves to be
software developers and haven't had more than a semester
or two of formal training and would rather do more science
than spend time reading books on language practice or
theory, even if by doing so it made them more productive
in the long run.
Welcome to DrScheme, version 205.3-cvs1oct2003.
Language: Pretty Big (includes MrEd and Advanced).
[first [list 1 2 3 '[4 5]]]
1

Indeed? Well, I just found a mention on Paul Graham's site
that he rejected [] in favor of () because it didn't provide enough
directionality.

Again, where are the studies? :)

Andrew
(e-mail address removed)
 

Dave Benjamin

Andrew Dalke:

Sure. The research was done for ABC. ABC's home page is
http://homepages.cwi.nl/~steven/abc/
ABC is an interactive programming language and environment for
personal computing, originally intended as a good replacement for
BASIC. It was designed by first doing a task analysis of the
programming task.

Interestingly enough:

"The language is strongly-typed, but without declarations. Types are
determined from context."
- http://ftp.cwi.nl/abc/abc.intro

Sounds like type inference to me.

Also:

"There is no GOTO statement in ABC, and expressions do not have
side-effects."
- http://homepages.cwi.nl/~steven/abc/teaching.html

Hints both at the statement/expression dichotomy of Python and the issue
that side-effects make it difficult to reason about a program, one of the
most important assertions made by functional proponents (IMHO).

Dave
 

Andrew Dalke

Dave Benjamin:
Interestingly enough:

"The language is strongly-typed, but without declarations. Types are
determined from context."
- http://ftp.cwi.nl/abc/abc.intro

Sounds like type inference to me.

Sounds like dynamic typing to me. Python is strongly-typed
but without declarations, and the type is determined as needed.
But I don't know enough about ABC to authoritatively declare
that it does/does not do type inferencing. My guess is that it
does not.
Also:

"There is no GOTO statement in ABC, and expressions do not have
side-effects."
- http://homepages.cwi.nl/~steven/abc/teaching.html

Hints both at the statement/expression dichotomy of Python and the issue
that side-effects make it difficult to reason about a program, one of the
most important assertions made by functional proponents (IMHO).

I think you're reading too much into it. The example code
doesn't look at all functional to me, as in (from the main page)

HOW TO RETURN words document:
   PUT {} IN collection
   FOR line IN document:
      FOR word IN split line:
         IF word not.in collection:
            INSERT word IN collection
   RETURN collection

It looks like 'line' and 'word' can take on many values,
which is a sure sign of something other than FP.

Andrew
(e-mail address removed)
 

Matthias

He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".
[...]
And in Lisp if you want to do some
other kind of arithmetic, you must make up your names for those
operators. This is considered to be a good feature.

In comp.lang.lisp there was recently a thread discussing why not all
CL-types were also CL-classes and all functions CLOS-methods (so that
operator overloading would be possible). I think the outcome was more
or less "it happened by historic accident and it's easier to write
fast compilers then". In general, taking away flexibility from the
programmer is not in the spirit of Lisp, though.
 

Pascal Costanza

Andrew said:
Pascal Costanza:


I have run across his pages before, and have a hard time
sympathizing with his view of things. For example, the start of
the icad essay mentions that Lisp is already "kind of unusual"
compared to C because it includes a full interpreter. But
effectively all Python programs shipped include a full interpreter
as well, and many give access to that interpreter, so I don't
see what's unusual about it. Ditto for Tcl apps. Even some of
my command-line perl apps included a way to pass in perl
code on the command line, as for a search filter.

I guess this reflects his experiences from when he learned Lisp at
the beginning of the '80s (AFAIK).

Yes, scripting languages have caught up in this regard. (However, note
that Common Lisp also includes a full compiler at runtime.)
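
For instance, this one-liner works at any CL listener, since COMPILE
is part of the ANSI standard (whether it produces native code or
bytecode depends on the implementation):

(funcall (compile nil '(lambda (x) (* x x))) 5) ; => 25
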
The phrase "they had hard-headed engineering reasons for
making the syntax look so strange." reminds me of the statement
"better first rate salespeople and second rate engineers than
second rate salespeople and first rate engineers" (and better
first rate both). That's saying *nothing* about the languages;
it's saying that his viewpoint seems to exclude the idea that
there are hard-headed non-engineering reasons for doing things.

No, that's not a logical conclusion.
Consider one of those "hard-headed engineering reasons", at
http://www.paulgraham.com/popular.html

It has sometimes been said that Lisp should use first and
rest instead of car and cdr, because it would make programs
easier to read. Maybe for the first couple hours. But a hacker
can learn quickly enough that car means the first element
of a list and cdr means the rest. Using first and rest means
50% more typing. And they are also different lengths, meaning
that the arguments won't line up when they're called,

That to me is a solid case of post hoc ergo propter hoc. The
words "1st" and "rst" are equally as short and easier to
memorize. And if terseness were very important, then
what about using "." for car and ">" for cdr? No, the reason
is that that's the way it started and it will stay that way
because of network effects -- is that a solid engineering
reason? Well, it depends, but my guess is that he wouldn't
weight strongly the impact of social behaviours as part of
good engineering. I do.

As you have already noted in another note, car and cdr can be composed.
cadr is the second element, caddr is the third, cadddr is the fourth,
and so on. cddr is the rest after the second element, cdddr is the rest
after the third element, and so on. Other abbreviations I have used
relatively often are caar, cdar, cadar.

These abbreviations seem strange to a Lisp outsider, but they are very
convenient, and they are easy to read once you have gotten used to them.
You don't actually "count" the elements in your head every time you see
these operators, but they rather become patterns that you recognize in
one go.

I don't know how this could be done with 1st, rst or hd, tl respectively.

Of course, Common Lisp also provides first, second, third, and so on, up
to ninth, and rest. It also provides nth, with (nth 0 l) = (car l),
(nth 1 l) = (cadr l), and so on, and nthcdr, with (nthcdr 0 l) = l,
(nthcdr 1 l) = (cdr l), (nthcdr 2 l) = (cddr l), and so on.
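
To spell a few of these out at the listener:

(cadr '(1 2 3 4 5))  ; => 2, i.e. (car (cdr ...)), i.e. second
(cddr '(1 2 3 4 5))  ; => (3 4 5), i.e. (cdr (cdr ...))
(caddr '(1 2 3 4 5)) ; => 3, same as (third ...) and (nth 2 ...)
(cadar '((a b) c))   ; => B, the second element of the first element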

Pick your choice. "There is not only one way to do it." (tm)

The learning curve is steeper, but in the long run you become much more
productive.

Pascal
 

Dave Benjamin

Andrew Dalke:

Sounds like dynamic typing to me. Python is strongly-typed
but without declarations, and the type is determined as needed.
But I don't know enough about ABC to authoritatively declare
that it does/does not do type inferencing. My guess is that it
does not.

Yeah, I'm sure you're right. Even though I've made the argument against
confusing static and strong typing myself many times, I still got caught
off guard. Doesn't "determined from context" sound a little different
from dynamic typing, though? I mean, to me, it reads like:

We don't declare types, i.e.:
int i = 5
Instead, we determine them from context:
i = 5

What has the type, according to that language? The "i" or the "5"? How is
the type of "5" determined from context? Shouldn't it be "int", regardless of
context?
I think you're reading too much into it. The example code
doesn't look at all functional to me, as in (from the main page)
...

Nah, I think you're reading too much into my comment. I was just making an
observation. I don't think ABC is an FPL by a mile, from what I've read.

However, I *am* interested in things that people seem to value despite the
fact that they solve problems in sometimes radically different ways. Maybe
you don't see it, but I definitely see some parallels between the
idea of separating statements from expressions and the idea of separating
the imperative, mutating, side-effectful code from the immutable,
declarative, functional, query-oriented, side-effect-free code.

I think there is a greater point to be made about all of this, and it has
something to do with time and change.

Dave
 

Pascal Costanza

Andrew said:
Indeed. It looks easier to understand to my untrained eye.
I disagree that "+" shouldn't work on strings because that
operation isn't commutative -- commutativity isn't a feature
of +, it's a feature of + on a certain type of set.

So what's the result of ("one" - "two") then? ;)
He says that "programmers will be able to declare that
strings should be represented as traditional sequences of bytes."
which leads me to wonder about its Unicode support.

It's a myth that bytes are restricted to 8 bits. See
http://www.wikipedia.org/wiki/Byte
What's unicode support like in general for Lisp? Found an
answer at <http://www.cliki.net/Unicode Support>. Digging
some more, it looks like CLisp uses .. UCS-4 and Unicode 3.2
(from clisp.cons.org). But do regexps work on unicode strings?
How portable are unicode strings? I figure they must be in
order to handle XML well. ... "ACL does not support 4 byte
Unicode scalar values" says franz.com. www.cl-xml.org says
"The processor passes 1749 of the 1812 tests .. when the base
implementation supports sixteen-bit characters." and that
MCL, LispWorks and the Allegro 5.0.1 international version
support 16-bit Unicode while Allegro ascii only supports 8bit.
So some have UCS-2 and some UCS-4.

Is there consensus on the Unicode API?

No, not yet. ANSI CL was finalized in 1994.
On the XML path, I found cl-xml. Looking at the bugs section in
http://pws.prserv.net/James.Anderson/XML/documentation/cl-xml.html
It says "the implementation form for HTTP support is determined
at compilation time." Is it really true that even HTTP handling is
different on the different implementations?

Again, not part of ANSI CL. Don't judge a standardized language by the
measures of a single-vendor language - that's a different subject.
(Apart from that, Jython also doesn't provide everything that Python
provides, right?)

Pick the one Common Lisp implementation that provides the stuff you
need. If no Common Lisp implementation provides all the stuff you need,
write your own libraries or pick a different language. It's as simple as
that.
And the section under "porting" is .. strange. It looks like to
use the XML API for a given Lisp I need to know enough
about the given implementation to configure various settings,
so if I wanted to learn Lisp by writing, say, a simple XML-RPC
client then I have to learn first what it means "to complete
definitions in the following files" and the details of "defsystem",
"package", "STREAM-READER / -WRITER", etc.

That reminds me of my confusion testing a biolisp package.
I needed to edit the file before it worked; something to do
with commenting/uncommenting the right way to handle
packages. I prefer to start with working code.

You can ask these things in comp.lang.lisp or in one of the various
mailing lists. Common Lispniks are generally very helpful.


Pascal
 

Ingvar Mattsson

[SNIP]
Incidentally, I regard objections to "the whitespace thing" in Python
and objections to "the parenthesis thing" in Lisp as more or less the
same. People who raise these objections are usually just saying "Ick!
This looks so unfamiliar to me!" in the language of rationalizations.
I guess a philosopher would say that I am an emotivist about notation
criticisms.

My main problem with "indentation controls scoping" is that I've
actually had production python code die because of whitespace being
mangled in cutting&pasting between various things. It looks a bit odd,
but after having written BASIC, Pascal, APL, Forth, PostScript, Lisp,
C and Intercal, looking "odd" only requires looking harder. Killing a
production system due to whitespace-mangling isn't.

And, yes, I probably write more Python code than lisp code in an
average week.

//Ingvar
 

Björn Lindberg

Andrew Dalke said:
If I want some real world numbers on program length, I do it myself:
http://pleac.sourceforge.net/
I wrote most of the Python code there

Still, since you insist, I went to the scorecard page and changed
the weights to give LOC a multiplier of 1 and the others a multiplier
of 0. This is your definition of succinctness, yes? This table
is sorted (I think) by least LOC to most.

So:
- Why aren't you using Ocaml?
- Why is Scheme at the top *and* bottom of the list?
- Python is right up there with the Lisp/Scheme languages
- ... and with Perl.

Isn't that conclusion in contradiction to your statements
that 1) "Perl is *far* more compact than Python is" and 2)
the implicit one that Lisp is significantly more succinct than
Python? (As you say, these are small projects .. but you did
point out this site so implied it had some relevance.)

Apart from the usual problems with micro benchmarks, there are a few
things to consider regarding the LOC counts on that site:

* Declarations. Common Lisp gives the programmer the ability to
optimize a program by adding declarations to it. This is purely
optional, and something you normally don't do until you discover a
bottleneck in your code. For instance, it is possible to add type
declarations so that the compiler can generate more efficient
code. In a normal program, the declarations (if any) will
constitute an extremely small part of the program, but since the
micro benchmarks in the shootout are focused on speed of
execution, and they are so small, all of them contain a lot of
declarations, which will increase LOC. (See the sketch after this list.)

* In many languages, any program can be written on a single
line. This goes for Lisp, but also for C and other languages. This
means that the LOC count is also affected by formatting. For
instance, in the Ackermann's function benchmark, the Ackermann
function is written like this in the C code:

int Ack(int M, int N) { return(M ? (Ack(M-1,N ? Ack(M,(N-1)) : 1)) : N+1); }

That is, 1 LOC, although most people would probably write it in
anything between 5 and 10 lines.

* I don't think the LOC-saving qualities of Lisp are done justice in
micro benchmarks. The reason Lisp code is so much shorter than the
equivalent code in other languages is because of the abstractive
powers of Lisp, which means that the difference will be more
visible the larger the program is.
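
For illustration, here is a hypothetical shootout-style inner loop:
two lines of actual logic, and the rest is the kind of optional
declarations the benchmark entries are full of:

(defun sum-doubles (a)
  ;; purely optional type and optimization declarations
  (declare (type (simple-array double-float (*)) a)
           (optimize (speed 3) (safety 0)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length a) s)
      (incf s (aref a i)))))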


Björn
 

james anderson

Matthias said:
He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".
[...]
And in Lisp if you want to do some
other kind of arithmetic, you must make up your names for those
operators. This is considered to be a good feature.

In comp.lang.lisp there was recently a thread discussing why not all
CL-types were also CL-classes and all functions CLOS-methods (so that
operator overloading would be possible). I think the outcome was more
or less "it happened by historic accident and it's easier to write
fast compilers then".

that is not an accurate restatement of the conclusion which i recall. i
suggest that a more accurate summary would be:

1. should one need operators which "look" like the standard operators, but
which have a different defined semantics, one places their names in a package
which is isolated from :common-lisp, and either codes with reference to that
package or exports them from that package and codes with reference to a
package which inherits those symbols in preference to those exported from the
:common-lisp package. (a sketch of this follows below.)

2. one does not want to specialize the standard operators other than in the
ways which the standard permits, as not only other applications, but also the
implementation itself may depend on their having the semantics which the
standard specifies.
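
a minimal sketch of (1.), with hypothetical package and operator names:

(defpackage :generic-math
  (:use :common-lisp)
  (:shadow #:+))
(in-package :generic-math)

(defgeneric binary-+ (x y)
  (:method ((x number) (y number)) (cl:+ x y))
  (:method ((x string) (y string)) (concatenate 'string x y)))

(defun + (&rest args)
  (reduce #'binary-+ args))

;; generic-math::+ carries the new semantics; cl:+ is untouched.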

In general, taking away flexibility from the
programmer is not in the spirit of Lisp, though.

one might argue that the standard should have specified that a conforming
implementation not depend on the definitions named by symbols in the
:common-lisp package itself, but instead use its internal functions. in order
to be convincing, the argument would need to identify use cases which option
(1.) does not support.

one can even rename the :common-lisp package and provide their own. one should
not, however, expect all programs to tolerate such a change.

....
 

Björn Lindberg

Andrew Dalke said:
(e-mail address removed):

Or the people who prefer the awesome power that is Lisp and
Scheme don't find the limited syntax to be a problem.

All evidence points to the fact that Lisp syntax is no worse than
Algol-style syntax. As Joe explained, other syntaxes have been used
for Lisp many times over the years, but lispers seem to prefer the
s-exp one. If anything, one could draw the conclusion that s-exp
syntax must be /better/ than Algol-style syntax since the programmers
who have a choice which of them to use -- for the same language --
apparently choose s-exp syntax. You really have no grounds to call
Lisp syntax limited.


Björn
 

james anderson

i realize that this thread is hopelessly amorphous, but this post did
introduce some concrete issues which bear concrete responses...

Andrew said:
...

What's unicode support like in general for Lisp? Found an
answer at <http://www.cliki.net/Unicode Support>. Digging
some more, it looks like CLisp uses .. UCS-4 and Unicode 3.2
(from clisp.cons.org). But do regexps work on unicode strings?
How portable are unicode strings? I figure they must be in
order to handle XML well. ... "ACL does not support 4 byte
Unicode scalar values" says franz.com. www.cl-xml.org says
"The processor passes 1749 of the 1812 tests .. when the base
implementation supports sixteen-bit characters." and that
MCL, LispWorks and the Allegro 5.0.1 international version
support 16-bit Unicode while Allegro ascii only supports 8bit.
So some have UCS-2 and some UCS-4.

Is there consensus on the Unicode API?

there are several problems with a "uniform" unicode implementation. if you
look through the info-mcl archives you will find a thread where i tried to
achieve some clarity as to what was necessary.

i got only as far as the realization that, in order to be of any use, unicode
data management has to support the eventual primitive string operations. which
introduces the problem that, in many cases, these primitive operations
eventually devolve to the respective os api, which, if one compares the apple
and unix apis, are anything but uniform. it is simply not possible to provide them
with the same data and do anything worthwhile. if it is possible to give some
concrete pointers to how other languages provide for this i would be grateful.

given this situation, i posted several suggestions as to how they might
represent unicode and effect encoding and decoding such that variations were
at least managed in a uniform manner. i received one (1) answer, to the effect
that "that sound's ok to me." so i left the implementation the way it was, to
signal an error upon discovering surrogate pairs. i've yet to have anyone
suggest that that impedes processing. to be quite honest, that surprises me
and i have no idea what people do with surrogate pairs.

this is effectively the same level of support as java, and i have to admit, i
don't understand what people really do with them in java either. the string
representation is effectively utf-16, so anything outside of the base plane is
not a first-class object. in which environment the "consensus" above should
actually be better spelled "chimera".
On the XML path, I found cl-xml. Looking at the bugs section in
http://pws.prserv.net/James.Anderson/XML/documentation/cl-xml.html
It says "the implementation form for HTTP support is determined
at compilation time." Is it really true that even HTTP handling is
different on the different implementations?

yes, there are several available common-lisp implementations for http clients
and servers. they offer significant trade-offs in api complexity,
functionality, resource requirements and performance. you do need to pick one
according to your application needs and declare which one you have chosen. for
a default implementation of client functionality, cl-xml, as any other lisp
application, must take into account that some necessary stream and
network-related functions are available through implementation-specific
libraries only. again, as for other common-lisp libraries, for the
implementation to which it has been ported, it does this automatically.
And the section under "porting" is .. strange. It looks like to
use the XML API for a given Lisp I need to know enough
about the given implementation to configure various settings,
so if I wanted to learn Lisp by writing, say, a simple XML-RPC
client then I have to learn first what it means "to complete
definitions in the following files" and the details of "defsystem",
"package", "STREAM-READER / -WRITER", etc.

if one needs to _port_ it to a new lisp, yes. perhaps you skipped over the
list of lisps to which it has been ported. if you look at the #+/-
conditionalization, you may observe that the differences are not significant.
That reminds me of my confusion testing a biolisp package.
I needed to edit the file before it worked; something to do
with commenting/uncommenting the right way to handle
packages. I prefer to start with working code.

it is refreshing that you describe it as "your" confusion. there was one
correspondent who, at the outset, judging from their initial enquiries, was
looking at their first '(' ever, but wrote in short order about processing
12MB files. should one have problems using a given common lisp library,
concrete questions and illustrations of points which are unclear are always
more productive than vague characterizations.

....
 

Ingvar Mattsson

Joe> Now I'm *really* confused. I thought method overloading involved
Joe> having a method do something different depending on the type of
Joe> arguments presented to it. CLOS certainly does that.

He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".

Lisp doesn't let you do that, because it turns out to be a bad idea.
When you go reading someone's program, what you really want is for
the standard operators to be doing the standard and completely
understood thing.

Though if one *really* wants to have +, -, * and / as generic
functions, I imagine one can use something along the lines of:

(defpackage "GENERIC-ARITHMETIC"
:)shadow "+" "-" "/" "*")
:)use "COMMON-LISP"))

(in-package "GENERIC-ARITHMETIC")
(defgeneric arithmetic-identity (op arg))

(defmacro defarithmetic (op)
(let ((two-arg
(intern (concatenate 'string "TWO-ARG-" (symbol-name op))
"GENERIC-ARITHMETIC"))
(cl-op (find-symbol (symbol-name op) "COMMON-LISP")))
`(progn
(defun ,op (&rest args)
(cond ((null args) (arithmetic-identity ,op nil))
((null (cdr args))
(,two-arg (arithmetic-identity ,op (car args))
(car args)))
(t (reduce (function ,two-arg)
(cdr args)
:initial-value (car args)))))
(defgeneric ,two-arg (arg1 arg2))
(defmethod ,two-arg ((arg1 number) (arg2 (number)))
(,cl-op arg1 arg2)))))

Now, I have (because I am lazy) left out definitions of the generic
function ARITHMETIC-IDENTITY (general idea, when fed an operator and
NIL, it returns the most generic identity, when fed an operator and an
argument, it can return a value that is more suitable) and there's
probably errors in the code, too.

But, in principle, that should be enough of a framework to build from,
I think.
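
For instance (equally untested), with the above in place one could
write, still in the GENERIC-ARITHMETIC package:

(defarithmetic +)

(defmethod two-arg-+ ((arg1 string) (arg2 string))
  (concatenate 'string arg1 arg2))

;; (+ 1 2 3)     => 6, via cl:+
;; (+ "ab" "cd") => "abcd", via the new method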

//Ingvar
 

Edi Weitz

What's unicode support like in general for Lisp? [...] But do
regexps work on unicode strings?

Unicode support isn't part of the CL standard but the standard is
flexible enough to make it easy for implementations to integrate
Unicode characters and strings seamlessly. You've mentioned a couple
of integrations which do that.

As for regex support - that's not a part of the standard either, but
there are a couple of libraries available - see

<http://www.cliki.net/Regular Expression>

If the library is written in Lisp (as opposed to being an FFI wrapper
around a C library) you can be fairly sure that it works with Unicode:

[19]> (code-char 1000)
#\COPTIC_CAPITAL_LETTER_HORI
[20]> (defparameter *target* (make-string 2 :initial-element *))
*TARGET*
[21]> (cl-ppcre::scan "^(.){2}$" *target*)
0 ;
2 ;
#(1) ;
#(2)
[22]> (cl-ppcre::scan `(:greedy-repetition 2 2 ,(code-char 1000)) *target*)
0 ;
2 ;
#() ;
#()

(This is CL-PPCRE with CLISP.)

Edi.
 

Kenny Tilton

Andrew Dalke wrote:

What I said was that Python is *not* an application of
Greenspun's Tenth Rule of programming because 1) it isn't
bug-ridden, and 2) because Python explores ideas which
had no influence on Lisp's development -- user
studies of non-professional programmers.

I wouldn't take the Greenspun crack too seriously. That's about
applications recreating Lisp, not languages copying Lisp features. It's
just a reaction to Python (a perfectly nice little scripting language)
trying to morph into a language with the sophistication of Lisp.

As for non-professional programmers, the next question is whether a good
language for them will ever be anything more than a language for them.
Perhaps Python should just stay with the subset of capabilities that
made it a huge success--it might not be able to scale to new
sophistication without destroying the base simplicity.

Another question is whether Lisp would really be such a bad language for
them.

You presume that only Lisp gurus can learn Lisp because of the syntax.
But methinks a number of folks using Emacs Elisp and Autocad's embedded
Lisp are non-professionals. And let's not forget Symbolic Composer
(music composition) or Mirai (?) the 3D modelling/animation tool, both
of which are authored at the highest level with Lisp.

Logo (a language aimed at children, including very young children) cuts
both ways: it's a Lisp, but it dumped a lot of the parens, but then
again it relies more on recursion.

You (Alex?) also worry about groups of programmers and whether what is
good for the gurus will be good for the lesser lights. What you are
saying is that the guru will dazzle the dorks with incomprehensible
gobbledygook. That does happen, but those kinds of gurus should be fired.

To the contrary, macros in the hand of truly talented developers allow
the gurus to build Lisp up to a higher-level language with new
domain-specific constructs to empower the rest of the team.

You dissed Brooks in an earlier article (in favor of a Redmond clone, of
all things) but you should go back and read him again. Especially No
Silver Bullet and NSB Revisited. He has a lot of great insights in
there. (Your Redmond boy is counting LOC to assess languages, apparently
because he can understand counting LOC. hmmm....)

Brooks talks about productivity coming from greater expressive power,
from having the language more capable of expressing things at the same
level at which the programmer is thinking. (He also touts Interlisp at
one point!) But in NSB he says languages have reached the conceptual
sophistication of their users. What Brooks missed (despite his awareness
of Interlisp, which he dug because of its interactivity) is what few
people understand, again, that macros let you build a domain-specific
HLL on top of Lisp.

On a sufficiently large project (well, there's another point: with Lisp
one does not hire ten people (unless one is doing three projects)) the
team should be divided into those who will be building the
domain-specific embedded language and those who will be using it.
Ideally the latter could be not just non-professional programmers, but
even non-programmers.
Where are the user studies which suggested () over [], or that
"car" is better than "first"/"1st" or that "cdr" is better than
"rest"/"rst"?

Studies. You use that word so much. A good study is hard to find. You
loved McConnell's LOC nonsense, but it is worthless. Ooooh, numbers! Look
at all the numbers! [why [dont you] [just] [type out [a form [with [lots
of brackets]]] and [see what [they [look [like]]]]?

They [ook [ike He[[.
 

Andreas Rossberg

Dirk said:
I should have added: As long as it should execute at compile time, of
course.


I don't see the problem. Maybe you have an example? I am sure the
Lisp'ers here can come up with a macro solution for it.

I'm not terribly familiar with the details of Lisp macros but since
recursion can easily lead to non-termination you certainly need tight
restrictions on recursion among macros in order to ensure termination of
macro substitution, don't you? Or at least some ad-hoc depth limitation.
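
(For what it's worth, Common Lisp itself imposes no special
restriction: a recursive macro just has to make each expansion step
smaller, as in this sketch, and a non-terminating expansion simply
hangs the compiler:)

(defmacro my-list* (x &rest more)
  (if more
      `(cons ,x (my-list* ,@more))
      x))

;; (my-list* 1 2 '(3 4)) expands step by step into
;; (cons 1 (cons 2 '(3 4))) => (1 2 3 4)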

- Andreas

--
Andreas Rossberg, (e-mail address removed)-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
 

Grzegorz Chrupala

Daniel P. M. Silva said:
By the way, what's a non-professional programmer?

How about a person whose profession is not programming, but who often
writes computer programs?
 
