Python from Wise Guy's Viewpoint


Dirk Thierbach

Pascal Costanza said:
Remi Vanicat wrote:

BTW, the same thing is true for any language with type inference. In
Haskell, there are no methods or objects. But to test a function, you
can write

test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"

The compiler will infer that test_func has type

test_func :: (Integer -> Integer) -> String

(I am cheating a bit, because actually it will infer a more general type),
so you can use it to test any function of type Integer->Integer, regardless
if you have written it already or not.
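For the record, the "more general type" mentioned above can be sketched concretely (the exact constraint set comes from GHC's inference and defaulting rules):

```haskell
-- The signature GHC infers for the example: any numeric type with
-- equality works, not just Integer.
test_func :: (Eq a, Num a) => (a -> a) -> String
test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"

main :: IO ()
main = do
  putStrLn (test_func id)                               -- id fails the second check
  putStrLn (test_func (\x -> if x == 1 then 1 else 42)) -- passes both checks
```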
Doesn't this mean that the occurrence of such compile-time errors is only
delayed, in the sense that when the test suite grows the compiler starts
to issue type errors?

As long as you parameterize over the functions (or objects) you want to
test, there'll be no compile-time errors. That's what functional
programming and type inference are good for: You can abstract everything
away just by making it an argument. And you should know that, since
you say that you know what modern type-systems can do.


But the whole case is moot anyway, IMHO: You write the tests because
you want them to fail until you have written the correct code that
makes them pass, and it is not acceptable (especially if you're doing
XP) to continue as long as you have failing tests. You have to do the
minimal edit to make all the tests pass *right now*, not later on.

It's the same with compile-time type errors. The only difference is
that they happen at compile-time, not at test-suite run-time, but the
necessary reaction is the same: Fix your code so that all tests (or
the compiler-generated type "tests") pass. Then continue with the next
step.

I really don't see why one should be annoying to you while you strongly
prefer the other. They're really just the same thing. Just imagine
that you run your test suite automatically when you compile your
program.

- Dirk
 

Dirk Thierbach

Kenny Tilton said:
Big deal. From Robert C. Martin:

http://www.artima.com/weblogs/viewpost.jsp?thread=4639

"I've been a statically typed bigot for quite a few years....I scoffed
at the smalltalkers who whined about the loss of flexibility. Safety,
after all, was far more important than flexibility -- and besides, we
can keep our software flexible AND statically typed, if we just follow
good dependency management principles.

"Four years ago I got involved with Extreme Programming. ...

"About two years ago I noticed something. I was depending less and less
on the type system for safety. My unit tests were preventing me from
making type errors. The more I depended upon the unit tests, the less I
depended upon the type safety of Java or C++ (my languages of choice).

Note that he is speaking about languages with a very bad type system.
As has been said in this thread a few times, there are statically
typed languages and there are statically typed languages. Those two
can differ substantially from each other.

Here's a posting from Richard MacDonald in comp.software.extreme-programming,
MID <[email protected]>:

: Eliot, I work with a bunch of excellent programmers who came from AI to
: Smalltalk to Java. We despise Java. We love Smalltalk. Some months ago we
: took a vote and decided that we were now more productive in Java than we
: had ever been in Smalltalk. The reason is the Eclipse IDE. It more than
: makes up for the lousy, verbose syntax of Java. We find that we can get
: Eclipse to write much of our code for us anyway.
:
: Smalltalk is superior in getting something to work fast. But refactoring
: takes a toll on a dynamically typed language because it doesn't provide
: as much information to the IDE as does a statically-typed language (even
: a bad one). Let's face it. If you *always* check callers and implementors
: in Smalltalk, you can catch most of the changes. But sometimes you
: forget. With Eclipse, you can skip this step and it still lights up every
: problem with a big X and helps you refactor to fix it.
:
: In Smalltalk, I *needed* unit tests because Smalltalk allowed me to be
: sloppy. In Eclipse, I can get away without writing unit tests and my code
: miraculously often works the first time I get all those Xs eliminated.
:
: Ok, I realize I have not addressed your question yet...
:
: No question but that a "crappy statically typed" (*) language can get you
: into a corner where you're faced with lousy alternatives. But say I
: figure out a massive refactoring step that gets me out of it. In
: Smalltalk, I would probably fail without a bank of unit tests behind me.
: In Eclipse, I could probably make that refactoring step in less time and
: with far greater certainty that it is correct. I've done it before without
: the safety net of tests and been successful. No way I would ever have
: been able to do that as efficiently in Smalltalk. (I once refactored my
: entire Smalltalk app in 3 days and needed every test I had ever written.
: I have not done the equivalent in Java, but I have complete confidence I
: could do it just as well if not much better.)
:
: As far as productivity, we still write unit tests. But unit test
: maintenance takes a lot of time. In Smalltalk, I would spend 30% of my
: time coding within the tests. I tested at all levels, i.e., low-level,
: medium, and integration, since it paid off when searching for bugs. But
: 30% is too much. With Eclipse, we're able to write good code with just a
: handful of high-level tests. Often we simply write the answer as a test
: and do the entire app with this one test. The reason is once again that
: the IDE is visually showing us right where we broke my code and we don't
: have to run tests to see it.
:
: (*) I suggest we use 3 categories: (1) dynamically typed, (2) statically
: typed, (3) lousy statically typed. Into the latter category, toss Java
: and C++. Into (2), toss some of the functional languages; they're pretty
: slick. Much of the classic typing wars are between dynamic-typists
: criticizing (3) vs. static-typists working with (2).
:
: P.S. I used to be one of those rabid dynamic defenders. I'm a little
: chastened and wiser now that I have a fantastic IDE in my toolkit.

- Dirk
 

Pascal Costanza

Dirk said:
BTW, the same thing is true for any language with type inference. In
Haskell, there are no methods or objects. But to test a function, you
can write

test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"

The compiler will infer that test_func has type

test_func :: (Integer -> Integer) -> String

(I am cheating a bit, because actually it will infer a more general type),
so you can use it to test any function of type Integer->Integer, regardless
if you have written it already or not.

OK, I have got it. No, that's not what I want. What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?
As long as you parameterize over the functions (or objects) you want to
test, there'll be no compile-time errors. That's what functional
programming and type inference are good for: You can abstract everything
away just by making it an argument. And you should know that, since
you say that you know what modern type-systems can do.

Yes, I know that. I have misunderstood the claim. Does the code I
propose above work?
But the whole case is moot anyway, IMHO: You write the tests because
you want them to fail until you have written the correct code that
makes them pass, and it is not acceptable (especially if you're doing
XP) to continue as long as you have failing tests. You have to do the
minimal edit to make all the tests pass *right now*, not later on.

It's the same with compile-time type errors. The only difference is
that they happen at compile-time, not at test-suite run-time, but the
necessary reaction is the same: Fix your code so that all tests (or
the compiler-generated type "tests") pass. Then continue with the next
step.

The type system might test too many cases.
I really don't see why one should be annoying to you while you strongly
prefer the other. They're really just the same thing. Just imagine
that you run your test suite automatically when you compile your
program.

I don't compile my programs. Not as a distinct conscious step during
development. I write pieces of code and execute them immediately. It's
much faster to run the code than to explicitly compile and/or run a type
checker.

This is a completely different style of developing code.

Pascal
 

Pascal Costanza

Dirk said:
To me, it certainly looked like you did in the beginning. Maybe your
impression that other people say that one should always use static
type systems is a similar misinterpretation?

Please recheck my original response to the OP of this subthread. (How
much more "in the beginning" can one go?)
Anyway, formulations like "A has less expressive power than B" are very
close to "B is always better than A". It's probably a good idea to
avoid such formulations if this is not what you mean.

"less expressive power" means that there exist programs that work but
that cannot be statically typechecked. These programs objectively exist.
By definition, I cannot express them in a statically typed language.

On the other hand, you can clearly write programs in a dynamically typed
language that can still be statically checked if one wants to do that.
So the set of programs that can be expressed with a dynamically typed
language is objectively larger than the set of programs that can be
expressed with a statically typed language.

It's definitely a trade off - you take away some expressive power and
you get some level of safety in return. Sometimes expressive power is
more important than safety, and vice versa.

It's not my problem that you interpret some arbitrary other claim into
this statement.

Pascal
 

Pascal Costanza

Dirk said:
Maybe that's where the problem is. One doesn't need to reduce the
"expressive power". I don't know your particular application, but what
you seem to need is the ability to dynamically change the program
execution. There's more than one way to do that.

Of course there is more than one way to do anything. You can do
everything in assembler. The important point is: what are the convenient
ways to do these things? (And convenience is a subjective matter.)

Expressive power is not Turing equivalence.
I may be wrong, but I somehow have the impression that it is difficult
to see other ways to solve a problem if you haven't done it in that
way at least once.

No, you need several attempts to get used to a certain programming
style. These things don't fall from the sky. When you write your first
program in a new language, it is very likely that you a) try to imitate
what you have done in other languages you knew before and b) that you
don't know the standard idioms of the new language.

Mastering a programming language is a very long process.
So you see that with different tools, you cannot do
it in exactly the same way as with the old tools, and immediately you
start complaining that the new tools have "less expressive power",
just because you don't see that you have to use them in a different
way. The "I can do lot of things with macros in Lisp that are
impossible to do in other languages" claim seems to have a similar
background.

No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's
website, or download and read his book "On Lisp".
I could complain that Lisp or Smalltalk have "less expressive power"
because I cannot declare algebraic datatypes properly,

I don't see why this shouldn't be possible, but I don't know.
I don't have
pattern matching to use them efficiently,
http://www.cliki.net/fare-matcher

and there is no automatic
test generation (i.e., type checking) for my datatypes.
http://www.plt-scheme.org/software/mrflow/

The only way out is IMHO to learn as many languages as possible, and
to learn as many alternative styles of solving problems as possible.

Right.


Pascal
 

Adrian Hey

Kenny said:
Big deal.

Yes it is a very big deal. I suspect from your choice of words
you have a closed mind on this issue, so there's no point in me
wasting my time trying to explain why.

<snip quote from someone who doesn't understand static typing at
all if the references to Java and C++ are anything to go by>
Lights out for static typing.

That's complete bollocks. There are more than enough sufficiently
enlightened people to keep static typing alive and well, thank you
very much. If you chose to take advantage of it that's your loss.

Regards
 

Andreas Rossberg

Pascal said:
"less expressive power" means that there exist programs that work but
that cannot be statically typechecked. These programs objectively exist.
By definition, I cannot express them in a statically typed language.

On the other hand, you can clearly write programs in a dynamically typed
language that can still be statically checked if one wants to do that.
So the set of programs that can be expressed with a dynamically typed
language is objectively larger than the set of programs that can be
expressed with a statically typed language.

Well, "can be expressed" is a very vague concept, as you noted yourself.
To rationalize the discussion on expressiveness, there is a nice
paper by Felleisen, "On the Expressive Power of Programming Languages"
which makes this terminology precise.

Anyway, you are right of course that any type system will take away some
expressive power (particularly the power to express bogus programs :)
but also some sane ones, which is a debatable trade-off).

But you completely ignore the fact that it also adds expressive power at
another end! For one thing, by allowing you to encode certain invariants
in the types that you cannot express in another way. Furthermore, by
giving more knowledge to the compiler, hence allowing the language to
automate certain tedious things. Overloading is one obvious example
that increases expressive power in certain ways and crucially relies on
static typing.
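A minimal Haskell sketch of the kind of encoded invariant mentioned above (the NonEmpty type is hand-rolled here for illustration):

```haskell
-- A list that is non-empty by construction; the invariant is carried
-- by the type, so no run-time emptiness check is ever needed.
data NonEmpty a = a :| [a]

safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

-- Unlike head on [a], applying safeHead to an empty list is not
-- merely an error: it cannot even be written.
main :: IO ()
main = print (safeHead (1 :| [2, 3]))
```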

So there is no inclusion, the "expressiveness" relation is unordered wrt
static vs dynamic typing.

- Andreas

--
Andreas Rossberg, (e-mail address removed)-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
 

Erann Gat

(e-mail address removed) (Erann Gat) wrote in message

Just to be clear, I do not believe "bug" => "type error". However, I do
claim that "type error" (in reachable code) => "bug".

But that just begs the question of what you consider a type error. Does
the following code contain a type error?

(defun rsq (a b)
  "Return the square root of the sum of the squares of a and b"
  (sqrt (+ (* a a) (* b b))))

How about this one?

(defun rsq1 (a b)
  (or (ignore-errors (rsq a b)) 'FOO))

or:

(defun rsq2 (a b)
  (or (ignore-errors (rsq a b)) (error "Foo")))

Here's the way I see it:
(1) type errors are extremely common;

In my experience they are quite rare.
(2) an expressive, statically checked type system (ESCTS) will identify
almost all of these errors at compile time;

And then some. That's the problem.
(3) type errors flagged by a compiler for an ESCTS can pinpoint the source
of the problem whereas ad hoc assertions in code will only identify a
symptom of a type error;

Really? If there's a type mismatch how does the type system know if the
problem is in the caller or the callee?
(4) the programmer does not have to litter type assertions in a program
written in a language with an ESCTS;

But he doesn't have to litter type assertions in a program written in a
language without an ESCTS either.
(5) an ESCTS provides optimization opportunities that would otherwise
be unavailable to the compiler;

That is true. Whether this benefit outweighs the drawbacks is arguable.
(6) there will be cases where the ESCTS requires one to code around a
constraint that is hard/impossible to express in the ESCTS (the more
expressive the type system, the smaller the set of such cases will be.)

The question is whether the benefits of (2), (3), (4) and (5) outweigh
the occasional costs of (6).

Yes, that's what it comes down to. There are both costs and benefits.
The balance probably tips one way in some circumstances, the other way in
others.

E.
 

Dirk Thierbach

Pascal Costanza said:
Dirk Thierbach wrote:
Of course there is more than one way to do anything. You can do
everything in assembler. The important point is: what are the convenient
ways to do these things? (And convenience is a subjective matter.)

Yes. The point is: It may be as convenient to do in one language as in
the other language. You just need a different approach.
No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages.

We just had this discussion here, and I am not going to repeat it.
I know Paul Graham's website, and I know many examples of what you
can do with macros. Macros are a wonderful tool, but you really can
get most of what you can do with macros by using HOFs. There are
some things that won't work, the most important of which is that
you cannot force calculation at compile time, and you have to hope
that the compiler does it for you (ghc actually does it sometimes.)
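A small instance of the macros-vs-HOFs trade described above: a user-defined conditional needs a macro in a strict language to keep the unused branch from being evaluated, but under lazy evaluation an ordinary function suffices (myIf is an illustrative name):

```haskell
-- An ordinary function: laziness means the branch that is not taken
-- is never evaluated, which in a strict Lisp would require a macro.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = print (myIf True 42 (error "never evaluated"))
```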

- Dirk
 

Dirk Thierbach

Pascal Costanza said:
Dirk Thierbach wrote:
OK, I have got it. No, that's not what I want. What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?

No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
for the same reason.
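The parameterized form alluded to earlier in the thread does typecheck before any concrete method exists; a sketch (names follow Pascal's example):

```haskell
-- Abstracting over the method under test: this compiles even though
-- no concretemethod has been written anywhere.
testxyz :: (a -> Integer) -> a -> Bool
testxyz method obj = method obj == 42

-- Later, once an implementation exists, plug it in:
main :: IO ()
main = print (testxyz (\n -> n * 2) (21 :: Integer))
```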
The type system might test too many cases.

I have never experienced that, because every expression that is valid
code will have a proper type.

Can you think of an example (not in C++ or Java etc.) where the type
system may check too many cases?
I don't compile my programs. Not as a distinct conscious step during
development. I write pieces of code and execute them immediately.

I know. I sometimes do the same with Haskell: I use ghc in interactive
mode, write a piece of code and execute it immediately (which means it
gets compiled and type checked). When it works, I paste it into
the file. If there was a better IDE, I wouldn't have to do that,
but even in this primitive way it works quite well.
It's much faster to run the code than to explicitly compile and/or
run a type checker.

Unless your modules get very large, or you're in the middle of some
big refactoring, compiling or running the type checker is quite fast.
This is a completely different style of developing code.

I have known this style of developing code for quite some time :)

- Dirk
 

John Atwood

Pascal Costanza said:
Mastering a programming language is a very long process.


No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's
website, or download and read his book "On Lisp".

That's a great paper; however, see Steele's later work:
http://citeseer.nj.nec.com/steele94building.html


John
 

Pascal Costanza

Andreas said:
Well, "can be expressed" is a very vague concept, as you noted yourself.
To rationalize the discussion on expressiveness, there is a nice paper
by Felleisen, "On the Expressive Power of Programming Languages" which
makes this terminology precise.

I have skimmed through that paper. It states the following in the
conclusion section:

"The most important criterion for comparing programming languages showed
that an increase in expressive power may destroy semantic properties of
the core language that programmers may have become accustomed to
(Theorem 3.14). Among other things, this invalidation of operational
laws through language extensions implies that there are now more
distinctions to be considered for semantic analyses of expressions in
the core language. On the other hand, the use of more expressive
languages seems to facilitate the programming process by making programs
more concise and abstract (Conciseness Conjecture). Put together, this
result says that

* an increase in expressive power is related to a decrease of the set of
``natural'' (mathematically appealing) operational equivalences."

This seems to be compatible with my point of view. (However, I am not
really sure.)
Anyway, you are right of course that any type system will take away some
expressive power (particularly the power to express bogus programs :)
but also some sane ones, which is a debatable trade-off).

Thanks. ;)
But you completely ignore the fact that it also adds expressive power at
another end! For one thing, by allowing you to encode certain invariants
in the types that you cannot express in another way. Furthermore, by
giving more knowledge to the compiler, hence allowing the language to
automate certain tedious things.

I think you are confusing things here. It gets much clearer when you
separate compilation/interpretation from type checking, and see a static
type checker as a distinct tool.

The invariants that you write, or that are inferred by the type checker,
are expressions in a domain-specific language for static program
analysis. You can only increase the expressive power of that
domain-specific language by adding a more elaborate static type system.
You cannot increase the expressive power of the language that it reasons
about.

An increase of expressive power of the static type checker decreases the
expressive power of the target language, and vice versa.

As a sidenote, here is where Lisp comes into the game: Since Lisp
programs can easily reason about other Lisp programs, because there is
no distinction between programs and data in Lisp, it should be pretty
straightforward to write a static type checker for Lisp programs, and
include them in your toolset.

It should also be relatively straightforward to make this a relatively
flexible type checker for which you can increase/decrease the level of
required conformance to the (a?) type system.

This would mean that you could have the benefits of both worlds: when
you need static type checking, you can add it. You can even enforce it
in a project, if the requirements are strict in this regard in a certain
setting. If the requirements are not so strict, you can relax the static
type soundness requirements, or maybe even go back to dynamic type checking.

In fact, such systems already seem to exist. I guess that's what soft
typing is good for, for example (see MrFlow). Other examples that come
to mind are Qi and ACL2.

Why would one want to switch languages for a single feature?

Note that this is just brainstorming. I don't know whether such an
approach can really work in practice. There are probably some nasty
details that are hard to solve.
Overloading is one obvious example
that increases expressive power in certain ways and crucially relies on
static typing.

Overloading relies on static typing? This is news to me. What do you mean?
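One standard example of what may be meant here is Haskell's return-type polymorphism: the statically known result type alone selects the instance, so a purely dynamic system would have no value to dispatch on. A sketch:

```haskell
-- minBound is overloaded on its *result* type; only the static type
-- annotation decides which instance is used.
smallestChar :: Char
smallestChar = minBound   -- the Char instance

smallestInt :: Int
smallestInt = minBound    -- the Int instance (platform minimum)

main :: IO ()
main = print (smallestChar, smallestInt)
```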
So there is no inclusion, the "expressiveness" relation is unordered wrt
static vs dynamic typing.

No, I don't think so.


Pascal
 

Dirk Thierbach


Certainly an improvement, but no way to declare datatypes (i.e.,
pattern constructors) yet:

There also need to be improvements to the infrastructure to build
pattern constructors, so that you may build pattern constructors and
destructors at the same time (much like you do when you define ML
types).

The following might also be a show-stopper (I didn't test it,
but it doesn't look good):

; FIXME: several branches of an "or" pattern can't share variables;
; variables from all branches are visible in guards and in the body,
; and previous branches may have bound variables before failing.
; This is rather bad.

The following comment is also interesting:

Nobody reported using the matcher -- ML/Erlang style pattern
matching seemingly isn't popular with LISP hackers.

Again, the way to get the benefits of "more expressive languages"
like ML in Lisp seems to be to implement part of them on top of Lisp :)

I couldn't find any details on this page (it says "coming soon"), but
the name suggests a dataflow analyzer. As I have already said, the
problem with attaching static typing and inference to an arbitrary
language is that it is difficult to get it working without changing
the language design. Pure functional features make type inference
easy; imperative features make it hard. Full dataflow analysis might
help, but I'd have to look more closely to see if it works out.

- Dirk
 

Marshall Spight

Pascal Costanza said:
Expressive power is not Turing equivalence.

Agreed.

So, does anyone have a formal definition of "expressive power?"
Metrics? Examples? Theoretical foundations?

It seems like a hard concept to pin down. "Make it possible
to write programs that contain as few characters as possible"
strikes me as a really bad definition; it suggests that
bzip2-encoded C++ would be really expressive.


Marshall
 

Dirk Thierbach

Andreas Rossberg said:
Pascal Costanza wrote:
Anyway, you are right of course that any type system will take away some
expressive power (particularly the power to express bogus programs :)
but also some sane ones, which is a debatable trade-off).

Yep. It turns out that you take away lots of bogus programs, and the
sane programs that are taken away are in most cases at least questionable
(they will be mostly of the sort: There is a type error in some execution
branch, but this branch will never be reached), and can usually be
expressed as equivalent programs that will pass.
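A sketch of the kind of rewrite described above: a branch-local type mismatch, and an equivalent program that passes by making both result shapes explicit:

```haskell
-- Rejected by the type checker, even though the String branch may
-- never be reached at run time:
--   f b = if b then 1 else "one"
-- An equivalent accepted version, with the two shapes made explicit:
f :: Bool -> Either Integer String
f b = if b then Left 1 else Right "one"

main :: IO ()
main = print (f True)
```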

"Taking away possible programs" is not the same as "decreasing expressive
power".
So there is no inclusion, the "expressiveness" relation is unordered wrt
static vs dynamic typing.

That's the important point.

- Dirk
 

Pascal Costanza

Dirk said:
No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
for the same reason.

As long as I am writing only tests, I don't care. When I am in the mood
of writing tests, I want to write as many tests as possible, without
having to think about whether my code is acceptable for the static type
checker or not.
I have never experienced that, because every expression that is valid
code will have a proper type.

Can you think of an example (not in C++ or Java etc.) where the type
system may check too many cases?

Here is one:

(defun f (x)
  (unless (< x 200)
    (cerror "Type another number"
            "You have typed a wrong number")
    (f (read)))
  (* x 2))

Look up
http://www.lispworks.com/reference/HyperSpec/Body/f_cerror.htm#cerror
before complaining.



Pascal
 

Marshall Spight

Pascal Costanza said:
See the example of downcasts in Java.

Downcasts in Java are not a source of problems.
They may well be indicative of a theoretical
hole (in fact I'm pretty sure they are,) but they
are not something that actually causes
problems in the real world.


Marshall
 

Pascal Costanza

Dirk said:
Yep. It turns out that you take away lots of bogus programs, and the
sane programs that are taken away are in most cases at least questionable
(they will be mostly of the sort: There is a type error in some execution
branch, but this branch will never be reached)

No. Maybe you believe me when I quote Ralf Hinze, one of the designers
of Haskell:

"However, type systems are always conservative: they must necessarily
reject programs that behave well at run time."

found at
http://web.comlab.ox.ac.uk/oucl/research/areas/ap/ssgp/slides/hinze.pdf

Could you _please_ just accept that statement? That's all I am asking for!


Pascal
 

Dirk Thierbach

Pascal Costanza said:
As long as I am writing only tests, I don't care. When I am in the mood
of writing tests, I want to write as many tests as possible, without
having to think about whether my code is acceptable for the static type
checker or not.

Then just do it. Write as many tests as you care; when you're ready,
compile them and run the type checker.

Where's the problem? One thing that is certainly missing for Haskell or
OCaml is a good IDE that supports as many styles of programming
as possible, but even with the primitive command line versions at the
moment you can do what you want in this case.

And anyway, this would be a question of the IDE, not of static typing.
Here is one:

(defun f (x)
  (unless (< x 200)
    (cerror "Type another number"
            "You have typed a wrong number")
    (f (read)))
  (* x 2))

Done. But I still miss the point. On the one hand, Lisp certainly
doesn't check types, so I don't see how a Lisp program can be an
example of the type system checking too many cases. On the other
hand, this example is tied to a specific Lisp debugger feature. I
think it would be better to give an example in a statically typed
language.

I could probably rewrite the code with an approximation to cerror
(with the restriction that non-local control structures don't
translate one to one), but even then I don't see why the type system
would test too many cases for this example.

- Dirk
 
