Python from Wise Guy's Viewpoint


Adrian Hey

Pascal said:
Maybe you haven't written the kind of programs yet that a static type
system can't handle.

You're right, I haven't. I would say the overwhelming majority of programs
"out there" fall into this category. I am aware that some situations are
difficult to handle in a statically typed language. An obvious example
in Haskell would be trying to type a function which interpreted strings
representing arbitrary Haskell expressions and returned their value:

eval :: String -> ??

If this situation is to be dealt with at all, some kind of dynamic
type system seems necessary. I don't think anybody is denying that
(certainly not me).
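(A minimal Python sketch of the point, not from the original posts: Python's built-in eval shows concretely why such a function resists a single static result type.)

```python
# The type of eval's result depends on the *contents* of the string,
# which is only known at runtime.
values = [eval("1 + 2"), eval("'a' * 3"), eval("[True, None]")]

# The only honest static result type is a universal "any value" type;
# concrete types must then be recovered by runtime inspection.
kinds = [type(v).__name__ for v in values]   # ['int', 'str', 'list']
```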
I don't deny that static type systems can be a useful supplement to a
dynamic type system in certain contexts.

I don't think anybody who read your posts would get that impression :)
There is an important class of programs - those that can reason about
themselves and can change themselves at runtime - that cannot be
statically checked.

Yes indeed. Even your common or garden OS falls into this category I
think, but that doesn't mean you can't statically type check individual
fragments of code (programs) that run under that OS. It just means
you can't statically type check the entire system (OS + application
programs).
Your claim implies that such code should not be written,

What claim? I guess you mean the one about dynamic typing being a
useful supplement to, but not a substitute for, static typing.

If so, I don't think it implies that at all.
at least not "most of the time" (whatever that means).

Dunno who you're quoting there, but it isn't me.
Why? Maybe I am missing an important insight about such programs
that you have.

Possibly, but it seems more likely that you are simply misrepresenting
what I (and others) have written in order to create a straw man to demolish.

Regards
 

Pascal Costanza

Dirk said:
Nobody forces you to use a static type system. Languages, with their
associated type systems, are *tools*, and not religions. You use
what is best for the job.

_exactly!_

That's all I have been trying to say in this whole thread.

Marshall Spight asked
http://groups.google.com/groups?selm=MoEkb.821534$YN5.832338@sccrnsc01
why one would not want to use a static type system, and I have tried to
give some reasons.

I am not trying to force anyone to use a dynamically checked language. I
am not even trying to convince anyone. I am just trying to say that
someone might have very good reasons if they didn't want to use a static
type system.
You cannot take an arbitrary language and attach a good static type
system to it. Type inference will be much too difficult, for example.
There's a fine balance between language design and a good type system
that works well with it.

Right. As I said before, you need to reduce the expressive power of the
language.
If you want to use Smalltalk or CLOS with dynamic typing and unit
tests, use them. If you want to use Haskell or OCaml with static typing
and type inference, use them. None is really "better" than the other.
Both have their advantages and disadvantages. But don't dismiss
one of them just because you don't know better.

Ditto.

Thank you for rephrasing this in a way that is probably easier to understand.

Pascal
 

prunesquallor

Matthias Blume said:
In fact, you should never need to "solve the halting problem" in order
to statically check your program. After all, the programmer *already
has a proof* in her mind when she writes the code! All that's needed
:)-) is for her to provide enough hints as to what that proof is so
that the compiler can verify it. (The smiley is there because, as we
are all painfully aware, this is much easier said than done.)


I'm having trouble proving that MYSTERY returns T for lists of finite
length. I had an idea that it would, but now I'm not sure. Can the
compiler verify it?

(defun kernel (s i)
  (list (not (car s))
        (if (car s)
            (cadr s)
            (cons i (cadr s)))
        (cons 'y (cons i (cons 'z (caddr s))))))

(defconstant k0 '(t () (x)))

(defun mystery (list)
  (let ((result (reduce #'kernel list :initial-value k0)))
    (cond ((null (cadr result)))
          ((car result) (mystery (cadr result)))
          (t (mystery (caddr result))))))
 

Pascal Costanza

Andreas said:
Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5
clearly identifies Lisp as an untyped (but safe) language. He also
speaks of static vs. dynamic _checking_ wrt safety, but where do you
find a definition of dynamic typing?

Hmm, maybe I was wrong. I will need to check that again - it was some
time ago that I read the paper. Oh dear, I am getting old. ;)

Thanks for pointing this out.


Pascal
 

Pascal Costanza

Adrian said:
You're right, I haven't. I would say the overwhelming majority of programs
"out there" fall into this category.

Do you have empirical evidence for this statement? Maybe your sample set
is not representative?
I don't think anybody who read your posts would get that impression :)

Well, then they don't read closely enough. In my very first posting wrt
this topic, I suggested soft typing as a good compromise. See
http://groups.google.com/[email protected]

Yes, you can certainly tell that I am a fan of dynamic type systems. So
what? Someone has asked why one would want to get rid of a static type
system, and I am responding.

(Thanks for the smiley. ;)
What claim?

"Most code [...] should be [...] checked for type errors at compile time."

Dunno who you're quoting there, but it isn't me.


Pascal
 

Ralph Becket

No. The fallacy in this reasoning is that you assume that "type error"
and "bug" are the same thing. They are not. Some bugs are not type
errors, and some type errors are not bugs. In the latter circumstance
simply ignoring them can be exactly the right thing to do.

Just to be clear, I do not believe "bug" => "type error". However, I do
claim that "type error" (in reachable code) => "bug". If at some point
a program P' (in L') may eventually abort with an exception due to an
ill typed function application then I would insist that P' is buggy.

Here's the way I see it:
(1) type errors are extremely common;
(2) an expressive, statically checked type system (ESCTS) will identify
almost all of these errors at compile time;
(3) type errors flagged by a compiler for an ESCTS can pinpoint the source
of the problem whereas ad hoc assertions in code will only identify a
symptom of a type error;
(4) the programmer does not have to litter type assertions in a program
written in a language with an ESCTS;
(5) an ESCTS provides optimization opportunities that would otherwise
be unavailable to the compiler;
(6) there will be cases where the ESCTS requires one to code around a
constraint that is hard/impossible to express in the ESCTS (the more
expressive the type system, the smaller the set of such cases will be.)

The question is whether the benefits of (2), (3), (4) and (5) outweigh
the occasional costs of (6).

-- Ralph
 

Adam Warner

Hi Matthias Blume,
Care to give an example?

(setf *debugger-hook*
      (lambda (condition value)
        (declare (ignorable condition value))
        (invoke-restart (psychic))))

(defun psychic ()
  (let* ((*read-eval* nil)
         (input (ignore-errors (read))))
    (format t "Input ~S is of type ~S.~%" input (type-of input))))

(loop (psychic))

This can only be statically compiled in the most trivial sense where every
input object type is permissible (i.e. every object is of type T).

Regards,
Adam
 

Joachim Durchholz

Pascal said:
Writing programs that inspect and change themselves at runtime.

That's just the first part of the answer, so I have to make the second
part of the question explicit:

What is dynamic metaprogramming good for?

I looked into the papers whose URLs you gave later, but I'm still
missing a compelling reason to use a MOP. As far as I can see from the
papers, a MOP is a bit like pointers: very powerful, very dangerous, and
it's difficult to envision a system that does the same without the power
and danger - but such systems do indeed exist.


(For a summary, scroll to the end of this post.)


Just to enumerate the possibilities in the various URLs given:

- Prioritized forwarding to components
(I think that's a non-recommended technique, as it either makes the
compound object highly dependent on the details of its constituents,
particularly if a message is understood by many constituents - but
anyway, here goes:) Any language that has good support for higher-order
functions can do this directly.
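(An editorial sketch in Python of how higher-order functions cover this case; the names make_forwarder, Logger and Store are hypothetical, not from the thread.)

```python
# Prioritized forwarding without a MOP: the compound object keeps its
# components in priority order and forwards a message to the first
# component that understands it.
def make_forwarder(components):
    def send(message, *args):
        for comp in components:                  # tried in priority order
            handler = getattr(comp, message, None)
            if callable(handler):
                return handler(*args)
        raise AttributeError(f"no component understands {message!r}")
    return send

class Logger:
    def log(self, text):
        return f"logged: {text}"

class Store:
    def save(self, text):
        return f"saved: {text}"

send = make_forwarder([Logger(), Store()])
```

Here send("save", "x") reaches the Store because the Logger does not understand "save".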

- Dynamic fields
Frankly, I don't understand why on earth one would want to have objects
with a variant set of fields. I could do the same easily by adding a
dictionary to the objects, and be done with it (and get the additional
benefit that the dictionary entries will never collide with a field name).
Conflating the name spaces of field names and dictionary keys might
offer some syntactic advantages (callers don't need to differentiate
between static and dynamic fields), but I fail to imagine any good use
for this all... (which may, of course, be lack of imagination on my
side, so I'd be happy to see anybody explain a scenario that needs
exactly this - and then I'll try to see how this can be done without MOP
*g*).
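(An editorial Python sketch of the dictionary approach described above; the Record class is hypothetical.)

```python
# Fixed fields stay ordinary attributes; variable ones live in a
# separate dictionary, so the two name spaces can never collide.
class Record:
    def __init__(self, name):
        self.name = name          # static field, part of the class
        self.extra = {}           # dynamic "fields", added at runtime

r = Record("index_html")
r.extra["rendered"] = True        # no class change needed
```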

- Dynamic protection (based on sender's class/type)
This is a special case of "multiple views" (implement protection by
handing out a view with a restricted subset of functions to those
classes - other research areas have called this "capability-based
programming").

- Multiple views
Again, in a language with proper handling for higher-order functions
(HOFs), this is easy: a view is just a record of accessor functions, and
a hidden reference to the record for which the view holds. (If you
really need that.)
Note that in a language with good HOF support, calls that go through
such records are syntactically indistinguishable from normal function
calls. (Such languages do exist; I know for sure that this works with
Haskell.)
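(An editorial Python sketch of a view as a record of accessor functions, as described above; read_only_view is a hypothetical name.)

```python
# A "view" is just a record of accessor functions closing over the
# underlying object; callers of the view cannot reach operations the
# view does not export.
def read_only_view(account):
    return {"balance": lambda: account["balance"]}   # no mutators exported

account = {"balance": 100}
view = read_only_view(account)
```

Calling view["balance"]() reads through to the live object, but the view offers no way to modify it.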

- Protocol matching
I simply don't understand what's the point with this: yes of course this
can be done using MOP, but where's the problem that's being simplified
with that approach?

- Collection of performance data
That's nonportable anyway, so it can be built right into the runtime,
and with fewer gotchas (if measurement mechanisms are integrated into the
runtime, they will rather break than produce bogus data - and I prefer a
broken instrument to one that will silently give me nonsense readings,
thank you).

- Result caching
Languages with good HOF support usually have a "memo" or "memoize"
function that does exactly this.
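(For a concrete instance: Python's standard library ships exactly such a combinator, functools.lru_cache, which wraps any function in a result cache.)

```python
from functools import lru_cache

# Memoization as a library function, no MOP required: repeated calls
# with the same argument are answered from the cache.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache, fib(30) recomputes subproblems exponentially; with it, each value is computed once.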

- Coercion
Well, of all things, this really doesn't need MOP to work well.

- Persistency
(and, as the original author forgot: network proxies - the issues are
similar)
Now here's a thing that indeed cannot be retrofitted to a language
without MOP.
(Well, performance counting can't be retrofitted as well, but that's
just a programmer's tool that I'd /expect/ to be part of the development
system. I have no qualms about MOP in the developer system, but IMHO it
should not be part of production code, and persistence and proxying for
remote objects are needed for running productive systems.)


For the first paper, this leaves me with a single valid application for
a MOP. At which point I can say that I can require that "any decent
language should have this built in": not in the sense that every
run-time system should include a working TCP/IP stack, but that every
run-time system should include mechanisms for marshalling and
unmarshalling objects (and quite many do).


On to the second paper (Brant/Foote/Johnson/Roberts).

- Image stripping
I.e. finding out which functions might be called by a given application.
While this isn't Smalltalk-specific, it's specific to dynamic languages,
so this doesn't count: finding the set of called functions is /trivial/
in a static language, since statically-typed languages don't usually
offer ways to construct function calls from lexical elements as typical
dynamic languages do.

- Class collaboration, interaction diagrams
Useful and interesting tools.
Of course, if the compiler is properly modularized, it's easy to write
them based on the string representation, instead of using reflective
capabilities.

- Synchronized methods, pre/postcondition checking
Here, the sole advantage of having an implementation in source code
instead of in the run-time system seems to be that no recompilation is
necessary if one wishes to change the status (method is synchronized or
not, assertions are checked or not).
Interestingly, this is not a difference between MOP and no MOP, it's a
difference between static and dynamic languages.
Even that isn't too interesting. For example, I have worked with Eiffel
compilers, and at least two of them do not require any recompilation if
you want to enable or disable assertion checking (plus, at least for one
compiler, it's possible to switch checking on and off on a per-program,
per-class, or even per-function basis), so this isn't the exclusive
domain of dynamic languages.
Of course, such things are easier to add as an afterthought if the
system is dynamic and such changes can be done with user code - but
since language and run-time system design are as much about giving power
as guarantees to the developer, and giving guarantees necessarily
entails restricting what a developer can do, I'm entirely unconvinced
that a dynamic language is the better way to do that.

- Multimethods
Well, I don't see much value in them anyway...


.... On to Andreas Paepcke's paper.
I found it more interesting than the other two because it clearly spells
out what MOPs are intended to be good for.

One of the main purposes, in Paepcke's view, is making it easier to
write tools. In fact reflective systems make this easier, because all
the tricky details of converting source code into an internal data
object have already been handled by the compiler.
On the other hand, I don't quite see why this should be more difficult
for a static language.
Of course, if the language designer "just wanted to get it to compile",
anybody who wants to write tools for the language has to rewrite the
parser and decorator, simply because the original tools are not built
for separating these phases (to phrase it in a polite manner). However,
in the languages where it's easy to "get it to compile" without
compromising modularity, I have seen lots of user-written tools, too. I
think the main difference is that when designing a run-time system for
introspection, designers are forced to do a very modular compiler design
- which is a Good Thing, but you can do a good design for a
non-introspective language just as well :)

In other words, I don't think that writing tools provides enough reason
for introspection: the goals can be attained in other ways, too.


The other main purpose in his book is the ability to /extend/ the
language (and, as should go without saying, without affecting code that
doesn't use the extensions).
He claims it's good for experimentation (to which I agree, but I
wouldn't want or need code for language experimentation in production code).

Oh, I see - that's already enough reasons by his book... not by mine.



Summary:
========

Most reasons given for the usefulness of a MOP are irrelevant. The
categories here are (in no particular order):
* Unneeded in a language without introspection (the argument becomes
circular)
* Easily replaced by good higher-order function support
* Programmer tools (dynamic languages tend to be better here, but that's
more of a historical accident: languages with a MOP are usually highly
dynamic, so a good compiler interface is a must - but nothing prevents
the designers of static languages from building their compilers with a
good interface, and in fact some static languages have rich tool
cultures just like the dynamic ones)

A few points have remained open, either because I misunderstood what the
respective author meant, or because I don't see any problem in handling
the issues statically, or because I don't see any useful application of
the mechanism. The uses include:
* Dynamic fields
* Protocol matching
* Coercion

And, finally, there's the list of things that can be done using MOP, but
where I think that they are better handled as part of the run-time system:
* (Un-)Marshalling
* Synchronization
* Multimethods

For (un-)marshalling, I think that this should be closed off and hidden
from the programmer's powers because it opens up all the implementation
details of all the objects. Anybody inspecting source code will have to
check the entire sources to be sure that a private field in a record is
truly private, and not accessed via the mechanisms that make user-level
implementation of (un-)marshalling possible.
Actually, all you need is a builtin pair of functions that convert some
data object from and to a byte stream; user-level code can then still
implement all the networking protocol layers, connection semantics etc.

For synchronization, guarantees are more important than flexibility. To
be sure that a system has no race conditions, I must be sure that the
locking mechanism in place (whatever it is) will work across all
modules, regardless of author. Making libraries interoperate that use
different locking strategies sounds like a nightmare to me - and if
everybody must use the same locking strategy, it should be part of the
language, not part of a user-written MOP library.
However, that's just a preliminary view; I'd be interested in hearing
reports from people who actually encountered such a situation (I
haven't, so I may be seeing problems where there aren't any).

For multimethods, I don't see that they should be part of a language
anyway - but that's a discussion for another thread that I don't wish to
repeat now (and this post is too long already).


Rambling mode OFF.

Regards,
Jo
 

Joachim Durchholz

Pascal said:
This was just one obvious example in which you need a workaround to make
the type system happy. There exist others.

Then give these examples, instead of presenting us with strawman examples.
I know what modern type systems do.

Then I don't understand your point of view.

Regards,
Jo
 

Joachim Durchholz

Pascal said:
Yes, because the need might arise to change the invariants at runtime,
and you might not want to stop the program and restart it in order just
to change it.

Then it's not an invariant.
Or the invariant is something like "foo implies invariant_1 and not foo
implies invariant_2", where "foo" is the condition that changes over the
lifetime of the object.

Invariants are, by definition, the properties of an object that will
always hold.


Or are you talking about system evolution and maintenance?
That would be an entirely new aspect in the discussion, and you should
properly forewarn us so that we know for sure what you're talking about.


Regards,
Jo
 

Craig Brozefsky

Joachim Durchholz said:
And, finally, there's the list of things that can be done using MOP,
but where I think that they are better handled as part of the run-time
system:
* (Un-)Marshalling
* Synchronization
* Multimethods

The MOP is an interface to the run-time system for common object
services. I do not understand your position that these would be
better handled by the run-time.
For (un-)marshalling, I think that this should be closed off and
hidden from the programmer's powers because it opens up all the
implementation details of all the objects.

What if I want to (un-)marshall from/to something besides a byte
stream, such as an SQL database? I don't want one of the object
services my system depends on to be so opaque because a peer thought I
would be better off that way. Then again, I have never understood the
desire to hide things in programming languages.
Anybody inspecting source code will have to check the entire sources
to be sure that a private field in a record is truly private, and
not accessed via the mechanisms that make user-level implementation
of (un-)marshalling possible.

If you look at the MOP in CLOS, you can use the slot-value-using-class
method to ensure that getting/setting the slot through any interface will
trigger the appropriate code. It does not matter whether the slot is
private or public, or whether they use SLOT-VALUE or an accessor. This is
also useful for transaction management.

The MOP is an interface to the run-time's object services.
 

Dennis Lee Bieber

Pascal Bourguignon fed this fish to the penguins on Thursday 23 October
2003 11:33 am:
The only untyped languages I know are assemblers. (ISTR that even
INTERCAL can't be labelled "untyped" per se.)

Are we speaking about assembler here?
REXX might qualify (hmmm, I think DCL is also untyped).


notstring = 5
string = "5"

what = string + notstring
when = notstring + string

say "5 + '5' (notstring + string)" when
say "'5' + 5 (string + notstring)" what

s. = "empty"
s.string = 2.78

n. = 3.141592654
n.notstring = "Who?"

say "s.5" s.5
say "s.'5'" s."5"
say "s.1" s.1
say "s.'2'" s."2"
say "s.string" s.string
say "s.notstring" s.notstring
say "n.string" n.string
say "n.notstring" n.notstring

[wulfraed@beastie wulfraed]$ rexx t.rx
5 + '5' (notstring + string) 10
'5' + 5 (string + notstring) 10
s.5 2.78
s.'5' empty5
s.1 empty
s.'2' empty2
s.string 2.78
s.notstring 2.78
n.string Who?
n.notstring Who?

Apparently literal strings are not allowed in the stem look-up,
resulting in the stem default of empty followed by the concatenated
literal.



--
 

Ralph Becket

Pascal Costanza said:
Read the literature on XP.

What, all of it?

Why not just enlighten me as to the error you see in my contention
about writing unit tests beforehand?
I am sorry, but in my book, assertions are automatically checked.

*But* they are not required.
*And* if they are present, they can only flag a problem at runtime.
*And* then at only a single site.
The same holds for assertions as soon as they are run by the test suite.

That is not true unless your test suite is bit-wise exhaustive.
...and I don't think you understand much about dynamic compilation. Have
you ever checked some not-so-recent-anymore work about, say, the HotSpot
virtual machine?

Feedback directed optimisation and dynamic FDO (if that is what you
are suggesting is an advantage of HotSpot) are an implementation
technology and hence orthogonal to the language being compiled.

On the other hand, if you are not referring to FDO, it's not clear
to me what relevance HotSpot has to the point under discussion.
You are only talking about micro-efficiency here. I don't care about
that, my machine is fast enough for a decent dynamically typed language.

Speedups (and resource consumption reduction in general) by (in many
cases) a factor of two or more constitute "micro-efficiency"?
Have you checked this?

Do you mean have I used a profiler to search for bottlenecks in programs
in a statically type checked language? Then the answer is yes.

Or do you mean have I observed a significant speedup when porting from
C# or Python to Mercury? Again the answer is yes.
Weak and dynamic typing is not the same thing.

Let us try to draw some lines and see if we can agree on *something*.

UNTYPED: values in the language are just bit patterns and all
operations, primitive or otherwise, simply twiddle the bits
that come their way.

DYNAMICALLY TYPED: values in the language carry type identifiers, but
any value can be passed to any function. Some built-in functions will
raise an exception if the type identifiers attached to their arguments
are of the wrong sort. Such errors can only be identified at runtime.

STATICALLY TYPED: the compiler carries out a proof that no value of the
wrong type will ever be passed to a function expecting a different type,
anywhere in the program. (Note that with the addition of a universal
type and a checked runtime dynamic cast operator, one can add dynamically
typed facilities to a statically typed language.)
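(An editorial Python sketch of the "dynamically typed" case defined above; the add function is hypothetical.)

```python
# Every value carries a type tag, any value may be passed anywhere,
# and a clash is detected only when a primitive inspects the tags.
def add(a, b):
    return a + b          # nothing is checked before the program runs

add(1, 2)                 # fine: both tags are int
try:
    add(1, "2")           # tags clash only when + executes
except TypeError as err:
    failure = str(err)
```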

The difference between an untyped program that doesn't work (it produces
the wrong answer) and a dynamically typed program with a type bug (it
may throw an exception) is so marginal that I'm tempted to lump them both
in the same boat.
No. The original question asked in this thread was along the lines of
why abandon static type systems and why not use them always. I don't
need to convince you that a proposed general solution doesn't always
work, you have to convince me that it always works.

Done: just add a universal type. See Mercury for example.
[...]
The burden of proof is on the one who proposes a solution.

What? You're the one claiming that productivity (presumably in the
sense of leading to a working, efficient, reliable, maintainable
piece of code) is enhanced by using languages that *do not tell you
at compile time when you've made a mistake*!

-- Ralph
 

kosh

- Dynamic fields
Frankly, I don't understand why on earth one would want to have objects
with a variant set of fields. I could do the same easily by adding a
dictionary to the objects, and be done with it (and get the additional
benefit that the dictionary entries will never collide with a field name).
Conflating the name spaces of field names and dictionary keys might
offer some syntactic advantages (callers don't need to differentiate
between static and dynamic fields), but I fail to imagine any good use
for this all... (which may, of course, be lack of imagination on my
side, so I'd be happy to see anybody explain a scenario that needs
exactly this - and then I'll try to see how this can be done without MOP
*g*).

From what I understand, Zope uses this extensively in how you do stuff with
the ZODB. For example, when rendering an object it looks for the closest
callable item called index_html. This means you can add an object to a
folder that is called index_html and is callable, and it just works. I have
a lot of objects whose variables are not defined in the code and can be
added at runtime. At least in Python you can replace a method with a
callable object, and this is very useful to do.

Overall when working with zope I can't imagine not doing it that way. It saves
a lot of time and it makes for very maintainable apps. You can view your
program as being transparently persistent so you override methods with
objects just like you normally would be inheriting from a class and then
overriding methods in it. I really like using an OODB for apps and one of the
interesting things is that you end up refactoring objects in your database
just like you would normally refactor code and it is pretty much the same
process.
 

Paul F. Dietz

Ralph said:
Here's the way I see it:
(1) type errors are extremely common;
(2) an expressive, statically checked type system (ESCTS) will identify
almost all of these errors at compile time;
(3) type errors flagged by a compiler for an ESCTS can pinpoint the source
of the problem whereas ad hoc assertions in code will only identify a
symptom of a type error;
(4) the programmer does not have to litter type assertions in a program
written in a language with an ESCTS;
(5) an ESCTS provides optimization opportunities that would otherwise
be unavailable to the compiler;
(6) there will be cases where the ESCTS requires one to code around a
constraint that is hard/impossible to express in the ESCTS (the more
expressive the type system, the smaller the set of such cases will be.)

However,

(7) Developing reliable software also requires extensive testing to
detect bugs other than type errors, and
(8) These tests will usually detect most of the bugs that static
type checking would have detected.

So the *marginal* benefit of static type checking is reduced, unless you
weren't otherwise planning to test your code very well.

BTW, is (3) really justified? My (admittedly old) experience with ML
was that type errors can be rather hard to track back to their sources.

Paul
 

Pascal Costanza

Joachim said:
Or are you talking about system evolution and maintenance?
That would be an entirely new aspect in the discussion, and you should
properly forewarn us so that we know for sure what you're talking about.

Did I forget to mention this in the specifications? Sorry. ;)

Yes, I want my software to be adaptable to unexpected circumstances.

(I can't give you a better specification, by definition.)

Pascal
 

Pascal Costanza

Ralph said:
What, all of it?

Why not just enlighten me as to the error you see in my contention
about writing unit tests beforehand?

Maybe we are talking at cross-purposes here. I didn't know about OCaml
not requiring target code to be present in order to have a test suite
acceptable by the compiler. I will need to take a closer look at this.
That is not true unless your test suite is bit-wise exhaustive.

Assertions cannot become out-of-date. If an assertion doesn't hold
anymore, it will be flagged by the test suite.
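(An editorial Python sketch of this point; average and test_average are hypothetical names.)

```python
# An assertion embedded in the code is re-checked on every run of the
# test suite, so unlike a comment it cannot silently drift out of date.
def average(xs):
    assert len(xs) > 0, "average of empty sequence"
    return sum(xs) / len(xs)

def test_average():
    assert average([1, 2, 3]) == 2
```

If average's precondition ever stops holding on a path the tests exercise, the suite flags it immediately.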
Feedback directed optimisation and dynamic FDO (if that is what you
are suggesting is an advantage of HotSpot) are an implementation
technology and hence orthogonal to the language being compiled.

On the other hand, if you are not referring to FDO, it's not clear
to me what relevance HotSpot has to the point under discussion.

Maybe we both understand language implementation, and it is irrelevant?
Speedups (and resource consumption reduction in general) by (in many
cases) a factor of two or more constitute "micro-efficiency"?

Yes. Since this kind of efficiency is just one of many factors when
developing software, it might not be the most important one and might be
outweighed by advantages a certain loss of efficiency buys you elsewhere.
The difference between an untyped program that doesn't work (it produces
the wrong answer) and a dynamically typed program with a type bug (it
may throw an exception) is so marginal that I'm tempted to lump them both
in the same boat.

Well, but that's the wrong perspective. The one that throws an exception
can be corrected and then continued exactly at the point in the
execution path where the exception was thrown.
[...]
The burden of proof is on the one who proposes a solution.

What? You're the one claiming that productivity (presumably in the
sense of leading to a working, efficient, reliable, maintainable
piece of code) is enhanced by using languages that *do not tell you
at compile time when you've made a mistake*!

No, other people are claiming that one should _always_ use static type
systems, and my claim is that there are situations in which a dynamic
type system is better.

If you claim that something (anything) is _always_ better, you better
have a convincing argument that _always_ holds.

I have never claimed that dynamic type systems are _always_ better.

Pascal
 

Dirk Thierbach

Right. As I said before, you need to reduce the expressive power of the
language.

Maybe that's where the problem is. One doesn't need to reduce the
"expressive power". I don't know your particular application, but what
you seem to need is the ability to dynamically change the program
execution. There's more than one way to do that. And MOPs (like
macros) are a powerful tool and sometimes quite handy, but it's also
easy to shoot yourself severely in the foot with MOPs if you're
not careful, and often there are better solutions than using MOPs (for
example, appropriate flexible datatypes).

I may be wrong, but I somehow have the impression that it is difficult
to see other ways to solve a problem if you haven't done it in that
way at least once. So you see that with different tools, you cannot do
it in exactly the same way as with the old tools, and immediately you
start complaining that the new tools have "less expressive power",
just because you don't see that you have to use them in a different
way. The "I can do lot of things with macros in Lisp that are
impossible to do in other languages" claim seems to have a similar
background.

I could complain that Lisp or Smalltalk have "less expressive power"
because I cannot declare algebraic datatypes properly, I don't have
pattern matching to use them efficiently, and there is no automatic
test generation (i.e., type checking) for my datatypes. But there
are ways to work around this, so when programming in Lisp or Smalltalk,
I do it in the natural way that is appropriate for these languages,
instead of wasting my time with silly complaints.
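(An editorial Python sketch of that kind of workaround; Leaf, Node and total are hypothetical names.)

```python
from dataclasses import dataclass

# Emulate an algebraic datatype with tagged classes, and emulate
# pattern matching with explicit isinstance dispatch. It works, but
# nothing checks that the dispatch is exhaustive the way a static
# type checker would.
@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    left: object
    right: object

def total(tree):
    if isinstance(tree, Leaf):
        return tree.value
    if isinstance(tree, Node):
        return total(tree.left) + total(tree.right)
    raise TypeError(f"not a tree: {tree!r}")
```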

The only way out is IMHO to learn as many languages as possible, and
to learn as many alternative styles of solving problems as possible.
Then pick the one that is appropriate, and don't say "this way has
the most expressive power, all others have less". In general, this will
be just wrong.

- Dirk
 

Kenny Tilton

Ralph said:
STATICALLY TYPED: the compiler carries out a proof that no value of the
wrong type will ever be passed to a function expecting a different type,
anywhere in the program.

Big deal. From Robert C. Martin:

http://www.artima.com/weblogs/viewpost.jsp?thread=4639

"I've been a statically typed bigot for quite a few years....I scoffed
at the smalltalkers who whined about the loss of flexibility. Safety,
after all, was far more important than flexibility -- and besides, we
can keep our software flexible AND statically typed, if we just follow
good dependency management principles.

"Four years ago I got involved with Extreme Programming. ...

"About two years ago I noticed something. I was depending less and less
on the type system for safety. My unit tests were preventing me from
making type errors. The more I depended upon the unit tests, the less I
depended upon the type safety of Java or C++ (my languages of choice).

"I thought an experiment was in order. So I tried writing some
applications in Python, and then Ruby (well known dynamically typed
languages). I was not entirely surprised when I found that type issues
simply never arose. My unit tests kept my code on the straight and
narrow. I simply didn't need the static type checking that I had
depended upon for so many years.

"I also realized that the flexibility of dynamically typed languages
makes writing code significantly easier. Modules are easier to write,
and easier to change. There are no build time issues at all. Life in a
dynamically typed world is fundamentally simpler.

"Now I am back programming in Java because the projects I'm working on
call for it. But I can't deny that I feel the tug of the dynamically
typed languages. I wish I was programming in Ruby or Python, or even
Smalltalk.

"Does anybody else feel like this? As more and more people adopt test
driven development (something I consider to be inevitable) will they
feel the same way I do? Will we all be programming in a dynamically
typed language in 2010?"

Lights out for static typing.

kenny
 

Dirk Thierbach

Pascal Costanza said:
No, other people are claiming that one should _always_ use static type
systems, and my claim is that there are situations in which a dynamic
type system is better.

If you claim that something (anything) is _always_ better, you better
have a convincing argument that _always_ holds.

I have never claimed that dynamic type systems are _always_ better.

To me, it certainly looked like you did in the beginning. Maybe your
impression that other people say that one should always use static
type systems is a similar misinterpretation?

Anyway, formulations like "A has less expressive power than B" are very
close to "B is always better than A". It's probably a good idea to
avoid such formulations if this is not what you mean.

- Dirk
 
