Pascal Costanza said:
> I think you are confusing things here. It gets much clearer when you
> separate compilation/interpretation from type checking, and see a static
> type checker as a distinct tool.
> The invariants that you write, or that are inferred by the type checker,
> are expressions in a domain-specific language for static program
> analysis. You can only increase the expressive power of that
> domain-specific language by adding a more elaborate static type system.
> You cannot increase the expressive power of the language that it reasons
> about.
Sorry, but that reply of yours somewhat indicates that you haven't really
used modern type systems seriously.
All decent type systems allow you to define your own types. You can express
any domain-specific abstraction you want in types. Hence the type language
gives you additional expressive power wrt the problem domain.
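As a minimal sketch of that point (the names `Metres`, `Seconds`, and `speed` are illustrative, not from any particular library), a plain `newtype` already turns a domain distinction into a type distinction, so confusing the two quantities becomes a compile-time error:

```haskell
-- Two newtypes over Double: same representation, distinct types.
newtype Metres  = Metres  Double deriving (Show, Eq)
newtype Seconds = Seconds Double deriving (Show, Eq)

-- The signature encodes which argument is which; swapping them
-- is rejected by the type checker, not discovered at run time.
speed :: Metres -> Seconds -> Double
speed (Metres d) (Seconds t) = d / t

main :: IO ()
main = print (speed (Metres 100) (Seconds 10))  -- prints 10.0
-- speed (Seconds 10) (Metres 100) would be a type error
```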
> An increase of expressive power of the static type checker decreases the
> expressive power of the target language, and vice versa.
That's a contradiction, because the type system is part of the "target"
language. You cannot separate them, because the type system is more than
just a static analysis phase - you can program it.
> As a sidenote, here is where Lisp comes into the game: Since Lisp
> programs can easily reason about other Lisp programs, because there is
> no distinction between programs and data in Lisp, it should be pretty
> straightforward to write a static type checker for Lisp programs, and
> include them in your toolset.
It is not, because Lisp hasn't been designed with types in mind. It is
pretty much folklore that retrofitting a type system onto an arbitrary
language will not work properly. For example, Lisp makes no distinction
between tuples and lists, which is crucial for type inference.
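To illustrate the distinction being appealed to (a sketch, not a claim about any particular Lisp): in Haskell a tuple has a fixed number of components, each with its own type, while a list has arbitrary length and a single element type. Inference exploits exactly this.

```haskell
-- A pair: fixed length two, heterogeneous component types.
pair :: (Int, String)
pair = (1, "one")

-- A list: any length, but every element is an Int.
list :: [Int]
list = [1, 2, 3]

-- 'swap' is typed uniformly in both component types of any pair;
-- no such type could be inferred if pairs and lists were conflated.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = print (swap pair)  -- prints ("one",1)
```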
> It should also be relatively straightforward to make this a relatively
> flexible type checker for which you can increase/decrease the level of
> required conformance to the (a?) type system.
> This would mean that you could have the benefits of both worlds: when
> you need static type checking, you can add it. You can even enforce it
> in a project, if the requirements are strict in this regard in a certain
> setting. If the requirements are not so strict, you can relax the static
> type soundness requirements, or maybe even go back to dynamic type
> checking.
I don't believe in soft typing, since it cannot give you the same guarantees
as strong typing. If you want to mix static and dynamic typing, having
static typing as the default and *explicit* escapes to dynamic typing is the
only sensible route, IMNSHO. Otherwise, all the invariants and guarantees
typing gives you are lost.
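Haskell's `Data.Dynamic` is one concrete instance of such an explicit escape: values enter the dynamic world via `toDyn` and leave it via `fromDynamic`, which fails with `Nothing` on a type mismatch rather than silently weakening the static guarantees elsewhere. A small sketch:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list is only possible behind the explicit
-- Dynamic boundary; the rest of the program stays statically typed.
heterogeneous :: [Dynamic]
heterogeneous = [toDyn (1 :: Int), toDyn "two", toDyn True]

-- Projecting back out is a checked, partial operation.
asInt :: Dynamic -> Maybe Int
asInt = fromDynamic

main :: IO ()
main = print (map asInt heterogeneous)  -- prints [Just 1,Nothing,Nothing]
```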
> Overloading relies on static typing? This is news to me. What do you mean?
If you want to have extensible overloading then static types are the only
way I know for resolving it. Witness Haskell for example. It has a very
powerful overloading mechanism (for which the term 'overloading' actually is
an understatement). It could not possibly work without static typing, which
is obvious from the fact that Haskell does not even have an untyped
semantics.
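A small sketch of why (the class `Default` is an illustrative stand-in, not a standard library class): with type classes, the very same expression denotes different computations depending on the type the context demands, so resolution needs the static types.

```haskell
-- One overloaded name, resolved by the *result* type.
class Default a where
  def :: a

instance Default Int  where def = 0
instance Default Bool where def = False

main :: IO ()
main = do
  print (def :: Int)   -- prints 0
  print (def :: Bool)  -- prints False
-- 'print def' alone is rejected as ambiguous: without a static type,
-- nothing determines which instance to run.
```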
Erasing type information from a program that uses type abstraction to
guarantee certain postconditions will invalidate those postconditions. So
you get a program with a different meaning. It expresses something
different, so the types it contained obviously had some expressive power.
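A sketch of the abstraction point (the `NE` type and its helpers are illustrative): if a module hides the constructor of a non-empty-list type and exports only a checking smart constructor, then every value of that type provably has an element, so its head function needs no `Maybe`. Erase the types and that invariant is gone - nothing stops an empty list from reaching `neHead`.

```haskell
-- Constructor NE is deliberately not exported, only the type.
module Main (main, NE, mkNonEmpty, neHead) where

newtype NE a = NE [a]

-- The only way to build an NE checks the invariant once, up front.
mkNonEmpty :: [a] -> Maybe (NE a)
mkNonEmpty [] = Nothing
mkNonEmpty xs = Just (NE xs)

-- Total by construction: the type is the proof of non-emptiness.
neHead :: NE a -> a
neHead (NE (x:_)) = x
neHead (NE [])    = error "unreachable: NE is never empty"

main :: IO ()
main = print (fmap neHead (mkNonEmpty [1, 2, 3 :: Int]))  -- prints Just 1
```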
Erasing type information from a program that uses overloading simply makes
it ambiguous, i.e. takes away any meaning at all. So the types definitely
expressed something relevant.
- Andreas