What is Expressiveness in a Computer Language

Joe Marshall

Marshall said:
I stand corrected: if one is using C and writing self-modifying
code, then one *can* zip one's pants.

Static proofs notwithstanding, I'd prefer a dynamic check just prior to
this operation.

I want my code to be the only self-modifying thing around here.
 
Joe Marshall

David said:
The problem with this is that from that point on, what you're running
is neither the old nor the new program, since its state may have been
influenced by the bug before you corrected it.

Yes. The hope is that it is closer to the new program than to the old.
I find it far simpler to just restart the program after correcting
anything. If this is too difficult, I change the design to make it
less difficult.

In the most recent case where I was doing this, I was debugging
transaction rollback that involved multiple database extents. It was
somewhat painful to set up a clean database to the point where I wanted
to try the rollback, and I wanted a pristine database for each trial so
I could examine the raw bits left by a rollback. It was pretty easy to
deal with simple errors in the debugger, so I chose to do that instead.
 
Pascal Costanza

David said:
This makes essentially all run-time errors (including assertion failures,
etc.) "type errors". It is neither consistent with, nor any improvement
on, the existing vaguely defined usage.

Nope. This is again a matter of getting the levels right.

Consider division by zero: appropriate arguments for division are
numbers, including the zero. The dynamic type check will typically not
check whether the second argument is zero, but will count on the fact
that the processor will raise an exception one level deeper.
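Pascal's point can be sketched in Java (class and method names here are illustrative): both arguments satisfy the type check simply by being ints, and zero only trips a check one level deeper, when the VM raises ArithmeticException at the actual division.

```java
public class DivByZero {
    public static int divide(int a, int b) {
        // Both arguments have already passed the type check: they are ints.
        return a / b; // the VM, not the type system, objects when b == 0
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 2)); // 5
        try {
            divide(10, 0);
        } catch (ArithmeticException e) {
            System.out.println("runtime error, not a type error: " + e.getMessage());
        }
    }
}
```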

This is perhaps easier to understand in user-level code. Consider the
following class definition:

class Person {
    String name;
    int age;

    void buyPorn() {
        if (this.age < 18) throw new AgeRestrictionException();
        ...
    }
}

The message p.buyPorn() is a perfectly valid message send and will pass
both static and dynamic type tests (given p is of type Person in the
static case). However, you will still get a runtime error.


Pascal
 
Chris Smith

Pascal Costanza said:
Consider division by zero: appropriate arguments for division are
numbers, including the zero. The dynamic type check will typically not
check whether the second argument is zero, but will count on the fact
that the processor will raise an exception one level deeper.

Of course zero is not appropriate as a second argument to the division
operator! I can't possibly see how you could claim that it is. The
only reasonable statement worth making is that there doesn't exist a
type system in widespread use that is capable of checking this.
This is perhaps easier to understand in user-level code. Consider the
following class definition:

class Person {
    String name;
    int age;

    void buyPorn() {
        if (this.age < 18) throw new AgeRestrictionException();
        ...
    }
}

The message p.buyPorn() is a perfectly valid message send and will pass
both static and dynamic type tests (given p is of type Person in the
static case).

It appears you've written the code above to assume that the type system
can't certify that age >= 18... otherwise, the if statement would not
make sense. It also looks like Java, in which the type system is indeed
not powerful enough to do that check statically. However, it sounds as
if you're claiming that it wouldn't be possible for the type system to
do this? If so, that's not correct. If such a thing were checked at
compile-time by a static type check, then failing to actually provide
that guarantee would be a type error, and the compiler would tell you
so.

Whether you'd choose to call an equivalent runtime check a "dynamic type
check" is a different matter, and highlights the fact that again there's
something fundamentally different about the definition of a "type" in
the static and dynamic sense.
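Chris's point can be illustrated even in plain Java, which cannot state "age >= 18" as a type directly, by reifying the evidence as a hypothetical witness type (the names Adult/AgeWitness are mine, not from the thread): once holding an Adult is the only way to call the method, the method body needs no dynamic check at all.

```java
public class AgeWitness {
    public static final class Adult {
        public final int age;
        private Adult(int age) { this.age = age; }
        // The only way to obtain an Adult is through this checked constructor,
        // so a value of type Adult *is* the proof that age >= 18.
        public static Adult of(int age) {
            if (age < 18) throw new IllegalArgumentException("age restriction");
            return new Adult(age);
        }
    }

    // No check inside: the type of the parameter carries the guarantee.
    public static String buyPorn(Adult customer) {
        return "sold to " + customer.age + "-year-old";
    }

    public static void main(String[] args) {
        System.out.println(buyPorn(Adult.of(42)));
    }
}
```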
 
Pascal Costanza

Chris said:
Of course zero is not appropriate as a second argument to the division
operator! I can't possibly see how you could claim that it is. The
only reasonable statement worth making is that there doesn't exist a
type system in widespread use that is capable of checking this.

....and this is typically not even checked at the stage where type tags
are checked in dynamically-typed languages. Hence, it is not a type
error. (A type error is always what you define to be a type error within
a given type system, right?)

Note, this example was in response to David's reply that my definition
turns every runtime error into a type error. That's not the case, and
that's all I want to say.
It appears you've written the code above to assume that the type system
can't certify that age >= 18... otherwise, the if statement would not
make sense.
Right.

It also looks like Java, in which the type system is indeed
not powerful enough to do that check statically.
Right.

However, it sounds as
if you're claiming that it wouldn't be possible for the type system to
do this?

No. I only need an example where a certain error is not a type error in
_some_ language. I don't need to make a universal claim here.
If so, that's not correct. If such a thing were checked at
compile-time by a static type check, then failing to actually provide
that guarantee would be a type error, and the compiler would tell you
so.

That's fine, but not relevant in this specific subplot.


Pascal
 
Pascal Bourguignon

Marshall said:
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.

Well, given Turing Machine equivalence...

I'd mention retrospective software. But you can always implement the
wanted retrospective features as a layer above the statically typed
language.

So the question is how much work the programmer needs to do to
implement a given program with static typing or with dynamic typing.

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable? I think there might be, but I've
never been able to find a solid example of one.

More than *never* typable, you want to identify some kind of software
that is not *economically* statically typable.

Was the software that was developed in non-statically-typed programming
languages costlier to develop than it would have been in a statically
typed programming language?
 
Chris Smith

Marshall said:
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.

You'd need to clarify what you mean by "use static typing". Clearly, if
you use forms of static typing that currently exist for Java, there's
considerable limitation in expressiveness. If you mean a hypothetical
"perfect" type system, then there would be no interesting programs you
couldn't write, but such a system may not be possible to implement, and
certainly isn't feasible.

So it seems to me that we have this ideal point at which it is possible
to write all correct or interesting programs, and impossible to write
buggy programs. Pure static type systems try to approach that point
from the conservative side. Pure dynamic systems (and I actually think
that design-by-contract languages and such apply here just as well as
type tagging) try to approach that point via runtime verification from
the liberal side. (No, this has nothing to do with George Bush or Noam
Chomsky... :) Either side finds that the closer they get, the more
creative they need to be to avoid creating development work that's no
longer commensurate with the gains involved (whether in safety or
expressiveness).

I think I am now convinced that the answer to the question of whether
there is a class of type errors in dynamically checked languages is the
same as the answer for static type systems: no. In both cases, it's
just a question of what systems may be provided for the purposes of
verifying (more or less conservatively or selectively) certain program
properties. Type tags or other features that "look like" types are only
relevant in that they encapsulate common kinds of constraints on program
correctness without requiring the programmer to express those
constraints in any more general purpose language.

As for what languages to use right now, we are interestingly enough back
where Xah Lee started this whole thread. All languages (except a few of
the more extreme statically typed languages) are Turing complete, so in
order to compare expressiveness, we'd need some other measure that
considers some factor or factors beyond which operations are
expressible.
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)

I'll note very briefly that the built-in compiler for the Eclipse IDE
has the very nice feature that when a particular method fails to
compile, it can still produce a result, replacing the implementation
of that method with one that just throws an Error. I've taken advantage
of that on occasion, though it doesn't allow the same range of options
as a language that will just go ahead and try the buggy operations.
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable? I think there might be, but I've
never been able to find a solid example of one.

This seems to depend on the specific concept of equivalence of programs
that you have in mind.
 
Dirk Thierbach

David Hopwood said:
Dirk Thierbach wrote:
That's interesting, but linear typing imposes some quite severe
restrictions on programming style. From the example of 'h' on page 2,
it's clear that the reason for the linearity restriction is just to
ensure polynomial-time termination, and is not needed if we only want
to prove termination.

Yes. It's just an example of what can be actually done with a typesystem.
I think you're overestimating the difficulty of the problem. The fact
that *in general* it can be arbitrarily hard to prove termination, can
obscure the fact that for large parts of a typical program, proving
termination is usually trivial.

Yes. The problem is the small parts where it's not trivial (and the
size of this part depends on the actual problem the program is trying
to solve). Either you make the language so restrictive that these
parts are hard (see your remark above) or impossible to write, or
you'll be left with a language where the compiler cannot prove
termination of those small parts, so it's not a "type system" in the
usual sense.

Of course one could write an optional tool that automatically proves
termination of the easy parts, and leaves the rest to the programmer,
but again, that's a different thing.

- Dirk
 
Ketil Malde

Marshall said:
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)

I keep hearing this, but I guess I fail to understand it. How
"partly-wrong" do you require the program to be?

During development, I frequently sprinkle my (Haskell) program with
'undefined' and 'error "implement later"' - which then lets me run the
implemented parts until execution hits one of the 'undefined's.

The "cost" of static typing for running an incomplete program is thus
simply that I have to name entities I refer to. I'd argue that this
is good practice anyway, since it's easy to see what remains to be
done. (Python doesn't seem to, but presumably a serious dynamically
typed system would warn about references to entities not in scope?)
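A rough Java analogue of Ketil's sprinkling of 'undefined' (the names here are illustrative): name the entity, stub the body, and the implemented parts still compile and run; the stub fails only if execution actually reaches it.

```java
public class Stubs {
    public static int implemented(int x) {
        return x * 2;
    }

    public static int implementLater(int x) {
        // Compiles and type-checks; fails only when executed.
        throw new UnsupportedOperationException("implement later");
    }

    public static void main(String[] args) {
        System.out.println(implemented(21)); // 42
        // implementLater(0); // would compile, but throw if uncommented and run
    }
}
```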

-k
 
Dr.Ruud

Chris Smith schreef:
So it seems to me that we have this ideal point at which it is
possible to write all correct or interesting programs, and impossible
to write buggy programs.

I think that is a misconception. Even at that ideal point it will be
possible (and easy) to write buggy programs. Gödel!
 
David Hopwood

Marshall said:
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?

In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.

In a statically typed, Turing-complete language that does not have a
"dynamic" type, it is always *possible* to express a dynamically typed
program, but this may require a global transformation of the program
or use of an interpreter -- i.e. the "Felleisen expressiveness" of the
language is reduced.
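The "dynamic" type David describes — a value paired with a type tag, inspected only through a type-safe typecase — can be sketched in Java (class and method names here are mine, for illustration):

```java
public class Dyn {
    private final Object value;
    private final Class<?> tag;

    private Dyn(Object value, Class<?> tag) {
        this.value = value;
        this.tag = tag;
    }

    // The explicit tagging construct: pair a value with its type tag.
    public static <T> Dyn tag(T value, Class<T> type) {
        return new Dyn(value, type);
    }

    // The type-safe typecase: yields the value only if the tag matches.
    public <T> T typecase(Class<T> wanted) {
        if (tag == wanted) return wanted.cast(value);
        throw new ClassCastException("tag mismatch: " + tag + " vs " + wanted);
    }

    public static void main(String[] args) {
        Dyn d = Dyn.tag("hello", String.class);
        System.out.println(d.typecase(String.class).length()); // 5
    }
}
```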
 
Marshall

Ketil said:
I keep hearing this, but I guess I fail to understand it. How
"partly-wrong" do you require the program to be?

During development, I frequently sprinkle my (Haskell) program with
'undefined' and 'error "implement later"' - which then lets me run the
implemented parts until execution hits one of the 'undefined's.

The "cost" of static typing for running an incomplete program is thus
simply that I have to name entities I refer to. I'd argue that this
is good practice anyway, since it's easy to see what remains to be
done. (Python doesn't seem to, but presumably a serious dynamically
typed system would warn about references to entities not in scope?)

I'll give you my understanding of it, but bear in mind that I only
program in statically typed languages, and I in fact do exactly what
you describe above: stub out unimplemented methods.

The issue is that, while stubbing out things may not be expensive,
it is not free either. There is some mental switching cost to being
in a state where one writes some code, wants to execute it, but
can't, because of type system constraints that are globally applicable
but not locally applicable to the task at hand, or to the specific
subprogram one is working on right at that moment.

Since I'm very used to doing it, it doesn't seem like an issue to
me, but programmers in dynamic languages complain bitterly
about this. It is my feeling that programming languages should
try to be inclusive, and since this feature is easily enough
accommodated (as a 'lenient' switch to execution), it ought
to be.


Marshall
 
Marshall

David said:
In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.

So, how does this "dynamic" type work? It can't simply be
the "any" type, because that type has no/few functions defined
on it. It strikes me that *when* exactly binding happens will
matter. In a statically typed language, it may be that all
binding occurs at compile time, and in a dynamic language,
it may be that all binding occurs at runtime. So you might
have to change the binding mechanism as well. Does the
"dynamic" type allow this?


Marshall
 
George Neuner

These are value conversions, not type conversions. Basically, when you
convert a floating point number to an integer, you are not simply
promising the compiler something about the type; you are actually asking
the compiler to convert one value to another -- i.e., see to it that
whatever this is now, it /becomes/ an integer by the time we're done.
This also results in a type conversion, but you've just converted the
value to the appropriate form. There is a narrowing value conversion,
but the type conversion is perfectly safe.

We're talking at cross purposes. I'm questioning whether a strong
type system can be completely static as some people here seem to
think. I maintain that it is simply not possible to make compile time
guarantees about *all* runtime behavior and that, in particular,
narrowing conversions will _always_ require runtime checking.

If you mean "my compiler can't", then this is probably the case. If you
mean "no possible compiler could", then I'm not sure this is really very
likely at all.

Again, the discussion is about narrowing the result. It doesn't
matter how much the compiler knows about the ranges. When the
computation mixes types, the range of the result can only widen as the
compiler determines the types involved.
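George's point in miniature, using Java's own checked narrowing (a sketch, not the only way to do it): the widening from int to long is statically safe, but narrowing back down cannot in general be proven safe at compile time, so `Math.toIntExact` defers to a runtime check.

```java
public class Narrow {
    public static int narrow(long wide) {
        // Throws ArithmeticException at run time if the value doesn't fit.
        return Math.toIntExact(wide);
    }

    public static void main(String[] args) {
        System.out.println(narrow(7L)); // 7
        try {
            narrow(1L << 40);
        } catch (ArithmeticException e) {
            System.out.println("overflow caught at run time");
        }
    }
}
```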

George
 
Duane Rettig

Ketil Malde said:
I keep hearing this, but I guess I fail to understand it. How
"partly-wrong" do you require the program to be?

This conclusion is false. To be "wrong", whether partly or fully,
a program needs to specifically not conform to the requirements that
the programmer gives it. You are assuming that only a completed
program can be right. Let me give a counterexample:

Consider this part of a CL program:

CL-USER(1): (defun foo (x)
              (declare (optimize speed (safety 0) (debug 0))
                       (fixnum x))
              (bar (the fixnum (1+ x))))
FOO
CL-USER(2):

This is of course not a complete program, because the function BAR is
not yet defined. If I try to run it, it will of course get an error.
But does that require then that I call this a "partly wrong" program?
No, of course not; it is not a program; it is a function, and for my
purposes it is completely correct (I've debugged it already :)

So what can we do with an incomplete program? Nothing? Well, no. Of
course, we can't run it (or can we?). But even before talking about
running the program, we can at least do some reasoning about it:

1. It seems to have some static type declarations (in a dynamic
language? oh my :).

2. The 1+ operation limits the kinds of operations that would be
acceptable, even in the absence of static type declarations.

3. The declarations can be ignored by the CL implementation, so #2
might indeed come into play. I ensured that the declarations would
be "trusted" in Allegro CL, by declaring high speed and low safety.

4. I can compile this function. Note that I get a warning about the
incompleteness of the program:

CL-USER(2): (compile 'foo)
Warning: While compiling these undefined functions were referenced: BAR.
FOO
NIL
NIL
CL-USER(3):

5. I can disassemble the function. Note that it is a complete
function, despite the fact that BAR is undefined:

CL-USER(3): (disassemble 'foo)
;; disassembly of #<Function FOO>
;; formals: X
;; constant vector:
0: BAR

;; code start: #x406f07ec:
0: 83 c0 04 addl eax,$4
3: 8b 5e 12 movl ebx,[esi+18] ; BAR
6: b1 01 movb cl,$1
8: ff e7 jmp *edi
CL-USER(4):

6. I can _even_ run the program! Yes, I get an error:

CL-USER(4): (foo 10)
Error: attempt to call `BAR' which is an undefined function.
[condition type: UNDEFINED-FUNCTION]

Restart actions (select using :continue):
0: Try calling BAR again.
1: Return a value instead of calling BAR.
2: Try calling a function other than BAR.
3: Setf the symbol-function of BAR and call it again.
4: Return to Top Level (an "abort" restart).
5: Abort entirely from this (lisp) process.
[1] CL-USER(5):

7. But note first that I have many options, including retrying the
call to BAR. What was this call to BAR? I can reason about that as
well, by getting a backtrace:

[1] CL-USER(5): :zoom
Evaluation stack:

(ERROR #<UNDEFINED-FUNCTION @ #x406f3dfa>)
->(BAR 11)
[... EXCL::%EVAL ]
(EVAL (FOO 10))
(TPL:TOP-LEVEL-READ-EVAL-PRINT-LOOP)
(TPL:START-INTERACTIVE-TOP-LEVEL
#<TERMINAL-SIMPLE-STREAM [initial terminal io] fd 0/1 @ #x40120e5a>
#<Function TOP-LEVEL-READ-EVAL-PRINT-LOOP> ...)
[1] CL-USER(6):

8. If I then define BAR, with or without compiling it, and then take
that option to retry the call:

[1] CL-USER(6): (defun bar (x) (1- x))
BAR
[1] CL-USER(7): :cont
10
CL-USER(8):

Hmm, I was able to complete the program dynamically? Who ever heard
of doing that? :)
During development, I frequently sprinkle my (Haskell) program with
'undefined' and 'error "implement later"' - which then lets me run the
implemented parts until execution hits one of the 'undefined's.

Why do it manually? And what do you do when you've hit the undefined?
Start the program over?
The "cost" of static typing for running an incomplete program is thus
simply that I have to name entities I refer to. I'd argue that this
is good practice anyway, since it's easy to see what remains to be
done.

Good practice, yes, but why not have the language help you to practice
it?
 
Greg Buchholz

Chris said:
Very impressive. It looks right to me and simple enough to
understand. I must find the time to learn a modern FP language. Can
you write a fold for this that prints the data as a binary tree of
triples? I have to believe it isn't that hard....

{- Refactoring this as a fold is left as an exercise to the reader -}

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark =
  (Cons 42 (Cons 3.14 (Cons "abc"
    (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
      (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4))
        (Cons (("w","x"),("y","z")) Nil)))))))))

main = print (toTree clark)

data Tree a = Empty | Node a (Tree a) (Tree a) deriving Show

toTree :: Clark a b c -> Tree (a,b,c)
toTree (Cons x (Cons y (Cons z rest)))
    = Node (x,y,z)
           (toTree (fst (lift_c rest)))
           (toTree (snd (lift_c rest)))
toTree _ = Empty

lift_c :: Clark (a,a) (b,b) (c,c) -> (Clark a b c, Clark a b c)
lift_c Nil = (Nil,Nil)
lift_c (Cons (x,y) rest) = (Cons x (fst (lift_c rest)),
                            Cons y (snd (lift_c rest)))
 
Joe Marshall

Marshall said:
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.

It would depend on the type system, naturally.

It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))

(defun blackhole (argument)
  (declare (ignore argument))
  #'blackhole)

But wait a sec. It seems that these were examples I invented in
response to the same question from you!

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?

Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.
The C++ type system is Turing complete, although in practical terms
it limits how much processing power it will spend on types at
compile time.

I think templates only have to expand to seven levels, so you are
severely limited here.
I don't think so. Even with a Turing complete type system, a program's
runtime behavior is still something different from its static behavior.
(This is the other side of the "types are not tags" issue--not only
is it the case that there are things static types can do that tags
can't, it is also the case that there are things tags can do that
static types can't.)

I agree. The point of static checking is to *not* run the program. If
the type system gets too complicated, it may be a de-facto interpreter.
 
Diez B. Roggisch

The C++ type system is Turing complete, although in practical terms
I think templates only have to expand to seven levels, so you are
severely limited here.

You can adjust the iteration depth. However, since Turing-completeness
implies the potential to create infinite loops, the compiler has to
enforce _some_ boundary. Otherwise the user hitting C-c will do :)

Diez
 
QCD Apprentice

Joe said:
It would depend on the type system, naturally.

It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))

(defun blackhole (argument)
  (declare (ignore argument))
  #'blackhole)

But wait a sec. It seems that these were examples I invented in
response to the same question from you!



Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.

Sorry, but can you elaborate on this last point a little? I think I
missed something.
 
David Hopwood

Marshall said:
So, how does this "dynamic" type work?

It can't simply be the "any" type, because that type has no/few
functions defined on it.

It isn't. From the abstract of the above paper:

[...] even in statically typed languages, there is often the need to
deal with data whose type cannot be determined at compile time. To handle
such situations safely, we propose to add a type Dynamic whose values are
pairs of a value v and a type tag T where v has the type denoted by T.
Instances of Dynamic are built with an explicit tagging construct and
inspected with a type safe typecase construct.

"Gradual typing" as described in
<http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf> is
another alternative. The difference between gradual typing and a
"dynamic" type is one of convenience rather than expressiveness --
gradual typing does not require explicit tagging and typecase constructs.
 
