Chris Uppal said:
If I understand your position correctly, wouldn't you be pretty much forced to
reject the idea of the length of a Java array being part of its type?
I've since abandoned any attempt to be picky about use of the word
"type". That was a mistake on my part. I still think it's legitimate
to object to statements of the form "statically typed languages X, but
dynamically typed languages Y", in which it is implied that Y is
distinct from X. When I used the word "type" above, I was adopting the
working definition of a type in the dynamic sense. That is, I'm asking
whether statically typed languages can also be said to have dynamic
types, and it's pretty clear to me that they do.
If you
want to keep the word "type" bound to the idea of static analysis, then --
since Java doesn't perform any size-related static analysis -- the size of a
Java array cannot be part of its type.
Yes, I agree. My terminology has been shifting constantly throughout
this thread as I learn more about how others are using terms. I realize
that probably confuses people, but my hope is that this is still
superior to stubbornly insisting that I'm right.
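For concreteness, the Java situation you describe looks roughly like
this (a throwaway sketch; the names are made up):

    public class ArrayLengthDemo {
        // Nothing in this signature can say "an int[] of length >= 3";
        // the length is simply not part of the static type.
        static void printThird(int[] xs) {
            System.out.println(xs[2]);   // bounds-checked at runtime only
        }

        public static void main(String[] args) {
            int[] large = new int[10];   // static type: int[]
            int[] small = new int[2];    // static type: also just int[]

            printThird(large);   // fine
            printThird(small);   // compiles identically, but throws
                                 // ArrayIndexOutOfBoundsException at runtime
        }
    }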
That's assuming that you would want to keep the "type" connected to the actual
type analysis performed by the language in question. Perhaps you would prefer
to loosen that and consider a different (hypothetical) language (perhaps
producing identical bytecode) which does do compile-time size analysis.
In the static sense, I think it's absolutely critical that "type" is
defined in terms of the analysis done by the type system. Otherwise,
you miss the definition entirely. In the dynamic sense, I'm unsure; I
don't have any kind of deep understanding of what's meant by "type" in
this sense. Certainly there are somewhat common cross-language
definitions of "type" in use.
But then you get into an area where you cannot talk of the type of a value (or
variable) without relating it to the specific type system under discussion.
Which is entirely what I'd expect in a static type system.
Personally, I would be quite happy to go there -- I dislike the idea that a
value has a specific inherent type.
Good!
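Roughly the picture I have in mind, put in Java terms (only a sketch):
the answer to "what is the type of this value?" already depends on which
of the two systems you consult.

    public class WhichType {
        public static void main(String[] args) {
            // One value, two different answers to "what is its type?",
            // depending on which system you ask.
            Object o = "hello";

            // Java's static checker knows the expression `o` only as Object:
            // o.length();           // would not compile

            // The runtime view sees the class tag String on the same value,
            // so a checked cast (a runtime check) recovers it:
            String s = (String) o;
            System.out.println(s.length() + " " + o.getClass().getSimpleName());
        }
    }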
It would be interesting to see what a language designed specifically to support
user-defined, pluggable, and perhaps composable type systems would look like.
Presumably the syntax and "base" semantics would be very simple, clean, and
unrestricted (like Lisp, Smalltalk, or Forth -- not that I'm convinced that any
of those would be ideal for this), with a defined result for any possible
sequence of operations. The type system(s) used for a particular run of the
interpreter (or compiler) would effectively reduce the space of possible
sequences. For instance, one could have a type system which /only/ forbade
dereferencing null, or another with the job of ensuring that mutability
restrictions were respected, or a third which implemented access control...
You mean in terms of a practical programming language? If not, then
lambda calculus is used in precisely this way for the static sense of
types.
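I don't know of a practical language built quite that way, but the shape
of the idea can at least be sketched. Here's a toy (entirely hypothetical)
version in Java, where each "type system" is an independent, composable
check over a sequence of base operations, and the checks chosen for a
given run shrink the space of programs allowed to execute:

    import java.util.List;
    import java.util.Optional;

    public class PluggableChecks {

        // The "base" language: a tiny, unrestricted sequence of operations.
        enum Op { NEW_REF, WRITE_NULL, DEREF, FREEZE, WRITE_VALUE }

        // A pluggable "type system": it either accepts a program or explains
        // why it rejects it.  Checkers are independent of one another.
        interface Checker {
            Optional<String> check(List<Op> program);
        }

        // One checker whose only job is to forbid dereferencing null.
        static final Checker NULL_CHECKER = program -> {
            boolean maybeNull = false;
            for (Op op : program) {
                if (op == Op.WRITE_NULL) maybeNull = true;
                if (op == Op.WRITE_VALUE) maybeNull = false;
                if (op == Op.DEREF && maybeNull)
                    return Optional.of("possible null dereference");
            }
            return Optional.empty();
        };

        // Another, unrelated checker: no writes after a FREEZE.
        static final Checker MUTABILITY_CHECKER = program -> {
            boolean frozen = false;
            for (Op op : program) {
                if (op == Op.FREEZE) frozen = true;
                if (frozen && (op == Op.WRITE_NULL || op == Op.WRITE_VALUE))
                    return Optional.of("write to frozen reference");
            }
            return Optional.empty();
        };

        public static void main(String[] args) {
            List<Op> program =
                List.of(Op.NEW_REF, Op.WRITE_NULL, Op.FREEZE, Op.WRITE_VALUE, Op.DEREF);

            // The checkers chosen for this run jointly reduce the space of
            // programs that are allowed to execute.
            for (Checker checker : List.of(NULL_CHECKER, MUTABILITY_CHECKER)) {
                checker.check(program)
                       .ifPresent(reason -> System.out.println("rejected: " + reason));
            }
        }
    }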
But then, I don't see a semantically critical distinction between such space
reduction being done at compile time vs. at runtime. Doing it at compile time
could be seen as an optimisation of sorts (with downsides to do with early
binding etc.). That's particularly clear if the static analysis is /almost/
able to prove that <some sequence> is legal (by its own rules) but has to make
certain assumptions in order to construct the proof. In such a case the
compiler might insert a few runtime checks to ensure that its assumptions were
valid, but do most of its checking statically.
I think Marshall got this one right. The two are accomplishing
different things. In one case (the dynamic case) I am safeguarding
against negative consequences of the program behaving in certain
nonsensical ways. In the other (the static case) I am proving theorems
about the impossibility of this nonsensical behavior ever happening.
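To pick a concrete (and well-worn) Java instance of the hybrid case you
describe, where the static proof rests on an assumption backed by an
inserted runtime check: array covariance. A minimal sketch (class name
made up):

    public class CovarianceCheck {
        public static void main(String[] args) {
            String[] strings = new String[1];

            // Statically fine: String[] is treated as a subtype of Object[].
            Object[] objects = strings;

            // Also statically fine: an Integer is an Object.  But the static
            // "proof" rested on an assumption the compiler couldn't discharge,
            // so the VM checks every array store at runtime; this line throws
            // ArrayStoreException.
            objects[0] = Integer.valueOf(42);
        }
    }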
You mention static typing above as a kind of optimization of dynamic
typing; that is certainly one possible application of these theorems.
In some sense, though, it is interesting in its own right to know that
these theorems have been proven. Of course, there are important doubts
to be had about whether these are the theorems we wanted to prove in the
first place, and whether the effort of proving them was worth the
additional confidence I have in my software systems.
I acknowledge those questions. I believe they are valid. I don't know
the answers. As an intuitive judgement call, I tend to think that
knowing the correctness of these things is of considerable benefit to
software development, because it means that I don't have as much to
think about at any one point in time. I can validly make more
assumptions about my code and KNOW that they are correct. I don't have
to trace as many things back to their original source in a different
module of code, or hunt down as much documentation. I also, as a
practical matter, get development tools that are more powerful.
(Whether it's possible to create the same for a dynamically typed
language is a potentially interesting discussion; but as a practical
matter, no matter what's possible, I still have better development tools
for Java than for JavaScript when I do my job.)
In the end, though, I'm just not very interested in them at the moment.
For me, as a practical matter, choices of programming language are
generally dictated by more practical considerations. I do basically all
my professional "work" development in Java, because we have a gigantic
existing software system written in Java and no time to rewrite it. On
the other hand, I do like proving theorems, which means I am interested
in type theory; if that type theory relates to programming, then that's
great! That's probably not the thing to say to ensure that my thoughts
are relevant to the software development "industry", but it's
nevertheless the truth.