Jürgen Exner
Scott said:So you claim Java and Objective C are "simply bugs in the theory."
They are certainly off topic in CLPM.
jue
Chris said:It seems to me that most (all? by definition??) kinds of reasoning where we
want to invoke the word "type" tend to have a form where we reduce values (and
other things we want to reason about) to equivalence classes[*] w.r.t. the
judgements we wish to make, and (usually) enrich that structure with
more-or-less stereotypical rules about which classes are substitutable for
which other ones. So that once we know what "type" something is, we can
short-circuit a load of reasoning based on the language semantics, by
translating into type-land, where we already have a much simpler calculus to
guide us to the same answer.
(Or representative symbols, or... The point is just that we throw away the
details which don't affect the judgements.)
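Chris's picture of types as equivalence classes plus substitutability rules can be sketched in a few lines. This is a toy model, not any particular language's type system: the tags, the SUBTYPE relation, and the operation signatures are all invented for illustration.

```python
# Toy "type-land": judge an operation's validity using only type tags
# and a substitutability relation, never looking at the values themselves.

def tag(v):
    """Reduce a value to its equivalence class (a type tag)."""
    if isinstance(v, bool):   # check bool before int: bool subclasses int
        return "Bool"
    if isinstance(v, int):
        return "Int"
    if isinstance(v, str):
        return "Str"
    raise TypeError(f"no tag for {v!r}")

# Stereotypical rules about which classes may stand in for which others;
# here Int is substitutable wherever a Num is expected.
SUBTYPE = {("Int", "Num"), ("Int", "Int"), ("Bool", "Bool"),
           ("Str", "Str"), ("Num", "Num")}

# Operation signatures live entirely in type-land.
SIGNATURES = {"negate": (("Num",), "Num"),
              "concat": (("Str", "Str"), "Str")}

def well_typed(op, *args):
    """Short-circuit the semantic reasoning: consult only the tags."""
    params, _result = SIGNATURES[op]
    return len(params) == len(args) and all(
        (tag(a), p) in SUBTYPE for a, p in zip(args, params))
```

With this, `well_typed("negate", 3)` holds because Int substitutes for Num, while `well_typed("concat", "a", 3)` fails — the judgement is made from the classes alone, which is exactly the "much simpler calculus" Chris describes.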
Chris said:The immutability comes from the fact (perhaps implicit in these
textbooks, or perhaps they are not really texts on formal type theory)
that types are assigned to expressions,
Gabriel said:(e-mail address removed)-sb.de writes:
| Gabriel Dos Reis wrote:
| > |
| > | (Unfortunately, you can hardly write interesting programs in any safe
| > | subset of C.)
| >
| > Fortunately, some people do, for a living.
|
| I don't think so. Maybe the question is what a "safe subset" consists
| of. In my book, it excludes all features that are potentially unsafe.
If you equate "unsafe" with "potentially unsafe", then you have
changed gear and redefined things on the fly, and things that could
be given sense before cease to have meaning. I decline to follow
such an uncertain, extreme path.
I would suggest you give more thought to the claims made in
http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00298.html
Scott said:So you claim Java and Objective C are "simply bugs in the theory."
Joachim Durchholz writes:
No. I'm not sure about Pascal,
but (Standard) ML surely has none.
NLFFI?
Same with Haskell as defined by its spec.
OCaml has a couple of clearly
marked unsafe library functions, but no unsafe feature in the language
semantics itself.
Tautology. Every language is "safe unless you use unsafe constructs".
(Unfortunately, you can hardly write interesting programs in any safe
subset of C.)
Huh? There is a huge, fundamental difference: namely whether a type
system is sound or not.
I don't think so. Maybe the question is what a "safe subset" consists
of. In my book, it excludes all features that are potentially unsafe.
Pascal said:It's also relevant how straightforward it is to distinguish between safe
and unsafe code, how explicit you have to be when you use unsafe code,
how likely it is that you accidentally use unsafe code without being
aware of it, what the generally accepted conventions in a language
community are, etc. pp.
That language is not a subset; if anything, it's the other way round,
but I'd say they are rather incomparable. That is, they are different
languages.
I have to agree.
\sarcasm One step further, and somebody starts calling C a "latently
memory-safe language", because a real programmer "knows" that his code
is in a safe subset... And where he is wrong, dynamic memory page
protection checks will guide him.
Joachim Durchholz said:That doesn't *define* what's a type or what isn't!
If it's impossible to assign types to all expressions of a program in a
language, that does mean that there's no useful type theory for the
program, but it most definitely does not mean that there are no types in
the program.
I can still sensibly talk about sets of values, sets of allowable
operations over each value, and about relationships between inputs and
outputs of these operations.
So programs have types, even if they don't have a static type system.
Q.E.D.
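Joachim's point is easy to demonstrate in a dynamically typed language: even with no static type system, every value belongs to a definite set with definite allowable operations, and the runtime enforces them. A minimal Python illustration (the helper name and the candidate-operation list are mine):

```python
# Even without static checking, each runtime value has a type: a set it
# belongs to, and operations that are allowed or rejected for it.

def allowed_operations(v):
    """The operations this value actually supports, discovered at runtime."""
    candidates = ["__add__", "__len__", "upper", "__invert__"]
    return {name for name in candidates if hasattr(v, name)}

# The sets of values differ, and so do their allowable operations:
ops_int = allowed_operations(42)      # addition, bitwise invert, ...
ops_str = allowed_operations("hi")    # concatenation, length, upper, ...

# The runtime enforces these sets: an illegal operation is rejected.
try:
    42 + "hi"            # no rule makes these two classes compatible
    crossed_types = False
except TypeError:
    crossed_types = True
```

Nothing here was declared in advance, yet we can still talk sensibly about the type of each value — which is the distinction between "the program has types" and "the language has a static type system."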
Regards,
Jo
Chris said:At the risk of injecting too much irrelevant philosophy into the
discussion, I will with great trepidation reply.
First, in the abstract: No, understanding is approximating.
Agreed.
The world is inherently vague.
We make false symbolic models of the world which
are consistent, but at some level they do not reflect reality,
Yes...
because reality isn't consistent.
What?!
Only by abstracting away the inherent
infinite amount of subtlety present in the real universe can we come to
comprehensible models.
Those models can be consistent, but they are
not the universe. The models, in their consistency, prove things which
are not true about the real universe.
Now in the concrete: In my work, productivity is ultimately important.
Therefore, we live by the 80-20 rule in numerous variations. One of
the things we do to achieve productivity is simplify things. In fact,
we are more interested in an unsound simplification that is right 80%
of the time but takes only 20% of the effort to compute than in a
completely sound (100% right) model which takes 100% of the time to
compute (which is 5 times as long). We are playing the probabilities.
It's not that we don't respect the sound underpinning, the model which
is consistent and establishes certain truths. However, what we want
is the rule of thumb which is far simpler and approximates the sound
model with reasonable accuracy. In particular, we accept two types of
unsoundness in the model. One, we accept a model which gives wrong
answers which are detectable. We code tests to catch those cases, and
use a different model to get the right answer. Two, we accept a model
which gets the right answer only when over-provisioned. For example,
if we know a loop will occasionally not converge in the n steps that
it theoretically should, we will run the loop for n+m steps until the
approximation also converges--even if that requires allocating m more
copies of some small element than are theoretically necessary. A
small waste of a small resource is OK if it saves a waste of a more
critical resource--the most critical resource of all being our project
timeliness.
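Chris's two kinds of accepted unsoundness can be sketched concretely. This is an invented example, not his actual code: a fixed-point loop is over-provisioned with n+m steps, and the detectable failure case falls back to a slower but sound method.

```python
# Sketch of "over-provisioned" iteration: a loop that in theory needs n
# steps is given n + m, and a detectable non-convergence falls back to
# the sound (but more expensive) model.

def sqrt_newton(x, n=10, m=5, tol=1e-12):
    """Approximate sqrt(x) by Newton's method, budgeted at n + m steps."""
    guess = x if x > 1 else 1.0
    for _ in range(n + m):             # over-provision: n theoretical + m slack
        nxt = 0.5 * (guess + x / guess)
        if abs(nxt - guess) < tol:     # converged within the budget
            return nxt
        guess = nxt
    return sqrt_sound(x)               # detectable failure: use the sound model

def sqrt_sound(x):
    """The '100% of the effort' fallback: exhaustive bisection."""
    lo, hi = 0.0, max(x, 1.0)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The m extra steps are the "small waste of a small resource"; the fallback is the test that catches the detectably wrong case.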
Marshall's last point:
Flipping one coin to determine an election is not playing the
probabilities correctly. You need a plausible explanation for why the
coin should predict the right answer and a track record of it being
correct. If you had a coin that had correctly predicted the previous
42 presidencies and you had an explanation why the coin was right,
then it would be credible, and I would be willing to wager that the
coin could also tell us who the 44th president
would be. One flip and no explanation is not sufficient. (And to the
abstract point, to me that is all knowledge is, some convincing amount
of examples and a plausible explanation--anyone who thinks they have
more is appealing to a "knowledge" of the universe that I don't
accept.)
Look at where that got Russell and Whitehead.
I'm just trying to be "honest" about that fact and find ways to
compensate for my own failures.
Andrew McDonagh said:I haven't read all of this thread, I wonder, is the problem to do with
Class being mistaken for Type? (which is usually the issue)
Chris said:Hi Andrew!
Not much of this thread has to do with object oriented languages... so
the word "class" would be a little out of place.
However, it is true
that the word "type" is being used in the dynamically typed sense to
include classes from class-based OO languages (at least those that
include run-time type features), as well as similar concepts in other
languages. Some of us are asking for, and attempting to find, a formal
definition to justify this concept, and are so far not finding it.
Others are saying that the definition is somehow implicitly
psychological in nature, and therefore not amenable to formal
definition... which others (including myself) find rather unacceptable.
I started out insisting that "type" be used with its correct formal
definition, but I'm convinced that was a mistake. Asking someone to
change their entire terminology is unlikely to succeed. I'm now
focusing on just trying to represent the correct formal definition of
types in the first place, and make it clear when one or the other
meaning is being used.
Hopefully, that's a fair summary of the thread to date.
No type theory is needed.
Assume that the wide index type goes into a function and the result is
assigned to a variable of the narrow type, and it's instantly clear that
the problem is undecidable.
Undecidability can always be avoided by adding annotations, but of
course that would be gross overkill in the case of index type widening.
Joachim said:Um... I'm not 100% sure, but I dimly (mis?)remember having read that
unsafePerformIO also offered some ways to circumvent the type system.
No tautology - the unsafe constructs aren't just typewise unsafe ;-p
That's exactly why I replaced Luca Cardelli's "safe/unsafe" with
"typesafe/not typesafe". There was no definition attached to the
original terms, and this discussion is about typing anyway.
I think you're overstating the case.
In type theory, of course, there's no such things as an "almost typesafe
language" - it's typesafe or it isn't.
Marshall said:Also: has subtyping polymorphism or not, has parametric polymorphism or
not.
David said:Not at all. Almost all subprograms provably terminate (with a fairly
easy proof), even if they use general recursion or iteration.
If it were not the case that almost all functions provably terminate,
then the whole idea would be hopeless.
If a subprogram F calls G, then
in order to prove that F terminates, we probably have to prove that G
terminates. Consider a program where only half of all subprograms are
annotated as provably terminating. In that case, we would be faced with
very many cases where the proof cannot be discharged, because an
annotated subprogram calls an unannotated one.
If, on the other hand, more than, say, 95% of subprograms provably
terminate, then it is much more likely that we (or the inference
mechanism) can easily discharge any particular proof involving more
than one subprogram. So provably terminating should be the default,
and other cases should be annotated.
I do not know how well such a type system would work in practice; it may
be that typical programs would involve too many non-trivial proofs. This
is something that would have to be tried out in a research language.
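David's claim can be made concrete: most subprograms terminate by an easy structural argument (a strictly decreasing natural-number measure), while rare exceptions, such as the Collatz iteration, have no known measure and would be the ones carrying an annotation. A Python sketch (the fuel bound standing in for an annotation is my invention):

```python
# Almost all recursion terminates by an obvious decreasing measure.

def length(xs):
    """Provably terminating: the measure len(xs) strictly decreases
    on each recursive call, so the easy proof goes through."""
    if not xs:
        return 0
    return 1 + length(xs[1:])

def collatz_steps(n, fuel=10_000):
    """No known decreasing measure: termination for all n is an open
    problem, so a termination checker would demand an explicit
    annotation -- modeled here as a fuel bound."""
    steps = 0
    while n != 1:
        if fuel == 0:
            raise RuntimeError("fuel exhausted; termination unproven")
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        fuel -= 1
    return steps
```

If 95% of subprograms look like `length`, the default "provably terminating" discharges nearly every proof automatically, and only the `collatz_steps`-shaped outliers need attention.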
George Neuner said:Just what sort of type annotation will convince a compiler that a
narrowing conversion which could produce an illegal value will, in
fact, never produce an illegal value?
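One practical answer to George's question is that no annotation can make an unprovable narrowing safe; instead, the obligation is pushed to a guarded conversion whose failure is explicit rather than silent. A Python sketch (the range and helper names are hypothetical):

```python
# When a narrowing conversion cannot be proved safe statically, a
# guarded conversion makes the check explicit and the failure visible,
# instead of silently producing an illegal value.

NARROW_LO, NARROW_HI = 0, 255   # an assumed narrow index type: 0..255

def narrow(x):
    """Convert a wide integer to the narrow range, or fail loudly."""
    if NARROW_LO <= x <= NARROW_HI:
        return x
    raise ValueError(f"{x} out of range {NARROW_LO}..{NARROW_HI}")

def narrow_or_none(x):
    """The same obligation made total: the caller must handle None,
    so the type system can track that the check happened."""
    return x if NARROW_LO <= x <= NARROW_HI else None
```

The second form is the one a type checker can exploit: the result type records that the narrowing either succeeded or was handled, which is as close as one gets to "convincing the compiler" without a real proof.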