Matthew Danish
Here's a link to a relevant system that may be worthwhile to check out:
http://www.simulys.com/guideto.htm
Pascal Costanza said:You need some testing discipline, which is supported well by unit
testing frameworks.
Static type systems are claimed to generally improve your code. I
don't see that.
Marshall said:Provided you think to write such a test, and expend the effort
to do so. Contrast to what happens in a statically typed language,
where this is done for you automatically.
Unit tests are great; I heartily endorse them. But they *cannot*
do everything that static type checking can do. Likewise,
static type checking *cannot* do everything unit testing
can do.
Right.
So again I ask, why is it either/or? Why not both? I've had
*great* success building systems with comprehensive unit
test suites in statically typed languages. The unit tests catch
some bugs, and the static type checking catches other bugs.
Pascal Costanza said:+ Design process: There are clear indications that processes like
extreme programming work better than processes that require some kind of
specification stage. Dynamic typing works better with XP than static
typing because with dynamic typing you can write unit tests without
having the need to immediately write appropriate target code.
+ Documentation: Comments are usually better for handling documentation.
If you want your "comments" checked, you can add assertions.
+ Error checking: I can only guess what you mean by this. If you mean
something like Java's checked exceptions, there are clear signs that
this is a very bad feature.
+ Efficiency: As Paul Graham puts it, efficiency comes from profiling.
In order to achieve efficiency, you need to identify the bottlenecks of
your program. No amount of static checks can identify bottlenecks; you
have to actually run the program to determine them.
I wouldn't count the use of java.lang.Object as a case of dynamic
typing. You need to explicitly cast objects of this type to some class
in order to make useful method calls. You only do this to satisfy the
static type system. (BTW, this is one of the sources for potential bugs
that you don't have in a decent dynamically typed language.)
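To make that point concrete, here is a minimal Java sketch of the kind of
cast in question (all names and values are invented for illustration): the
downcast exists only to satisfy the static type system, and it is exactly
where the failure surfaces at run time.

    import java.util.ArrayList;
    import java.util.List;

    class CastExample {
        public static void main(String[] args) {
            // A pre-generics (raw) list only knows that it holds Objects.
            List items = new ArrayList();
            items.add("forty-two");            // a String slips in

            // The downcast is required purely to satisfy the type checker...
            Integer n = (Integer) items.get(0);
            // ...and it is where the program actually fails, with a
            // ClassCastException at run time.
            System.out.println(n + 1);
        }
    }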
I don't think so.
Matthias said:By whose definition? What *is* your definition of "useful"? It is
clear to me that static typing improves maintainability, scalability,
and helps with the overall design of software. (At least that's my
personal experience, and as others can attest, I do have reasonably
extensive experience either way.)
A 100,000 line program in an untyped language is useless to me if I am
trying to make modifications -- unless it is written in a highly
stylized way which is extensively documented (and which usually means
that you could have captured this style in static types). So under
this definition of "useful" it may very well be that there are fewer
programs which are useful under dynamic typing than there are under
(modern) static typing.
There are also programs which I cannot express at all in a purely
dynamically typed language. (By "program" I mean not only the executable
code itself but also the things that I know about this code.)
Those are the programs which are protected against certain bad things
from happening without having to do dynamic tests to that effect
themselves.
(Some of these "bad things" are, in fact, not dynamically
testable at all.)
Don't fear. I will.
This assumes that there is a monotone function which maps token count
to error-proneness and that the latter depends on nothing else. This
is a highly dubious assumption. In many cases the few extra tokens
you write are exactly the ones that let the compiler verify that your
thinking process was accurate (to the degree that this fact is
captured by types). If you get them wrong *or* if you got the
original code wrong, then the compiler can tell you. Without the
extra tokens, the compiler is helpless in this regard.
To make a (not so far-fetched, btw) analogy: Consider logical
statements and formal proofs. Making a logical statement is easy and
can be very short. It is also easy to make mistakes without noticing;
after all saying something that is false while still believing it to
be true is extremely easy. Just by looking at the statement it is
also often hard to tell whether the statement is right. In fact,
computers have a hard time with this task, too. Theorem-proving is
hard.
On the other hand, writing down the statement together with a formal proof
is impossible to get wrong without anyone noticing, because checking the
proof for validity is trivial compared to coming up with it in the
first place. So even though writing the statement with a proof seems
harder, once you have done it and it passes the proof checker you can
rest assured that you got it right. The longer "program" will have fewer
"bugs" on average.
Andrew said:Pascal Costanza:
Ummm, both are infinite and both are countably infinite, so those sets
are the same size. You're falling for Hilbert's Paradox.
Also, while I don't know a proof, I'm pretty sure that type inferencing
can do addition (and theorem proving) so is equal in power to
programming.
The size comparisons I've seen (like the great programming language
shootout) suggest that Ocaml and Scheme require about the same amount
of code to solve small problems. Yet last I saw, Ocaml is strongly typed
at compile time. How do you conclude, then, that strongly and statically
typed languages require "considerably more code"?
Lulu of the Lotus-Eaters said:I also read c.l.functional (albeit only lightly). In the last 12
months, I have encountered dozens of complaints about over-restrictive
type systems in Haskell, OCaml, SML, etc.
The trick is that these complaints are not phrased in precisely that
way. Rather, someone is trying to do some specific task, and has
difficulty arriving at a usable type needed in the task. Often posters
provide good answers--Durchholz included. But the underlying complaint
-really was- about the restrictiveness of the type system.
Marshall said:Huh? The explicit-downcast construct present in Java is the
programmer saying to the compiler: "trust me; you can accept
this type of parameter." In a dynamically-typed language, *every*
call is like this! So if this is a source of errors (which I believe it
is) then dynamically-typed languages have this potential source
of errors with every function call, vs. statically-typed languages
which have them only in those few cases where the programmer
explicitly puts them in.
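A rough way to picture this in Java terms (the method names are invented
for illustration): a method written against Object reproduces the "trust
me" at every call site, whereas the typed signature moves the check to
compile time.

    class TrustMe {
        // "Dynamic" style: every caller is implicitly trusted.
        static double halveUntyped(Object x) {
            return ((Number) x).doubleValue() / 2.0;  // may blow up at run time
        }

        // Typed style: a bad call is rejected before the program ever runs.
        static double halveTyped(double x) {
            return x / 2.0;
        }

        public static void main(String[] args) {
            System.out.println(halveTyped(10.0));     // fine
            System.out.println(halveUntyped("10"));   // compiles, fails when run
            // halveTyped("10") would not even compile.
        }
    }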
Ralph said:This is utterly bogus. If you write unit tests beforehand, you are
already pre-specifying the interface that the code to be tested will
present.
I fail to see how dynamic typing can confer any kind of advantage here.
Are you seriously claiming that concise, *automatically checked*
documentation (which is one function served by explicit type
declarations) is inferior to unchecked, ad hoc commenting?
For one thing, type declarations *cannot* become out-of-date (as
comments can and often do) because a discrepancy between type
declaration and definition will be immediately flagged by the compiler.
I think Fergus was referring to static error checking, but (and forgive
me if I'm wrong here) that's a feature you seem to insist has little or
no practical value - indeed, you seem to claim it is even an impediment
to productive programming. I'll leave this point as one of violent
disagreement...
I don't think you understand much about language implementation.
A strong, expressive, static type system provides for optimisations
that cannot be done any other way. These optimizations alone can be
expected to make a program several times faster. For example:
- no run-time type checks need be performed;
- data representation is automatically optimised by the compiler
(e.g. by pointer tagging);
- polymorphic code can be inlined and/or specialised according to each
application;
- if the language does not support dynamic typing then values need not
carry their own type identifiers around with them, thereby saving
space;
- if the language does support explicit dynamic typing, then only
those places using that facility need plumb in the type identifiers
(something done automatically by the compiler.)
On top of all that, you can still run your code through the profiler,
although the need for hand-tuned optimization (and consequent code
obfuscation) may be completely obviated by the speed advantage
conferred by the compiler exploiting a statically checked type system.
No! A thousand times, no!
Let me put it like this. Say I have a statically, expressively, strongly
typed language L. And I have another language L' that is identical to
L except it lacks the type system. Now, any program in L that has the
type declarations removed is also a program in L'. The difference is
that a program P rejected by the compiler for L can be converted to a
program P' in L' which *may even appear to run fine for most cases*.
However, and this is the really important point, P' is *still* a
*broken* program. Simply ignoring the type problems does not make
them go away: P' still contains all the bugs that program P did.
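Java can only approximate the typeless L', for instance by erasing static
information behind Object, but here is a hypothetical sketch of the idea
(names invented): the program below runs fine on the common path, yet the
type error that L's compiler would have rejected is still in there.

    class StillBroken {
        // In L' this passes unnoticed; in L the compiler would reject it
        // because the two branches have incompatible types.
        static Object parsePort(String s) {
            if (s.isEmpty()) {
                return "no port given";   // the latent bug: wrong branch type
            }
            return Integer.valueOf(s);
        }

        public static void main(String[] args) {
            // Appears to run fine for the common case...
            int port = (Integer) parsePort("8080");
            System.out.println("listening on " + port);
            // ...but the program is still broken: parsePort("") returns a
            // String, and the cast above fails the first time that path is hit.
        }
    }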
Yes, but your arguments are unconvincing. I should point out that
most of the people on comp.lang.functional (a) probably used weakly/
dynamically typed languages for many years, and at an expert level,
before discovering statically typed (declarative) programming and
(b) probably still do use such languages on a regular basis.
Expressive, static typing is not a message shouted from ivory towers
by people lacking real-world experience.
Why not make the argument more concrete? Present a problem
specification for an every-day programming task that you think
seriously benefits from dynamic typing. Then we can discuss the
pros and cons of different approaches.
Fergus said:In my experience, people who have difficulties in getting their programs
to typecheck usually have an inconsistent design, not a design which is
consistent but which the type checker is too restrictive to support.
Andrew said:If a few rockets blow up for testing then it's still cheaper than
quintupling the development costs.
Pascal said:For example, static type systems are incompatible with dynamic
metaprogramming. This is objectively a reduction of expressive power,
because programs that don't allow for dynamic metaprogramming can't be
extended in certain ways at runtime, by definition.
Dirk said:IMHO it helps to think about static typing as a special kind of unit
tests. Like unit tests, they verify that for some input values, the
function in question will produce the correct output values. Unlike
unit tests, they do this for a class of values, instead of testing
statistically by example. And unlike unit tests, they are pervasive:
Every execution path will be automatically tested; you don't have
to invest brain power to make sure you don't forget one.
Type inference will automatically write unit tests for you (besides
other uses like hinting that a routine may be more general than you
thought). But since the computer is not very smart, they will test
only more or less trivial things. But that's still good, because then
you don't have to write the trivial unit tests, and only have to care
about the non-trivial ones.
Type annotations are an assertion language that you use to write down
that kind of unit tests.
Yep.
Of course you can replace the benefits of static typing by enough unit
tests. But they are different verification tools: For some kind of
problems, one is better, for other kinds, the other. There's no reason
not to use both.
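A small Java illustration of the "pervasive unit test" idea (the method and
values here are invented): the generic signature is checked at every call
site for every element type, while the hand-written asserts below it probe
only a few sample values.

    import java.util.List;

    class TypesAsTests {
        // The signature is a "unit test" over a whole class of values:
        // for every element type T and every call site, the result has the
        // same type as the list's elements, and a List<String> can never be
        // passed where a List<Integer> is expected.
        static <T> T first(List<T> xs) {
            return xs.get(0);
        }

        public static void main(String[] args) {
            // The hand-written tests check the same property only by example.
            // (Run with java -ea to enable the asserts.)
            assert first(List.of(1, 2, 3)).equals(1);
            assert first(List.of("a", "b")).equals("a");
            System.out.println("example-based tests passed");
        }
    }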
Joachim said:What is dynamic metaprogramming?
Pascal said:Have you made sure that this is not a circular argument?
Does "consistent design" mean "acceptable by a type checker" in your book?
Pascal said:Read the literature on XP.
I am sorry, but in my book, assertions are automatically checked.
The same holds for assertions as soon as they are run by the test suite.
...and I don't think you understand much about dynamic compilation.
Have you ever checked some not-so-recent-anymore work about, say, the
HotSpot virtual machine?
There are excellent programs out there that have been written with
static type systems, and there are also excellent programs out there
that have been written without static type systems. This is a clear
indication that static type systems are not a necessary condition for
writing excellent programs.
Pascal said:See the example of downcasts in Java.
Joachim said:The vast majority of practical programming languages use a type
inference system whose behavior is known to be O(N log N) or better.