Tests find bugs after they happen. The compiler finds bugs before
they happen.
Tests are optional. A programmer can choose (unwisely) not to write
tests; they can't choose to avoid the compiler. Tests can be
incomplete; everything gets compiled. You have to write tests; you
already have a compiler.
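To make the point concrete, here is a minimal Java sketch (class and method names are mine, purely illustrative): the compiler rejects a type mismatch before the program can ever run, whereas a test would have to exist, and be thorough enough, to catch the equivalent mistake at run time.

```java
// Hypothetical example: a statically-typed method.
public class CompileTimeCheck {

    // The signature promises a String; the compiler enforces it.
    static int length(String s) {
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("hello"));  // fine

        // The next line is rejected at compile time -- no test needed,
        // and no way to ship this bug to production:
        // length(42);  // error: int cannot be converted to String
    }
}
```

In a dynamically-typed language the analogous call would only fail when that code path actually executes, i.e. only if some test (or user) happens to exercise it.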
In practice, the tests "which you have to write anyway" are often not
written. You've worked on projects where the tests are insufficient,
yes?
As for "the higher productivity of dynamic languages", no facts have
been presented into evidence for that claim. The question hinges on
fully defining "productivity", which must include maintenance costs, or
else you have not internalized the costs that statically-typed
languages aim to avoid.
Case in point - in one Language War of PHP vs. Java for Web
applications, a PHP fanboy mentioned that they allowed exception
crashes, complete with stack traces, to appear in the browser when
the application fubared. Well, no wonder they were more
"productive". If I could bring myself to inflict crashes on the
user, I'd be far more "productive", too, even in Java.
I am *not* arguing against dynamically-typed languages, nor in favor
of Java. I am pointing out that any such comparison must account for
the consequences and costs of each approach. Next time you have to
refactor a million-plus-line software system involving dozens of
programmers, think about how type safety and other compile-time
checks can or should help, or not, vs. tests and other run-time
techniques. Look at the state of tests on that project, and how much
they cover or fail to cover.
Unfortunately your conclusion flies in the face of published
research. (I don't have references handy, sorry.)
Which conclusion? Here are my two points with a bit more explanation:
1. It is interesting that the discussion of dynamic vs. static typing comes
up again and again. This indicates that people either cannot or do not
want to settle it.
2. Static typing is not automatically superior to dynamic typing in
every case. (Neither is the opposite true, but given the thread's
subject I am of course arguing in favor of dynamic typing to balance
the other positions.)
To add a bit more explanation: I believe the reason for 1 is the former:
people /cannot/ settle this, because "dynamic vs. static typing" leaves
out too many aspects that matter for the success and cost of software
projects: at least the nature (and size) of the software, the people
(their skills and number) and the process used. Yet people often search
for simple and catchy rules, which explains the popularity of the topic.
By all accounts,
bugs found (and thus prevented!) at compile time are far, far cheaper
than bugs found (and thus not prevented!) in testing, which in turn
are far, far cheaper than bugs found in production (surely no one can
argue that those were prevented!).
Certainly. But the cost of bugs found during testing depends on the
process used: that the compiler does not catch a bug does not
automatically mean it is caught late in the project. And "lateness" is
the dominant cost factor, because the more time passes, the more code is
written that depends on the faulty code.
This is only about bugs; the costs of maintenance, enhancement and
refactoring apply as well. You have to internalize all the costs to
compare the approaches fairly.
Type information for e.g. method arguments certainly helps make code
readable, but it is easy to write spaghetti code in both statically and
dynamically typed languages. Maintainability also depends on the quality
of the documentation and the overall design.
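A small sketch of what I mean (all names are hypothetical): both methods below type-check, yet only one signature actually documents intent. Static typing happily permits Object-typed spaghetti, much as a dynamic language would.

```java
import java.util.Map;

public class SignatureReadability {

    // Statically typed, yet opaque: the signature tells callers nothing
    // about which keys must be present or what they mean.
    static double total(Map<String, Object> data) {
        return ((Number) data.get("net")).doubleValue()
             + ((Number) data.get("tax")).doubleValue();
    }

    // Equally statically typed, but self-documenting: the record itself
    // spells out exactly what a caller must supply.
    record Invoice(double net, double tax) {}

    static double total(Invoice invoice) {
        return invoice.net() + invoice.tax();
    }
}
```

Whether the second version gets written is a matter of design discipline, not of the type checker.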
Two more points to consider:
1. Static typing won't detect design and architecture flaws, which are
generally considered the most expensive to remedy.
2. Static typing is only of limited help in detecting concurrency issues,
which are often hard to track down and thus expensive.
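Point 2 is easy to demonstrate in Java itself (names below are mine): the racy counter compiles without so much as a warning, because a data race is perfectly well-typed. The fix comes from java.util.concurrent, not from the type system.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int plain = 0;                                // unsynchronized shared state
    static final AtomicInteger atomic = new AtomicInteger();

    static void run(int threads, int iters) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < iters; j++) {
                    plain++;                  // data race: type-checks, still wrong
                    atomic.incrementAndGet(); // correct -- but no thanks to the types
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(8, 100_000);
        // "plain" is frequently below 800000 because increments are lost;
        // "atomic" is always exactly 800000.
        System.out.println("plain  = " + plain);
        System.out.println("atomic = " + atomic.get());
    }
}
```

The compiler accepts both counters equally; only reasoning about the memory model (or bitter experience) tells you which one is broken.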
Kind regards
robert