Seebs said:
No, it isn't. It's a correctly identified type mismatch.
You keep moving the goal posts from the actual standard of a false positive
(the compiler warns that something is of the wrong type when it's not of
the wrong type) to a made-up standard (the compiler warns that something is
of the wrong type when it is indeed of the wrong type, but could be safely
converted to the right type).
It doesn't matter whether, in a given case, you *could* safely perform
the conversion. If you don't perform the conversion, and the compiler points
this out, that's not a false positive.
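Concretely, in a language like Go (a sketch; the function name is mine): mixing int32 and int64 without a conversion is a type mismatch the compiler reports, even though the conversion is value-preserving, and the diagnostic goes away the moment you write the conversion out.

```go
package main

import "fmt"

// widen makes the "safe" conversion explicit. Writing `return n` here
// would be a compile-time type mismatch, even though every int32 fits
// in an int64; the compiler is pointing out that the conversion was
// not performed, which is not a false positive.
func widen(n int32) int64 {
	return int64(n)
}

func main() {
	fmt.Println(widen(42))
}
```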
Moving away fast enough that their color has visibly changed.
Not really. If you use the most obvious and natural meanings *for a
statically typed language*, it is obvious that it is true.
And indeed, significantly so. In the real world, programs written in
scripting languages with runtime typing are fairly likely to throw occasional
exceptions because something is of the wrong type. In a statically typed
language, the of-the-wrong-type is something which can, by definition, be
caught at compile time.
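One way to see the difference in one language (a Go sketch; the names are mine): a function taking `any` behaves like a function in a runtime-typed language, and a caller passing the wrong type compiles fine, with the mismatch only surfacing at run time, whereas a statically typed signature would have rejected the same call at compile time.

```go
package main

import "fmt"

// describe accepts any value, like a function in a runtime-typed
// language. Passing the wrong type compiles without complaint; the
// mismatch only surfaces here, at run time, when the assertion fails.
func describe(v any) string {
	if s, ok := v.(string); ok {
		return "string: " + s
	}
	return fmt.Sprintf("not a string: %v", v)
}

func main() {
	fmt.Println(describe("hello")) // fine
	fmt.Println(describe(42))      // compiles; fails the assertion at run time
}
```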
The fundamental thing you seem to be getting stuck on is that you're assuming
that if a conversion could be made, it should be made, and it should be
automatic and silent. That, however, is at odds with the premise of a
statically typed language. There's a reason we have the option of converting
things from one type to another.
No, this is not what I'm getting stuck on. I understand the technical
theory behind statically typed languages. What I'm getting "stuck" on
is this:
In a statically typed language, the of-the-wrong-type is something which
can, by definition, be caught at compile time.
Any time something is true "by definition", that is an indication that
it's not a particularly useful fact. The whole concept of "type" is a
red herring. It's like this: there are some properties of programs that
can be determined statically, and others that can't. Some of the
properties that can't be determined statically matter in practice. But
all of the properties that can be determined statically can also be
determined dynamically. The *only* advantage that static analysis has
is that *sometimes* it can determine *some* properties of a program
faster or with less effort than a dynamic approach would.
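For instance (a Go sketch; names are mine): whether a divisor is zero depends on run-time data, so no type signature can rule it out, and the only possible check is a dynamic one.

```go
package main

import (
	"errors"
	"fmt"
)

// safeDiv guards a property no type system captures: the divisor's
// *value*. b's type is known statically; whether b is zero is not,
// so the check has to happen at run time.
func safeDiv(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	if q, err := safeDiv(10, 2); err == nil {
		fmt.Println(q)
	}
	if _, err := safeDiv(10, 0); err != nil {
		fmt.Println("caught at run time:", err)
	}
}
```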
What I take issue with is the implication made by advocates of static
analysis that static analysis is somehow *inherently* superior to
dynamic analysis, that static analysis can provide some sort of
"guarantee" of reliability that actually has some sort of practical
meaning in the real world. It doesn't. The net effect of static
analysis in the real world is to make programmers complacent about
properties of programs that can only be determined at run time, to make
them think that compiling without errors means something, and that if a
program compiles without errors then there is no need for run-time
checking of any sort. *You* may not believe these things, but the vast
majority of programmers who use statically typed languages do believe
these things, even if only tacitly. The result is a world where
software by and large is a horrific mess of stack overflows, null
pointer exceptions, core dumps, and security holes.
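The null-pointer case makes this concrete (a Go sketch; the type and names are mine): this compiles without a murmur, but the type checker has only proven that `u` is a `*user`, not that it points at anything, so the nil check is still the programmer's job at run time.

```go
package main

import "fmt"

type user struct{ name string }

// greet type-checks whether or not u is nil. "Compiles without errors"
// proves u is a *user; it says nothing about whether u points anywhere,
// so the run-time check below is still necessary. Remove it and
// greet(nil) panics at run time despite a clean compile.
func greet(u *user) string {
	if u == nil {
		return "hello, stranger"
	}
	return "hello, " + u.name
}

func main() {
	fmt.Println(greet(&user{name: "rg"}))
	fmt.Println(greet(nil))
}
```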
I'm not saying that static analysis is not useful. It is. What I'm
saying is that static analysis is nowhere near as useful as its
advocates like to imply that it is. And it's better to forgo static
analysis and *know* that you're flying without a net at run time than to
use it and think that you're safe when you're really not.
rg