Andreas said:
So you do intend a special case for marker interfaces:
marker interfaces need to be explicitly "implemented", but all
others are auto-implemented if all of their method signatures exist.
Explicitly implementing a non-marker interface then has only
one purpose: letting the compiler make sure I didn't forget a method.
Two: you're also making a statement that you are implementing an
interface deliberately, by doing so, and thus making some sort of
commitment to observe its contract.
The "auto-implement" is intended mainly for the odd situation where an
existing class you can't edit fits some interface and you're willing to
take responsibility for it if it turns out not to actually adhere to the
contract, and try using it where that interface type is expected.
Perhaps it should require an explicit cast, to make it clear that it's
in the same general category of hack as some other cast-using code and
especially unchecked generics casts.
How would you judge the *runtime* penalty of checking the existence
of all necessary methods upon each attempted cast?
Oh, no, that checking would occur only once for each run-time type and
the results would get remembered somehow.
As Lew guessed right, it was intended for anyone who does read the
posting with google.
Then your phrasing was somewhat confused, since that paragraph
appears to be addressing me specifically.
Btw., that doesn't need to be a gmail'er, but
could be anyone just using Google's news archive at a later time,
as I've frequently done myself in the past. There is no such thing
as a private conversation in a newsgroup.
An admirable goal, though I'm not sure why Google would distort a
message-ID in a non-Google posting.
However, putting "in case google distorts it" in a parenthetical aside
attached to "Not sure if you got that idea yourself, or from foo"
implies that either you are using Google and worried that it will mangle
your outbound post, or that you think the "you" who may have got that
idea himself (i.e. me) is using Google, and neither was the case.
Put it down to slightly confused grammar/word choice somewhere I guess.
If the bit about Google had explicitly had broader address, or had been
elsewhere in the post than right next to "you ... yourself", there would
have been no confusion.
Yes (furthermore leaving aside the "same-package" special case).
I'd even go further and do this regardless of the final'ity
of the wrapped class. It doesn't seem right to me for a
class that *doesn't really* subclass a given class to still
access its more-restricted bits.
My proposal overloaded the "extends" keyword for this -- it would
subclass a non-final class normally, as now, and wrap a final class. You
appear to want to be able to wrap without subclassing non-final classes.
For that a new syntax would be needed. Perhaps "implements" followed by
a class rather than an interface would do the job without adding new
reserved words to Java?
In some way you're right, but Joshua also made a good point
recently by quoting a sample of some new keyword-less syntax
of some "newer C++": just seeing the new syntax makes it
almost impossible to look it up in order to get to know
what it means.
I don't think this would be an issue with Object* object = foobar; or
Object! object = foobar;, somehow.
If things like that proliferated too much, sure. Or if a yucky enough
closure syntax got decided on, or people abused operator overloading.
But bad code will be written whatever tools are handed out, so the
latter isn't an argument against much of anything, and the former is an
argument against having a yucky closure syntax. Neither is an argument
in favor of a verbose non-null syntax.
As soon as some keyword or other constant verbiage is used,
it becomes possible to look it up.
If reference declarations started showing up with the odd asterisk,
bang, or other punctuation mark on it, but never primitive declarations,
people would probably be able to guess what was going on, on the basis
of "what other binary flag might be set on references but not primitives
and would be really useful besides can be/cannot be null?" I assume
there'd also be great fanfare and publicity within Java circles
attendant upon the release of such a long-awaited new feature. Someone
would have to have spent a year living under a rock, then crawled out
from under it and stumbled upon some fairly recent Java code, AND not be
especially intelligent or Java-savvy, to not either know OR manage to
guess what was up.
However, there are new problems I see with it (non-nullness) now:
Assume I wanted to use a HashMap<String,Integer*>. I'd expect it
to only contain non-nulls, but get() would still produce nullable
values, as it returns null for unknown keys.
You'd have a guarantee that if you assigned the get() to a non-nullable
reference and got NPE it was because of a missing key, and you'd get an
NPE if you tried to put a null into the map with put, instead of this
causing errors later (or even silently failing in the case that get() is
expected to sometimes be null but it's assumed that null means key not
found -- the silent failure would be that you thought some key was being
inserted but later on it seemed like it hadn't, with no exception thrown).
Of course it's no magic bullet, but it would help to catch some
null-related errors closer to where the real bug was.
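That fail-fast behavior can already be approximated in present Java with Objects.requireNonNull at both the put and the get sites. The class and method names below are my own illustration, not anything from the proposal:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Present-Java approximation of the fail-fast behavior described above:
// reject nulls on the way in, and fail at the get() site rather than later.
class NonNullMapDemo {
    static Map<String, Integer> sample() {
        Map<String, Integer> map = new HashMap<>();
        map.put("answer", Objects.requireNonNull(42)); // rejects null at insertion
        return map;
    }

    static int lookup(Map<String, Integer> map, String key) {
        // An NPE here can only mean "missing key", never "a null was stored".
        return Objects.requireNonNull(map.get(key), "missing key: " + key);
    }
}
```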
HashMap's implementation is of course not aware of whether its value
type is nullable, so it has no clue as to how to declare it
still-nullable when it returns the type parameter or null. There may
exist solutions, but I fear it's getting more and more complex,
rather than just the simple non-null marker you started out with.
There'd need to be a way to declare a type parameter nullable, say
public V? get (K key) {
    ...
}
means that if it's a HashMap<String,Integer*> the get return type is
Integer rather than Integer* and can still be null. Reference and a few
other classes would need this too, but it would be rare.
We end up needing only two punctuation "warts" in declarations, and one
will only appear, infrequently, in generic classes. (So if two is one
too many, THAT one can be safely changed to something more verbose. "V
|| null" would be pretty clear to anyone who speaks fluent Java.)
String*[] noNullsArray = array; // Runtime check that no element
// is null gets generated here.
Huh!? I thought the whole point of it was to guarantee null-freeness
at compile time.
At the point where a nullable is assigned to a non-nullable, a run-time
check will be needed. Same as when casting is used; most type-safety is
verified at compile time but in a few spots it needs to have a runtime
check.
Compare it with ClassCastExceptions: they can only be thrown at explicit
casts (implicit casts are only possible where no runtime-check is necessary)
Requiring an explicit cast at a nullable->non-nullable assignment where
the compiler cannot prove by static analysis that the RHS isn't null
might be a good idea.
Object foo, bar;
Object* baz, quux, mumble;
....
if (foo == null) {
    ...
} else {
    baz = foo;            // OK; foo cannot be null here
    mumble = quux;        // OK
    quux = bar;           // Error
    quux = (Object*)bar;  // OK, but may throw if bar is null
}
The above is with local variables, and assumes foo, bar, and quux are
definitely assigned somewhere between the declarations and the if. With
non-local variables, the possibility of concurrent change to the
reference in another thread means the cast might be needed even on the
"baz = foo;". Allowing omission of the cast there would mean that NPEs
could pop up downstream where baz got used, but any such would indicate
a concurrency issue, so programmers could treat nulls coming OUT of
"non-null" references as prima facie evidence of a race condition in
their code. Race conditions are not something that can generally be made
easier to debug. In this case, they might be. I'd suggest letting "baz =
foo;" with no explicit cast stand, in the case foo is not local, but put
the runtime check in as if a cast were used. Then if a race condition
does cause foo to become null in between the test for null and the
assignment to baz, the NPE is thrown right then and there, inside one of
the blocks of code involved in the race. The programmer knows such an
NPE indicates the need to synchronize that area (and whatever area
nulled foo, if they can find it! But finding one of the areas easily
halves the size of the job).
(Of course, the bytecode has to be equivalent to that from present-Java
baz = foo;
if (baz == null) throw...
and also with the explicit cast
quux = bar;
if (bar == null) throw...
or it's possible for a race condition to get a null into the "non-null"
reference after the test. So, the possible-null has to go in first, then
an exception be thrown if the "non-null" reference isn't actually
non-null. Then if the exception is not thrown the reference always is
not null.)
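In present-Java terms, the safe ordering amounts to reading the shared reference once into a local, checking the local, and only then publishing it. The Holder class below is a hypothetical stand-in for the situation described (foo shared and possibly nulled by another thread, baz meant to be "non-null"):

```java
// Sketch of the check ordering the text describes: read the possibly-null
// reference exactly once into a local, test that local, then publish.
// A racing writer can null the field after our read, but can never make
// the already-checked local become null.
class Holder {
    volatile Object foo = new Object(); // shared; may be nulled by another thread
    Object baz;                         // conceptually "non-null" after assignment

    void publish() {
        Object tmp = foo;   // single read of the shared field
        if (tmp == null)    // check the local copy, not the field
            throw new NullPointerException("race: foo was nulled");
        baz = tmp;          // baz can never observe null
    }
}
```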
Alternative: a new method in java.util.Arrays such that
noNullsArray = Arrays.nonNull(array);
will assign the instance only after a successful runtime non-null check
(otherwise throwing a NullPointerException immediately)
Yes, that is an alternative for arrays.
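Such a helper is trivially writable today. This is a sketch of the proposed method, not something that exists in java.util.Arrays (the class name here is my own placeholder):

```java
// Hypothetical helper in the spirit of the proposed Arrays.nonNull
// (not part of java.util.Arrays): hand back the same array only after
// verifying every element, otherwise throw immediately.
final class NonNullArrays {
    static <T> T[] nonNull(T[] array) {
        for (int i = 0; i < array.length; i++) {
            if (array[i] == null)
                throw new NullPointerException("null element at index " + i);
        }
        return array;
    }
}
```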
Of course we could get into hairiness with noting whether the array
reference itself can be null:
String*[]* -- which * means which?
That will need some more and better ideas, especially for
multidimensional arrays (i.e. arrays of arrays of arrays ...).
I like the principle, though.
MD arrays are best off wrapped in some data structure, which would
encapsulate the messy syntax out of general view. It would also tend to
localize bugs if only the elements were declared nonnull in some cases,
and method parameters and returns, but the guts allowed array references
to be null. NPEs from such nulls might occur separated in time from the
cause, but the error would definitely be in that class's implementation.
Even SD arrays are not usually preferable to ArrayList or similar.
catch (IOException _) {
... = new ...() { public void actionPerformed (ActionEvent _) {
These cases hardly ever accumulate, but if they do, there's still
"__" and "___" and ... as further dummy names.
The need for dummy names at all is rather perverse, in catch clauses and
in private nested classes of all stripes.
Also, some IDEs and compiler options produce warnings on unused
variables. My proposal would allow such warnings to be enabled and still
have warning-free code in these cases, without doing additional dummy
actions.
Sorry, my misunderstanding once again.
I think I've read that this particular syntax will indeed
be added in 7.0 (regardless of this discussion here).
Oh, goody.
I don't like that at all.
Why?
My point is that I see no relevant need to prevent anyone from
instantiating or subclassing a Utility class, e.g. like java.lang.Math.
What bad could one do with it, if it were possible?
Nothing much. It's just that that's not generally what those classes are
*for*.
Playing around I tried this ugly idiom:
class Test { static Math m; static { m.sin(m.PI); } }
interestingly the bytecode contains "getField m"
immediately followed by "pop".
Ugh.
Utility classes need a special declaration, then. Something like
"private null class Foobar" that means it can't be instantiated and it's
an error to have "Foobar x;", Foobar return values, Foobar parameters,
etc.; and of course Foobar cannot inherit from anything not declared
"null" in this way, or have non-static fields or methods. (This in turn
outlaws Foobar implementing a non-empty interface or throwing or
catching a Foobar.)
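Lacking such a declaration, the conventional present-Java approximation is a final class with a private throwing constructor. The class and method below are illustrative only:

```java
// Conventional present-Java approximation of such a "null class":
// final (no subclassing) plus a private constructor that throws
// (no instantiation, not even from within the class itself).
final class Utility {
    private Utility() {
        throw new AssertionError("utility class; do not instantiate");
    }

    static int twice(int x) { return 2 * x; }
}
```

This catches instantiation only at run time, of course, which is exactly the gap the proposed declaration would close at compile time.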
What did you mean should actually happen to checked exceptions
thrown within run() (and not caught there)?
Same as now. Thread dies and a stack dump goes to System.err.
Perhaps there would need to be a way to specify, in a method receiving a
closure parameter, whether it does or does not throw whatever the
closure throws. Closures might have a throws part of their type, and if
the closure is actually invoked in a method that method has to throw or
handle the checked exceptions the closure declares. This requires
full-blown closures, and though a closure literal could just be a
brace-delimited block of code, a closure reference declaration would be
fairly hairy, needing to specify parameter types, return type, and
throws, just like a method, in the general case.
But there's not really any way to avoid that if closures are
implemented, since all of that stuff IS part of the type.
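Part of this can be encoded in current Java by making the checked exception a generic parameter of a functional interface. The names ThrowingRunnable and Runner are my own sketch, not an existing API:

```java
// Sketch: making the thrown checked exception part of a closure-like type
// via a generic exception parameter. A method that invokes the closure
// must then re-declare E in its own throws clause, as described above.
@FunctionalInterface
interface ThrowingRunnable<E extends Exception> {
    void run() throws E;
}

final class Runner {
    static <E extends Exception> void invoke(ThrowingRunnable<E> body) throws E {
        body.run(); // the caller must handle or propagate E
    }
}
```

When the lambda throws nothing checked, E is inferred as RuntimeException, so callers need no try/catch; when it throws, say, IOException, the compiler forces handling at the call site.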
Why not just least common denominator?
Because such an array may not be compatible with what I want to use it
for:
void foo(Number[] a) ...
foo({1,2,3,5,8,13}); // wouldn't work, because the auto-inferred
type would be Integer[] which is not castable to Number[].
When the literal is used as the RHS of an assignment (including as a
method parameter or return), the elements would have to be castable to
the type of the LHS of the assignment.
Actually, I think that's about the only time literals are used, so that
seems to eliminate the need for "least common denominator" or similar
type inference of array literals.
In this case, it's passed to a method that wants a Number[] so the
compiler figures out if all of the elements are convertible to Number
and allows it if so, generating a Number[] in the class constant pool
that is passed to the foo method in that method call.
Integer x = 8;
String y = "17";
....
foo({1, 2, 3, 5, x, 13}); // OK, x is an Integer which is a Number
foo({1, 2, 3, 5, x, 13, y}); // Error, y is a String
Actually, the last time I checked, you could assign an Integer[] to a
Number[], but would get a run-time exception if you tried to put a
Double into it via the Number[] reference. One little bit of array
type-unsafety avoided by using collections where possible.
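The covariance point is easy to demonstrate in a few lines (the demo class is mine, but the behavior is standard Java):

```java
// Java arrays are covariant: the assignment below compiles, but the
// runtime remembers the array's real element type and rejects bad stores.
class CovariantArrayDemo {
    static boolean storeRejected() {
        Number[] nums = new Integer[] {1, 2, 3};
        try {
            nums[0] = 3.14;  // a Double into what is really an Integer[]
            return false;
        } catch (ArrayStoreException expected) {
            return true;     // rejected at run time, not compile time
        }
    }
}
```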
It's a generalisation. It's about like:
try { ... errors caught }
continue { ... errors not caught }
tryalso { ... errors caught again }
continue { ... errors not caught }
tryalso { ... errors caught again }
catch ...
finally ...
Uggg-leee!
I wasn't proposing to provide more than just try { } continue { } catch
{ } finally { }.
Someone else would have to find a better verbiage for it.
As a rule, the shorter the verbiage, the better, so long as it remains
readable, and modulo frequency of use.
Oh, and the finally block would still need to be executed
even if it was doSomething() that threw an exception.
No duh. Finally would retain the current semantics -- every transfer of
control out of the associated try block goes through the finally code.
That's a very very special and otherwise rare usecase (serialization helper).
Rare does not equal useless, particularly when it's difficult to add the
functionality if you're not Sun (say, because you'd have to rewrite half
the standard library to truly make use of it).
Sun did a few special tricks to prevent users from hooking in.
I do not exactly understand Sun's reasons for that, but it most
definitely hasn't just been forgotten - it has actively been
worked against by sun.
Providing a built-in ReferenceListener just strikes me as a whole lot
nicer than having to actually spawn a separate thread, calling the
blocking poll method and invoking a user-supplied Listener every time it
returned with an enqueued object.
There is no blocking poll for the ReferenceQueue, and also no way to
create one as derivative. The thread would have to run a busy loop
(yuck!).
The polling methods that block are actually named "remove" rather than
"poll", but they exist.
remove

    public Reference<? extends T> remove() throws InterruptedException

    Removes the next reference object in this queue, blocking until one
    becomes available.

    Returns:
        A reference object, blocking until one becomes available
    Throws:
        InterruptedException - If the wait is interrupted
is the one I'd use.
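A hand-rolled listener on top of that blocking remove() needs no busy loop at all. This is a sketch under my own naming, using only the real java.lang.ref API:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.function.Consumer;

// Sketch of a hand-rolled "ReferenceListener" built on the real blocking
// ReferenceQueue.remove(): one daemon thread, no busy loop.
final class ReferenceListenerThread {
    static Thread listen(ReferenceQueue<?> queue, Consumer<Reference<?>> handler) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    handler.accept(queue.remove()); // blocks until a reference is enqueued
                }
            } catch (InterruptedException e) {
                // interrupted: shut the listener down
            }
        }, "reference-listener");
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

So the objection stands: Sun could have shipped something like this, and the only cost of doing it by hand is the boilerplate, not a busy loop.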
Strange. I utterly fail to see the reason behind that.
Well, I utterly fail to see the reason (if there is any) behind Sun
forcing people to make their own ReferenceQueue listener implementation
using threads, but there might (might) nonetheless be one.
But none of them (including even my WeakHashMap approach)
allow checking at compile time, as the proposed change would.
public final class Foo {
RealFoo delegate; // Package-private field. No subclassing.
public void method () {
delegate.method();
}
}
public final class Bar {
private static final class RealFoo implements Interface {
void method () { ... }
void otherMethod () { ... }
}
...
void aMethod (Foo foo) {
foo.delegate.otherMethod();
someThing.doWithInterface(foo.delegate);
}
}
works fine. It's not possible to pass a non-Foo to aMethod, because of
compile-time type checking. The delegate RealFoo can implement an
interface but users of Foo don't see it, because they don't see the
delegate field or have any real access to it. However aMethod can use
the interface by extracting the delegate. Finally, since Foo is final
the delegate cannot be exposed by somebody's subclassing of Foo. Only
playing tricks with reflection and serialization might grant access, so,
serious hoop-jumping.
Compiler checks:
* That aMethod is called with a genuine Foo.
* That nobody can mess with the RealFoo delegate outside of the package
with Foo and Bar.
* That RealFoo implements Interface.
* That someThing.doWithInterface is getting an implementation of Interface.
Pretty much everything is compiler-verified here.