Ok, now I understand the term 'alias' a little more after
seeing that it describes those limitations of references a
little better. But, as with most CS concepts, alias can mean
a lot of different things in different environments.
Which is the real problem here. The concept of pointer can mean
a lot of different things in different environments. As can the
concept of reference. And amongst the different meanings,
there's a lot of overlap between pointers and references. From
a purely CS point of view, C++ pointers, C++ references, and
Java pointers could all be called "pointers"; from a purely CS
point of view, I have difficulty accepting that something which
can be reseated and which can be null can be called a reference,
but this may be largely because I first encountered the term in
a CS context in C++ (although if I understand correctly,
Stroustrup picked up much of the concept, and the name, from
Algol68, where they could be reseated).
Take bash shell scripts, for instance: an alias there is just a
way of 'rewriting' a command. Or, a string that represents another
string. Or, a symbol that refers to another symbol. Or, a
link. Or, a pointer. All these concepts at the abstract
level seem pretty equivalent.
Yes. I think that roughly speaking, an "alias" is a different
way of saying the same thing. It may involve two different
names for the same thing (most people, I think, would consider a
link in the file system to be an "alias"), or a short name for
an otherwise complex "expression", or perhaps even vice versa: a
calculated expression replacing a (constant?) fixed name.
"Alias" is an almost perfect characterization of references when
both the reference and its initializer are named objects, e.g.:
int i ;
int& r = i ;
In such cases, whether you use r or i makes no difference at
all; the results will be exactly the same.
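A minimal sketch of this aliasing, using the names i and r from the snippet above (wrapped in a function so the effect is observable):

```cpp
#include <cassert>

// i and r name the same object, so a write through either name
// is visible through the other, and both names yield the same
// address.
int demo_alias() {
    int i = 0;
    int& r = i;          // r is an alias for i
    r = 42;              // writes i
    assert(&r == &i);    // one object, one address
    return i;            // yields 42
}
```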
This works very well with return values as well; if foo()
returns a reference to myValue, then the expression foo() works
as an "alias" to "myValue".
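A sketch of that case, with foo and myValue as named in the text (myValue made a global here so foo() has something to return a reference to):

```cpp
#include <cassert>

int myValue = 0;

// foo() returns a reference to myValue, so the call expression
// foo() is an alias for myValue: it can even stand on the left
// of an assignment.
int& foo() { return myValue; }

int demo_return_alias() {
    foo() = 10;                  // assigns to myValue through the alias
    assert(&foo() == &myValue);  // same object
    return myValue;              // yields 10
}
```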
It becomes harder to fathom when you're dealing with const
references initialized with an rvalue. Mainly because, without
the reference, there wouldn't necessarily be any object, and the
C++ standard is very clear: a reference must designate an
individual object.
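A sketch of that const-reference case: there is no named object in sight until the reference conjures one up, a temporary whose lifetime is extended to that of the reference:

```cpp
#include <cassert>

// Binding a reference-to-const to an rvalue materializes a
// temporary object; the reference designates that temporary,
// and the temporary's lifetime is extended to match the
// reference's.
int demo_const_ref() {
    const int& cr = 2 + 3;   // a temporary int holding 5 is created
    assert(cr == 5);         // cr designates that individual object
    return cr;               // yields 5
}
```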
And it tends to become really shaky when you start dealing with
casts to reference types. (But at that point, so does viewing
references as pointers, since there will usually be no pointer
involved.)
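One of the few well-defined reference casts illustrates the point: viewing an object's storage as unsigned char creates an alias out of thin air, with no pointer object anywhere:

```cpp
#include <cassert>

// reinterpret_cast to a reference type makes an expression
// designate an existing object under another type; unsigned
// char is one of the few types for which this is well defined.
// No pointer object is involved at any point.
unsigned int demo_ref_cast() {
    unsigned int x = 0;
    unsigned char& b = reinterpret_cast<unsigned char&>(x);
    b = 1;       // writes exactly one byte of x's representation
    return x;    // nonzero; which byte was set depends on byte order
}
```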
And, if you want to start getting into the differences at the
technical level in any given environment, then you have to
learn the differences and the names are there simply to
provide a common vocabulary (but conceptually, do we always
think in a language? What is the essence of these concepts if
you set aside the differences in names?)
The name of something is not what matters, but its essence.
Names are only there to make communication easier (and, given
the conceptions 'attached' to most names, they serve to confuse
a great deal when going from one computer language/platform/
library to another).
Agreed. In the end, C++ has two concepts, one which it inherits
from C, and calls "pointer", and the other which was created
expressly for C++ (principally, originally, to support
operator overloading), and is called "reference". For
historical reasons, "pointers", in C++, share a number of
characteristics with other arithmetic types, such as ints.
Also, because of why they were introduced, and because C++
already had pointers, references in C++ are very, very
restricted. Most significantly, references in C++ are not
"objects", that is, they aren't integrated in the C++ object
model. At least formally, because there are contexts where they
do behave sort of like objects: they have a lifetime, for
example.
At the risk of missing something: at runtime, C++ has 3 types of
entities: objects, references and functions. Objects and
references have lifetime, functions don't. Objects and
functions have addresses, references don't. Only objects have
size. Etc., etc. The C++ standard is worded in these terms;
other wordings are possible and still give the same actual
behavior.
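This wording can be observed directly: the operators that interrogate an entity (sizeof, &) look straight through a reference to the object it designates:

```cpp
#include <cassert>

// sizeof and & never apply to the reference itself; they apply
// to the object it designates, which is why a reference has no
// size or address of its own.
bool demo_entities() {
    int i = 0;
    int& r = i;
    static_assert(sizeof r == sizeof(int), "sizeof sees the referent");
    return &r == &i;   // & sees the referent as well
}
```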
The object model of Java is more complex, in that you have two
major categories of data: basic types and references/objects: a
variable in Java can never be an object (as "object" is defined
in Java), and you can't "create" a reference arbitrarily---only
declare one. (References in Java behave a lot like the other
basic types, in fact.)
Not really, references behave syntactically the same in C++
and Java (except for the use of the &). Everything is by
default a reference in Java, and you can't get at the
pointers. Maybe C# is a more relevant example because with
it, you can have an 'unsafe' block and still get to the actual
underlying pointers. But, in my mind, a reference in all 3
langs is essentially the same.
The syntax for using references in Java is similar to the syntax
for using references in C++. The semantics of references in
Java is much closer to the semantics of pointers in C++,
however. If you think of references in C++ as pointers with
some syntactic sugar, then Java references are similar to C++
references. If you think of references and pointers in C++ as
being two different things, which is how they are defined in
the standard, then references in Java are closer to pointers in
C++ than they are to C++ references: things like no null
references or no reseating of references are fundamental to the
C++ definition of what a reference is.
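Those two fundamentals are easy to demonstrate in C++ itself; the pointer below behaves the way a Java reference does (nullable, reseatable), while the C++ reference does not:

```cpp
#include <cassert>

// A C++ pointer may be null and may be reseated; a C++
// reference may be neither. Assigning through a reference
// assigns to the referent; it never rebinds the reference.
int demo_reseat() {
    int a = 1, b = 2;
    int* p = nullptr;   // pointers may be null...
    p = &a;
    p = &b;             // ...and may be reseated
    assert(p == &b);
    int& r = a;         // a reference must be initialized
    r = b;              // does NOT reseat r: copies b's value into a
    assert(&r == &a);   // r still designates a
    return a;           // yields 2
}
```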
Actually, they are smart. But, I was mistaken about the
ref-counted part. The garbage collector traverses all 'live'
pointers and deletes the objects that it doesn't reach.
The pointers themselves aren't necessarily smart; I'm pretty
sure that a Java implementation can use exactly the same garbage
collector, in exactly the same circumstances, as a C++
implementation. Of course, it's a question of terminology, but
I'd say that with garbage collection (required by the Java
specification, optional in C++), you don't need smart pointers
for most memory management because the memory manager itself is
intelligent. The intelligence is concentrated in one small
component which the programmer doesn't need to deal with, rather
than being spread throughout the program where the programmer is
constantly tripping over it.
What about this piece of code:
f1 + f2 is an r-value and it can be passed directly to
someFunc assuming that it accepts foo or object types.
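The quoted snippet didn't survive, but the shape being discussed is presumably something like the following C++ sketch (foo, f1, f2 and someFunc are the names from the thread; the definitions are hypothetical):

```cpp
#include <cassert>

// Hypothetical reconstruction: foo defines operator+, and
// someFunc accepts foo by reference-to-const, so the rvalue
// produced by f1 + f2 binds directly to the parameter.
struct foo {
    int v;
    foo operator+(const foo& other) const { return foo{v + other.v}; }
};

int someFunc(const foo& f) { return f.v; }

int demo_rvalue_arg() {
    foo f1{1}, f2{2};
    return someFunc(f1 + f2);   // temporary bound to const foo&
}
```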
In Java, that isn't legal code, since there's no user defined
operator overloading, and the operator+ is only defined for
basic types and java.lang.String.
In C++, a first approximation of the difference between lvalues
and rvalues is that lvalues have an address, and rvalues don't.
(In fact, rvalues of class types do have an address.) In Java,
this distinction doesn't really exist; basic types and
references don't have an address, and objects do, regardless of
how the entity was formed.
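The parenthetical about class rvalues can be made concrete: inside a member function invoked on a temporary, `this` is that temporary's address, so the rvalue demonstrably occupies storage:

```cpp
#include <cassert>

// An rvalue of class type occupies storage: calling a member
// function on a temporary gives that temporary an observable
// address via `this`. (Rvalues of built-in type offer no such
// handle.)
struct tmp {
    const tmp* self() const { return this; }
};

bool demo_class_rvalue_address() {
    // The temporary lives until the end of the full-expression,
    // so examining its address within that expression is fine.
    return tmp{}.self() != nullptr;
}
```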
Of course, there are limitations with regards to the expressions
which can appear on the left side of an assignment operator (or
be an operand to ++, etc.). One could call this an
lvalue/rvalue distinction. But it would be somewhat different
than the same distinction in C++. And more importantly, the
Java specification doesn't use this language; it refers to
"variables" and "values" (which in many ways do correspond to
lvalues and rvalues).
Again, each language has its own vocabulary, for many historical
reasons. And because the languages are different, there isn't
necessarily a one to one relationship between the different
vocabulary.
No, references in Java are very different from pointers. You
can't actually modify a reference except by setting it equal
to another object or null.
And? That's the case for pointers in most languages, as well.
Are you saying that the difference is that you cannot obtain the
address of anything, except by allocating it dynamically?
That's an artifact of the Java object model (and affects other
languages, such as Pascal or Modula-2, which only have
"pointers", and not "references").
The underlying ptr is inaccessible.
So. It's true that in C++, you can cast the address of anything
to an unsigned char*, and hex dump it, to see the low level,
shallow representation. But that doesn't mean much; I've used
machines where the mapping between this representation and the
actual address in memory was anything but transparent.
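The hex dump in question can be sketched like this (returning the text rather than printing it, so the result is checkable):

```cpp
#include <cstddef>
#include <cstdio>
#include <string>

// Views any object's storage through unsigned char* and formats
// the bytes in hex. What the bytes mean (byte order, padding,
// representation) is entirely implementation-specific.
std::string hex_dump(const void* obj, std::size_t n) {
    const unsigned char* p = static_cast<const unsigned char*>(obj);
    std::string out;
    char buf[4];
    for (std::size_t i = 0; i < n; ++i) {
        std::snprintf(buf, sizeof buf, "%02x ", p[i]);
        out += buf;
    }
    return out;
}
```

For example, dumping the two bytes {0xab, 0xcd} yields "ab cd ".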
You also cannot use multiple levels of indirection with Java
references.
Again, that's because of the Java object model, which
distinguishes between objects and variables.
There is no & (address) operator in Java as you never need to
worry about addresses. The reference abstracts that away.
Much the same way a C++ reference does.
I'm not sure that the C++ reference does abstract that away. If
"r" has type int&, I can still apply the address of operator to
it.
Well, I think we agree on most of that except that refs allow
you to prevent deep copy on many other overloaded operators
too (copy constructor, etc) and whatever custom code you want
to write to pass something 'by reference'. So I find it hard
to believe that operator+ was the sole or even primary
motivation.
The example Stroustrup uses to introduce references in the
original TC++PL was operator<<. It's true that without
references, you can't define a copy constructor; I wonder how
that was handled before references were introduced into the
language. (I know that the reason "this" is not a reference is
because when it was introduced, there were no references, so
references were not present in the very oldest forms of the
language. That was before I was using it, however.)
Also, it is very hard to convince me that the word 'reference'
was chosen completely independently of the notion of 'passing
something by reference' in C (which was done with pointers).
According to Stroustrup, the word was suggested to him by Doug
McIlroy, and the context was its use in Algol68. Don't forget
that Stroustrup knew a lot of other languages, beside C, and
didn't hesitate to incorporate a good idea, regardless of where
it came from.
The other difference we have is that I think that Java
designers chose the word 'reference' exactly because of the
semantics of reference in C++, but simply cleaned it up a bit
and added features to it. It certainly feels a lot more like
a C++ reference than a C++ pointer simply because you don't
have to dereference or use the -> operator (and you can use
the . operator).
I don't think so. About the time Java was introduced, there was
a great deal of criticism concerning pointers, largely based on
the fact that in C (and C++), they could (and did) end up
pointing to anything, even to things that didn't exist in the
program. Most of the real problems were in fact due to the fact
that arrays, in C (and in C++) are broken, and that array
operations end up being pointer operations, with no
possibilities of bounds checking, etc. This is not something
fundamental to pointers, and in fact, is not a characteristic of
pointers in any other languages I know. But it had given
pointers a bad name. In fact, the Java concept of reference is
very, very similar to Modula-3 pointers. I'm fairly sure that
the creators of Java were familiar with Modula-3---Java adopts
several concepts directly from Modula-3, and the only reason I
can imagine that they didn't call a pointer a pointer is because
of the bad press pointers were getting at that time
(undeservedly, since the problem was really the fact that arrays
weren't first class objects).
The reference in C++ was supposed to be an improvement over
pointers. But, since it kept pointers for backwards
compatibility with C, it couldn't universally use refs.
That is not historically correct. C++ existed for a time
without references, and according to Stroustrup, references were
introduced in response to problems defining operator
overloading. There was never any time that Stroustrup (or
anyone else I've talked to) considered that it would be better
if the language didn't have pointers.
Java didn't have that problem so could add what was needed to
reference (namely setting to NULL and allowing resets) and
thereby improve upon the C++ notion of reference.
One thing Java did do more or less right: arrays are real, first
class types, which behave like every other type (or at least,
like every other class type). Because of this, and because Java
doesn't support low level programming (you can't write a garbage
collector in Java), it didn't need pointer arithmetic. Once
they'd abandoned the way C handled arrays, they were free to
adopt the pointer concept from any one of a number of existing
languages---as I said, it is very similar to that of Modula-3
(from memory---it's been a long time since I last looked at
Modula-3).
When you have two ways of doing something, you have to figure
out how to limit each way (or else they both simply become
exact clones). But, if there's only one way, you can
implement the minimal featureset (and produce a cleaner design
IMHO). And, the fact that the designers of Java named it
'reference' instead of 'pointer' seems to be indicative which
of the two ideas they thought it was more similar to.
What makes you so sure that they were looking at C++, and only
C++? If you pick up any book on algorithms from the time (e.g.
Wirth's "Algorithms + Data Structures = Programs"), you'll find
pointers (called pointers) used to implement the dynamic data
structures (like lists and trees). Java's authors obviously
felt the need to support this. They also almost certainly felt
the need to avoid pointer arithmetic, and arrays decaying into
pointers, which are characteristics of the C model for arrays.
But certainly nothing prevented them from adopting pointers from
some other language, and the only possible reason I can conceive
of for not calling them pointers is the bad press that (C)
pointers were getting at that time (or even earlier?: Ada calls
them "access types"---with the note that "Access values are
called "pointers" or "references" in some other
languages.").