If you have, for example, a vector of ints, where each int is a
full-fledged object (in other words, some of the ints could actually be
objects derived from an int), then I don't think the compiler has any
way of optimizing dynamic binding checks away. It cannot prove that none
of the ints in the array are objects derived from int.
It can, and some do. Obviously, it's harder for the compiler
than optimizing around std::vector, but then, optimizing around
std::vector is harder for the compiler than optimizing around
Fortran-style arrays. (C-style arrays are a problem for the
compiler, because they decay to pointers as soon as they are
passed around.)
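Just to illustrate the point about C-style arrays (a minimal
sketch, not taken from any real code): once the array is passed to
a function, all the compiler sees is a pointer, so it must assume
the two parameters might alias and cannot keep b[0] in a register
across the loop. Fortran forbids that sort of aliasing between
arguments, which is one reason it is easier to optimize.

    void f(double a[], const double b[], int n)  // really double* and const double*
    {
        for (int i = 0; i < n; ++i)
            a[i] += b[0];   // if a and b overlap, b[0] may change mid-loop
    }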
Also, since such ints are full-fledged objects, the vector cannot
store them by value; it has to store them by reference. Right there
you have *at least* doubled the memory usage of the vector (and that's
assuming the absolutely optimal situation; in practice, the memory
usage has probably quadrupled, or worse).
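To put rough numbers on that (a C++ sketch, since Java boxes
automatically; the per-element figures are assumptions about a
typical 64-bit implementation, not measurements): stored by value,
each element costs sizeof(int); stored by reference, each element
costs at least a pointer, plus a separately allocated heap block
with its own allocator overhead and padding.

    #include <cstdio>
    #include <memory>
    #include <vector>

    int main()
    {
        std::vector<int> by_value(1000, 42);              // ~4 bytes per element
        std::vector<std::unique_ptr<int>> by_reference;   // >= 8 bytes per element...
        for (int i = 0; i != 1000; ++i)
            by_reference.push_back(std::make_unique<int>(42));  // ...plus a heap block each

        std::printf("by value:     %zu bytes/element\n", sizeof(int));
        std::printf("by reference: %zu bytes/element + one allocation\n",
                    sizeof(std::unique_ptr<int>));
    }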
That, again, is an optimization issue. A fairly long time ago,
I stumbled upon a page by James Gosling on adapting Java for
numeric processing, and one of the issues discussed was how the
compiler could "optimize" such arrays to avoid dynamic
allocation of each object. He actually proposed an additional
keyword (IIRC) to tell the compiler that the type met certain
constraints (which the compiler then enforced), but in
principle, the compiler could determine whether this was the
case on its own. (Given that compilation in Java normally
occurs while you're running the program, you don't want to
impose optimization techniques that are too expensive. And
given that Java uses dynamic linking everywhere, done lazily,
it's very difficult for the compiler to have a view of the
complete program, which is necessary for the best optimization.)
With a suitably designed language (no dynamic linking, and
something like C++'s const, for example), I don't think it would
actually be that difficult to detect and optimize such cases.
(The page also proposed operator overloading; since Java didn't
go that way, and Gosling had implied that Java would be unusable
for numerics without it, the page has since disappeared.)
If the vector has the ability to store the ints by value rather than
by reference, then you are immediately admitting that the ints are not
full-fledged objects (because they cannot be replaced with objects
derived from an int). However, if you want any kind of efficiency in
your program, that's a practical decision to make.
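For what it's worth, the flip side is easy to show in C++ (the
class names here are invented purely for illustration): if the
vector stores its elements by value, anything derived gets sliced
on the way in, which is exactly the sense in which the elements
are no longer full-fledged objects.

    #include <vector>

    struct Int        { int value; virtual ~Int() = default; };
    struct CheckedInt : Int { int check_count; };   // hypothetical derived "int"

    int main()
    {
        std::vector<Int> v;
        v.push_back(CheckedInt{});  // compiles, but the CheckedInt part is
                                    // sliced away: v only ever holds plain
                                    // Int objects
    }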
I'm not convinced that the performance issue is the key,
although it's certainly easier for the compiler to optimize if
part of the "optimization" is done by the programmer. The fact
is that in any well-designed program, regardless of the
language, there are different categories of objects: value
objects do not respect the same rules as entity objects, for
example, and in a "pure" OO language, you generally have to
follow some hard rules to ensure that value objects behave as
expected. Basically, C++ favors value objects over entity
objects, whereas pure OO languages favor entity objects, often to
the point of making value objects very difficult. In some ways,
"pure" OO can be considered an over-reaction: earlier
languages, like C or Fortran, ignored entity objects entirely.
(They're practically impossible in Fortran.) In practice,
however, all of the applications I've seen need some of both.
(Java still has final, which, used correctly, allows something
very close to a value object. Arguably, if you're making a
significant number of classes final in Java, you aren't
programming OO. But you are writing code that is more robust and
easier to maintain than if you had skipped the final.)
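In C++ terms, the two categories look roughly like this (a sketch
with invented names, not a recipe): the value type is freely
copyable and is compared by its contents; the entity type has
identity and a life cycle, forbids copying, and is handed around
by reference.

    #include <string>
    #include <utility>

    // Value object: copyable, comparable, no identity of its own.
    struct Money
    {
        long long cents;
        friend bool operator==(Money a, Money b) { return a.cents == b.cents; }
    };

    // Entity object: has identity and a life cycle; copying it makes
    // no sense, so copying is disabled and clients hold references.
    class Account
    {
    public:
        explicit Account(std::string owner) : owner_(std::move(owner)) {}
        Account(const Account&) = delete;
        Account& operator=(const Account&) = delete;
        void deposit(Money m) { balance_.cents += m.cents; }
    private:
        std::string owner_;
        Money       balance_{0};
    };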
I really don't see such a huge advantage in being purist about the OO
paradigm. Purism often bites back in decreased efficiency.
Decreased programmer efficiency, above all. If all you have is
a hammer, then everything looks like a nail; but a good workman
has a variety of tools in his toolbox, and uses whichever one is
appropriate for the occasion. And using a screwdriver rather than
a hammer on screws makes you more efficient.