Tomasz said:
Now imagine a C++ object. It is not a black box at all. It has *multiple interfaces*, and everything that communicates with it must know *exactly* what at least some of them are. Like a Smalltalk object, it receives messages. But the sender addresses messages to different interfaces, not to the object itself. So one message "bar" can be sent to "Foo in public context", while another message "bar" can be sent to "Bar in private context". It is pretty meaningless to talk about "sending messages to objects" in C++, as the receiver is not an object (but one of its many interfaces) and the message must know exactly what interface it is targeting.

Indeed, C++ doesn't purport to have messages or methods. The C++ specification deliberately uses the term "member functions". The "message" does not need to know what interface it is targeting - the caller needs to know that it is satisfying all the restrictions imposed by the function signature and the class of the object.
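
Something like this minimal sketch (the class and its member functions are made up purely for illustration) shows what that means in practice - the call site has to satisfy both the signature and the access rules of the class, or the code does not compile:

    // Hypothetical class, for illustration only.
    class Widget {
    public:
        void resize(int width) { reallocate(width); }        // public interface
    private:
        void reallocate(int) { /* internal bookkeeping */ }   // internal helper
    };

    void client(Widget& w) {
        w.resize(640);        // OK: public member function called with an int
        // w.reallocate(640); // would not compile: 'reallocate' is private here
    }
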
However, to call these different interfaces is misleading too - it is a single interface, but the interface specifies pre-conditions that must be satisfied before you are allowed to use it. Private/protected is only part of that - the types of the arguments are also part of it. Yet I assume you wouldn't claim it has multiple interfaces because it acts differently if you try to pass a string instead of an integer to the same function? Even Ruby has pre-conditions that must be met before a method call can be successfully completed.
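
To make the argument-type point concrete (again a made-up function, just a sketch): the compiler rejects a call that violates the signature in exactly the same way it rejects a call that violates an access specifier - both are pre-conditions of the same interface:

    // Hypothetical free function, for illustration only.
    int square(int x) { return x * x; }

    int main() {
        int a = square(7);            // OK: satisfies the signature
        // square(std::string("7"));  // would not compile: no conversion
        //                            // from std::string to int
        (void)a;
        return 0;
    }
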
One difference is that in C++ a larger set of those pre-conditions is enforced by the compiler as part of type checking up front, whereas in Ruby more is left up to the developer, or will surface as failures during testing instead. The other is that the C++ mentality is very much to use the compiler to enforce design: make compilation fail if the client of a piece of code tries to do something he/she shouldn't, such as calling member functions that are meant for internal use only. A lot of work on C++ meta-programming focuses on adding more restrictions to push failures forward to compile time, to limit the problem space that needs to be covered by testing, and to limit the need for runtime checks.
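
As a sketch of what I mean by pushing failures forward to compile time (the function and its constraint are invented for illustration; a standard type trait does the check):

    #include <type_traits>

    // Hypothetical generic function; the constraint is illustrative only.
    template <typename T>
    T average(T a, T b) {
        static_assert(std::is_arithmetic<T>::value,
                      "average() requires an arithmetic type");
        return (a + b) / 2;
    }

    int main() {
        double mid = average(1.0, 4.0);  // OK, the check passes at compile time
        // average("a", "b");            // fails to compile; the static_assert fires
        (void)mid;
        return 0;
    }

The mistake never survives past compilation, so it is not something the test suite has to hunt for.
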
I think it's unfair to say that this means C++ objects are not black boxes. If anything, they are more so: the typical design aim is to narrow the interface as much as possible. The private/public/protected specifiers are tools to achieve that, exactly in order to ensure that not only attributes, but also member functions not meant for public use, get hidden from clients. Hiding the interface doesn't achieve anything other than pushing the verification that the compiler does in a language like C++ into the test suite, so if that is what you mean by black box, I see it as a bad thing.
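
To give a flavour of that design aim (class and names invented for illustration): the public surface is kept as small as possible, and everything else, data and helper member functions alike, stays hidden:

    #include <vector>

    // Hypothetical class; clients only ever see the narrow public surface.
    class Histogram {
    public:
        Histogram() : total_(0.0) {}
        void add(double sample) { samples_.push_back(sample); accumulate(sample); }
        double mean() const { return total_ / samples_.size(); }  // assumes non-empty
    private:
        void accumulate(double s) { total_ += s; }  // helper, not for public use
        std::vector<double> samples_;               // internal state, hidden
        double total_;                              // cached sum, hidden
    };
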
Vidar