this is a similar 'argument':
in c most people think that a definition like
int foobar (const char * string);
will prevent people from modifying string. it won't. for that
one needs
int foobar (const char * const string);
why not unify them?
the answer is obvious: because they are different. a focus on
clarifying documentation might be wiser in this case. and, in the
end, certain people are simply going to have to be told to rtfm -
not everything can be intuitive AND powerful.
Actually, the first example will prevent people from modifying the
string, at least directly. Adding the second const only prevents
them from reassigning the "string" variable within the function.
The biggest problem here is that in C strings don't exist. People go
to great lengths to pretend that they do, but really, they don't.
There are only arrays of characters terminated by a null character.
If people would quit believing that strings actually exist, then the
problems with strings and C would go away.
What makes the problem worse is that C makes it really easy to
believe that strings exist. Having constructs like "Hello World!\n"
makes people think that there is such a thing as a string, but the
compiler sees that as being identical to {'H', 'e', 'l', 'l', 'o', '
', 'W', 'o', 'r', 'l', 'd', '!', '\n', '\0'}.
Now you could argue that strings actually *do* exist, and that they
*are* arrays of characters terminated by a null. I don't buy it
though. Why not? Because *anything* can be a string of characters
terminated by a null. C may be statically typed, but try declaring
an array of characters and then calling strlen(the_array). It may
give you a length, or maybe a segfault, but it will never say "hey!
that's not a string!"
Aside from the issue of C strings, I'd say the way const is used in
functions is b0rken in C. Take int foo(const int bar); Since bar is
passed by value, any modifications of it in the body of the function
are only local to the function. In my experience good C coders treat
every parameter passed by value as "const"; otherwise they lose a
record of the arguments the function was called with. Because of
this, "const" is only really useful when applied to pointers. And
the syntax for that is really confusing -- in fact, pointer syntax
in C is confusing in general. The concept of a pointer is pretty
simple -- it's an address in memory; the only thing that makes it
difficult is the syntax.
In this bit of code, you can change the variable a, which is a
pointer, making it point at something else, but you can't change the
value it points at. As for b, you can change the value it points at,
but you can't change the variable itself.
int bar(const int *a, int *const b, const int c)
{
    int d, e;
    // *a = 3;   // error: a points to a const int
    *b = 4;      // fine: b points to a modifiable int
    a = &d;      // fine: the pointer a itself can be reassigned
    // b = &e;   // error: the pointer b itself is const
    // c = 5;    // error: c is const
    return 0;
}
The "const char* string" works the same way, preventing you from
modifying the contents of the string, but letting you reassign the
variable. The way to read it (I believe) is "if you dereference
'string' you will get a constant character".
Anyhow, bringing this back to Ruby...
I think Modules and Classes should be separate, but there's a lot of
confusion there.
Classes are pretty straightforward, but modules are not. I'd say
most of the confusion comes from the fact that modules can either be
groups of functions that are mixed in to other things, or they can be
namespaces. That dual use gets confusing. Oh, and it's also
confusing that a Class is a Module in the inheritance tree.
I agree that it's confusing that including a module doesn't hide
functions that a class defines on its own. IMHO it isn't clear that
"include FooModule" is similar to inheritance. It looks like it's
similar to "attr_accessor", not like class Foo < Bar.
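To make the include-vs-inheritance point concrete, here's a minimal sketch (the Greeter/Host names are just for illustration):

```ruby
module Greeter
  def greet
    "hello from the module"
  end
end

class Host
  include Greeter

  # The class's own definition shadows the mixed-in one:
  def greet
    "hello from the class"
  end
end

Host.new.greet   # => "hello from the class"
Host.ancestors   # => [Host, Greeter, Object, Kernel, BasicObject]
```

The ancestors list shows that include really does splice Greeter into the lookup chain above Host, much like a superclass -- which is exactly why the class's own method wins.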
Adding documentation would help a bit with this issue, but it doesn't
seem like the ideal approach to me. Instead of documenting a
confusing issue, why not understand why it's confusing and fix it?
I'd suggest that for one thing, modules-as-namespaces should
disappear; there should simply be a "namespace" keyword so "Math::PI"
is in the Math namespace, not the Math module. Then, I'd say "mixin"
instead of "module". Finally to address the issue of it not
overriding methods that already exist, maybe a second parameter to
"include" named "override" that defaults to false? Sure, that could
break things when you mix something in that clobbers a "helper
method", but just having that option there makes people realize that
it doesn't always override things.
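For what it's worth, later Ruby (2.0+) grew Module#prepend, which gives roughly the override behavior proposed here; a sketch (the Loud/Announcer names are mine):

```ruby
module Loud
  def greet
    "HELLO FROM THE MODULE"
  end
end

class Announcer
  prepend Loud   # prepend puts Loud *in front of* Announcer in the lookup chain

  def greet
    "hello from the class"
  end
end

Announcer.new.greet   # => "HELLO FROM THE MODULE" -- the mixin wins
Announcer.ancestors   # => [Loud, Announcer, Object, Kernel, BasicObject]
```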
As for multiple inheritance vs. single inheritance, I think the
problem isn't a technical one, it's a human one. As Adam P Jenkins
pointed out:
By your logic, C++ and Python don't have the diamond problem either.
Both of those languages have well defined ways in which the method to
call is chosen, which have to do with the order in which the
programmer writes things.
So the question becomes "What is going to provide the most
flexibility with the least confusion?" I think that rules out
multiple inheritance. Ruby is a really flexible language. Even
without resorting to "eval" you can do some pretty astounding
things. But in general the approach seems to be "unless you're
trying to do something tricky, things generally behave as they
should". I would say that having multiple inheritance adds a lot of
potential for confusion without a lot of benefit. Crafty programmers
can get all(?) the benefits of MI from Ruby today, as long as they're
willing to be sneaky about it. Less crafty programmers don't have to
worry about the diamond problem. Doesn't that seem like the way
things should be?
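A minimal sketch of that mixins-instead-of-MI approach (the names are mine): two modules supply the same method, and Ruby's linearized lookup order decides the winner with no diamond ambiguity.

```ruby
module Walker
  def move
    "walking"
  end
end

module Swimmer
  def move
    "swimming"
  end
end

class Duck
  include Walker
  include Swimmer   # included last, so it sits closest to Duck in the lookup chain
end

Duck.new.move    # => "swimming" -- lookup order is well defined
Duck.ancestors   # => [Duck, Swimmer, Walker, Object, Kernel, BasicObject]
```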
Ben