Why C++ Is Not “Back”


Lynn McGuire

Google has some very questionable guidelines regarding C++ ...

... although IIRC Stroustrup (Design and Evolution) makes it clear
that multiple inheritance is in the language because people could
come up with real-life examples of problems where it made good sense
-- and explicitly /not/ because it's something everyone would need
often, because it isn't.

I've never used it myself.

/Jorgen

I have. We have two user interface toolkits in
our Windows Win32 app. One dialog required that
both toolkits be referenced. I tried to avoid
it but could not. It is tricky and one should
use early binding on all methods to ensure that
you get the correct method.
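A minimal sketch of the situation Lynn describes, assuming two hypothetical toolkit classes that both define a draw() method; the explicitly qualified calls (ToolkitA::draw) are one reading of the "early binding" advice, since they name exactly which base's method is meant:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-ins for two UI toolkits that both define draw().
struct ToolkitA { std::string draw() { return "A"; } };
struct ToolkitB { std::string draw() { return "B"; } };

// A dialog inheriting from both: an unqualified draw() call is
// ambiguous and will not compile, so each call names its base.
struct Dialog : ToolkitA, ToolkitB {
    std::string drawA() { return ToolkitA::draw(); }
    std::string drawB() { return ToolkitB::draw(); }
};
```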

Lynn
 

W Karas

This is what the Google coding guidelines for C++ say about multiple inheritance:



Multiple Inheritance



Only very rarely is multiple implementation inheritance actually useful. We allow multiple inheritance only when at most one of the base classes has an implementation; all other base classes must be pure interface classes tagged with the Interface suffix.
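A sketch of the shape that guideline permits: a single base carrying an implementation, with every other base a pure interface (the class names here are made up, not from the guide):

```cpp
#include <cassert>
#include <string>

// Pure interface base, tagged with the Interface suffix as the
// guideline requires (names are illustrative, not from the guide).
class DrawableInterface {
public:
    virtual ~DrawableInterface() = default;
    virtual std::string draw() const = 0;   // no implementation
};

// The single base class allowed to carry an implementation.
class Widget {
public:
    int width() const { return 640; }
};

// Allowed under the rule: one implementation base, rest interfaces.
class Button : public Widget, public DrawableInterface {
public:
    std::string draw() const override { return "button"; }
};
```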

Seems to me like all the arguments for not allowing a second base class, or requiring the second base class to be "interface only", apply equally well to the first base class.

Of course, full MI leads to a desire for virtual base classes. If Java had kids, it wouldn't let them have hamburgers in order to avoid the bother of buying ketchup.
 

Nick Keighley

Why C++ Is Not “Back” by John Sonmez

His list of 36 C++ hiring questions is awesome.

not really
He might nail me on a third of them.

I didn't think I'd be quite that bad.

1. How many ways are there to initialize a primitive data type in C++
and what are they?

I'd say 2
int i = 27;
int j (42);
but this seems too easy.
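For what it's worth, C++11 brace initialization adds a few more forms beyond the two above; a quick sketch:

```cpp
#include <cassert>

// Each function returns a value produced by one initialization form;
// the brace forms require C++11.
int copyInit()     { int a = 27;  return a; }  // copy initialization
int directInit()   { int b(42);   return b; }  // direct initialization
int braceInit()    { int c{7};    return c; }  // direct-list-init (C++11)
int copyListInit() { int d = {9}; return d; }  // copy-list-init (C++11)
int valueInit()    { int e{};     return e; }  // value-init: zero
```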


These questions lead me to wonder how good his C++ is

12. What is a copy constructor and when is it used, especially in
comparison to the equal operator.
I assume that means "assignment operator"

14. What is the const operator and how is it used?
I wasn't aware there was a "const operator" (is it a C++ 11 thingy?)

24. What is a Vector?
a misspelling of vector


This one baffled me

28. What is short circuit evaluation? How can it be used? Why can it
be dangerous?
dunno; when is it dangerous?
 

Jorgen Grahn

This one baffled me

"28. What is short circuit evaluation? How can it be used? Why can it
be dangerous?"
dunno; when is it dangerous?

I've only had problems with it when I've written

if(foo && foo->bar) ...

and someone with a gap in his knowledge assumed this would crash.
But last time I heard that was in the mid-1990s.

Hopefully that was the interview answer he was looking for.

Come to think of it, maybe he was after overloading of operator&& ()
and friends? Can't say I remember how that works -- never felt the
urge to do it.

/Jorgen
 

Nick Keighley

On Sun, 2012-12-09, Nick Keighley wrote:

...



I've only had problems with it when I've written

  if(foo && foo->bar) ...

and someone with a gap in his knowledge assumed this would crash.
But last time I heard that was in the mid-1990s.

I've seen a comment in code recently that I think implied the author
thought there was a problem. But he wasn't clear what he thought the
problem was, and he hadn't signed it (I suppose I could dig through the
config control system to find out, but life is short). For some people
it still *is* the mid-90s!

Hopefully that was the interview answer he was looking for.

Come to think of it, maybe he was after overloading of operator&& ()
and friends?  Can't say I remember how that works -- never felt the
urge to do it.

It doesn't. You can't get the short-circuiting effects of && and || with
your own operators. I think it's usually recommended you leave them alone.
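A small demonstration of why: with a user-defined operator&&, both operands are ordinary function arguments and get fully evaluated before the operator body runs (the types and names here are made up for illustration):

```cpp
#include <cassert>

// A toy wrapper type with an overloaded &&. Unlike the built-in
// operator, both operands are evaluated before the call is made.
struct Flag { bool value; };

int g_evaluations = 0;                 // counts operand evaluations

Flag make(bool v) { ++g_evaluations; return Flag{v}; }

bool operator&&(Flag a, Flag b) { return a.value && b.value; }

// With built-in bool, make(false) && ... would stop after the left
// side; with Flag, both sides run.
bool demo() {
    g_evaluations = 0;
    return make(false) && make(true);
}
```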
 

gwowen

Come to think of it, maybe he was after overloading of operator&& ()
and friends?  Can't say I remember how that works -- never felt the
urge to do it.

This was my thought too.

if(x && y){

}

might *not* be short-circuit evaluation if you don't know the types.
This is one of Scott Meyers's C++ gotchas, IIRC.
 

none

Just for posterity, here's the link.

http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Multiple_Inheritance

And as with the rest of the Google C++ style guide: use with care. Do
not treat these as anywhere near quality generic C++ coding guidelines.


Yannick
 

Öö Tiib

This was my thought too.

if(x && y){

}

+1, I had the same impression.
might *not* be short-circuit evaluation if you don't know the types.
This is one of Scott Meyers's C++ gotchas, IIRC.

Also it is in C++ FAQ 13.9, point 19. The people who haven't read the
FAQ (or can't digest such a long FAQ yet) hope for either fewer bear
traps in C++ or automatic tools that paint the gotcha situations bright
red. Unfortunately ... complex things are difficult to make novice-friendly.
 

Gerhard Fiedler

Nick said:
This one baffled me

28. What is short circuit evaluation? How can it be used? Why can it
be dangerous?
dunno; when is it dangerous?

Besides what others said, maybe something like this:

if( x && funcWithExpectedSideeffects() )
...

Of course the side effects expected from the function may not
materialize. I wouldn't call it dangerous, but then I'm not him :)
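Gerhard's point as a runnable sketch (the function name is his hypothetical; the rest is made up): when the left side is false, short-circuiting skips the call, and the expected side effect never happens.

```cpp
#include <cassert>

bool g_sideEffectHappened = false;

// Hypothetical function whose side effect the caller relies on.
bool funcWithExpectedSideeffects() {
    g_sideEffectHappened = true;
    return true;
}

// When x is false, short-circuiting skips the right-hand side, so
// the side effect never materializes.
bool check(bool x) {
    return x && funcWithExpectedSideeffects();
}
```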

Gerhard
 

Lynn McGuire

not really


I didn't think I'd be quite that bad.

1. How many ways are there to initialize a primitive data type in C++
and what are they?

I'd say 2
int i = 27;
int j (42);
but this seems too easy.


These questions lead me to wonder how good his C++ is

12. What is a copy constructor and when is it used, especially in
comparison to the equal operator.
I assume that means "assignment operator"

14. What is the const operator and how is it used?
I wasn't aware there was a "const operator" (is it a C++ 11 thingy?)

24. What is a Vector?
a misspelling of vector


This one baffled me

28. What is short circuit evaluation? How can it be used? Why can it
be dangerous?
dunno; when is it dangerous?

if ( x = y )
{
}

Is that it ?

It is very dangerous because the eye will
sometimes mistake the single equals for an equality test.

Lynn
 

Alain Ketterlin

[...]
if ( x = y )
{
}

Is that it ?

It is very dangerous because the eye will
sometimes confuse the equals with equality.

No, that's not short-circuit, it's just that the semantics makes an
assignment an expression.

An example use of short circuit would be (with t of size n):

if ( p>=0 && p<n && t[p] == 42 ) ...

where simple conditions are evaluated one after the other, from left to
right, until one of them is false. Which is of great help here, since
accessing t[p] is illegal if the first two conditions are not met.

Now suppose you have something like:

ok = true;
for ( i=0 ; i<n ; i++ )
    ok = ok && somefunc(t[i]);

(somewhat silly, but useful for the example). In this case,
short-circuiting has the consequence that somefunc() will not be called
on elements placed after the first where it returns false. It may be
useful, surprising, or plain inadequate depending on what you expect
this loop to do. Also, changing the assignment to:

ok = somefunc(t[i]) && ok;

will ensure that somefunc() is called on every element.

I would not call this dangerous (it is even useful to avoid duplication
of code), but it is not something you can ignore.
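Alain's loop, made runnable with a call counter to show the difference between the two orderings (assuming the intended call is somefunc(t[i]) on each element):

```cpp
#include <cassert>

int g_calls = 0;

bool somefunc(int x) { ++g_calls; return x != 0; }

// ok = ok && somefunc(t[i]): stops calling after the first false.
bool allShortCircuit(const int* t, int n) {
    bool ok = true;
    for (int i = 0; i < n; i++)
        ok = ok && somefunc(t[i]);
    return ok;
}

// ok = somefunc(t[i]) && ok: somefunc runs on every element.
bool allEager(const int* t, int n) {
    bool ok = true;
    for (int i = 0; i < n; i++)
        ok = somefunc(t[i]) && ok;
    return ok;
}
```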

-- Alain.
 

BGB

Besides what others said, maybe something like this:

if( x && funcWithExpectedSideeffects() )
...

Of course the side effects expected from the function may not
materialize. I wouldn't call it dangerous, but then I'm not him :)

yeah, pretty much...

functions like this, used in a chain, can lead to various
"unpredictable" outcomes related to the output state.


it is "dangerous" in the same sense as the "evil" "not strictly
left-to-right" evaluation order...

the contrast here is with languages which define the evaluation order,
typically left-to-right ordering of function arguments, and the left
side of an operator before the right side.


or, I guess it is also "dangerous" in the same sense as not having to
explicitly cast every type conversion.

double fd;
float ff;
....
ff=fd; //compiler: "OH SH!!!!"


meanwhile, then there are some of us who stare "danger" in the face, and
would rather not be bothered with having to cast these sorts of things.

(in my scripting language, it is a compromise: the language will
currently generate a warning, but will perform the conversion implicitly).
 

Balog Pal

Which makes about as much sense as their guideline on (not) using
exceptions... At least the later versions kinda admit that their
guidelines follow from being locked into messed-up codebases.
yeah, 'tis an issue...

how often does using MI actually make sense?...

How do you measure "often"? Or more interestingly, why?

The base guideline is "do the right thing" or "use the/a right tool".
You make decision on the particular situation. Where MI will or will
not make sense. Entirely unrelated on MI's usefulness in other situations.

IME one completely sensible use of MI is where you mix different
hierarchies. Like an UI framework like MFC and the implementation of
your model and other stuff. A usual bad way is to just use classwizard
and punch in the code wherever it placed //TODO. While it is ways better
to write the logic separately, independent of an UI, and glue them
together -- possibly using trivial templates.
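A bare-bones sketch of that gluing pattern, with hypothetical class names standing in for the framework dialog and the separately written model:

```cpp
#include <cassert>
#include <string>

// Stand-in for a UI-framework base class (think of an MFC dialog).
class FrameworkDialog {
public:
    virtual ~FrameworkDialog() = default;
    std::string caption() const { return "Order"; }
};

// The model logic, written and testable with no UI dependency.
class OrderModel {
public:
    int total() const { return 42; }
};

// The glue class mixes the two hierarchies via multiple inheritance.
class OrderDialog : public FrameworkDialog, public OrderModel {};
```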
I suspect though that this is a reason for MI being generally absent
from many/most newer languages.

The reason is probably simpler: the authors of those languages didn't
want to mess with object layout issues. It beats me why they disallow
code in interfaces, not only data... I'm yet to hear a non-FUD argument here.
like, it seemed like a good idea early
on, but tended not really to be generally worth the added funkiness and
implementation complexity, and the single-inheritance + interfaces model
is mostly-good-enough but is much simpler to implement.

If you put down a language that aims for just a small subset of possible
problems it is okay to restrict. Certainly if in that realm the dropped
tools are really not needed.

C++ aimed for the most general arena. And most people admit that when
you need MI you really need it, and the alternatives are pretty crappy.

Implementation-wise it is not such a big burden. More so on user side:
with MI in the mix you can't just keep casting stuff around carelessly.

But who said you should rather aim to support sloppy casting habits?
actually, it is sort of like inheritance hierarchies:
many people seem to imagine big/complex hierarchies more similar to that
of a taxonomy (with many levels and a "common ancestor" for pretty much
everything);

Those cosmic hierarchies in C++ were forced in early times by the lack
of templates (or template-based libraries). Then, as that feature (plus
native RTTI) entered, you could just cut the obsolete roots and tangles.
more often, I have personally rarely seen cases where more than 2 or 3
levels are needed (and very often don't have *any* parent class).

Yeah.
 

Balog Pal

Why C++ Is Not “Back” by John Sonmez
http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/

"I love C++."
With the usual addition of BUT...

But anyway, I don't doubt he honestly thinks what he writes. The
comments on the site -- and more so on Herb's -- correctly point out
that the guy gives the impression of an incompetent C++ programmer and
not a likely competent SW engineer. And one with very limited oversight
too, thinking in Java/C# terms, and presenting those as modern, and
supposedly cooler, languages.

....
His list of 36 C++ hiring questions is awesome.

The list is a good set to use in training/seminars. But it's pretty much
useless in a practical interview. In what kind of context would you ask
them?

Let me just copy in my post on accu-general list on this article:
12/4/2012 3:23 AM
to: (e-mail address removed)
subj: current Sutter's Mill on why-C++is-not-back article
---
http://herbsutter.com/2012/12/03/perspective-why-c-is-not-back/
http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/

I agree with Herb, that the article is thoughtful and hype-free. But it
reads sooo completely wrong, resonating with 'Perils of Javaschools'.

Sure, we all know C++ is huge, and is better avoided for many reasons
including:

- you aim for projects that are
- toy-sized
- doable by some RAD tool by just dragging elements
- or just issuing a few calls into a ready-for-use library
- have no quality requirements
- you want to avoid learning effort
- you have no access to mentors
- you hate choices and prefer to walk a single possibility road
- you can't stand code and design reviews
....


"It seems that many of the seasoned developers have forgotten why we
stopped using C++ and moved on to Java, C# and other modern languages."

Are those the "modern" languages? I have watched them from the sidelines
for more than a decade now, and still fail to see any good outcome.
Yeah, I see a big quantity of people moving to that field, either
leaving the old stuff or sidestepping it altogether. But are they
actually faring any better?

My simple summary is "well, you can write crap with those languages
with way less effort" -- and that is a good motivation.

But writing nontrivial and WORKING stuff is a different beast. The
effort put into learning a particular language, API, system, library,
etc. amortizes pretty fast, while having access to a more versatile
toolset pays off. Unless the problem is a perfect fit for some single
paradigm -- but then it probably falls off the "nontrivial" selection we
made up front.

How about the guy's 36-bullet list of interview questions? Does it serve
any practical point? IMO for a junior they are pointless to ask, as he
will be clueless on most. And for a senior I'd go for high-level SW
engineering, or code review; a fallback to those questions likely means
'no hire' was already marked...

"The big problem is that programming languages really need to get
simpler and increase the level of abstraction not reduce it."

Hm? We have had those simple languages for a long time; what could be
simpler than Lisp/Scheme? Having Python in the toolset, do we demand any more?

But somehow the big systems seem to pick not the simple languages --
and as you scale up, complexity management AND its verification become
the most demanding tasks. "Abstraction" is a good theory, but as we map
it to practice it starts leaking all over the place. The abstractions
the language/system provides are king, as long as they cover your
problem perfectly (see toy/trivial stuff), but beyond that?

I stick with the oldest wisdom: "avoid anyone claiming he's an XYZ
programmer, find just programmers" (same as the smart/gets-things-done
selection). I believe that anyone who can write working stuff in C#
(Java, ...) can have no problem learning C++ (or anything) as needed,
and will certainly use whatever fits best for the task. While those who
would fail to master C++ (a practical subset) in a mentored environment
could only drag back a project in any simple or supposedly idiot-proof
language.

If someone asks me what language to learn I normally say whichever you
can grab a good mentor for -- as the starting one. And if you aim for a
single one, better forget it right there and look for greener fields.

DUH, babbling too much, I actually wanted to ask YOUR opinion. O,O
 

BGB

Which makes about as much sense as their guideline on (not) using
exceptions... At least the later versions kinda admit that their
guidelines follow from being locked into messed-up codebases.


How do you measure "often"? Or more interestingly, why?

"often" is how frequently one runs into a given situation while writing
a piece of code.

issues which show up very frequently, like variable declarations or
expression syntax, may weigh more heavily than something seen a little
less often (such as a method declaration), which in turn may weigh more
heavily than a class declaration, ...

the weighting would then also be:
how often is a parent class needed;
how often is more than one parent needed (and would not be easily
achievable via interfaces);
how often are multiple parent classes needed (and not easily achievable
by an object containing instances of "parent" classes, rather than
direct inheritance);
....


the reason for this sort of weighting is more based on the estimated
level of "awkwardness" caused by having a less optimal solution in a
given situation.

if the added awkwardness of missing something is likely to be moderately
small, but the gains from not having to deal with it are greater, it may
make more sense to leave it out.

like, something which is mildly painful, but rare, is less of an issue
than something only slightly annoying, but happening all the time.


if, OTOH, the feature either has little cost, or otherwise justifies its
costs, then it makes more sense to retain it.

granted, yes, this is a little moot for C++, which already has MI.

The base guideline is "do the right thing" or "use the/a right tool".
You make decision on the particular situation. Where MI will or will not
make sense. Entirely unrelated on MI's usefulness in other situations.

IME one completely sensible use of MI is where you mix different
hierarchies. Like an UI framework like MFC and the implementation of
your model and other stuff. A usual bad way is to just use classwizard
and punch in the code wherever it placed //TODO. While it is ways better
to write the logic separately, independent of an UI, and glue them
together -- possibly using trivial templates.

assuming a person couldn't use MI, how much annoyance would this likely
add overall?...


in languages which lack MI, the situations where a person doesn't have
MI but is doing things which "imply" MI, typically end up as a new class
containing instances of the classes it would normally inherit from.

this will typically in-turn add, maybe, a little bit of indirection (or,
maybe, a few getters/setters and/or "glue" classes), but usually this
isn't a big deal (especially if the scenario is relatively infrequent).
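The workaround described above, sketched with made-up classes: composition plus a little forwarding glue in place of a second base class.

```cpp
#include <cassert>

struct Engine { int power() const { return 100; } };
struct Stereo { int volume() const { return 11; } };

// No MI here: the class holds instances of its would-be parents and
// forwards the calls it needs -- a little indirection instead of
// direct inheritance.
class Car {
public:
    int power() const  { return engine_.power(); }   // forwarding glue
    int volume() const { return stereo_.volume(); }
private:
    Engine engine_;
    Stereo stereo_;
};
```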


for example, in cases where I have written code in Java (or even C#),
there are plenty of other things which are *considerably* more annoying
than its lack of MI.

The reason is probably simpler: authors of those didn't want to mess
with object layout issues. It beats me why disallowing code from
interfaces, not only data... I'm yet to hear a on-FUD argument here.

object layout is a bigger issue for a compiler or a VM. if you have a
feature which adds a big set of ugliness in the compiler, but only
rarely is of much obvious benefit to programmers, does it really make
sense?...

could this effort not have been better spent on other big-ugly features
which provide more obvious / immediate benefits?...


FWIW, in my case, the code for dealing with all of the fun with object
layouts and "efficiently" getting/setting fields and performing method
calls is actually some of the more complex code in the VM (and I don't
even have "proper" MI). a person may well find that this is the sort of
thing prone to "grow hair" when implemented.

granted, my object and scoping model is admittedly a bit more "advanced"
(IOW: overly complex) than the models used by Java or C# (probably
doesn't help matters).

as well, the tradeoffs of being moderately high-performance,
thread-safe, and supporting dynamically modifiable class and object
layouts do not make for pretty code (internally, a lot of it is
actually a bunch of jerking around with function-pointers I have come to
term "plumbing pipe logic", partly as the control-flow of the program
works partly like "pipe dream", with the green-liquid being the main
execution path, sometimes with external logic setting up the "pipes",
and sometimes they reconfigure themselves).

actually, a fair chunk of the VM works this way, partly as "simpler"
options, like "if()" conditionals, state-variables, and "switch()"
blocks, tend to be slower and are ideally kept out of the main execution
path (even if it can make some areas of code look a bit evil). ideally,
we want the execution path to head "straight towards the goal" while
minimizing "detours".


simplification is welcome when it can have reasonably good tradeoffs.


MI is more one of those things left for a future time for if/when I can
justify the effort needed to extend the FFI to better support C++
(partly has to do with parsing C++, and partly has to do with
uncertainties related to the various C++ ABIs).

as-is, the goal is mostly just "good interfacing with C", and if C++
code wants to be accessible via the FFI, it has to do so on 'extern "C"'
terms. (there is a lot of "hair" in all this, in terms of even little
things, like how struct passing/returning is handled in the AMD64 ABI, ...).

had to find working links:
http://people.freebsd.org/~obrien/amd64-elf-abi.pdf
http://mentorembedded.github.com/cxx-abi/abi.html
....


yes, a fair amount of the logic for this part is actually written in
assembler...

If you put down a language that aims for just a small subset of possible
problems it is okay to restrict. Certainly if in that realm the dropped
tools are really not needed.

yeah, in my case, this is mostly for a scripting language (and is
more-or-less a variant of ECMAScript).

its most direct use-case is for writing game monster-AI scripts and
similar, but I have also used it some for things like animation control, ...


it is used as a scripting language, but also tries to be "moderately
solid" (or at least, not a total joke) as a general-purpose language as
well (supporting static typing, class/instance OO, ... as well). it
mostly borrows similar syntax to ActionScript3.

regarding language-features, it is "loosely comparable" with C# (though
their feature-sets are not exactly 1:1, so C# has features my language
lacks and likewise).

regarding other language features, it is also more-or-less a C superset
(it includes pretty much the entire C typesystem, most of C's semantics,
and many elements of C syntax as well, and most C code should be fairly
easily portable to analogous code in the scripting language, albeit with
some syntax differences). (it should be an easier porting job than to
Java and C#, partly as the current language semantics align much more
closely).

( I once considered trying to port Quake to my scripting language as a
test, but was hard-pressed to justify the time/effort needed to do so,
as this would still be a big PITA though. )

it is also struct compatible and (with care) function-pointer compatible
with C (this part runs on black magic though).


performance is a bit worse than Java or C#, mostly because it still runs
on an interpreter (based on indirect-threaded-code), rather than a JIT,
and so still runs considerably slower than native.

(as-is, raw performance hasn't been that big of an issue though, as most
"heavy lifting" is still done by C and C++ code...).

it has not really been tested much in "real-world" usage, and would
still need to have many bugs shaken out and problems addressed (given
the types of bugs I still run into periodically, I will make no claim
that it really is).


its main nifty feature is mostly its C FFI, but considering that the C
FFI is itself a pretty big chunk of code, it isn't really a cheap feature.

the FFI actually involves a part that was originally written as a
C-compiler frontend, but it wasn't a very good C compiler (vs just using
MSVC or GCC, it was kind-of slow and very buggy) so it ended up mostly
re-purposed as an FFI tool.

I had considered reviving the C compiler (as a compiler) a few times
(probably using the same VM backend as my scripting language), but this
would itself be a big project.

C++ aimed for the most general arena. And most people admit that when
you need MI you really need it, and the alternatives are pretty crappy.

Implementation-wise it is not such a big burden. More so on user side:
with MI in the mix you can't just keep casting stuff around carelessly.

But who said you should rather aim to support sloppy casting habits?

if the class is a compound object, then a person can pass the relevant
sub-member.

Those cosmic hierarchies in C++ were forced in early times by the lack
of templates (or template-based libraries). Then, as that feature (plus
native RTTI) entered, you could just cut the obsolete roots and tangles.

I think they were mostly a problem of many people just thinking
about OO in ways that don't make much sense...

many introductions to (and college classes involving) OO tend to
approach it as if it were a taxonomy system for classifying objects
(and use taxonomies as examples).

a taxonomy system is IMO a poor model for a class hierarchy, but
sadly it is apparently the main way many (most?) people tend to
conceptualize hierarchies (well, along with "chain of command" systems,
....).

granted, approaching it more as "I can't clearly say what it is, but it
isn't really a taxonomy" doesn't really help matters much...

many "introductions" show as "examples" things which involve overly deep
nesting in a taxonomy-like manner.

Object/Creature/Animal/Vertebrate/Mammal/Hominid/Human

then people end up designing class hierarchies likewise, with lots of
classes which "don't actually really do anything".


now, try to explain that this is pointless, and all that is really
needed is something like, say:
Entity/ActorBase/Human

say:
Entity: anything which may appear in the scene-graph (defines general
scene-graph entity methods);
ActorBase: an entity which may exhibit AI-controlled behaviors (defines
various AI related methods);
Human: a specific type of entity exhibiting AI-controlled behaviors.

these sorts of differences are subtle, but relevant.
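The shallow three-level hierarchy above, sketched in code (the method names are illustrative, not from any real engine):

```cpp
#include <cassert>
#include <string>

// Entity: anything which may appear in the scene graph.
class Entity {
public:
    virtual ~Entity() = default;
    virtual std::string describe() const { return "entity"; }
};

// ActorBase: an entity which may exhibit AI-controlled behaviors.
class ActorBase : public Entity {
public:
    virtual std::string think() const { return "idle"; }
};

// Human: one concrete AI-controlled entity type.
class Human : public ActorBase {
public:
    std::string describe() const override { return "human"; }
};
```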

granted, technically, a person could just make Entity and/or ActorBase
be interfaces (and Human as a raw base-class), but in my case I chose to
have them as classes (mostly as there is nothing really more-sensible
for them to inherit from).

(also, Entity may actually inherit from Object, but stating this
explicitly is usually kind-of pointless...).

well, and, my actual classes are more things like:
Entity/ActorBase/monster_soldier
and:
Entity/ActorBase/passive_sheep

and similar (there is no actual "Human" class...).


or such...
 

Rui Maciel

BGB said:
"often" is how frequently one runs into a given situation while writing
a piece of code.

issues which show up very frequently, like variable declarations or
expression syntax, may weigh more heavily than something seen a little
less often (such as a method declaration), which in turn may weigh more
heavily than a class declaration, ...

the weighting would then also be:
how often is a parent class needed;
how often is more than one parent needed (and would not be easily
achievable via interfaces);
how often are multiple parent classes needed (and not easily achievable
by an object containing instances of "parent" classes, rather than
direct inheritance);
...


the reason for this sort of weighting is more based on the estimated
level of "awkwardness" caused by having a less optimal solution in a
given situation.

if the added awkwardness of missing something is likely to be moderately
small, but the gains from not having to deal with it are greater, it may
make more sense to leave it out.

like, something which is mildly painful, but rare, is less of an issue
than something only slightly annoying, but happening all the time.


if, OTOH, the feature either has little cost, or otherwise justifies its
costs, then it makes more sense to retain it.

granted, yes, this is a little moot for C++, which already has MI.

The C++ language has a significant number of features which could also fit
your definition of not showing up very often, at least in some applications.
Some coding guidelines, similar to the one adopted by Google, explicitly ban
C++ features such as exceptions, templates, inheritance, and even dynamic
memory allocation. To the people who defined those coding guidelines, those
features would weigh nothing at all, and they would gain nothing by using
them. In fact, the reason why they've banned them is that they actually
cause problems which they wish to avoid.

Yet, it would be terribly silly to suggest that any of those features should
be removed just because someone doesn't use it, or because they may cause
problems in some applications. The same applies to multiple inheritance.

assuming a person couldn't use MI, how much annoyance would this likely
add overall?...

Replace "MI" with exceptions, templates, overloading, namespaces, STL, and
even classes, and the answer would be the same for those features as it
would be for multiple inheritance.

in languages which lack MI, the situations where a person doesn't have
MI but is doing things which "imply" MI, typically end up as a new class
containing instances of the classes it would normally inherit from.

this will typically in-turn add, maybe, a little bit of indirection (or,
maybe, a few getters/setters and/or "glue" classes), but usually this
isn't a big deal (especially if the scenario is relatively infrequent).


for example, in cases where I have written code in Java (or even C#),
there are plenty of other things which are *considerably* more annoying
than its lack of MI.

Does this say anything about MI?

object layout is a bigger issue for a compiler or a VM. if you have a
feature which adds a big set of ugliness in the compiler, but only
rarely is of much obvious benefit to programmers, does it really make
sense?...

could this effort not have been better spent on other big-ugly features
which provide more obvious / immediate benefits?...

Your comment doesn't make sense. Eliminating support for MI, or even the
time invested in developing and implementing it, wouldn't magically add
features to C++. It's a false dilemma.

FWIW, in my case, the code for dealing with all of the fun with object
layouts and "efficiently" getting/setting fields and performing method
calls is actually some of the more complex code in the VM (and I don't
even have "proper" MI). a person may well find that this is the sort of
thing prone to "grow hair" when implemented.

granted, my object and scoping model is admittedly a bit more "advanced"
(IOW: overly complex) than the models used by Java or C# (probably
doesn't help matters).

as well, the tradeoffs of being moderately high-performance,
thread-safe, and supporting dynamically modifiable class and object
layouts do not make for pretty code (internally, a lot of it is
actually a bunch of jerking around with function-pointers I have come to
term "plumbing pipe logic", partly as the control-flow of the program
works partly like "pipe dream", with the green-liquid being the main
execution path, sometimes with external logic setting up the "pipes",
and sometimes they reconfigure themselves).

actually, a fair chunk of the VM works this way, partly as "simpler"
options, like "if()" conditionals, state-variables, and "switch()"
blocks, tend to be slower and are ideally kept out of the main execution
path (even if it can make some areas of code look a bit evil). ideally,
we want the execution path to head "straight towards the goal" while
minimizing "detours".


simplification is welcome when it can have reasonably good tradeoffs.

The main reason why complexity is shoved under the hood is to add the
necessary abstraction that lets programmers pull off complex stunts through
the use of simple instructions, which make their lives easier and increase
their productivity. A worker doesn't complain that a tool is hard to build,
if it helps him perform better.

With multiple inheritance, a programmer is able to define a class by stating
a set of base classes. That takes a single line of code, which can be typed
in a couple of seconds.

The alternative to multiple inheritance is to either employ composition,
which isn't really applicable every time and its use throws polymorphism out
of the window, or use single inheritance, and be forced to create a couple
of source files for each base class. Both options are significantly worse
than multiple inheritance.
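As a minimal sketch of that "single line" (class and member names are
invented for illustration, not from any particular codebase):

```cpp
#include <cassert>
#include <string>

// two independent bases, each with its own behaviour
struct Serializable {
    virtual std::string serialize() const { return "{}"; }
    virtual ~Serializable() = default;
};

struct Drawable {
    virtual int draw() const { return 0; }
    virtual ~Drawable() = default;
};

// the single line in question: one declaration states both base classes
struct Widget : public Serializable, public Drawable {
    std::string serialize() const override { return "{\"widget\":1}"; }
    int draw() const override { return 1; }
};
```

A Widget can then be passed wherever either base is expected, with no
forwarding boilerplate in between.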


if the class is a compound object, then a person can pass the relevant
sub-member.

Not all classes are compound objects, nor can all class representations be
expressed through composition. In addition, that would negate, or at least
significantly complicate, the use of polymorphism.

many "introductions" show as "examples" things which involve overly deep
nesting in a taxonomy-like manner.

Object/Creature/Animal/Vertebrate/Mammal/Hominid/Human

then people end up designing class hierarchies likewise, with lots of
classes which "don't actually really do anything".

That approach is taken as a way to explain the concept of inheritance in a
manner which is easy to understand. It isn't intended to serve as an
example of OO best practices. If, instead of that approach, the concept of
inheritance were presented as it is more often employed in the real world,
through design patterns, then the concept would be significantly harder to
understand.

It's like riding a bicycle: just because you use training wheels to learn
how to ride one, it doesn't mean that you expect to see them being used in
the Tour de France.



Rui Maciel
 
B

BGB

The C++ language has a significant number of features which could also fit
your definition of not showing up very often, at least in some applications.
Some coding guidelines, similar to the one adopted by Google, explicitly ban
C++ features such as exceptions, templates, inheritance, and even dynamic
memory allocation. To the people who defined those coding guidelines, those
features would weigh nothing at all, and they would gain nothing by using
them. In fact, the reason why they've banned them is that they actually
cause problems which they wish to avoid.

Yet, it would be terribly silly to suggest that any of those features should
be removed just because someone doesn't use it, or because they may cause
problems in some applications. The same applies to multiple inheritance.

cases where full MI makes sense are considerably rarer than those where
exceptions or dynamic memory allocation make sense.
Replace "MI" with exceptions, templates, overloading, namespaces, STL, and
even classes, and the answer would be the same for those features as it
would be for multiple inheritance.

most of those features are, however, more frequently useful than full MI.

consider, for example, that a person writes far more classes overall than
classes which have a need for MI.

Does this say anything about MI?

it is mostly about the distribution of annoyances.

many things in Java or C# are more annoying than the absence of MI, so
the lack of MI is less of an issue...


consider, for instance, having to declare arrays as:
int[] arr=new int[256];

or, for that matter, having to manually initialize fixed-size arrays in
constructors. IMO, this counts as a considerably more annoying issue...

this problem could be largely addressed, say, if a person could write:
int[256] arr;

and then have the array, you know, just magically appear without having
to use "new".


or, maybe, them interpreting "type safety" as "must cast damn near
everything".


Your comment doesn't make sense. Eliminating support for MI, or even the
time invested in developing and implementing it, wouldn't magically add
features to C++. It's a false dilemma.

it won't add anything to C++, but it saves the cost of implementing it in
new (non-C++) languages (or in scripting languages, where a person may
write most of the core app in C++, but use a script-language for some of
the other "high-level" parts).


as for the logic of Google's rules, within C++, who knows?...

I would consider subsets, but mostly for the reason of easing processing
the code via automated tools (fewer things for the tool to worry about ->
less work in implementing the tool).


all this is likely a more direct reason why newer languages often tend
not to include it: they can get maybe 75% of the results, at
considerably less implementation effort, by using the "interfaces"
trick (since they can basically use a single-inheritance object
layout, and just use an alternate vtable to make the interfaces work).
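the restricted model those languages settle on can be sketched in C++
by allowing one implementation base plus any number of pure-abstract
bases (the "Interface" suffix echoes the Google rule quoted earlier in
the thread; all names are invented for illustration):

```cpp
#include <cassert>

// pure interface: no state, no implementation, only a contract
struct ComparableInterface {
    virtual int compareTo(const ComparableInterface& o) const = 0;
    virtual ~ComparableInterface() = default;
};

// the single implementation base carrying actual state
struct Base {
    int value = 0;
};

// single-inheritance layout for Base, plus an "alternate vtable" for
// the interface -- roughly what Java/C# implementations do internally
struct Item : public Base, public ComparableInterface {
    explicit Item(int v) { value = v; }
    int compareTo(const ComparableInterface& o) const override {
        const Item& other = static_cast<const Item&>(o);
        return (value > other.value) - (value < other.value);
    }
};
```

note that the static_cast here is an illustration-only shortcut; a real
implementation would want a checked downcast.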


the next other major area of "simplification" is eliminating much of the
numeric tower (all your numbers are now "double"), but unlike
eliminating MI, this one comes with a lot more immediate issues (poor
performance, wasted memory, absence of integer arithmetic, ...).

the "next best option" is probably the ILFDA model (used by the JVM),
where pretty much the entire type-tower (as far as the VM itself is
concerned) is largely collapsed down to 5 types (most other types are
essentially faked via syntax sugar or occasional special operations).

need unsigned integers?
special unsigned right-shift and divide operators.

need bigger (say, 128-bit) numeric and vector types?
use boxed value-type objects (kind-of lame, but good enough...).
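the "special operators" idea can be sketched in C++: keep only signed
32-bit integers in the VM, and recover unsigned behaviour with dedicated
ops, which is essentially what the JVM's iushr instruction does
(function names here are invented):

```cpp
#include <cassert>
#include <cstdint>

// unsigned right-shift over a signed representation (cf. Java's >>>)
int32_t ushr32(int32_t x, int n) {
    return (int32_t)((uint32_t)x >> (n & 31));
}

// unsigned divide over the same signed representation
int32_t udiv32(int32_t a, int32_t b) {
    return (int32_t)((uint32_t)a / (uint32_t)b);
}
```

the point is that no separate unsigned *types* are needed; the bit
pattern stays in the one integer type and only the operator changes.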


(internally, the only "major" difference between reference types and
value-types is the use of different internal "CopyRef" and "DropRef"
methods).

example, for a DUP opcode handler:
DUP:
    *ctx->stack++ = ctx->stack[-1].r->CopyRef();
(not exactly, but it gives the general idea.)
for other cases, a person might want to sidestep this, so:
DUP_F:
    *ctx->stack++ = ctx->stack[-1];
likewise for DROP and DROP_F.


the next other "simple" route is dynamic typing, but this one comes at a
larger performance-cost, and eliminating this performance cost (via type
inference), ironically makes it more complex than the use of static type
checking.

now, when a person ends up with a VM which has both static and dynamic
type-checking (as well as type-inference), complexity has gone up again,
but oh well, it adds to the programming experience at least.

The main reason why complexity is shoved under the hood is to add the
necessary abstraction that lets programmers pull off complex stunts through
the use of simple instructions, which make their lives easier and increase
their productivity. A worker doesn't complain that a tool is hard to build,
if it helps him perform better.

With multiple inheritance, a programmer is able to define a class by stating
a set of base classes. That takes a single line of code, which can be typed
in a couple of seconds.

The alternative to multiple inheritance is to either employ composition,
which isn't really applicable every time and its use throws polymorphism out
of the window, or use single inheritance, and be forced to create a couple
of source files for each base class. Both options are significantly worse
than multiple inheritance.

well, there is another partial way around this:
allow multiple classes per file.
most non-Java languages allow this one.

alternatively, Java allows hacking around this problem using
inner classes (declare classes inside another class, or inside of a
method, do composition, and implement "polymorphism" mostly by faking it
using getters).


like, instead of:
    ChildClass obja;
    BaseClass objb;
    ...
    objb = (BaseClass)obja;

a person writes:
    objb = obja.getBaseClass();

which isn't really significantly more typing.


this issue with Java is partly because Java also used the simplifying
assumption of mapping the class-name directly with the file-name, which
kind-of introduces some ugly issues.


a partial way around this is having the VM try to load "every"
step along the QName:
"foo.bar.baz.SomeClass".

resulting in the VM trying to load (say, with "bso" = bytecode object):
foo.bso
foo/bar.bso
foo/bar/baz.bso
foo/bar/baz/SomeClass.bso

and with any luck, at least one of them is likely to contain the desired
class, or include an "import" directive which in-turn imports the
relevant class.
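a minimal sketch of that probing scheme (the ".bso" suffix follows the
text above; the function name is invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// expand "foo.bar.baz.SomeClass" into every candidate file along the
// QName: foo.bso, foo/bar.bso, foo/bar/baz.bso, foo/bar/baz/SomeClass.bso
std::vector<std::string> candidatePaths(const std::string& qname) {
    std::vector<std::string> out;
    std::string path;
    size_t start = 0;
    for (;;) {
        size_t dot = qname.find('.', start);
        size_t end = (dot == std::string::npos) ? qname.size() : dot;
        if (!path.empty()) path += '/';
        path += qname.substr(start, end - start);
        out.push_back(path + ".bso");   // one probe per prefix
        if (dot == std::string::npos) break;
        start = dot + 1;
    }
    return out;
}
```

the loader would then try each candidate in order until one yields the
desired class or an import that leads to it.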


C# gets around the issue another way:
by using static linking.


a secondary issue, though, that prevents something similar from working
nicely with the JVM is the way the ".class" file-format works (fixed
header with a relatively fixed file-structure); making something similar
work well would either require changing or hacking on the file-format
(and/or some mechanism for appending multiple classes together and
loading them as a single unit).

my VM doesn't have this issue, partly as the bytecode objects have a
different file structure (the objects are essentially just a single
"constant pool" structure, with any packages/classes/fields/methods/...
declared via this structure).

unlike in the JVM, methods don't refer to the constant-pool directly,
but rather via a local / per-method "literal table", which in-turn
contains references to the constant-pool.

I had previously considered "linking" these objects into a bytecode
image, but as-is, there isn't a whole lot of point in doing so (putting
them all into a big WAD file is "sufficiently good enough").


and, if you put a nifty PE/COFF or ELF wrapper around the WAD, it can
also be made callable directly from C or C++ code (or, alternatively,
much of the code is still C and/or C++, and the script-language code can
just sort of be shoved in there alongside).

Not all classes are compound objects, nor can all class representations be
expressed through composition. In addition, that would negate, or at least
significantly complicate, the use of polymorphism.

there is still the use of single inheritance, and interfaces.

the "wisdom" is mostly that most cases which would normally need MI can
be forced into the use of interfaces.


it is much like eliminating pointers largely in favor of object references:
most of the time, a person doesn't really need pointers, just a
reference to a heap-allocated object.

so, the pointer can be replaced with a reference, and a lot of its
associated implementation complexities can be removed.

ironically, this can lead to the whole thing of faking pointer semantics
using boxed values, and maybe some syntax-sugar... (it isn't really a
built-in pointer, but can at least act mostly like one... "I can't
believe it's not pointer!").
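a hedged sketch of such an "almost-pointer" box (names invented for
illustration): copies of the box share one heap cell, so writes through
one copy are visible through the other, much like an out-parameter
pointer.

```cpp
#include <cassert>
#include <memory>

// boxed value faking pointer semantics via a shared heap cell
template <typename T>
class Box {
    std::shared_ptr<T> cell;
public:
    explicit Box(T v) : cell(std::make_shared<T>(v)) {}
    T get() const { return *cell; }
    void set(T v) { *cell = v; }
};

// the callee receives the box *by value*, yet its write is still seen
// by the caller -- the "I can't believe it's not pointer!" effect
void increment(Box<int> b) { b.set(b.get() + 1); }
```

with a little syntax sugar on top (overloading `*` and `->`, say), the
box gets even closer to looking like a real pointer.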


That approach is taken as a way to explain the concept of inheritance in a
manner which is easy to understand. It isn't intended to serve as an
example of OO best practices. If, instead of that approach, the concept of
inheritance were presented as it is more often employed in the real world,
through design patterns, then the concept would be significantly harder to
understand.

It's like riding a bicycle: just because you use training wheels to learn
how to ride one, it doesn't mean that you expect to see them being used in
the Tour de France.

but, the problem is partly that you do end up with people in-practice
writing out class hierarchies this way, and also insisting that this is
the "right" and "proper" way to do things...
 
R

Rui Maciel

BGB said:
cases where full MI makes sense are considerably rarer than those where
exceptions or dynamic memory allocation make sense.

It's quite the opposite, in fact. While your complaint regarding multiple
inheritance is only about its perceived convenience, in some domains of
application exceptions and dynamic memory allocation do cause problems.

most of those features are, however, more frequently useful than full MI.

consider, for example, that a person writes far more classes overall than
classes which have a need for MI.

Irrelevant. The only thing that matters is that people do use multiple
inheritance in C++, in some cases extensively, and it doesn't make any sense
to argue that a certain feature should be removed just because a programmer,
for some reason, doesn't use it, or at least isn't aware he does.

it is mostly about the distribution of annoyances.

many things in Java or C# are more annoying than the absence of MI, so
the lack of MI is less of an issue...
<snip/Z>

But that's completely irrelevant. Multiple inheritance is supported in C++
from the start, and just because you find that some other programming
language has other annoyances it doesn't mean that C++'s support for
multiple inheritance is somehow bad and should be removed from the language.

it won't add anything to C++, but it saves the cost of implementing it in
new (non-C++) languages (or in scripting languages, where a person may
write most of the core app in C++, but use a script-language for some of
the other "high-level" parts).

Again, that's completely irrelevant. Support for multiple inheritance in
C++ isn't more or less useful if some people find it hard to support
multiple inheritance in any other programming languages.

as for the logic of Google's rules, within C++, who knows?...

I would consider subsets, but mostly for the reason of easing processing
the code via automated tools (fewer things for the tool to worry about ->
less work in implementing the tool).


all this is likely a more direct reason why newer languages often tend
not to include it (they can get maybe 75% of the results, at
considerably less implementation effort, by using the "interfaces"
trick). (since, they can basically use a single-inheritance object
layout, and just use an alternate vtable to make the interfaces work).

I don't see your point. Interfaces only help with polymorphism, which is a
single, and very specific, application of inheritance. Inheritance,
including multiple inheritance, has significantly more to do with simplifying
how new data types are defined (more specifically, how to avoid adding
redundant code and definitions, and all the bugs that come with them) than
with defining interfaces. Case in point: mixins. Interfaces don't help
with that.
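A minimal C++ sketch of the mixin point (all names invented for
illustration): the bases below carry implementation and state, not just
a contract, so a pure-interface mechanism cannot supply them, while MI
splices both behaviours into one type with a single declaration.

```cpp
#include <cassert>
#include <string>

// a mixin carrying state and behaviour, not just an interface
struct CounterMixin {
    int count = 0;
    void bump() { ++count; }
};

// a second, independent mixin
struct TagMixin {
    std::string tag = "untagged";
    void setTag(const std::string& t) { tag = t; }
};

// MI composes both implementations; no forwarding code is written
struct Node : public CounterMixin, public TagMixin {};
```

With interfaces alone, Node would have to re-implement `bump()` and
`setTag()` (and their state) itself, which is exactly the redundant code
the quoted paragraph is talking about.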

the next other major area of "simplification" is eliminating much of the
numeric tower (all your numbers are now "double"), but unlike
eliminating MI, this one comes with a lot more immediate issues (poor
performance, wasted memory, absence of integer arithmetic, ...).

That is absurd. If there is anything wrong with the "numeric tower", it is
that it isn't tall enough. For example, support for 16, 80 and 128-bit
floating point types, which are defined in IEEE 754, is nowhere to be seen,
and for those of us who muck around with number-crunching software it would
be nice if they were available.


well, there is another partial way around this:
allow multiple classes per file.
most non-Java languages allow this one.

That means that whenever a class with N base classes is defined, for each of
those classes it is necessary to add N separate class definitions, in a
manner which is completely unnecessary, terribly inconvenient and error-
prone.


there is still the use of single inheritance, and interfaces.

the "wisdom" is mostly that most cases which would normally need MI can
be forced into the use of interfaces.

The only alternative to multiple inheritance, as far as I can tell, is a
chain of single inheritances. If there is a composition relationship
between classes and there isn't a requirement for polymorphism or any magic
based on an IS-A relationship, then including classes as data members is the
way to go, and neither single inheritance nor multiple inheritance should be
used.

but, the problem is partly that you do end up with people in-practice
writing out class hierarchies this way, and also insisting that this is
the "right" and "proper" way to do things...

That's why god invented the use of rolled-up newspapers as teaching aids.


Rui Maciel
 
B

BGB

It's quite the opposite, in fact. While your complaint regarding multiple
inheritance is only about its perceived convenience, in some domains of
application exceptions and dynamic memory allocation do cause problems.

but, these are *not* the same domains you are likely to see Java or C#
or ActionScript or similar running the show, either...


it is like being like "damn, I can't run all these Flash-based web-apps
on my embedded microcontroller!..." (and elsewhere, there will be a
general response of "so what?...").

it is along similar lines to complaining about the lack of OO features
in GLSL...


most everywhere else, dynamic memory allocation is fairly normal.

Irrelevant. The only thing that matters is that people do use multiple
inheritance in C++, in some cases extensively, and it doesn't make any sense
to argue that a certain feature should be removed just because a programmer,
for some reason, doesn't use it, or at least isn't aware he does.

I was never claiming it "should" be removed, only that there isn't much
incentive for people designing new languages to add it, either.


otherwise, even if Sun had omitted it with Java, Microsoft would have
re-added it with C#, or Adobe with ActionScript, or...


its lack of re-addition by derived languages implies a lack of incentive
for people to re-add it, and ultimately there may be a good reason:
because most of what can be done can be done "good enough" by simpler
mechanisms.


it is like the use of context-dependent syntax...
it allows a few nifty things, but comes with enough drawbacks, that
in-general language designers don't really consider it to be worth the
costs.


granted, a person could debate whether or not a language with
"interfaces which allow declaring fields and default method
implementations" would count as MI.

<snip/Z>

But that's completely irrelevant. Multiple inheritance is supported in C++
from the start, and just because you find that some other programming
language has other annoyances it doesn't mean that C++'s support for
multiple inheritance is somehow bad and should be removed from the language.

I was never saying it should...

Again, that's completely irrelevant. Support for multiple inheritance in
C++ isn't more or less useful if some people find it hard to support
multiple inheritance in any other programming languages.

and my whole thing isn't about what people do *in* C++, but what
tradeoffs people might make in the design and implementation of new
languages.

C++ already exists, so it isn't really an issue for C++, because its
core design is already pretty much settled.


so, when a person writes code in C++, they can use what features it
provides, and when writing code in another language, they will use what
features that language provides.


the only real major exception here is copy-paste porting between
languages, but this is its own "barrel of fun"...

I don't see your point. Interfaces only help with polymorphism, which is a
single, and very specific, application of inheritance. Inheritance,
including multiple inheritance, has significantly more to do with simplifying
how new data types are defined (more specifically, how to avoid adding
redundant code and definitions, and all the bugs that come with them) than
with defining interfaces. Case in point: mixins. Interfaces don't help
with that.

apparently people do use mixins in Java:
http://csis.pace.edu/~bergin/patterns/multipleinheritance.html

now, whether or not it is ugly or worthwhile to do so is another matter...

That is absurd. If there is anything wrong with the "numeric tower", it is
that it isn't tall enough. For example, support for 16, 80 and 128-bit
floating point types, which are defined in IEEE 754, is nowhere to be seen,
and for those of us who muck around with number-crunching software it would
be nice if they were available.

funny enough, my language actually does have a bigger type-tower than C,
with said 16 and 128 bit floating point types, as well as built-in
numeric vector types, ...


this doesn't mean everyone necessarily agrees though, especially
considering that there are languages which do basically collapse the
numeric tower down to doubles-only.

however, given that nearly all commonly-used languages have a numeric
tower, and many which originally lacked one (such as JavaScript,
lingering descendants of id Software's QuakeC language, ...) later added
numeric towers, it implies they are "worth the cost".

That means that whenever a class with N base classes is defined, for each of
those classes it is necessary to add N separate class definitions, in a
manner which is completely unnecessary, terribly inconvenient and error-
prone.

yes, but taking common practices as evidence, it isn't really a big
issue for most people.

The only alternative to multiple inheritance, as far as I can tell, is a
chain of single inheritances. If there is a composition relationship
between classes and there isn't a requirement for polymorphism or any magic
based on an IS-A relationship, then including classes as data members is the
way to go, and neither single inheritance nor multiple inheritance should be
used.

in common practice, it doesn't make a huge difference either way.

if an "is-a" can be easily enough faked with a "has-a", then people will
generally do so if needed.

given the typical way done in languages like Java and C#, the cases
where "is-a" can't be readily faked via a "has-a" are themselves
relatively uncommon.
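a minimal sketch of faking "is-a" with "has-a" via a getter, in the
spirit of the getBaseClass() pattern discussed earlier in the thread
(all names invented for illustration):

```cpp
#include <cassert>

// the would-be base class
struct Engine {
    int rpm = 0;
    void rev() { rpm += 1000; }
};

// has-a standing in for is-a: the "base" is just a member
struct Car {
    Engine engine;
    Engine& getEngine() { return engine; }   // replaces the upcast
    void rev() { engine.rev(); }             // forwarding keeps call sites
};
```

code that would have written `(Engine)car` now writes
`car.getEngine()`, which, as noted above, isn't significantly more
typing; what is lost is only the ability to pass a Car where an
Engine* is expected.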

That's why god invented the use of rolled-up newspapers as teaching aids.

doesn't stop it from being annoyingly common though...

it is like the people who insist that flowcharts are actually a good
idea (rather than, say, entirely pointless...).
 
B

Balog Pal

I would show a middle finger to any coding guideline that bans the use
of templates or inheritance.

That is pretty self-reinforcing stuff.

1. You have a codebase that is C++ but mostly plain imperative C, no
RAII or such, and the usual return-code football.

2. Someone picks up using exceptions, which work fine until they get mixed
with the old code, where suddenly new exit paths cause leaks and other
problems.

3. Realizing the problem, you choose to go ahead (RAII-ize the stuff and
make it exitable anywhere) or go back (banning the immediate troublemakers).

4a. You have bright and brave people who can manage the change. With a big
effort put in, the codebase gets cleaned up and becomes actually pleasant
to work with. That attracts more bright and brave people ... and eventually
you wake up.

4b. You ban exceptions for starters and roll back the recent improvement
attempt. The codebase becomes even more old-style. The author of the
change quits, followed by others. Your new hires are limited to similar
thinkers.

5. The codebase works like the usual snowball, picking up more of the
same. People who could fix it see no future or fun in working on it. The
ones who remain start to have trouble with more and more features, so you
start banning even more.

6. Repeat the descending cycle from step 1.
 
