is generic prog becoming obfuscation?


gong

hi

i recently looked at alexandrescu's book on c++, and i found it pretty
much unintelligible. i have a few points which i wonder about.

1. as a developer, it is important, from a bottom line standpoint, to
make code as transparent as possible. through its lifecycle, code
inevitably becomes larger and more complex to cope with unexpected
demands. if the starting point looks like alexandrescu's code, it
will require very specialized and costly people to maintain, and will
likely be brittle, perhaps not because of the software architecture
itself (though with such unproven techniques it is likely), but rather
because nobody will be able to do the work to get the code to adapt to
its new requirements; ultimately resulting in a rewrite. the trend
towards outsourcing means that code will likely be changing hands even
more rapidly and amongst ever more diverse groups of programmers;
unnecessarily difficult code becomes even more of an encumbrance in
this situation. what is wrong with this assessment?

2. what is the payoff of the techniques in the book to real world
problems? perhaps they are of interest to generic library writers,
but that's about all i could conclude from looking at the book.

3. i have also been looking into python, which has an emphasis on
elegance, at the cost of speed. the last release of python, 2.3, had
among its release notes, a boast about a minimum of new language
features being added. i think that c++, like java, is not as
"enjoyable" a language to use because of its ever increasing
multiparadigm approach, which i think is perhaps becoming a euphemism
for bloat. in an interview, dennis ritchie said something like he
thinks c is one of the best ways to get the speed of assembler with a
minimum of language enhancements. do you think c++'s plethora of
language features and the general structure of the language and
approach are an elegant, stimulating way to code, or do you prefer
something smaller? which is more cost effective?

4. at one time, when the object oriented model was being developed, it
was hailed as the overarching technique for specifying problems.
inheritance was a crucial technique, and many textbooks and articles
endorsed the approach single-mindedly. since then, stl provided the
realization that not everything is an object, that some things are
algorithms, for example (as opposed to "functor" objects). however it
seems to me that currently generic programming is being pushed the way
object oriented programming was a few years ago. is generic
programming still an immature concept with which we will start to find
significant shortcomings in the future? when one recalls the
conceptual stretching that was once advocated in favor of o-o, i think
this is possible.

thanks for any insights.
-gong
 

Victor Bazarov

gong said:
i recently looked at alexandrescu's book on c++, and i found it pretty
much unintelligible. i have a few points which i wonder about.

1. as a developer, it is important, from a bottom line standpoint, to
make code as transparent as possible. through its lifecycle, code
inevitably becomes larger and more complex to cope with unexpected
demands. if the starting point looks like alexandrescu's code, it
will require very specialized and costly people to maintain, and will
likely be brittle, perhaps not because of the software architecture
itself (though with such unproven techniques it is likely), but rather
because nobody will be able to do the work to get the code to adapt to
its new requirements; ultimately resulting in a rewrite. the trend
towards outsourcing means that code will likely be changing hands even
more rapidly and amongst ever more diverse groups of programmers;
unnecessarily difficult code becomes even more of an encumbrance in
this situation. what is wrong with this assessment?

What's wrong is the presumption that most C++ programmers are idiots
who don't have any knowledge of templates and that you need some
"very specialized and costly people". Let your own programmers
develop the skills that are needed for the job.
2. what is the payoff of the techniques in the book to real world
problems? perhaps they are of interest to generic library writers,
but that's about all i could conclude from looking at the book.

Unless you work in a team of 15+ people or use somebody else's libs,
you won't see any benefits. I can understand that. But not everyone
is a loner. Code reuse is extremely important and templates are the
cornerstone of code reuse.
3. [...language comparison nonsense snipped...]
do you think c++'s plethora of
language features and the general structure of the language and
approach are an elegant, stimulating way to code, or do you prefer
something smaller? which is more cost effective?

Cost-effective in what circumstances? Cost-effective in relation to
what lifetime of the product? Cost-effective in what country? I
prefer not to give answers sucked out of my thumb to questions that
are way too generic. And I bet many others feel the same way.
4. at one time, when the object oriented model was being developed, it
was hailed as the overarching technique for specifying problems.
inheritance was a crucial technique, and many textbooks and articles
endorsed the approach single-mindedly. since then, stl provided the
realization that not everything is an object, that some things are
algorithms, for example (as opposed to "functor" objects). however it
seems to me that currently generic programming is being pushed the way
object oriented programming was a few years ago. is generic
programming still an immature concept with which we will start to find
significant shortcomings in the future? when one recalls the
conceptual stretching that was once advocated in favor of o-o, i think
this is possible.

Templates allow users not to care whether what they are using is
an object or a function. That's the beauty of it. Why do some
people want extremes? It was OO, now it's not OO, now it's OO and
nothing else again? Why does it have to be one or the other and
not both? There is nothing in generic programming that suggests
that functions are bad. Yes, it uses the benefits of OO concepts
like inheritance. Yes, we have only been exploring generic
programming for the past six or seven years, simply because there
were no tools that allowed us to do so before. Why do you want
some kind of guarantee that there will be no shortcomings? In my
book such an expectation is unrealistic. Let's deal with the
shortcomings when they show themselves.
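
To illustrate the point about objects and functions with a minimal
sketch (the names here are mine, made up for the example):

#include <iostream>

// a plain function...
int twice(int x) { return 2 * x; }

// ...and a function object doing a similar job
struct Triple { int operator()(int x) const { return 3 * x; } };

// the template does not care which one it receives
template<typename Func>
int apply(Func f, int x) { return f(x); }

int main()
{
    std::cout << apply(twice, 5) << '\n';    // plain function: prints 10
    std::cout << apply(Triple(), 5) << '\n'; // function object: prints 15
    return 0;
}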

It's all in the attitude. If you take the book and understand almost
nothing in it, you can throw it in the bit-bucket and exclaim, "Bah,
how can this gibberish be the Next Big Thing?". Or you could see
_your_own_ shortcomings [before they affect anybody else] and go
learn what seems to be something that only "very specialized and
costly people" know at this time...

Victor
 

galathaea

:
: i recently looked at alexandrescu's book on c++, and i found it pretty
: much unintelligible. i have a few points which i wonder about.
:
: 1. as a developer, it is important, from a bottom line standpoint, to
: make code as transparent as possible. through its lifecycle, code
: inevitably becomes larger and more complex to cope with unexpected
: demands. if the starting point looks like alexandrescu's code, it
: will require very specialized and costly people to maintain, and will
: likely be brittle, perhaps not because of the software architecture
: itself (though with such unproven techniques it is likely), but rather
: because nobody will be able to do the work to get the code to adapt to
: its new requirements; ultimately resulting in a rewrite. the trend
: towards outsourcing means that code will likely be changing hands even
: more rapidly and amongst ever more diverse groups of programmers;
: unnecessarily difficult code becomes even more of an encumbrance in
: this situation. what is wrong with this assessment?

It sounds very similar to what someone would have said about OOD when it was
in its infancy. Yes, the methods are not often taught in an undergrad
environment yet (though this is definitely changing), but they don't require
years of intensive solitary study to begin to use them either. There is
nothing more brittle about proper generic programming than any other form of
programming. In fact, with translation-time asserts and the limited member
introspection and concept checking that generic programming brings to one's
arsenal of tricks, code can often be made much more sound through compiler
contracts than without GP. And there is nothing unproven about such
techniques either. They have been studied from both a practical and
mathematical point of view in quite some detail.
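
As a minimal sketch of a translation-time assert, in the spirit of the
pre-standard tricks found in the literature (the names CompileTimeError
and STATIC_CHECK are illustrative only):

// undefined for false, defined only for true:
template<bool> struct CompileTimeError;
template<> struct CompileTimeError<true> {};

// instantiating the false case refuses to translate
#define STATIC_CHECK(expr) (CompileTimeError<(expr) != 0>())

int main()
{
    STATIC_CHECK(sizeof(long) >= sizeof(int));    // fine, compiles
    // STATIC_CHECK(sizeof(char) > sizeof(long)); // rejected at translation time
    return 0;
}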

: 2. what is the payoff of the techniques in the book to real world
: problems? perhaps they are of interest to generic library writers,
: but that's about all i could conclude from looking at the book.

As Victor Bazarov has already mentioned in this thread, code reuse is one of
the greatest benefits of GP. That does not apply only to third-party
libraries, but to one's own application framework as well. GP assists in
realizing the many new technologies of generative programming becoming so
popular today, including partially handling such things as Aspect-Oriented
Design. One can save a lot of time (= money) using GP to assist in
generating code during the design process. For example, Alexandrescu's book
was one of the most successful attempts at turning some very popular design
patterns into a generically usable library, which can prevent having to
recode the core of these patterns over and over again while developing a
product (for example, the Abstract Factory that pops up in so many designs).
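
A bare-bones sketch of the idea (this is not Alexandrescu's actual
interface, just an illustration of writing a pattern's core once;
Widget and FancyWidget are made up):

// the pattern's core, written once per product type
template<class Product>
struct AbstractFactoryUnit {
    virtual ~AbstractFactoryUnit() {}
    virtual Product* create() = 0;
};

// a reusable concrete factory, parameterized on the concrete product
template<class Product, class Concrete>
struct FactoryUnit : AbstractFactoryUnit<Product> {
    Product* create() { return new Concrete; }
};

// hypothetical products, purely for illustration
struct Widget { virtual ~Widget() {} };
struct FancyWidget : Widget {};

int main()
{
    FactoryUnit<Widget, FancyWidget> factory;
    Widget* w = factory.create(); // client code sees only the Widget interface
    delete w;
    return 0;
}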

: 3. i have also been looking into python, which has an emphasis on
: elegance, at the cost of speed. the last release of python, 2.3, had
: among its release notes, a boast about a minimum of new language
: features being added. i think that c++, like java, is not as
: "enjoyable" a language to use because of its ever increasing
: multiparadigm approach, which i think is perhaps becoming a euphemism
: for bloat. in an interview, dennis ritchie said something like he
: thinks c is one of the best ways to get the speed of assembler with a
: minimum of language enhancements. do you think c++'s plethora of
: language features and the general structure of the language and
: approach are an elegant, stimulating way to code, or do you prefer
: something smaller? which is more cost effective?

c++ can even boast that much of its success comes from the minimal and
infrequent changes to the language, and I often see the participants of the
standardization process going to great lengths to keep valid code valid and
to prefer library additions where core language changes are not necessary.
However, c++ has done much to improve itself in the past by adjusting to the
ever-changing theory and practice of coding. Templates were one of the great
additions in the last language update, and much of the standard library,
from the wonderful containers to the updated stream mechanism and the traits
mechanisms found there, shows just how useful the feature really is (which
many had already seen in Ada's generics and related earlier examples).
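
To illustrate the traits idea, here is a minimal sketch built on
std::iterator_traits (the sum function is made up for the example; the
traits class lets one algorithm ask any iterator, even a raw pointer,
for its value type):

#include <iterator>
#include <list>

template<typename Iter>
typename std::iterator_traits<Iter>::value_type
sum(Iter first, Iter last)
{
    typename std::iterator_traits<Iter>::value_type total = 0;
    for (; first != last; ++first)
        total += *first;
    return total;
}

int main()
{
    int a[] = { 1, 2, 3 };
    std::list<int> l(a, a + 3);
    return sum(a, a + 3) == sum(l.begin(), l.end()) ? 0 : 1; // works for both
}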

But speed is another reason why templates are so useful. In fact, clever GP
techniques and translation-time data structures can often increase the speed
of complex data types and their manipulation over that of C. As well, the
greater type information available to the compiler gives the optimization
process more to work with, and additional optimizing transformations become
possible.
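
A minimal sketch of a translation-time computation, using the classic
factorial example (the compiler folds the whole thing into a constant,
so there is no run-time cost at all):

#include <iostream>

template<unsigned N>
struct Factorial { enum { value = N * Factorial<N - 1>::value }; };

template<>
struct Factorial<0> { enum { value = 1 }; };

int main()
{
    std::cout << Factorial<6>::value << '\n'; // prints 720, computed during translation
    return 0;
}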

: 4. at one time, when the object oriented model was being developed, it
: was hailed as the overarching technique for specifying problems.
: inheritance was a crucial technique, and many textbooks and articles
: endorsed the approach single-mindedly. since then, stl provided the
: realization that not everything is an object, that some things are
: algorithms, for example (as opposed to "functor" objects). however it
: seems to me that currently generic programming is being pushed the way
: object oriented programming was a few years ago. is generic
: programming still an immature concept with which we will start to find
: significant shortcomings in the future? when one recalls the
: conceptual stretching that was once advocated in favor of o-o, i think
: this is possible.

They are perfectly and completely complementary. GP allows one to get as
much work as possible accomplished during the translation process, and OOD
allows for runtime behavior adaptations. Both paradigms support code reuse,
and together provide some of the strongest idioms and patterns of code reuse
the developer has available. That is extremely important to real world
design and development, as it translates into money saved. But of course,
GP is not the end of the line for advancing the way we use the language. It
is merely a way to more fully use the capabilities of the language.
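
A minimal sketch of the two paradigms cooperating (a generic standard
container, reused at translation time, holding objects whose behavior
adapts at run time; Shape and Square are made up for the example):

#include <iostream>
#include <vector>

struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0; // run-time adaptation through the vtable
};

struct Square : Shape {
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
    double side;
};

int main()
{
    std::vector<Shape*> shapes;      // std::vector: generic, translation-time reuse
    shapes.push_back(new Square(2.0));
    double total = 0.0;
    for (std::vector<Shape*>::size_type i = 0; i < shapes.size(); ++i) {
        total += shapes[i]->area();  // dispatched dynamically
        delete shapes[i];
    }
    std::cout << total << '\n';      // prints 4
    return 0;
}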
 

C Johnson

Victor Bazarov said:
It's all in the attitude. If you take the book and understand almost
nothing in it, you can throw it in the bit-bucket and exclaim, "Bah,
how can this gibberish be the Next Big Thing?". Or you could see
_your_own_ shortcomings [before they affect anybody else] and go
learn what seems to be something that only "very specialized and
costly people" know at this time...

I must admit that I purchased this book and had to go through it three
times before I adopted the mindset you speak of, Victor. I can also
say that I no longer view this book as a kind of "academic pursuit"
like when I first ripped through it. For example, I finally got
template template parameters. I ran tests on some old code of mine
that used dynamic binding versus a rewrite of the same code using
static binding with policy classes. In the simple tests I performed,
the dynamic version took almost a full minute versus nine seconds for
the templated design. So while it took me three months to learn all
this "specialized & costly knowledge", it most certainly translated
into real-world use and is not just a new toy. Another example is an
observer/subject pattern I implemented using GP with templates. The
above-mentioned techniques are remedial compared to what can be done
with GP, and served only as a metric of how much I truly didn't
understand their power.
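
Just to give the flavor of the comparison, here is a bare-bones sketch
along the lines of what I tested (made up for this post, nothing like
the real code):

#include <iostream>

// dynamic binding: the behavior is chosen at run time through a vtable
struct Logger {
    virtual ~Logger() {}
    virtual void log(const char* msg) = 0;
};
struct ConsoleLogger : Logger {
    void log(const char* msg) { std::cout << msg << '\n'; }
};

// static binding: the behavior is a policy chosen at translation time
struct ConsoleLogPolicy {
    void log(const char* msg) { std::cout << msg << '\n'; }
};
template<class LogPolicy>
class Worker : private LogPolicy {
public:
    void run() { this->log("working"); } // resolved (and inlinable) statically
};

int main()
{
    ConsoleLogger impl;
    Logger* dyn = &impl;
    dyn->log("working");             // indirect call through the vtable

    Worker<ConsoleLogPolicy> w;
    w.run();                         // no indirection at all
    return 0;
}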

In essence I was guilty of issuing my own death sentence in this field
of work: closing my mind to new ideas and techniques. Glad I woke up
to this fact. It is not the be-all, end-all paradigm, but my toolbox
just got a lot more functional.

YMMV of Course,

C Johnson
 

Jerry Coffin

hi

i recently looked at alexandrescu's book on c++, and i found it pretty
much unintelligible. i have a few points which i wonder about.

When looking at code like Andrei's (just to use your example) you need
to keep (at least) a couple of things in mind: first of all, it's
entirely possible to USE code like Andrei's without understanding the
details of how it works internally.

Second, although it may be difficult to read, keep in mind that much of
it is really doing things that quite a few intelligent people had
studied for years, and concluded simply could NOT be done.
1. as a developer, it is important, from a bottom line standpoint, to
make code as transparent as possible.

True -- but transparency has to be viewed on a global level. I.e. the
fact that one line of Andrei's code is dense doesn't necessarily mean a
lot. First of all, people's short-term memory is extremely limited, but
their long-term memory is much less so.

This means that if you can learn a "library" of things and deal with
most code in terms of that existing body of knowledge, you've really
gained a great deal, even if code using that library might not lead to
code that initially appears the most transparent.

Second, going back to the first point above, even though Andrei's code
itself isn't necessarily very transparent, code that uses it will often
be substantially _more_ transparent than code that doesn't use it (or at
least something reasonably similar).
through its lifecycle, code
inevitably becomes larger and more complex to cope with unexpected
demands.

I don't consider this inevitable -- in fact, one of the basic ideas of
generic programming is to allow software to deal with unexpected demands
_without_ adding size and complexity. Even when size or complexity does
get added, generic programming can help reduce the _rate_ at which it
has to be added.
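
Here's a trivial sketch of what I mean (one generic function copes with
a type it was never written for, without changing at all; largest is a
made-up example):

#include <iostream>
#include <string>

template<typename T>
const T& largest(const T& a, const T& b) { return b < a ? a : b; }

int main()
{
    std::cout << largest(3, 7) << '\n';     // int: prints 7
    std::cout << largest(2.5, 1.5) << '\n'; // double: prints 2.5
    // an "unexpected demand" -- strings -- needs no change to largest():
    std::cout << largest(std::string("ab"), std::string("cd")) << '\n';
    return 0;
}
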
2. what is the payoff of the techniques in the book to real world
problems? perhaps they are of interest to generic library writers,
but that's about all i could conclude from looking at the book.

From one viewpoint you're right, but from another you're entirely wrong.
It's true that direct use of most of these techniques is likely to be
restricted primarily to other people writing generic libraries (at least
for a while).

That's not where the payoff lies though: another library is just another
library. The payoff can only happen when that library is put to use.
Even if you choose not to learn the techniques and use them directly
(and most people won't), that doesn't deprive you of their benefit -- by
using libraries built with these techniques, you may benefit heavily from
them, even though you never learn to use them directly.
3. i have also been looking into python, which has an emphasis on
elegance, at the cost of speed. the last release of python, 2.3, had
among its release notes, a boast about a minimum of new language
features being added. i think that c++, like java, is not as
"enjoyable" a language to use because of its ever increasing
multiparadigm approach, which i think is perhaps becoming a euphemism
for bloat.

C++ is _far_ from "ever increasing" -- in reality, the standard was
finalized roughly 5 years ago, and since then, there's been NO
substantial change at all (there's been one amendment to the standard,
but it doesn't really add new features -- it's basically devoted to
fixing wording problems to make the standard require what was really
intended to start with).

Even when (and it is when, not if) the next C++ standard is written, I
would _not_ expect to see a large number of new features added to the
language. Almost any suggestion for an addition to the language will be
scrutinized very closely, and the feature added _only_ if it really
can't be done in the existing language, at least not without great
difficulty.
in an interview, dennis ritchie said something like he
thinks c is one of the best ways to get the speed of assembler with a
minimum of language enhancements.

"best" is clearly a value judgement, not a statement of fact. C
provides a set of features that works well for quite a few problems.
Nonetheless, it has shortcomings in some areas, and C++ addresses some
of those shortcomings. I certainly wouldn't claim that C++ is the
ultimate programming language, but in an awful lot of situations I (for
one) would choose C++ over C with little or no hesitation.
do you think c++'s plethora of
language features and the general structure of the language and
approach are an elegant, stimulating way to code, or do you prefer
something smaller? which is more cost effective?

I feel a lot the way many people appear to: I kind of wish C++ was
smaller and more elegant -- but I'm also convinced that adding a couple
of new features would _really_ improve things as well...

[ ... ]
however it seems to me that currently generic programming is
being pushed the way object oriented programming was a few years ago.

It's (nearly?) inevitable that a useful technique will attract at least
a few supporters who will make unrealistic claims for it; that doesn't
mean the technique is not useful.
 

emerth

It might be fair to say generic programming can cause
confusion - but obfuscation is a different thing. That is
solved by learning your generic programming and/or the area
of knowledge & concepts where you will use the generic
approach. Generic & template programming are IMVHO
fundamentally about abstracting a problem or class of
problems. (note 1)

Why is this:

typedef int CoolNameForAnArrayIndexVariable;

considered less obscure than this:

typedef adjacency_list<vecS, vecS, bidirectionalS> Graph; // (note 2) ?

Only because one has to learn what an adjacency_list,
a vecS and a bidirectionalS are and what they mean. Whereas
(hopefully) everyone using C++ knows what an int is.

I suggest that the second typedef example is powerful and worth
the work to understand it because it encapsulates a really large
chunk of knowledge and paradigm regarding graphs. Assuming of
course that you care about graphs. ;-)
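
For the curious, a minimal sketch of putting that typedef to work
(assuming the Boost Graph library is installed):

#include <boost/graph/adjacency_list.hpp>
using namespace boost;

typedef adjacency_list<vecS, vecS, bidirectionalS> Graph;

int main()
{
    Graph g(3);        // three vertices, indexed 0..2
    add_edge(0, 1, g); // the calls read the same no matter which
    add_edge(1, 2, g); // storage selectors the typedef picked
    return 0;
}
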
If you're considering writing your own generic code I guess you
have to decide if the knowledge domain you are working in merits the
approach - it may not.

--

note 1: Here is a reprint of an interview with Alexander
Stepanov where he talks about the creation of the STL and about generic
programming: http://www.sgi.com/tech/stl/drdobbs-interview.html
This interview probably would address your question better
than anything.

note 2: I lifted that line from the docs for the Boost
Graph library. I love the Boost Graph library.
 

gong li

Victor said:
It's all in the attitude. If you take the book and understand almost
nothing in it, you can throw it in the bit-bucket and exclaim, "Bah,
how can this gibberish be the Next Big Thing?". Or you could see
_your_own_ shortcomings [before they affect anybody else] and go
learn what seems to be something that only "very specialized and
costly people" know at this time...

thanks for the input. i guess the basic problem is what to throw
one's time into. at one time i thought prolog was great, the "Next
Big Thing", and i wrote a bunch of code with it. i still think it's
a remarkable language, and occasionally find myself outlining the
approach to a problem by framing it as prolog code first, but overall
i overestimated its significance. over time, i've gleaned that
regardless of the size of the project, libraries, or language, these
are always in such a rapid state of flux that as soon as some code is
written with some dogma or other it becomes "old school." finally,
clarity of the code has always seemed to me to be the test of its
maintainability, and i, so far, have found the alexandrescu approach
impenetrable. unclear code, when handed off (as it inevitably is),
seems always to become either a morass of monstrous and buggy code,
or simply rewritten, eliminating the "code reusability". on the other
hand, code written with "outmoded" techniques, for example just
straight c or fortran code, but written simply, efficiently, and with
a minimum of "neat tricks", i think can withstand the shocks of time
slightly better. however this mentality, taken too far, can lead to
stagnation; old fortran programmers are notoriously averse to
learning anything like c++ and thus become obsolete and inefficient.

in the design and evolution of c++, stroustrup indicates that he
became aware of stl shortly before standardization and there was a
time crunch of work to get it incorporated into the language. i find
the stl to be "nice" and "promising," and have incorporated some of
the boost libraries into my code, but i've always found it to be a
bit of a disappointment. for example, transform is, to me, one of
the most important algorithms, but it is practically useless because
of the absence of lambda functions. this is partly addressed by the
boost lambda library, but again, as one uses it one finds its
shortcomings. i end up viewing this type of thing as not fully
thought out and "inelegant," requiring some specialized knowledge
which i can be sure will be totally irrelevant in a few years and
thus a drain on my time, rather than a productivity and code reuse
enhancement.
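
for example, the kind of thing i mean (a minimal sketch using the boost
lambda placeholder _1; it works, but only for expressions the library's
overloaded operators can spell):

#include <algorithm>
#include <vector>
#include <boost/lambda/lambda.hpp>

int main()
{
    using namespace boost::lambda;
    std::vector<int> v(5, 2);
    // _1 stands for the current element; no named functor needed
    std::transform(v.begin(), v.end(), v.begin(), _1 * 3 + 1);
    return 0;
}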
 
