av wrote on 07/26/06 12:33:
> I have a routine R1 that, with input A, produces output B by
> following algorithm C.
> Optimisation is: find a routine R2 that, with input A, produces
> output B by following an algorithm H that uses the minimum number
> of instructions for the calculation.
> Are we agreeing on that definition?
> Why would I not try to write the "H" routine?
To begin with, things are seldom that simple. For
example, suppose you can implement R1 in portable C but
must use assembly language for R2. This means you need
to re-implement R2 every time you move to a new machine;
if you really think optimization is The Most Important
Thing In The World, you probably need to re-implement R2
even if the new machine has the same instruction set as
the old. So it's not a question of implementing R1 vs.
R2, but a question of implementing R1 vs. all of R2a (for
Pentium IV), R2b (Xeon), R2c (Opteron), R2d (Power4),
R2e (Power5), R2f (UltraSPARC-IIIi), R2g (UltraSPARC-IV),
R2h (UltraSPARC-IV+), R2i (UltraSPARC-T1 "Niagara"), ...
Second, you must assess the likely benefits. "R2 is
faster" is not a strong enough statement; you need to know
how much faster it will be and balance that against the
extra effort required to develop and maintain it. The
question of how many times Rx will be used comes into this
assessment: I put it to you that it is of *no* use at all
to save ten nanoseconds per abort() call! (If you spend
just one minute inventing, implementing, documenting, and
testing a ten-nanosecond improvement to abort(), you will
not recoup your effort until *six billion* programs have
died horrible deaths.)
Third, there's the phenomenon that "clever" code is
more attractive to bugs than is simple code. It is easier
to see and remove a problem from the neatly-trimmed lawn
of a stretch of simple code than when it's camouflaged amid
the high grass, thorn thickets, and meandering waterways
of the wild Everglades. [Insert comment about "up to one's
rear in reptiles" here.] Even if tricksy code starts life
bug-free, it is more susceptible to acquiring bugs later on
as programmers who understand it imperfectly (and don't
realize the lack) "improve" the code to meet new requirements.
Those programmers are not some inferior subspecies of half-
wits, either: They are likely to be YOU. I believe it was
Kernighan who observed that debugging code is harder than
writing it; it follows that if you write it at the limits
of your own cleverness, it will be beyond your power to fix.
Fourth and finally, let's be clear about what kinds of
"optimization" we're considering. Replacing bubblesort with
Shellsort is not an "optimization," but a redesign using a
superior algorithm. Finding a way to re-use some of the work
of iteration N so that iteration N+1 needn't recalculate it
is a similar transformation. Changes in algorithm, changes
in data structure -- these are "finding better solution
methods," not "optimizations." The specific substitution
that sparked this sub-thread had to do with copying data
via a loop in open code instead of using memcpy(), based on
an untested assumption that the open code loop would be
faster -- that sort of thing is an "optimization" or (for
clarity) a "micro-optimization," and that sort of thing is
folly if done without measurement and a clear understanding
of the (supposed) benefits.
The days of Mel are behind us. The economics that made
his heroics worthwhile are gone -- inverted, in fact: CPU
time became cheaper than programmer time long ago. Practices
that were once essential to success have become irrelevant,
replaced by newer practices that would not have made sense in
the world as it once was. If you act as if you were still in
that world you are dreaming; wake up and smell the gigahertz!