Returning values from a function

Martijn Lievaart

Doing a few simple modifications (passing by const reference instead
of passing by value, using += instead of +, using std::string instead
of some homegrown string class) sped the program up significantly,
to the point where it was faster than either of the other two languages.

Moral: while not caring about technical details is a good habit, one
nevertheless has to know one's toolkit to select the right tool for
the right job. And of course: knowing typical idioms helps.

Yes, I understated (is there such a word?) the gain one can get; in some
cases it /can/ be significant.

Also, by using simple idioms (passing by const reference is very important,
and not only for efficiency, btw) one can avoid some performance penalties.

But I tried to echo your sentiment about the obsession with speed some
newbies have. Hey, I program 95% in Perl nowadays; I seldom reprogram
anything in another language for speed. Speed is just seldom an issue anymore.

But to the OP: there are some easy ways to avoid inefficiencies.
F.I. knowing about passing by value and passing by reference can save
potential copies, and those copies can be expensive. As you have to learn
about these issues anyhow, you'll learn to avoid needless copies as well.

M4
 
Gavin Deane

Gene Wirchenko said:
How does one compute digital sums?

You can follow the method implied by the definition: compute the
sum of the digits and use that as the new number, repeating until you
get a single-digit result. You can focus on coding something that
performs this algorithm very efficiently.

Or you could use a different algorithm. The digital sum is little more
than a wrapper around a single modulo-9 operation.

You are rather less likely to come up with the second method
unless you know your tools.
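For concreteness, here is a sketch of both approaches (assuming a
non-negative integer; the second uses the standard digital-root
identity 1 + (n - 1) % 9 for n > 0):

#include <iostream>

// Method 1: follow the definition -- sum the digits repeatedly
// until only a single digit remains.
int digital_sum_by_definition(unsigned long n)
{
    while (n > 9)
    {
        unsigned long sum = 0;
        for (unsigned long m = n; m > 0; m /= 10)
            sum += m % 10;
        n = sum;
    }
    return static_cast<int>(n);
}

// Method 2: the "know your tools" version -- one modulo-9 step.
int digital_sum_by_modulo(unsigned long n)
{
    return n == 0 ? 0 : static_cast<int>(1 + (n - 1) % 9);
}

int main()
{
    unsigned long tests[] = {0, 9, 38, 12345};
    for (unsigned long n : tests)
        std::cout << n << ": " << digital_sum_by_definition(n)
                  << " " << digital_sum_by_modulo(n) << '\n';
}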

Although there will be times when clear, readable code is more
important than using the fastest algorithm. If the fast code is
unclear and requires significant documentation to explain it, that can
be a Bad Thing.

Digital sum is probably not a very good specific example, but the
principle is valid.

GJD
 
Martijn Lievaart

Premature optimization is a sin.
Absolutely.

Write code that is easy to read, understand and maintain.
Absolutely.

Exhaust your compiler's optimization options first.
Shop around for a better optimizing compiler second.
Only then should you run your profiler
and consider cobbling your code to make it run faster.

I don't completely agree here.

First, investigate it. Is it really the program that's causing the delay?
F.I. network protocols without some form of pipelining are often a
bottleneck. I've seen cases where the network card was experiencing
congestion because of a low MTU. Or is disk I/O the bottleneck? Sometimes
you cannot tell without a profiler, but I've found that fairly simple
measurements can give a lot of insight.

I've seen optimization options introduce compiler bugs too often to
play with them lightly. Also, deviating from the default optimization level
most often gives only marginal gains, though for some programs the savings
can be enormous.

Another compiler is often not an option. F.I. open source often only
compiles with gcc. OTOH, it's an option that is often overlooked. Intel
makes some fairly good compilers that should be drop-in replacements for
others (most notably MSVC). A big win if you do number crunching.

And before switching compilers, you should profile. More often than not,
some fairly simple changes can make a huge difference. More intelligent
algorithms are the number one efficiency gain, often speeding up a program
by several orders of magnitude. Switching to non-blocking I/O is another
good one sometimes. I once found through profiling that a program was
spending about 20% of its time converting UTF-8 to and from UCS32. Some
simple caching of these conversions (a five-minute patch; isn't
encapsulation wonderful) saved that 20% almost completely.
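The kind of caching meant here might look roughly like this (a minimal
sketch; the conversion routine and the idea of memoizing whole strings
are assumptions for illustration, not the actual patch):

#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the expensive UTF-8 <-> UCS32 conversion the
// profiler pointed at (here a trivial ASCII-only placeholder).
std::u32string convert_utf8_to_ucs32(const std::string& utf8)
{
    return std::u32string(utf8.begin(), utf8.end());
}

// Memoizing wrapper: repeated conversions of the same string hit the cache
// instead of redoing the work.
const std::u32string& cached_convert(const std::string& utf8)
{
    static std::unordered_map<std::string, std::u32string> cache;
    auto it = cache.find(utf8);
    if (it == cache.end())
        it = cache.emplace(utf8, convert_utf8_to_ucs32(utf8)).first;
    return it->second;
}

int main()
{
    std::cout << cached_convert("hello").size() << '\n';   // converted, then cached
    std::cout << cached_convert("hello").size() << '\n';   // served from the cache
}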

Just doing a code review can be helpful; sometimes you find that you copy
objects needlessly. Sometimes it helps to break encapsulation to avoid
copies. But first profile, to see whether this is where the bottlenecks are.

But the number one piece of advice is still: measure, measure, measure.
Start with some real-world tests and time them. Zoom in on the bottlenecks;
some print statements in the code can be a cheap and effective profiler.
Don't assume, measure. Programmers are notoriously bad at guessing where
the time goes, so you need hard facts.
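A minimal example of that kind of cheap, do-it-yourself timing (a sketch
using std::chrono; the function being timed is just a stand-in):

#include <chrono>
#include <iostream>

// Hypothetical stand-in for the code under suspicion.
void do_work()
{
    volatile long sum = 0;
    for (long i = 0; i < 10000000; ++i)
        sum += i;
}

int main()
{
    using clock = std::chrono::steady_clock;

    auto start = clock::now();
    do_work();
    auto stop = clock::now();

    std::cout << "do_work took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}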

M4
 
Jerry Coffin

[ ... ]
Although there will be times when clear, readable code is more
important than using the fastest algorithm.

In my experience, you can expand this a bit: in many cases the
theoretically fastest algorithm is a truly heinous beast -- large,
complex, fragile and difficult to get truly correct. In most cases
there is also, however, an algorithm that's very close to as fast, but
is MUCH smaller, simpler and easier to get right. The vast majority of
the time, the latter is the one you really want.

A case in point would be string searching: Boyer-Moore string searching
is theoretically optimal -- it reduces the number of comparisons
involved to the minimum possible level for the job it does.
Unfortunately, it's sufficiently complex that as far as I can tell,
nobody published a correct implementation for at least 20 years after
the algorithm itself was published.

The obvious alternative is Sunday's variant of Boyer-Moore-Horspool --
which will often be faster in reality, because its setup is quicker.
It's also enough simpler that you can often progress from an explanation
of the algorithm to a correct implementation in less than an hour.
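For the curious, a sketch of Sunday's variant from the textbook description
(this is not Jerry's code; the names and the byte-oriented shift table are
my own choices):

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Sunday's variant: the shift after a mismatch is decided by the text
// character just *after* the current window.
std::size_t sunday_search(const std::string& text, const std::string& pat)
{
    const std::size_t n = text.size(), m = pat.size();
    if (m == 0) return 0;
    if (m > n) return std::string::npos;

    // For each byte value, how far to slide the window when that byte
    // follows the window. Default: slide past the whole pattern.
    std::vector<std::size_t> shift(256, m + 1);
    for (std::size_t i = 0; i < m; ++i)
        shift[static_cast<unsigned char>(pat[i])] = m - i;

    for (std::size_t s = 0; s + m <= n; )
    {
        if (text.compare(s, m, pat) == 0)
            return s;                     // match found
        if (s + m == n)
            break;                        // no character after the window
        s += shift[static_cast<unsigned char>(text[s + m])];
    }
    return std::string::npos;
}

int main()
{
    std::string text = "the quick brown fox";
    std::cout << sunday_search(text, "brown") << '\n';   // prints 10
}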
 
