James Kuyper said:
Pascal J. Bourguignon wrote:
...
You're assuming that the program will only be run once. Most of my
programs get run several thousand times per day, each copy using up
about 5 minutes of CPU time per run. I spend about a month per year
rewriting each program to keep pace with changing requirements. The
trade-offs are a little bit different when that is the case.
It's interesting that you don't say what the new trade-offs are --
probably because they are hard to characterise.
I've never liked these discussions where programmer time and run time
are equated (or traded off). They may be measured using the same
units but they are very different things. Even when you try to
monetise them (so that they have some sort of "equality of value") the
results are usually contrived. Take the figures above: a few thousand
runs a day at five CPU-minutes each comes to a few hundred CPU-hours a
day, but weighing that against a programmer-month of rewriting needs
price assumptions you can pick almost at will.
For example, I've re-written software (at the expense of the
organisation employing me) just to try out something new. The program
did not run much faster but I became a slightly better programmer for
having done it. The costs and benefits can be very complex.
On the run-time side, the software on my phone runs for hours every
day (and in thousands of copies round the world), but even doubling
its speed won't help me (or anyone else) because it is fast enough.
I once wrote some logging software that was absolutely fast enough
right up until the database insert started to take longer than the
required logging period, at which point it became hopelessly slow. All
seconds are of equal length but some are more equal than others.
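To make that cliff concrete, here is a toy simulation (all the numbers
are invented, nothing like the real system): each logging period
supplies a fixed time budget, each insert consumes a fixed cost, and
the backlog is whatever is left over. Below the threshold the backlog
drains to zero; above it, every event leaves the queue a little longer
than before:

/* Hypothetical numbers throughout.  Events arrive once per period;
 * while insert cost <= period the backlog stays bounded, and the
 * moment it exceeds the period the backlog grows without limit. */
#include <stdio.h>

int main(void)
{
    const double period_ms = 100.0;          /* required logging period */
    const double insert_ms[] = { 50.0, 99.0, 101.0, 150.0 };
    const long events = 100000;

    for (int i = 0; i < 4; i++) {
        double backlog = 0.0;                /* ms of unprocessed work */
        for (long e = 0; e < events; e++) {
            backlog += insert_ms[i];         /* new insert queued      */
            backlog -= period_ms;            /* time available this period */
            if (backlog < 0.0)
                backlog = 0.0;               /* idle; queue is empty   */
        }
        printf("insert=%6.1f ms  backlog after %ld events: %.0f ms\n",
               insert_ms[i], events, backlog);
    }
    return 0;
}

Which is the point: a second shaved off the insert is worth nothing on
one side of the threshold and everything on the other.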
I suspect this is another dodgy analogy, borrowed from the world of
continuous systems, that does not translate well into the discrete and
discontinuous world of software.