Steven D'Aprano said:
> Which it was. It finished executing his code in almost 1/10,000th of
> the time his PC needed.
> It wasn't a mistake and it did happen.
Yes, yes, of course, it was a mistake, since
the conclusion that he wanted to draw from
this experiment was completely *wrong*.
Similarly, blind experimentalism *without*
supporting theory is mostly useless.
> The VAX finished the calculation 10,000 times faster than his PC.
> You have a strange concept of "impossible".
What about trying, for a change, to suppress
your polemical temperament? It will only lead
to quite unnecessarily long exchanges in this
NG.
But, mind you, his test was meant to determine,
*not* the cleverness of the VAX compiler *but*
the speed of the floating-point unit. So his
experiment was a complete *failure* in this regard.
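To make that failure mode concrete: the student's actual
code is not shown in this thread, so what follows is only
a minimal C sketch, with invented constants, of the shape
of benchmark that invites exactly this trap:

    int main(void)
    {
        double x = 0.0;
        long i;

        /* The same constant expression every iteration, and the
           result is never used afterwards.  An optimizing compiler
           is entitled to hoist the multiplication out of the loop,
           or to delete the loop altogether, so the measured time
           says nothing about the floating-point unit. */
        for (i = 0; i < 100000000L; i++)
            x = 1.2345 * 6.789;

        (void)x;       /* result discarded: the loop is dead code */
        return 0;
    }

Time that on a machine with a clever compiler and you are
timing an empty loop, or nothing at all.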
> Optimizations have a tendency to make a complete mess of Big O
> calculations, usually for the better. How does this support your
> theory that Big O is a reliable predictor of program speed?
My example was meant to point out how
problematic it is to accept experimental
outcomes at face value: *without* carefully
relating them back to supporting theory,
they are quite *worthless*.
This story was not about Big-Oh notation;
it was a cautionary tale about the relation
between experiment and theory more generally.
- Got it now?
> For the record, the VAX 9000 can have up to four vector processors,
> each running at up to 125 MFLOPS, or 500 in total. A Pentium III runs
> at about 850 MFLOPS. Comparing MIPS or FLOPS from one system to
> another is very risky, for many reasons, but as a very rough and
> ready measure of comparison, a four-processor VAX 9000 is somewhere
> around the performance of a P-II or P-III, give or take some fudge
> factor.
Well, that was in the late 1980s, and our VAX
most certainly did *not* have a vector
processor: we were doing work in industrial
automation at the time, not much
number-crunching in sight there.
> So, depending on when your student did this experiment, it is
> entirely conceivable that the VAX might have been faster even without
> the optimization you describe.
Rubbish. Why do you want to go off on a tangent
like this? Forget it! I just do not have the
time to start quibbling again.
> Of course, you haven't told us what model VAX,
That's right. And it was *not* important. Since the
tale has a simple moral: experimental outcomes
*without* supporting theory (be it of the Big-Oh
variety or something else, depending on context)
are mostly worthless.
> or how many processors, or what PC your student had,
> so this comparison might not be relevant.
Your going off on another tangent like this is
certainly not relevant to the basic insight
that experiments without supporting theory
are mostly worthless, I'd say...
> Precisely. And all the Big O notation in the world will not tell you
> that. Only an experiment will. Now, perhaps in the simple case of a
> bare loop doing the same calculation over and over again, you might
> be able to predict ahead of time what optimisations the compiler will
> do. But for more complex algorithms, forget it.
> This is a clear case of experimentation leading to the discovery of
> practical results which could not be predicted from Big O
> calculations.
The only problem being: it was *me*, basing
myself on "theory", who rejected the "experimental
result" that the student had accepted *as-is*.
(The student was actually an engineer, I myself
had been trained as a mathematician. Maybe that
rings a bell?)
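And in fairness: once theory has told you what went wrong,
fixing the experiment is easy. A sketch, again in C and
again with invented numbers rather than the code from the
story, of how one keeps such a loop honest is to make the
inputs opaque to the compiler and the result observable:

    #include <stdio.h>

    int main(void)
    {
        /* volatile: the compiler must reload a and b every
           iteration and cannot fold the product into a
           compile-time constant. */
        volatile double a = 1.2345, b = 6.789;
        double sum = 0.0;
        long i;

        for (i = 0; i < 100000000L; i++)
            sum += a * b;      /* result accumulated...        */

        printf("%f\n", sum);   /* ...and used, so the loop
                                  cannot be deleted            */
        return 0;
    }

Now the loop actually has to perform the multiplications,
and the timing means something.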
> I find it quite mind-boggling that you would use this as if it were a
> triumph of abstract theoretical calculation when it was nothing of
> the sort.
This example was not at all meant to be any
such thing. It was only about: "experimenting
*without* relating experimental outcomes to
theory is mostly worthless". What's more:
constructing an experiment without adequate
supporting theory is also mostly worthless.
> Or, to put it another way: your student discovered
No. You didn't read the story correctly.
The student had accepted the result of
his experiments at face value. It was only
because I had "theoretical" grounds to reject
that experimental outcome that he did learn
something in the process.
Why not, for a change, be a good loser?
> something by running an experimental test of his code
> that he would never have learnt in a million
> years of analysis of his algorithm: the VAX compiler
> was very cleverly optimized.
Ok, he did learn *that*, in the end. But he
did *also* learn to thoroughly mistrust the
outcome of a mere experiment. Experiments
(not just in computer science) are quite
frequently botched. How do you discover
botched experiments? - By trying to relate
experimental outcomes to theory.
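That check need not be deep theory, either. A
back-of-envelope plausibility test in the same vein (all
numbers here are invented for illustration, including the
assumed peak rate of a scalar late-1980s VAX) would have
flagged the student's result immediately:

    #include <stdio.h>

    int main(void)
    {
        double flops_claimed = 1.0e7;  /* operations the benchmark
                                          supposedly performed      */
        double measured_s    = 0.002;  /* the suspiciously short
                                          measured time             */
        double peak_mflops   = 1.0;    /* assumed machine peak      */

        double implied_mflops = flops_claimed / measured_s / 1.0e6;

        printf("implied rate: %.0f MFLOPS, machine peak: %.0f MFLOPS\n",
               implied_mflops, peak_mflops);
        if (implied_mflops > peak_mflops)
            printf("=> impossible: the experiment, not the machine,"
                   " produced this number.\n");
        return 0;
    }

When the implied rate exceeds what the hardware can do at
all, it is the experiment that is broken, not the laws of
arithmetic.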
Regards,
Christian