Jerry said:
[ ... ]
The result is that in most cases quicksort will be the
fastest sort. Right behind it are other O(n log n) methods.
I prefer mergesort with the data read into lists, because
then I don't have to worry in advance about the size of the
input to be sorted. In addition, mergesort is stable
(quicksort is not).
The most common implementation of Quicksort for arrays is
unstable -- but a Quicksort on an array _can_ be written to
be stable if you want to badly enough, and for a linked
list, it's quite easy to make it stable.
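For what it's worth, here is a minimal sketch of the linked-list case (my own illustration, not code from the thread), assuming C++ and std::list. Stability comes from routing elements equal to the pivot into the right-hand part in encounter order:

```cpp
#include <list>
#include <utility>

using Item = std::pair<int, int>;  // (key, original position), sorted by key

// Sketch of a stable quicksort on std::list. The first element is the pivot;
// elements equal to the pivot go to the right-hand part in encounter order,
// after the pivot, so equal keys keep their original relative order.
void stable_quicksort(std::list<Item>& xs) {
    if (xs.size() < 2) return;
    Item pivot = xs.front();
    xs.pop_front();
    std::list<Item> left, right;
    while (!xs.empty()) {
        Item v = xs.front();
        xs.pop_front();
        (v.first < pivot.first ? left : right).push_back(v);
    }
    stable_quicksort(left);
    stable_quicksort(right);
    xs.swap(left);
    xs.push_back(pivot);
    xs.splice(xs.end(), right);  // O(1) concatenation, a list-specific win
}
```

The splice at the end is where the list helps: concatenating the sorted parts is O(1), where an array version would have to move elements around.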
My linguistic intuitions differ: Quicksort _is_ the algorithm
published as Algorithm 64 (and 63) by C.A.R. Hoare in the
Communications of the ACM. That algorithm sorts a sequence in
place and is not stable. On average it makes about 2n*ln(n)
comparisons and about (1/3)n*ln(n) exchanges. Anything that
is stable or deals with linked lists is an implementation of a
_different_ algorithm (and probably has different average
complexities).
The algorithm presented by Hoare in Algorithm 64 is very
general; it only refers to Algorithm 63 (Partition), and presumably,
anything that achieves a partition with similar properties would
be acceptable.
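For reference, the usual in-place form of that partition (a textbook-style Hoare partition sketched by me, not Hoare's original CACM text) makes the instability concrete: the two-ended scan exchanges elements without regard to the order of equal keys.

```cpp
#include <utility>
#include <vector>

// Textbook-style Hoare partition: scan inward from both ends and swap
// out-of-place pairs. Afterwards a[lo..j] holds values <= those in
// a[j+1..hi]. The long-distance swaps destroy the order of equal elements.
int hoare_partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo];
    int i = lo - 1, j = hi + 1;
    for (;;) {
        do { ++i; } while (a[i] < pivot);
        do { --j; } while (a[j] > pivot);
        if (i >= j) return j;
        std::swap(a[i], a[j]);
    }
}

// Partition, then recurse on each part -- the quicksort skeleton.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int j = hoare_partition(a, lo, hi);
    quicksort(a, lo, j);
    quicksort(a, j + 1, hi);
}
```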
I am not saying that there are no refinements or other
algorithms based on similar ideas. Some of these algorithms
are stable and some apply to linked lists. Also, I am not a
native speaker of English. Nonetheless, I feel that algorithms
have a certain identity (which makes it meaningful to say
that a certain sorting algorithm is stable); and that
implementations of an algorithm must preserve that identity
and the associated characteristics to count as implementations
_of_ that algorithm.
So you're saying that the implementation proposed by Jon Bentley
(section 10.2 of _Programming_ _Pearls_, which is just a reprint
of one of his columns in the CACM) isn't a quick sort, although
he claims it to be one. (I think his algorithm is actually
stable, although I haven't verified it; it may have problems
if there are objects equal to the pivot, but I think that
could easily be fixed.)
I don't have access to this one, so I cannot tell. I will hit a library
shortly and hunt it down -- it sounds interesting: I have a hard time
seeing a stable and fast version of the partition-recurse idea.
My understanding is that the term quick sort is applied to all
algorithms that partition, then sort each partition recursively.
I only gave my linguistic intuitions, and they differ. The way I see it,
the literature is far more specific about quicksort, and analyses of that
algorithm have been put forth which do not apply to other algorithms that
also exploit the partition-recurse idea.
Of course, sometimes algorithms are so closely related that you would call
one a variation of the other. In particular, sometimes the analysis for one
algorithm can be used to start the analysis of another. Nonetheless, I feel
that saying some sorting procedure is stable rules out the existence of
unstable implementations and vice versa.
Of course, it could be that most people have a wider notion of quicksort
and, as you pointed out, might consider any partition-recurse algorithm a
variant of quicksort.
(Also, it's trivial to implement quick sort for a list; just
implement operator[] in terms of std::advance. The adjective
"quick" might not be so appropriate then, but the algorithm
stands.)
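To make that concrete, a hypothetical accessor along those lines (my own sketch, assuming std::list<int>) -- each "subscript" is a linear walk from the head, which is exactly what costs the algorithm its adjective:

```cpp
#include <cstddef>
#include <iterator>
#include <list>

// Hypothetical operator[]-style accessor for a list: emulate random access
// with std::advance. Each call walks i nodes from begin(), so one access is
// O(i) instead of O(1) -- the algorithm is unchanged, only slower.
int& at(std::list<int>& xs, std::size_t i) {
    std::list<int>::iterator it = xs.begin();
    std::advance(it, i);  // linear traversal; lists have no random access
    return *it;
}
```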
Nonetheless, the underlying data structure does matter. For instance, the
median-of-three refinement is hard to do on lists. Also, choosing the pivot
randomly is hard on lists; falling back to a fixed position such as the head
makes the runtime quadratic on already-ordered input. So, at least the
performance characteristics of a sorting routine can vary with the
underlying data structure.
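For contrast, here is why median-of-three is natural on an array (my sketch of the standard refinement, assuming a std::vector): all three candidate reads are O(1), whereas on a list each would cost a pointer walk.

```cpp
#include <vector>

// Median-of-three pivot selection: inspect the first, middle, and last
// elements and return the middle value of the three. On an array all three
// reads are O(1); on a linked list just reaching the middle costs O(n).
int median_of_three(const std::vector<int>& a, int lo, int hi) {
    int x = a[lo], y = a[lo + (hi - lo) / 2], z = a[hi];
    if ((x <= y && y <= z) || (z <= y && y <= x)) return y;
    if ((y <= x && x <= z) || (z <= x && x <= y)) return x;
    return z;
}
```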
Best
Kai-Uwe Bux