Eric Sosman
Arne said:
Wanja Gayk wrote:
(e-mail address removed) says...
Wanja Gayk wrote:
Here's the proof: Sorting an array of positive integers in O(n) time: ...
Very nice! Would you care to try this approach on a shorter
input array, like
data = new int[] { Integer.MAX_VALUE };
This case should be quite simple, since the array is already sorted.
Let us know how you make out, will you?
I didn't say it works for any array out there, did I?
By using big-O notation, you said something about the behavior of the
algorithm as the size of the data set tends to infinity, so yes, you did.
No.
But a solution is a bit more practical if it does.
Wanja was making a joke. The performance of an algorithm on one
particular data set is irrelevant to a discussion of asymptotic
performance.
What about correctness? Try his code on the one-element array
I offered: I guarantee that performance will not be an issue. (In
particular, I guarantee you will not get a correct result *and* you
will not run out of memory.)
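The code behind the quoted "proof" is elided above, but a pigeonhole-style counting sort of positive ints usually runs along these lines (a sketch of my own for illustration, not the original posting):

// Sketch only: a pigeonhole ("counting") sort of positive ints,
// the general kind of technique under discussion; not the code
// from the original post, which is not reproduced above.
class PigeonholeSortSketch {
    static void sort(int[] data) {
        int max = 0;
        for (int v : data) {
            max = Math.max(max, v);
        }
        // For data = { Integer.MAX_VALUE }, max + 1 overflows to a
        // negative length, so this line throws NegativeArraySizeException;
        // even without the overflow the array would need about 8 GB.
        int[] counts = new int[max + 1];
        for (int v : data) {
            counts[v]++;
        }
        int i = 0;
        for (int value = 0; value <= max; value++) {
            for (int c = counts[value]; c > 0; c--) {
                data[i++] = value;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = { 3, 1, 2 };
        sort(data);
        System.out.println(java.util.Arrays.toString(data)); // [1, 2, 3]
        // sort(new int[] { Integer.MAX_VALUE }); // fails as described above
    }
}

On data = new int[] { Integer.MAX_VALUE } this sketch never even reaches the memory wall: the overflowed allocation fails before any sorting happens, so there is neither a correct result nor an OutOfMemoryError.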
Bucket sort is O(n) in n.
It depends on what you choose to count, and how. Question: How
much time does it take to select among K alternatives? You need
at least lg(K) decisions. Even if those decisions are made at the
level of individual transistors, you need more time to coordinate
a thousand transistors than to coordinate three. In hardware, this
shows up as a slower clock speed, and in other ways as well, so it
is an error to consider "time to decision" as a constant.
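As a back-of-the-envelope illustration of that lower bound (my own sketch, not from the thread): singling out one of K alternatives conveys at least ceil(lg K) bits, so at least that many binary decisions are involved, whatever mechanism makes them.

class DecisionBoundSketch {
    // Smallest d with 2^d >= k, i.e. the minimum number of yes/no
    // decisions needed to single out one of k alternatives.
    static int minBinaryDecisions(long k) {
        int d = 0;
        while ((1L << d) < k) {
            d++;
        }
        return d;
    }

    public static void main(String[] args) {
        System.out.println(minBinaryDecisions(2));         // 1
        System.out.println(minBinaryDecisions(1000));      // 10
        System.out.println(minBinaryDecisions(1L << 32));  // 32: one of 2^32 possible int keys
    }
}

In other words, indexing one of 2^32 counters takes at least 32 bit-level decisions; hardware hides them in address decoding, but they do not cost zero.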
Depending on how you choose to count. Many things can be proven
if you select your counting schemes with due care ...
There's a famous algorithm for finding the largest of an unlimited
quantity of positive numbers: You represent the numbers as chunks of
uncooked spaghetti, each of a length proportional to the number
represented. That's your input. Algorithm: Pick up the whole bunch,
turn it upright, plonk one end down on the kitchen counter, and pluck
out the tallest spaghetto (spaghettus?). Voila! An O(1) maximum-finding
algorithm!
Discerning the flaws in this method is left as an exercise for
the astute reader.
Big O for execution speed traditionally does not look at
memory restrictions and data type restrictions.
Big O for proton density traditionally does not look at execution
time. Big O for the error in Stirling's approximation traditionally
does not look at proton density. Big O for X traditionally does not
look at Y. So?