Chris Torek said:
Yes, O(k) is O(1) if k is constant, but for an operation on a
fixed-size input we do not know whether k is a constant -- it is a
function defined at only one point: 64 (or, if you insist, it can be
thought of as defined on the domain [0, 64]). Big O is only defined
for functions over R (or sometimes R+ or Z+). It has no formal
meaning (as far as I know) for functions defined over finite sets.
Well, it works fine (math-wise) for finite sets. It just turns
degenerate: *everything* is O(1).
Of course, "real computers" are finite, so this means that every
algorithm can be called O(1). Clearly this is not what we want!
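For concreteness, here is the sort of fixed-size operation presumably
at issue (the "64" above suggests a loop over the bits of a 64-bit
word; that reading, and the sketch itself, are added for
illustration):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical example: count the 1 bits in a 64-bit word.
       The loop runs once per bit position up to the highest set bit,
       so never more than 64 times: formally O(1), however much we
       might prefer to call it O(k). */
    static int count_bits(uint64_t x)
    {
        int k = 0;
        while (x != 0) {
            k += (int)(x & 1);
            x >>= 1;
        }
        return k;
    }

    int main(void)
    {
        printf("%d\n", count_bits(0xF0F0F0F0F0F0F0F0ULL)); /* prints 32 */
        return 0;
    }

Whether we call that loop O(1) or O(k) is purely a question of what
we allow to vary.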
The "obvious" solution is to allow anything that is a variable
to go to infinity, even though technically the size of, e.g., an
input being sorted with an "O(n log n)" sort routine is in fact
bounded by attached storage. If you only have 10 TB of file space,
you will not be sorting 200 tera-items of data. But if we claim
that this sort is therefore O(10T log 10T) and thus O(1) ... well,
that may be *true*, but it is not a useful claim.
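To see concretely what the asymptotic claim buys despite the storage
bound, here is a small added sketch (mine, not part of the quoted
post) that counts the comparisons qsort() makes on random vectors of
doubling length, taking "comparisons" as the cost measure:

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparison counter: the chosen "primitive operation". */
    static unsigned long ncmp;

    static int counting_cmp(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        ncmp++;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        for (size_t n = 1000; n <= 64000; n *= 2) {
            int *v = malloc(n * sizeof *v);
            if (v == NULL)
                return EXIT_FAILURE;
            for (size_t i = 0; i < n; i++)
                v[i] = rand();
            ncmp = 0;
            qsort(v, n, sizeof *v, counting_cmp);
            printf("n = %6zu  comparisons = %lu\n", n, ncmp);
            free(v);
        }
        return 0;
    }

The counts grow a bit faster than 2x per doubling, which is just the
O(n log n) claim restated; the 10 TB ceiling never enters into it.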
Posters on Usenet are often rather loose with big Os. To avoid
arguments such as this, one must be clear about how the problem is
"measured" and exactly what the primitive operations of the abstract
machine are.
Indeed.
Consider the game of chess. It has a branching factor of about 35. So, on
average, there are about 35 possible moves you can make at a given stage of
the game.
By using the alpha-beta search algorithm, you can reduce the
effective branching factor to ~sqrt(35), which is about 6. Through
various other tricks (null-move pruning, transposition tables, and so
on) you can reduce it to about 2 if you have a very smart program.
Now, it has been formally proven that there are fewer than 6000 full
moves (a full move is one move for each side) in the game of chess.
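If you want to see that pruning effect rather than take the sqrt
figure on faith, here is a self-contained added sketch (not a chess
program; the uniform tree, its depth, and its leaf values are all
made up) that counts how many leaves plain minimax and alpha-beta
visit on the same tree:

    #include <stdio.h>
    #include <limits.h>

    #define B 8   /* branching factor (chess is ~35; kept small here) */
    #define D 6   /* search depth in plies */

    static unsigned long leaves_minimax, leaves_alphabeta;

    /* Deterministic pseudo-random leaf value derived from the node id. */
    static int leaf_value(unsigned long id)
    {
        id = id * 2654435761UL + 12345UL;
        return (int)(id % 201) - 100;       /* value in [-100, 100] */
    }

    static int minimax(unsigned long id, int depth, int maximizing)
    {
        if (depth == 0) {
            leaves_minimax++;
            return leaf_value(id);
        }
        int best = maximizing ? INT_MIN : INT_MAX;
        for (int i = 0; i < B; i++) {
            int v = minimax(id * B + i, depth - 1, !maximizing);
            if (maximizing ? v > best : v < best)
                best = v;
        }
        return best;
    }

    static int alphabeta(unsigned long id, int depth, int alpha, int beta,
                         int maximizing)
    {
        if (depth == 0) {
            leaves_alphabeta++;
            return leaf_value(id);
        }
        int best = maximizing ? INT_MIN : INT_MAX;
        for (int i = 0; i < B; i++) {
            int v = alphabeta(id * B + i, depth - 1, alpha, beta,
                              !maximizing);
            if (maximizing) {
                if (v > best) best = v;
                if (best > alpha) alpha = best;
            } else {
                if (v < best) best = v;
                if (best < beta) beta = best;
            }
            if (alpha >= beta)
                break;                 /* cutoff: prune remaining moves */
        }
        return best;
    }

    int main(void)
    {
        int a = minimax(1, D, 1);
        int b = alphabeta(1, D, INT_MIN, INT_MAX, 1);
        printf("minimax:    value %4d, leaves %lu\n", a, leaves_minimax);
        printf("alpha-beta: value %4d, leaves %lu\n", b, leaves_alphabeta);
        return 0;
    }

With perfect move ordering the alpha-beta leaf count approaches
B^(D/2), which is where the ~sqrt(35) figure for chess comes from;
with the arbitrary ordering here the saving is smaller, but still
large.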
So a really powerful chess program could (approximately) solve the
game of chess by searching on the order of 6000 plies deep, which, at
an effective branching factor of 2 per ply, would amount to roughly
2^6000 positions analyzed.
As you know, 2^6000 is just a large constant. But it is so large
that it might as well be infinite at the present stage of computing
power. So describing the branching factor of chess and treating
chess as an exponential problem still makes sense.
The notion of O(f(n)) is useful for projecting, on average, how a
family of problems behaves as the input grows. The travelling
salesman problem is another hard problem. But the number of cities
in the world is finite, so (again) we could consider TSP to be O(1)
with a huge constant of proportionality. However, that is not really
useful, because we still cannot calculate an optimal path that visits
every city on earth with minimal travel distance.
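To put a number on that huge constant: a symmetric TSP on n cities
has (n-1)!/2 distinct tours, and the short added sketch below (with
arbitrary limits) prints how fast that count grows:

    #include <stdio.h>

    int main(void)
    {
        /* Number of distinct tours of a symmetric TSP on n cities is
           (n-1)!/2; build the factorial incrementally in a double. */
        double fact = 1.0;                  /* holds (n-1)! */
        for (int n = 3; n <= 25; n++) {
            fact *= (double)(n - 1);
            printf("n = %2d  tours = %.3e\n", n, fact / 2.0);
        }
        return 0;
    }

Already at n = 25 there are about 3e23 tours to consider.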
Similarly with sorting: we want to know how the problem scales over
different vector lengths, because that lets us estimate how much it
will cost us to sort a given vector.
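For example, a rough way to get that cost on a real machine is to
time qsort() over a few vector lengths; this added sketch uses
arbitrary sizes, and clock() resolution and cache effects will add
noise:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        size_t sizes[] = { 100000, 200000, 400000, 800000 };
        for (size_t s = 0; s < sizeof sizes / sizeof sizes[0]; s++) {
            size_t n = sizes[s];
            int *v = malloc(n * sizeof *v);
            if (v == NULL)
                return EXIT_FAILURE;
            for (size_t i = 0; i < n; i++)
                v[i] = rand();
            clock_t t0 = clock();
            qsort(v, n, sizeof *v, cmp_int);
            clock_t t1 = clock();
            printf("n = %7zu  time = %.3f s\n",
                   n, (double)(t1 - t0) / CLOCKS_PER_SEC);
            free(v);
        }
        return 0;
    }

Fitting those times against n log n gives the constant of
proportionality for this particular machine and C library.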
I'm not sure there is any relevance to C in this post, so probably it is
misplaced.