Why can't I call a function which does what it does when I click on the header of a column?


Thomas 'PointedEars' Lahn

Lasse said:
Thomas 'PointedEars' Lahn said:
What would be the advantage of computing static keys and comparing them
as compared to comparing the live values?

[...]
In the case of the default compare function, it converts both arguments
to strings before comparing them lexicographically.
If that is an expensive operation (which it potentially is, if you
are comparing objects with an expensive toString function), it might
make sense to cache the results. In most cases, the values being compared
with the default compare function will be strings or numbers (and the
numbers really should use a custom compare function too),

Yes, it is a rather bad default.
The swapping of rows can be done by exchanging TR elements in a manner
which is essentially just a pointer exchange, without any object
creation or pseudo-destruction. One should be able to sort the rows
array, with a comparison function that just compares rows[J].key.

However, the row container doesn't have a swap operation, so you will
have to go through the normal DOM methods. That won't be nearly as
fast as doing things directly in JavaScript.
The object that the `rows' property of a table object refers to is not an
Array instance, but a NodeList implementation; that is what would make
this approach not considerably more efficient than the other one of
reading table data into an array, sorting the array and recreating the
table from the sort
result. And are you actually suggesting to augment host objects here?

My guess (it was a long time since I did the test) is that working
directly on the DOM is significantly slower than creating an array in
Javascript and sorting that, and then putting the DOM in the correct order
(e.g., by removing all rows and reinserting them in the correct order).

I had an idea this morning that could turn out to be the best of both
worlds: You could read all row object references into an array, sort the
array of references, and then iterate over the array to put the rows in
correct order with insertBefore(). This would take advantage of faster
array sort and would not require removing all rows and reinserting them,
at least not explicitly.

/* but note that applying slice() to a NodeList like this
   is specified as implementation-dependent */
var rows = Array.prototype.slice.call(tBody.rows, 0);

rows.sort(comparator);

/* iterate backwards so that rows[i + 1] is already in its final
   position when rows[i] is inserted before it */
for (var len = rows.length, p = len > 1 && rows[0].parentNode, i = len - 2;
     i >= 0;
     i--)
{
  p.insertBefore(rows[i], rows[i + 1]);
}
One problem with using the built-in sort is that it's not guaranteed to be
stable. When sorting a table, it's desirable that sorting first on one column
and then on another is stable wrt. the result of the first sort.

While what you describe is actually _not_ the definition of a stable sorting
algorithm (its property is instead that elements that compare equal remain
in their original order), the problem you describe can be solved by
computing and comparing row keys as John suggested.

In that case the first pass should probably be a loop in which both the
references are stored and the keys are computed. The key should then be the
property of a *native* object, though:

var rows = [];

for (var tRows = tBody.rows, i = tRows && tRows.length; i--;)
{
  rows[i] = {
    data: tRows[i],
    key: computeKey(tRows[i])
  };
}


PointedEars
 

Matt Kruse

My guess (it was a long time since I did the test) is that working directly
on the DOM is significantly slower than creating an array in Javascript and
sorting that, and then putting the DOM in the correct order (e.g., by removing all
rows and reinserting them in the correct order).

Indeed, and in some browsers it makes a _big_ difference how you
structure the loops, too.

I did quite a bit of testing in browsers to optimize my table sorter
when I wrote it. Some of the results were surprising, with IE seeing
huge performance increases just by writing more optimized loop
structures (other browsers not so much). I ended up using appendChild()
as being the most efficient way to move rows around. Sorting the
rows list directly is much slower than sorting a separate array of
just values. And caching the "converted" cell values speeds things up
also.
http://javascripttoolbox.com/lib/table/source.php
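Matt's two optimisations — sorting a separate array of values and caching the "converted" cell values — can be sketched roughly as follows. This is not his library's code; the plain row objects and the numeric conversion are made-up stand-ins so the idea runs outside a browser:

```javascript
// Hypothetical stand-in for table rows: plain objects carrying the raw
// cell text that a real sorter would read from the DOM.
var rows = [
  { cellText: "10.5" },
  { cellText: "2" },
  { cellText: "33" }
];

// Convert each cell's text once, before sorting, instead of inside the
// comparator, where the conversion would run about n * log(n) times
// instead of n times.
var decorated = [];
for (var i = 0; i < rows.length; i++)
{
  decorated[i] = { row: rows[i], key: parseFloat(rows[i].cellText) };
}

decorated.sort(function (a, b) {
  return a.key - b.key; // numeric comparison on the cached keys
});

var sorted = [];
for (var j = 0; j < decorated.length; j++)
{
  sorted[j] = decorated[j].row;
}
// sorted cellText order: "2", "10.5", "33"
```

In a real sorter, the final step would be moving the corresponding TR elements (e.g. with appendChild) rather than building a second array.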

Matt Kruse
 

Dr J R Stockton

Sat said:
Likely quicksort, often with a fallback to insertion sort for shorter
segments.

Speed, in the stated condition.

If the comparison of two elements requires a significant transformation
of the elements before comparison, performing the transformation once,
at first, or caching the result the first time it's needed, might take
less time than doing it log(n) times (on average) during the sorting.

On the other hand, it complicates the algorithm if you need to keep
two arrays in sync (but that can be avoided by creating an array of
pairs of transformed elements and original indices, and sorting just
that on the transformed elements), and it requires more space, which
would probably be overkill for most smaller arrays.

There is no need to have two arrays. Starting with an array of data,
one transforms it into an array of objects each of which has a property
data and a property key (initially absent). Before sorting on a column,
one sets the keys accordingly; then the comparison function for sorting
the array compares the keys.

One may finally convert back to array of data; but that may not be
necessary.
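A minimal sketch of the data/key scheme described above; the function names are invented for illustration, not taken from any post in this thread:

```javascript
// Wrap each element in an object with the original data and a key slot
// that is (re)filled before each sort on a particular column.
function decorate(data)
{
  var a = [];
  for (var i = 0; i < data.length; i++)
  {
    a[i] = { data: data[i], key: null };
  }
  return a;
}

function sortOn(a, keyOf)
{
  // Set the keys once per sort ...
  for (var i = 0; i < a.length; i++)
  {
    a[i].key = keyOf(a[i].data);
  }

  // ... so the comparator only touches precomputed keys.
  a.sort(function (x, y) {
    return x.key < y.key ? -1 : x.key > y.key ? 1 : 0;
  });
}

var table = decorate([["b", 2], ["a", 3], ["c", 1]]);
sortOn(table, function (row) { return row[1]; }); // sort on column 1
// table[0].data is now ["c", 1]
```

Sorting on another column only requires another call to sortOn with a different key function; the wrapping is done once.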

Consider a singly linked list; it is in some order. The common doubly
linked list is in two orders, forwards and backwards. And a multiply-
linked list representing table rows can be in multiple orders; it can in
effect be sorted both forwards and backwards on every column each in
various manners. After the overhead of setting it up, any order becomes
available without further sorting. And if a new row arrives, one can do
an insertion on whichever sort order is being used, as that order is
being read out.
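As a rough illustration of the multiply-linked idea (the node layout and names are my own, and plain values stand in for table rows): each node keeps one "next" pointer per precomputed order, so reading out any of those orders afterwards is just a pointer walk.

```javascript
// Build one pointer chain per named order; each order is sorted once,
// at setup time, and never again.
function buildOrders(values, orders)
{
  var nodes = [];
  for (var i = 0; i < values.length; i++)
  {
    nodes[i] = { value: values[i], next: {} };
  }

  var heads = {};
  for (var name in orders)
  {
    // Sort a scratch copy once per order, then thread the pointers.
    var sorted = nodes.slice(0).sort(orders[name]);
    for (var j = 0; j < sorted.length - 1; j++)
    {
      sorted[j].next[name] = sorted[j + 1];
    }
    if (sorted.length > 0)
    {
      sorted[sorted.length - 1].next[name] = null;
      heads[name] = sorted[0];
    }
  }
  return heads;
}

// Read out one order by walking its pointer chain.
function readOut(head, name)
{
  var result = [];
  for (var n = head; n !== null; n = n.next[name])
  {
    result.push(n.value);
  }
  return result;
}

var heads = buildOrders([3, 1, 2], {
  asc:  function (a, b) { return a.value - b.value; },
  desc: function (a, b) { return b.value - a.value; }
});
// readOut(heads.asc, "asc")   -> [1, 2, 3]
// readOut(heads.desc, "desc") -> [3, 2, 1]
```

As Lasse notes below, this is really a data structure in its own right; the setup cost is one sort per order, paid up front.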

In the case of the default compare function, it converts both arguments
to strings before comparing them lexicographically.
If that is an expensive operation (which it potentially is, if you
are comparing objects with an expensive toString function), it might
make sense to cache the results. In most cases, the values being compared
with the default compare function will be strings or numbers (and the
numbers really should use a custom compare function too), so any extra
work will be wasted.

Not necessarily. If the strings consist of English sentences, using
only words in a known dictionary, then one can replace the words by
their position numbers (in fixed-length base 36). Comparison will now
be faster, and it is possible that the effort could be worthwhile.
That can be the .key of each rows[J]?
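A sketch of what such word keys might look like; the dictionary and helper names here are invented for illustration:

```javascript
// Each known word maps to its position in a sorted word list, written
// as a fixed-width base-36 number, so lexicographic comparison of the
// codes agrees with comparing word positions.
var dictionary = ["a", "cat", "dog", "sat", "the"]; // sorted word list

var codeOf = {};
for (var i = 0; i < dictionary.length; i++)
{
  // Two base-36 digits cover up to 36 * 36 = 1296 words.
  codeOf[dictionary[i]] = ("00" + i.toString(36)).slice(-2);
}

// The key for a sentence is the concatenation of its word codes.
function sentenceKey(s)
{
  var words = s.toLowerCase().split(" ");
  var key = "";
  for (var j = 0; j < words.length; j++)
  {
    key += codeOf[words[j]];
  }
  return key;
}

// "the cat sat" -> "040103": comparing such keys is a short
// fixed-width string comparison instead of a word-by-word one.
```

Because the codes have fixed width, ordinary string comparison of the keys orders sentences by word position, which is the property the comparison function needs.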

One assumes that the number of columns will be no more than, say, 20;
and the number of rows no more than 1000 - OR that the user will
understand the need to wait for more than a second.

Hmm, I don't see why.

Just a general feeling of what is reasonable on a Web page. That,
combined with a _little_ experience of sorting hundreds of items, is why
I suggest that, provided that the algorithms themselves are efficient,
the task should not take too long to run.

The swapping of rows can be done by exchanging TR elements in a manner
which is essentially just a pointer exchange, without any object
creation or pseudo-destruction. One should be able to sort the rows
array, with a comparison function that just compares rows[J].key.

However, the row container doesn't have a swap operation, so you will
have to go through the normal DOM methods. That won't be nearly as
fast as doing things directly in JavaScript.

Agreed: the DOM in that respect is badly designed. But a replace
operation could suffice. And I imagine that moveRow(J, K) is efficient,
and two of those implement a swap. But, although it's in IE8, it's not
in FF 3.0.15.

My guess (it was a long time since I did the test) is that working directly
on the DOM is significantly slower than creating an array in Javascript and
sorting that, and then putting the DOM in the correct order (e.g., by removing all
rows and reinserting them in the correct order).

It seems likely.

However the row sorting is done, to be efficient it must be in a manner
that swaps by manipulating what are in effect pointers / indexes -to-
rows, and not by copying the actual contents of a row - which will be
obvious to the experienced programmer but not necessarily to all.

One problem with using the built-in sort is that it's not guaranteed to be
stable. When sorting a table, it's desirable that sorting first on one column
and then on another is stable wrt. the result of the first sort.

That certainly can matter. If it matters often, then ECMAScript ought
to offer an additional, stable sort method; and, failing that, the FAQ
should, either by example or by link.
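ECMAScript has in fact required Array.prototype.sort to be stable since ES2019; at the time of this thread it did not. A FAQ-style example of forcing stability on top of a possibly unstable sort, by tie-breaking on the original index (the function name is mine):

```javascript
// Make any comparator stable: remember each element's original index
// and use it as a tie-breaker when the comparator reports equality.
function stableSort(a, cmp)
{
  var decorated = [];
  for (var i = 0; i < a.length; i++)
  {
    decorated[i] = { value: a[i], index: i };
  }

  decorated.sort(function (x, y) {
    return cmp(x.value, y.value) || (x.index - y.index);
  });

  for (var j = 0; j < a.length; j++)
  {
    a[j] = decorated[j].value;
  }
  return a;
}

// Equal last names keep their relative order from before the sort:
var people = [
  { last: "Smith", first: "B" },
  { last: "Jones", first: "A" },
  { last: "Smith", first: "A" }
];
stableSort(people, function (x, y) {
  return x.last < y.last ? -1 : x.last > y.last ? 1 : 0;
});
// -> Jones/A, Smith/B, Smith/A (the two Smiths keep their order)
```

The tie-break makes the comparator a total order, so the result is stable regardless of which algorithm the engine's sort uses.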


It could be useful for the FAQ to say something about how to
discriminate between definitely-bad and possibly-good JavaScript
tutorial sites. One way is to look up certain things and see whether
they're suitably dealt with or omitted. One such could be table
moveRow.
 

Lasse Reichstein Nielsen

Dr J R Stockton said:
There is no need to have two arrays. Starting with an array of data,
one transforms it into an array of objects each of which has a property
data and a property key (initially absent). Before sorting on a column,
one sets the keys accordingly; then the comparison function for sorting
the array compares the keys.

True, this is viable in JavaScript because of the ease of creating
objects and the single-threaded model.

One may finally convert back to array of data; but that may not be
necessary.

Consider a singly linked list; it is in some order. The common doubly
linked list is in two orders, forwards and backwards. And a multiply-
linked list representing table rows can be in multiple orders; it can in
effect be sorted both forwards and backwards on every column each in
various manners. After the overhead of setting it up, any order becomes
available without further sorting. And if a new row arrives, one can do
an insertion on whichever sort order is being used, as that order is
being read out.

Interesting idea, but it's really a data structure in its own right,
not just an array.

Not necessarily. If the strings consist of English sentences, using
only words in a known dictionary, then one can replace the words by
their position numbers (in fixed-length base 36). Comparison will now
be faster, and it is possible that the effort could be worth while.

So you convert some (known) strings to shorter, more easily comparable,
strings. That should work.
Unless the set of words to be sorted is extremely large, or known to
have many shared prefixes, I still doubt the overhead is worth it.
Randomly distributed strings have only 1/26 chance of agreeing on the
first letter, and up to 1/6 chance for 't' in English text (due to the
many "the"'s, no doubt).

[...]
However the row sorting is done, to be efficient it must be in a manner
that swaps by manipulating what are in effect pointers / indexes -to-
rows, and not by copying the actual contents of a row - which will be
obvious to the experienced programmer but not necessarily to all.

Indeed. Luckily innerHTML doesn't work well with tables in IE :)

/L
 
