Claudio Calboni
Hello folks,
I'm having some performance issues with the client-side part of my
application.
Basically, it renders a huge HTML table (about 20,000 cells in my
testing scenario) without content. Content is "pushed" from the back
end via some JS, but only for the displayed portion of the table. When
the user scrolls, JS updates the visible cells with data. It's much
the same philosophy as Google Maps and similar applications.
So, the server tells the JS "update this group of cells with this
data". JS iterates through these instructions (this is fast) and
pushes the data into the cells. To find which element it has to
update, JS uses getElementById(), and this is slow. With a 20k-cell
table and 300 displayed cells it takes 5 to 10 seconds to update. I
suppose (but I'm not a veteran JS developer; mainly I develop
server-side with .NET, though I'm finding "the client side of the
force" very interesting and powerful) this is because getElementById
actually *searches* the DOM for every element my JS is looking for,
since it doesn't keep any sort of index of elements. I've tried
caching the found elements, and that works great, but the loading
time is only moved from one place to another.
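For what it's worth, the caching I mention above is just a memoized lookup. A minimal sketch (hypothetical names; the resolver is injected so it could be document.getElementById in the page):

```javascript
// Memoized cell lookup: the expensive resolver (e.g. a function that
// calls document.getElementById) runs at most once per ID; later
// lookups are plain object-key reads.
function makeCellCache(resolve) {
  var cache = {};
  return function lookup(id) {
    if (!(id in cache)) {
      cache[id] = resolve(id);
    }
    return cache[id];
  };
}
```

With this, each ID hits the real DOM search at most once, which matches what I'm seeing: the first pass is still slow, but repeated lookups are cheap.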
I'm thinking about getting all the elements (obviously only those I
have to update) via XPath, if possible (I've never used this
technology before). My script is able to say "I require cells
from--to", so it would be great if I could extract a snapshot of just
those elements from the DOM, only the ones required to be updated,
and then iterate through them.
My cells (TD) are named c_R_C, with R and C being the row and column
numbers. With a 100x100 table and a 10x10 viewable area, say I'm
roughly in the center of the table (first visible cell, top-left
corner, with ID c_40_50; last visible cell, bottom-right corner, with
ID c_50_60): I have to extract from the DOM the cells with rows from
40 to 50 and columns from 50 to 60 (c_40_50, c_40_51, c_40_52 ...
c_50_58, c_50_59, c_50_60).
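Since the IDs follow that scheme, the set of visible IDs can also be computed directly from the viewport corners, with no searching at all. A small sketch (hypothetical function name):

```javascript
// Enumerate the IDs of every cell in the visible rectangle, given the
// row/column ranges of the viewport corners. With the c_R_C naming
// scheme no DOM search is needed to know which cells to touch.
function visibleCellIds(rowFrom, rowTo, colFrom, colTo) {
  var ids = [];
  for (var r = rowFrom; r <= rowTo; r++) {
    for (var c = colFrom; c <= colTo; c++) {
      ids.push('c_' + r + '_' + c);
    }
  }
  return ids;
}
```

For the example above, visibleCellIds(40, 50, 50, 60) returns the 121 IDs from c_40_50 to c_50_60, each of which could then be resolved (and cached) once.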
If, as I understand it, XPath extracts items into an iterable
collection, and if this extraction can be done with something like a
regular expression, I think this is feasible.
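From what I've read (take this as an untested sketch), XPath 1.0, which is what browsers expose through document.evaluate(), has no regular expressions, but a range like the one above can be expressed with its string functions plus number() comparisons. A hypothetical helper that builds such an expression:

```javascript
// Build an XPath 1.0 expression matching TD cells whose c_R_C ID falls
// inside the given row/column range. For an ID like "c_40_50":
//   substring-after(@id, "c_")                      -> "40_50"
//   substring-before(that, "_")                     -> "40"  (row)
//   substring-after(that, "_")                      -> "50"  (col)
function rangeXPath(rowFrom, rowTo, colFrom, colTo) {
  var row = 'substring-before(substring-after(@id, "c_"), "_")';
  var col = 'substring-after(substring-after(@id, "c_"), "_")';
  return '//td[starts-with(@id, "c_")' +
         ' and number(' + row + ') >= ' + rowFrom +
         ' and number(' + row + ') <= ' + rowTo +
         ' and number(' + col + ') >= ' + colFrom +
         ' and number(' + col + ') <= ' + colTo + ']';
}
```

The resulting string would then be passed to document.evaluate(expr, table, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null), and the snapshot walked with snapshotItem(i), which gives exactly the iterable collection of cells I was hoping for.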
Of course, if any of you have other suggestions, they would be
greatly appreciated.
Thanks in advance,
tK