TaskSpeed results for My Library


Scott Sauyet

Or maybe it's the quality of the coding of "pure DOM" methods?

That's precisely what I meant by the implementation.
It's interesting that the author specifically states that the
source for their "pure DOM" methods is unavailable.

I think you misunderstood this quote from the taskspeed page:

| The 'PureDom' tests are written as a minimal abstract utility
| API to accomplish these tasks, and are included as a baseline
| measurement of the compared libraries. It currently is not a
| library available for download or use.

That does not mean that you can't see it. It's simply meant to be an
efficient, library-agnostic bit of code. It is included with the
tests, but it's not available as a stand-alone library in the manner
that Dojo, jQuery, MooTools, My Library, Prototype, qooxdoo, and YUI
are.

The test code is available at

http://dante.dojotoolkit.org/taskspeed/tests/pure-tests.js

and the minimal library used is at

http://dante.dojotoolkit.org/taskspeed/frameworks/webreflection.js

The test code looks like you would expect, with pure DOM code like
this:

(node = a[j]).parentNode.insertBefore(
    p.cloneNode(true).appendChild(text.cloneNode(true))
        .parentNode, node.nextSibling
);
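The `.parentNode` chaining is needed because `appendChild` returns the appended child rather than the parent. A minimal stand-in object (not the real DOM; `makeNode` is a hypothetical helper) shows the return-value semantics:

```javascript
// appendChild returns the *child* it appended, so the test code has to
// chain .parentNode to climb back to the element being built.
// Minimal stand-in object, not the real DOM:
function makeNode(name) {
    return {
        name: name,
        children: [],
        parentNode: null,
        appendChild: function (child) {
            child.parentNode = this;
            this.children.push(child);
            return child; // DOM semantics: the appended child, not `this`
        }
    };
}

var p = makeNode("p");
var text = makeNode("#text");
// Mirrors p.cloneNode(true).appendChild(text.cloneNode(true)).parentNode:
var built = p.appendChild(text).parentNode;
console.log(built === p); // → true
```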


The library contains a utility object consisting of four functions:
attachEvent, detachEvent, (Array) indexOf, and a small replacement for
or wrapper around querySelectorAll. Actually, that last looks a
little strange to me:

getSimple: document.createElement("p").querySelectorAll && false ?
    function (selector) {
        return this.querySelectorAll(selector);
    } :
    function (selector) {
        // lightweight implementation here
    }

Am I crazy or does that "&& false" mean that the first branch will
never be chosen? Perhaps that's the culprit?
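For reference, `&&` short-circuits: a condition ending in `&& false` can never be truthy, so the ternary always yields its second operand and the `querySelectorAll` branch is unreachable. A minimal sketch with stand-in values (no DOM involved):

```javascript
// "anything && false" is always false, so the ternary below can only
// ever take its second (fallback) branch; hasQSA is a stand-in value.
var hasQSA = true; // pretend document.createElement("p").querySelectorAll exists
var getSimple = hasQSA && false
    ? function () { return "native querySelectorAll branch"; }
    : function () { return "fallback branch"; };
console.log(getSimple()); // → "fallback branch"
```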

-- Scott
 

S.T.

Your reluctance to run the tests speaks louder than any words. Those
so-called "piss-poor" libraries still fix bugs and address issues that
you are either ignorant of or fail to address properly.

He's made it clear in previous posts that he doesn't believe in unit
testing... it's "an expensive process".

The library's mantra is apparently "it should work, therefore it will".
Doesn't strike me as a comforting approach for any would-be adopters.
 

RobG

On 28/01/2010 3:23 PM, Scott Sauyet wrote:
[...]
The test code is available at

http://dante.dojotoolkit.org/taskspeed/tests/pure-tests.js

and the minimal library used is at

http://dante.dojotoolkit.org/taskspeed/frameworks/webreflection.js

The test code looks like you would expect, with pure DOM code like
this:

(node = a[j]).parentNode.insertBefore(
    p.cloneNode(true).appendChild(text.cloneNode(true))
        .parentNode, node.nextSibling
);

Not exactly what I'd expect. The text node should be appended to the p
earlier so there's no repeated clone, append, step-up-the-DOM.
Optimising as suggested gives a 25% speed boost in Fx and 10% in IE
6.

The same slow logic is used in the make function (my wrapping):


"make": function(){
for(var
d = document, body = d.body,
ul = d.createElement("ul"),
one = d.createElement("li")
.appendChild(d.createTextNode("one"))
.parentNode,
two = d.createElement("li")
.appendChild(d.createTextNode("two"))
.parentNode,
three= d.createElement("li")
.appendChild(d.createTextNode("three"))
.parentNode,
i = 0,
fromcode;
i < 250; ++i
){
fromcode = ul.cloneNode(true);
fromcode.id = "setid" + i;
fromcode.className = "fromcode";
fromcode.appendChild(one.cloneNode(true));
fromcode.appendChild(two.cloneNode(true));
fromcode.appendChild(three.cloneNode(true));
body.appendChild(fromcode);
};
return utility.getSimple
.call(body, "ul.fromcode").length;
},


Note the repetitious clone/append/step-up where a single clone would
have done the job - compare it to the jQuery code used:

$("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>")

Here all elements are created in one go, so the two are hardly
comparable. The DOM code is doing 4 times the work (but still runs in
half the time of jQuery 1.4). Optimising out the extra work, it
runs about 15% faster in Firefox, and twice as fast in IE 6.
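The "4 times the work" estimate can be made concrete by counting the top-level DOM calls each loop iteration performs (an arithmetic sketch only, not a timing; real costs also depend on clone depth):

```javascript
// Counting top-level DOM calls per iteration of PureDOM's "make" loop.

// As written: clone the empty ul, then clone and append each of the
// three li templates, then append the result to the body.
var naivePerIteration =
    1 +     // ul.cloneNode(true)
    3 * 2 + // three times: li-template.cloneNode(true) + appendChild
    1;      // body.appendChild(fromcode)

// Suggested: build the full ul (three li children with their text)
// once before the loop, then deep-clone the whole structure each time.
var clonedPerIteration =
    1 +     // fullUl.cloneNode(true)
    1;      // body.appendChild(clone)

console.log(naivePerIteration / clonedPerIteration); // → 4
```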

Note also that a selector is used to count the nodes added to the
document at the end and that the speed of this count is included in
the test. Why is selector speed allowed to influence tests of element
creation speed?

The library contains a utility object consisting of four functions:
attachEvent, detachEvent, (Array) indexOf, and a small replacement for
or wrapper around querySelectorAll. Actually, that last looks a
little strange to me:

getSimple: document.createElement("p").querySelectorAll && false ?
    function (selector) {
        return this.querySelectorAll(selector);
    } :
    function (selector) {
        // lightweight implementation here
    }

Am I crazy or does that "&& false" mean that the first branch will
never be chosen?

Good find.

Perhaps that's the culprit?

Regardless, it doesn't seem sensible to use a lightweight selector
engine when the intention is to compare selector engines to "pure
DOM" (which should mean ad hoc functions). The only selectors in the
test are:

1. ul.fromcode
2. div.added

A simple switch statement would have done the trick. The "pure DOM"
code doesn't leverage the browser-native getElementsByClassName method
if available; a for loop and a RegExp are always used. Nor does it
leverage the fact that DOM collections are live, it gets the
collection every time. This is critical as a selector query is
included in nearly all the tests, so its performance affects tests
where it is not the feature being tested.
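A sketch of that switch-based approach, handling only the two selectors the suite ever uses, with the class check exercised against plain stand-in objects (`selectSimple`, `filterByClass`, and `fakeDocument` are hypothetical names, not part of webreflection.js):

```javascript
// Whole-word class match, mirroring the usual RegExp-style check.
function filterByClass(nodes, cls) {
    var out = [], i, len;
    for (i = 0, len = nodes.length; i < len; i++) {
        if ((" " + nodes[i].className + " ").indexOf(" " + cls + " ") !== -1) {
            out.push(nodes[i]);
        }
    }
    return out;
}

// Only two selectors appear in the whole test suite, so a switch covers
// every case without a general-purpose engine.
function selectSimple(context, selector) {
    switch (selector) {
        case "ul.fromcode":
            return filterByClass(context.getElementsByTagName("ul"), "fromcode");
        case "div.added":
            return filterByClass(context.getElementsByTagName("div"), "added");
        default:
            throw new Error("Unsupported selector: " + selector);
    }
}

// Plain-object stand-in for a document (no real DOM needed to test the logic):
var fakeDocument = {
    getElementsByTagName: function (tag) {
        return tag === "ul"
            ? [{ className: "fromcode" }, { className: "other" }]
            : [];
    }
};
console.log(selectSimple(fakeDocument, "ul.fromcode").length); // → 1
```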

There are a number of optimisations that could quite easily be added
to "pure DOM", and the tests themselves do not accurately target the
features they are trying to test in some (many?) cases.
 

Andrea Giammarchi

I love people keep thinking about how many cheats i could have added
to PureDOM ... I used native everything at the beginning and people
complained about the fact "obviously libraries have better selector
engines" ...

I have nullified querySelectorAll on purpose (which does NOT produce a
live object in any case, so all latest considerations about cached
live objects are superfluous and nothing new, I have posted about this
stuff ages ago in WebReflection) and people keep thinking I am an
idiot, rather than simply remove that &&false which is clearly a
statement "nullifier" (as ||true is a statement "forcer").

That was the easiest way to test both getSimple and native method ...
but you guys are too clever here to get this, isn't it?

About tricky code to speed up some appendChild and the BORING
challenge VS innerHTML (e.g. $("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>") )
I don't think after a year of libraries still behind there's much more
to say about these topics.

If you want to trick PureDOM convinced you can speed it up, well, you
have discovered HOT WATER!!! Good Stuff, Hu?

The meaning of PureDOM is simple: to provide a comparative basic
manual approach and 'till now it demonstrated that no framework is
able to perform daily tasks faster than native DOM, or at least those
considered in TaskSpeed.

If you read tasks carefully, you will realize that if a task says: ...
and for each node, insert this text: "whatever" ...
the PureDOM code simply creates EACH NODE, and it inserts FOR EACH
NODE the text "whatever" ... but let's cheat and feel cool, that's the
point, right?

Everybody else got it but this ML is still talking about PureDOM and
how badly it is etc etc ... well, use your best practices when
performances matter, and please stop wasting your time talking about
PureDOM or at least be decent enough to understand what is it and why
it's like that.

Finally, the day a library will be faster, I'll remove the fake
selector engine, I will implement proprietary IE way to append nodes
(insertAdjacentNode faster in many cases) and I bet PureDOM will still
outperform ... and that day somebody will talk about joined arrays for
faster strings via innerHTML ... I am sure about it!!!

Now, after all this, what have we learned today about JS? Me, nothing
for sure, yet another boring pointless discussion over PureDOM, I
receive a "warning" weekly basis about how people would have better
cheated in PureDOM ... uh, and don't forget the last test with
document.body.innerHTML = "" which is faster, right?

Best Regards, and thanks for asking before blaming
 

Garrett Smith

Andrea said:
I love people keep thinking about how many cheats i could have added
to PureDOM ...


Who thought that?

I used native everything at the beginning and people
complained about the fact "obviously libraries have better selector
engines" ...

I have nullified querySelectorAll on purpose (which does NOT produce a
Where?

live object in any case, so all latest considerations about cached
live objects are superfluous and nothing new, I have posted about this
stuff ages ago in WebReflection) and people keep thinking I am an
idiot, rather than simply remove that &&false which is clearly a
statement "nullifier" (as ||true is a statement "forcer").

You love confusing people with code that appears broken?

I'm trying to follow your response to RobG's post, which had
some good feedback. Your posting style breaks the discussion, so it's
a struggle here, as a reader. It would have been much better if you had
instead replied inline.
That was the easiest way to test both getSimple and native method ...
but you guys are too clever here to get this, isn't it?

About tricky code to speed up some appendChild and the BORING
challenge VS innerHTML (e.g. $("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>") )
I don't think after a year of libraries still behind there's much more
to say about these topics.

If you want to trick PureDOM convinced you can speed it up, well, you
have discovered HOT WATER!!! Good Stuff, Hu?

What?

[...]
 

Scott Sauyet

I love people keep thinking about how many cheats i could have added
to PureDOM ... I used native everything at the beginning and people
complained about the fact "obviously libraries have better selector
engines" ...

Whoa, hold on here a second! I don't think the criticism here is that
harsh!

I personally know nothing of the history of TaskSpeed, and nothing
about webreflection.js.

There is an understandable concern when the libraries start to beat
the pure DOM solutions, as it's assumed that since everything is built
on top of the DOM tools, nothing should be any faster.
I have nullified querySelectorAll on purpose (which does NOT produce a
live object in any case, so all latest considerations about cached
live objects are superfluous and nothing new, I have posted about this
stuff ages ago in WebReflection) and people keep thinking I am an
idiot, rather than simply remove that &&false which is clearly a
statement "nullifier" (as ||true is a statement "forcer").

That was the easiest way to test both getSimple and native method ...
but you guys are too clever here to get this, isn't it?

Well, a comment would have been nice...
About tricky code to speed up some appendChild and the BORING
challenge VS innerHTML (e.g. $("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>") )
I don't think after a year of libraries still behind there's much more
to say about these topics.

If you want to trick PureDOM convinced you can speed it up, well, you
have discovered HOT WATER!!! Good Stuff, Hu?

I would have removed the &&false to test, but I don't want to be using
a different version of PureDOM for any published tests. That would
just muddy the waters.

[ ... ] Everybody else got it but this ML is still talking about PureDOM and
how badly it is etc etc ... well, use your best practices when
performances matter, and please stop wasting your time talking about
PureDOM or at least be decent enough to understand what is it and why
it's like that.

Again, I think you're taking this too personally. First of all, in
this thread there were only a handful of posts questioning the speed
of PureDOM, and I haven't seen any other criticism of it in the few
months I've been around. But secondly, when one of the libraries
outperforms PureDOM, it does raise some awkward questions.

Best Regards, and thanks for asking before blaming

Well, now I know who to ask, anyway. I've seen no URLs or email
addresses in the TaskSpeed tests to use when asking questions.

I'm very impressed with TaskSpeed. It's not ideal, but it's a decent
framework for discussing speeds of things that actually matter to web
developers. I'm glad it has a PureDOM implementation included, and
I'm not overly bothered by those places where it's slower than some of
the other libraries. But I am curious about why it might be slower.

Cheers,

-- Scott
 

Andrea Giammarchi

Rob's analysis is a bit superficial.
If there is an author, you ask the author; you don't write
sentences that are possibly wrong or pointless, no?

If I don't understand something, or I think there is a typo, I don't
necessarily start a campaign against that developer and how many
errors he wrote ... we are programmers, aren't we?

if(1&&1&&1&&1&&1&&1&&1&&1&&1&&1&&1&&false) is always false, ABC
if(0||0||0||0||0||0||0||0||0||0||0||true) is always true, ABC

If there is a &&false at the end and this is confusing ... well, I
guess we should consider to use jQuery, right?

Let's move over ... if I create li nodes for each created ul it is
because I am respecting the task, I AM NOT CHEATING

If you write in a row, internally via innerHTML and jQuery, of course
it's faster, isn't it? But that is the jQuery way, not the PureDOM
one, which aim is to PERFORM TASK SPEED TASKS without cheating at all.

Create 3 nodes and for each node append a text, that's what I have
done.

Create a node, append something, and clone it other two times ... that
is NOT the task.

Got my point?
My little cousin could speed up PureDOM test, to obtain WHAT?

We should understand why PureDOM is there and why it is like that ...
there is a Dojo man behind TaskSpeed, and me talking with him and
actually even fixing different things in every other framework test
implementation (I have spotted inconsistency about one test and
proposed a fix for all).

Why after a year or more people still don't get PureDOM is a mystery
to me, this is why I get bored after the first line of comment.

Is there a way to respect the task and perform better without cheating
or use everything native where supported? I am ready to listen, but
not the classic "here innerHTML would have been better, here
cloneEverything would have speed up".

Got my point? I hope so

Regards
 

Andrea Giammarchi

There is an understandable concern when the libraries start to beat
the pure DOM solutions, as it's assumed that since everything is built
on top of the DOM tools, nothing should be any faster.

and how can this be possible, since PureDOM's choice is to not use
innerHTML to add or remove nodes?

innerHTML is standard since HTML5, PureDOM would like to avoid it and
use W3 way to generate content.
I can reduce to "zero" TaskSpeed PureDOM results, and again, to
demonstrate what?

This is why I keep saying after a year people still do not get what
PureDOM is!!!

BASELINE

just to give you an idea:
http://webreflection.blogspot.com/2009/12/taskspeed-real-cheat.html
http://debuggable.com/posts/rightjs...n-jquery:4b1fc009-1940-4d26-bdc6-0af2cbdd56cb

but if you want the fastest version ever, I can create it, but then it
will be completely useless for its purpose.

Regards
 

Scott Sauyet

If there is an author, you ask to the author, you don't write
sentences possibly wrong or pointless, no?

Only if you know who the author is and how to contact him or her. I
know your name and have seen you around various groups over the years,
but until today did not know you were responsible for the PureDOM
implementation in TaskSpeed. The TaskSpeed page says about the
PureDOM methods that, "It currently is not a library available for
download or use," and the implementation of the tests and
webreflection.js gave no pointers to their author.
If I don't understand something, or I think there is a typo, I don't
necessary start the campaign against that developer and how many
errors he wrote ... we are programmer, aren't we?

Obviously you're seeing something very different in this thread than I
am. I have certainly seen no campaign against you, and I certainly
have not participated in one.
if(1&&1&&1&&1&&1&&1&&1&&1&&1&&1&&1&&false) is always false, ABC
if(0||0||0||0||0||0||0||0||0||0||0||true) is always true, ABC

If there is a &&false at the end and this is confusing ... well, I
guess we should consider to use jQuery, right?

I have no idea what you mean by this.

I did assume that the &&false had been once used to comment out the
branch for some quick testing. Without a comment in the code, though,
I assumed its continued existence was a mistake.
Let's move over ... if I create li nodes for each created ul it is
because I am respecting the task, I AM NOT CHEATING

If you write in a row, internally via innerHTML and jQuery, of course
it's faster, isn't it? But that is the jQuery way, not the PureDOM
one, which aim is to PERFORM TASK SPEED TASKS without cheating at all.

That's a somewhat different perspective than I had considered. I
really thought about PureDOM as the baseline, the thing nothing could
beat in terms of speed because everything else would (along their own
tortured paths) eventually be calling the same methods that PureDOM
called. Your explanation makes sense, though. It's not a performance
baseline but a standards-based baseline. You're calling only those
methods that the spec requires DOM implementations to have, is that
right?

Why after a year or more people still don't get PureDOM is a mystery
to me, this is why I get bored after the first line of comment.

Do you understand where my mistaken impression came from, then? On
the Taskspeed page, we have this: "The 'PureDom' tests are written as
a minimal abstract utility API to accomplish these tasks, and are
included as a baseline measurement of the compared libraries." Since
the only measurements involved are the speeds, it seems a natural
conclusion that it's providing a speed baseline, and at least a good
guess that it should by its nature be the fastest possible.

Got my point? I hope so

I think so. But it would have been much easier to hear without the
extreme defensiveness.

-- Scott
 


RobG

I love people keep thinking about how many cheats i could have added
to PureDOM ... I used native everything at the beginning and people
complained about the fact "obviously libraries have better selector
engines" ...

It would help greatly if there was a clear statement about the purpose
of the tests. Finding a description of what each test is supposed to
do is not easy; eventually I discovered that it is on GitHub in
sample-tests.js. It would be better if that was made more easily
available and called something more obvious, such as "test
description" or "test specification".

I have nullified querySelectorAll on purpose

Then a comment in the source would have been helpful. The way it was
nullified gave the impression that it was not intentional.

(which does NOT produce a
live object in any case, so all latest considerations about cached
live objects are superfluous and nothing new,

I know QSA doesn't return live collections, but other DOM methods
using getElementsBy... do. Richard's comment in regard to IE's
performance and live collections is interesting.

I have posted about this
stuff ages ago in WebReflection) and people keep thinking I am an
idiot, rather than simply remove that &&false which is clearly a
statement "nullifier" (as ||true is a statement "forcer").

I had never heard of WebReflection until I came across TaskSpeed, I
see now it is your blog. If there is important information or
discussion there relating to TaskSpeed, why not put a link to it on
the TaskSpeed page?

And I don't think you're an idiot based on looking at the code (I
suppose you wrote it), but I do think there was a lack of attention to
detail. That can be excused if you didn't expect it to be as popular
as it is and gain the attention it has, but given that it has been up
for some time now, it would have been helpful if those issues had been
addressed.

That was the easiest way to test both getSimple and native method ...
but you guys are too clever here to get this, isn't it?

If you don't want to use QSA, don't include a branch for it, then no
one is confused.

About tricky code to speed up some appendChild and the BORING
challenge VS innerHTML (e.g. $("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>") )
I don't think after a year of libraries still behind there's much more
to say about these topics.

My point was that the code itself is not performing equivalent
operations - the pureDOM version does many more create/append
operations.
If you want to trick PureDOM convinced you can speed it up, well, you
have discovered HOT WATER!!! Good Stuff, Hu?

It's not a "trick", just obvious that the operations that each test
performs should be equivalent. It is the operations that are being
tested, not the trickiness of the programmer.

The meaning of PureDOM is simple: to provide a comparative basic
manual approach

Thank you for explaining that, but it's different from what you say
below.

and 'till now it demonstrated that no framework is
able to perform daily tasks faster than native DOM, or at least those
considered in TaskSpeed.

If you read tasks carefully, you will realize that if a task says: ...
and for each node, insert this text: "whatever" ...
the PureDOM code simply creates EACH NODE, and it inserts FOR EACH
NODE the text "whatever" ... bt let's cheat and feel cool, that's the
point, right?

Then that is what all the tests should do. The library's version of
create should be called once for each node and the text appended
separately, then each node appended to its parent. If jQuery (for
example) is allowed to use a string of HTML to create a document
fragment of all the nodes n times, then the pureDOM method should be
able to create the same structure once and clone it n times. Otherwise
you are not testing the same thing.

Everybody else got it
Where?

but this ML is still talking about PureDOM and
how badly it is etc etc ...

This group is about javascript, so you will likely only get comments
about pureDOM here. If you posted links to other discussion threads
and included some clarifying documentation (or at least links to it)
on the TaskSpeed page then we might have "got it" too.

well, use your best practices when
performances matter, and please stop wasting your time talking about
PureDOM or at least be decent enough to understand what is it and why
it's like that.

If you make that information accessible, I won't need to guess your
intentions.
Finally, the day a library will be faster, I'll remove the fake
selector engine, I will implement proprietary IE way to append nodes
(insertAdjacentNode faster in many cases) and I bet PureDOM will still
outperform ... and that day somebody will talk about joined arrays for
faster strings via innerHTML ... I am sure about it!!!

I don't see an issue with pureDOM using innerHTML if it fits the criteria
for a test.

Now, after all this, what have we learned today about JS? Me, nothing
for sure, yet another boring pointless discussion over PureDOM, I
receive a "warning" weekly basis about how people would have better
cheated in PureDOM ... uh, and don't forget the last test with
document.body.innerHTML = "" which is faster, right?

I don't think anything that is done in pureDOM is a cheat since I have
no criteria on which to base that opinion. If the point of pureDOM is
to use *only* W3C specified DOM interfaces, then fine, eschew
innerHTML. But that is not stated anywhere and above you said that the
point of pureDOM was to "provide a comparative basic manual approach".

So which is it?

One of the main claims about libraries is that they smooth over
browser quirks. It might be interesting to develop a suite of
"QuirksSpeed" tests of known differences between browsers to determine
how well each library manages to overcome them. The overall speed is
likely not that important, quirks accommodated and correctly dealt
with likely are.
 

Andrea Giammarchi

Scott fine, RobG idem; I'll talk with @phiggins as soon as I have
time.
To me it could make sense to write a PureCheat library to put beside
PureDOM, not a replacement, since in any case, innerHTML could be a
problem.

As an example, the first test via innerHTML will potentially destroy
every event handler attached to existing content.
I have that fast version, I hope to have time to create a full
PureCheat version as well.

Regards

P.S. same webreflection that blogs sometimes in Ajaxian, that is kinda
still me ;-)
 

David Mark

Just catching the tail end of this thread. Sounds like this is
addressed to me, but piss-poor quoting makes it hard to tell. Make no
mistake that the "major" libraries are full of ridiculous holes (e.g.
attribute handling), not to mention browser sniffing. Each instance
of the latter is an admission by the author(s) that they could not
make their design work cross-browser. Need a new design, new authors
or both.
He's made it clear in previous posts that he doesn't believe in unit
testing... it's "an expensive process".

Who said that? My Library certainly has unit tests. Some of them are
even online. Not expensive, just click the button. ;)

What I don't care for is software design that is _shaped_ by unit
testing. Testing is supposed to confirm the correctness of a
solution, not serve as a crystal ball.
The library's mantra is apparently "it should work, therefore it will".

No it isn't and it has been demonstrated to work in virtually any
DOM. Most of the others can't make IE8 work, even with browser
sniffing.
Doesn't strike me as a comforting approach for any would-be adopters.

You don't strike me as very intelligent (or able to read for
comprehension). ;)
 

David Mark

Hi David,


Who says you can't teach an old dog new tricks :D

You? And your quoting is as incompetent as usual (I doubt I noted
your salutation).
Don't get hung up on jQuery, you also fail many tests in Dojo,
Prototype, YUI, Google Closure, and general Slick tests.

Other libraries unit tests are inconsequential at this stage. And
SlickSpeed is on my site and My Library is the only one that makes it
through unscathed in all browsers tested. Most of the others start to
sputter outside the confines of browsers the authors have heard of
(and sniffed for).
Great! I can't wait, I hope you follow up with bug reports as you
expect others to report bugs in your lib.

Don't be stupid. I've submitted lots of bug reports to the other
libraries. They are hesitant to change anything as they don't seem to
understand anything they have done (usually a bunch of browser sniffs
stacked on top of each other). It's hard to say what is a bug in a
collection of non-solutions.
Some maybe. Others are related to how you resolve elements with ID's,

That's not a bug. There's no crutch for the IE name/ID issue. That
was decided long ago (and documented here). You get back null if your
markup is screwy (that way you know it is screwy).
multiple class names,

You must be testing an old version. The builder/test page script has
not been updated with the latest changes. As I have never specified
exactly what the library supports (not good, of course), it is hard to
call those bugs either. Certainly the scripts on the Downloads page
support multiple class names (and I added a test for this to the
SlickSpeed page, which oddly omitted that case).
attributes with unicode values and so on.

Attributes? You must be joking. I'm not saying there isn't an issue
with mine, but the others aren't even in the ballpark (particularly in
IE).
You
won't know until you actually review the hundreds and hundreds of
failing tests.

I will review them when I feel it is appropriate. Thanks.
Fair enough. You can certainly add support for additional selectors
via a switch statement, others have.

I already added support for all but one of the original test page's
selection, as well as some others that it omitted. And I still
contend the whole thing with the selector engine is a fool's errand.
How do you spend years trying to make that work, fail, and then add a
QSA branch, which is clearly incompatible with the previous failings?
It's pure madness.
What is your point ? You now have the opposite problem as other
frameworks have addressed bugs in QSA and you have not.

No, I don't have the problem at all. The QSA add-on is not needed as
the library is certainly fast enough without it.
The quotes show an assertion, on your part, of speed and
compatibility. On one hand you claim superiority on the other you hide
behind the excuse that your tests are old and irrelevant.

Now what are you talking about? None of these are my tests. And you
can't get much more superior than the results I've observed across a
wide range of browsers, both new and old (particularly TaskSpeed,
where my optimized-for-conciseness tests beat the "baseline" PureDom
handily).
If you don't
think they should be promoted then remove them from your site.

Remove what from my site? You really need to work on your quoting. I
don't know what you are talking about half of the time.
Why should I? If you don't bother reporting bugs for other libs why
should I bother reporting them to you?

I don't want you to report bugs to me, nor have I asked you to. I
don't need your help. Is that clear enough? And I have reported
plenty of bugs to the other libraries. Where have you been the last
couple of years?
Exactly. Your code is not perfect. Yet you insult others whose code is
not perfect either.

I don't think you understand at all. The others claim that they have
armies of vigilant "hackers" turning out cost-saving cross-browser
"solutions" and most are so completely incompetent that they don't
even _try_ to solve anything, preferring to constantly twiddle with UA
sniffs, "degrading" (on paper) anything but the very latest of the
major desktop browsers. That's the complete opposite of My Library,
which was originally posted as an example for others and is now being
polished into something that should displace all of these other non-
solutions for good.

Great. I'll look at it. There is a function in there to transfer the
listeners on replacement. Perhaps you mean listeners set by other
libraries? And, of course, the whole idea of using the setAttribute
wrapper to change the type (or name) of an INPUT is a novelty anyway.
A workaround doesn't matter. You have exposed API that can clearly
cause critical issues.

The workaround I refer to is behind-the-scenes and is supposed to
transfer the listeners to the new node. Again, this is just a
novelty.
Good to know.


Some would say attempting to support (and failing in more than one
area I might add) dead browsers would certainly lend to an incompetent
design/implementation.

Have you read _anything_ I've written? Do you understand that whether
the browsers are "dead" or alive is irrelevant? The idea is to test
in as many limited DOM's as possible. You have no idea what is out
there in phones and other devices (in some cases, old Opera
versions). ;)

And failing? The degradation is working swimmingly almost entirely
across the board, even back to NN4. Perhaps you prefer scripts that
just blunder into exceptions, leaving the document in an unknown
state? That's what all of the others do as they have no way to tell
the calling app what will work (the library doesn't know itself).
They can only be _expected_ to work in environments where they have
been demonstrated to work and typically that includes only the default
configurations of the very latest versions of major desktop browsers
(excluding IE where they all fail miserably in one crucial way or
another).
Sure, it is still being promoted on your site though. It only takes a
few minutes to manually update the frameworks in Slickspeed/Taskspeed.

They've long since been updated. And what business is it of yours? I
still contend the old speed tests are far more telling than comparing
QSA results. ;)
Not true. Depending on your code implementation and how you address
various bugs, speed can differ by a wide margin.

Not really. 90% of the SlickSpeed tests take 0ms in the environments
I've tested. The others "thunk" back to the slow lane, which is
another story. I suppose if a library is so completely tangled up in
its own syntactic sugar, it could manage to make QSA-based queries
slow. The variations will be nowhere near those in the "slow lane"
though.
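The fast-lane/slow-lane split comes down to a one-time feature test. A minimal sketch of the pattern (stubbed so it runs outside a browser; the kill-switch flag mimics the "&& false" in webreflection.js noted earlier in the thread, and the string return values are placeholders for real query results):

```javascript
// Feature-test once, then hand back either the native QSA fast path
// or the hand-rolled fallback. killSwitch emulates the "&& false".
function makeGetSimple(doc, killSwitch) {
  var probe = doc.createElement('p');
  return (probe.querySelectorAll && !killSwitch)
    ? function (selector) { return 'native:' + selector; }   // fast lane
    : function (selector) { return 'fallback:' + selector; } // slow lane
}

// Stub document emulating a browser that supports querySelectorAll.
var stubDoc = {
  createElement: function () {
    return { querySelectorAll: function () {} };
  }
};

// With the kill switch in place, the native branch can never be
// chosen, even though the (stub) browser supports QSA.
makeGetSimple(stubDoc, true)('div');  // "fallback:div"
makeGetSimple(stubDoc, false)('div'); // "native:div"
```

If the real test harness ships with such a switch left on, every query pays the slow-lane cost regardless of browser support, which would skew any QSA-era comparison.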
Your reluctance to run the tests speaks louder than any words. Those
so called "piss-poor" libraries still fix bugs and address issues that
you are either ignorant of or fail to address properly.

No reluctance at all. I don't need to worry about unit tests written
by others at this time. I am busy working on my own. Thanks for your
concern.

And, as noted in another post (and discussed endlessly here for
years), the other piss-poor "major" libraries are typically non-
solutions involving browser sniffing. In other words, they couldn't
make their designs work, even multi-browser (let alone cross-browser),
so gave up and branched on the baseless UA string. That's pure
incompetence and a full decade behind reality. And I mean the users'
reality, not the deluded imaginings of the developers.
 
D

David Mark

My testing on MS Vista (times in ms; lower is better):

Library          Fx 3.6     IE 8  Op 10.10  Safari 4.04

PureDom             282    1,211       190          246
jQuery 1.2.6      1,692    6,269     1,904        1,188
jQuery 1.3.2      1,401    3,219       956          708
Prototype 1.6.0.3   889    3,790       830          385
MooTools 1.2.2      733    5,711       451          276
qooxdoo 0.8.2       405    1,383       180          208
Dojo 1.3.2          435    2,104       217          271
Dojo 1.4.0          448    1,771       266          245
YUI 2.7.0           788    2,370       517          286
YUI 3.0             440    1,531       252          255
My Library 1.0      448    1,757       222          168

I'm not sure how myLib can be faster than pure dom???

"Pure DOM" was written by a human being, just like the rest.
Obviously they made some mistakes as it is nowhere near optimal
(particularly in Webkit-based browsers).

And realize that the tests for each framework are different. I
purposely optimized for conciseness, not speed. Most of the others
are using built-in DOM methods for at least some of the test
functions. My point is that I have the best of both worlds. If you
want pure speed, use the more verbose API. If you like jQuery-like
gibberish, chain the object methods together like I did in the test
functions. It's flexible that way. ;)
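Purely to illustrate the two calling styles being contrasted (the names `setText`, `addClass`, and `Q` are hypothetical, not My Library's actual API; stub objects stand in for DOM elements), the same primitives can be exposed both ways:

```javascript
// "Verbose" direct functions: no wrapper object allocated per call.
function setText(el, text) { el.text = text; return el; }
function addClass(el, cls) {
  el.cls = (el.cls ? el.cls + ' ' : '') + cls;
  return el;
}

// Chained, jQuery-like wrapper built on the same primitives.
function Q(el) {
  return {
    setText:  function (t) { setText(el, t);  return this; },
    addClass: function (c) { addClass(el, c); return this; },
    node: el
  };
}

// Stub elements so this runs outside a browser.
var el1 = {};
setText(addClass(el1, 'note'), 'hello');      // verbose style

var el2 = {};
Q(el2).addClass('note').setText('hello');     // chained style, same result
```

The chained form trades a small per-call wrapper cost for terseness, which is the flexibility claim made above.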
 
D

David Mark

Not on my machine - Windows XP, 3.3GHz P4. Here are the results (in
ms) for Firefox 3.5 and IE 6:

Library             Fx      IE 6
PureDom*            794    3,031
jQuery 1.2.6      5,830   36,326
jQuery 1.3.2      3,763   11,826
Prototype 1.6.0.3 2,879   37,185
MooTools 1.2.2    2,306   20,358
qooxdoo 0.8.2     1,051    4,062
Dojo 1.3.2        1,218    9,110
Dojo 1.4.0        1,198    5,125
YUI 2.7.0         2,300    7,063
YUI 3.0           1,062    3,954
My Library 1.0    1,371    5,625


In Firefox My Library is beaten by YUI 3.0 and Dojo versions 1.4 and
1.3.2.

Which My Library? The one with QSA or not? It makes a small
difference in TaskSpeed (and a huge one in SlickSpeed).
In IE 6, it is beaten by YUI 3.0, Dojo 1.4 and qooxdoo.
Prototype failed the insertAfter test in both Firefox and IE 6,

None of the others comes close to supporting IE6 properly (e.g. they
just call getAttribute without regard to the broken implementation).
Of course, that's more of a concern for SlickSpeed.
If "concise" means less code, then the library itself is 145KB, which
is twice the size of YUI 3,

That's with _every_ module included (e.g. Flash, audio, scrolling
effects, etc). Most of them are not needed by these tests.
though smaller than the monstrous
qooxdoo. Perhaps the size can be optimised so that each library only
contains the components required for the particular tests.

That's what the builder is for. ;)
The test code itself is not as concise as that for jQuery, and not
much more concise than most of the others.

I looked at some of them and more than one was using native DOM
methods, so it seems they are apples and oranges. I remember that
YUI's weren't particularly concise.
Prototype and MooTools are
perhaps the least concise; the total code for the "pure DOM" is 10KB.

Sure. It's just enough to run these tests. ;)
But the test code is tiny in comparison to the library itself, so not
really a huge concern.

My point is that it supports the "concise" coding style that is so
popular among major library enthusiasts (and is very fast doing it).
It also supports more traditional methods (even faster).

And I have found after extensive testing that the margin of victory is
wider on older machines, older browsers, phones and Webkit-based
browsers. Faster PCs running IE and FF have closer results.

And ultimately, if a library that is supposed to make cross-browser
scripting easier fails to handle attributes properly in IE or resorts
to non-solutions like browser sniffing, it should be disallowed (which
rules out all of the others). I plan to demonstrate that on the test
pages in the near future. It should come as no great revelation to
regular readers of this group though. ;)
 
D

David Mark

He's made it clear in previous posts that he doesn't believe in unit
testing... it's "an expensive process".

I knew I didn't say that. :) You misread. QA testing is an
expensive process, which is why these script of the month clubs are
folly. If you have to re-download and re-test a monstrously large and
complex blob of JS every time a new wave of browsers hits, you are
throwing money out the window for no reason. Eventually, you cut your
losses.
 
D

David Mark

and which library outperforms PureDOM? Aren't we talking about that
post with my massive comment inside removed/filtered?

Mine kicks the shit out of it in WebKit (also Opera and FF on some
older machines).

And, of course, some of the other libraries cheat (e.g. YUI attaching
a single listener to the body, in lieu of one for each DIV created),
so these numbers cannot be taken at face value.
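The shortcut described above is classic event delegation. A sketch of the general technique (stub objects in place of DOM nodes so it runs anywhere; this is the pattern at issue, not YUI's actual test code):

```javascript
// One listener per DIV: N attachments for N nodes.
function attachPerNode(nodes, handler) {
  nodes.forEach(function (node) { node.onclick = handler; });
  return nodes.length; // N listeners attached
}

// A single delegated listener on the body: events bubble up to it.
function attachDelegated(body, handler) {
  body.onclick = function (evt) {
    // Only react when the event originated on a DIV.
    if (evt.target.tagName === 'DIV') { handler.call(evt.target, evt); }
  };
  return 1; // one listener, regardless of how many DIVs exist
}

// Stubs standing in for the created DIVs and the body.
var divs = [], i;
for (i = 0; i < 250; i++) { divs.push({ tagName: 'DIV' }); }
var body = {};
var hits = 0;
attachDelegated(body, function () { hits++; });

// Simulate a click bubbling up from one of the DIVs.
body.onclick({ target: divs[0] });
```

A benchmark task that asks for a listener on each created DIV is doing far less work if it attaches one delegated listener instead, which is why such numbers are not directly comparable.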
 
