jQuery vs. My Library


David Mark

Scott said:
I would definitely prefer not to mess with the list of selectors,
unless it's to add some weighting scheme based upon real-world selector
usage. I didn't write the list; it's the one that was in the early
versions of SlickSpeed and has been copied into many other versions.
If you want to host an altered version, as I said:


Changing selectors is very easy. There is a text file in the
distribution -- "selectors.list" -- containing one selector per line.
(If you don't have a PHP host available, I'm willing to post a version
with the selectors you choose.) The mechanism to deal with
unavailable selectors is naive, perhaps, but does its job well
enough: it simply highlights every row where the number of results
vary between libraries, and highlights any individual test that throws
an error. Intentionally or not, the appearance of SlickSpeed did help
coalesce the minimum set of selectors libraries tended to support.
That's a good thing for developers looking to use one or the other.



Well, this is a test of selector speed. I haven't seen the other
libraries having problems with the attribute-based selectors. I know
there are other significant errors with other parts of attribute
handling.

LOL. That's because you didn't have nearly a large enough sample set.
See my latest SlickSpeed tests:-

http://www.cinsoft.net/slickspeed.html

...and realize that these things are easy to predict if you have read my
reviews of the various "majors".
But in none was it the overall fastest. JQuery was the fastest in
everything but IE6, where it came in third behind Dojo and MooTools.
In many browsers, if two of the selectors were optimized to match the
speed of the competition, My Library (QSA) would have been about the
fastest overall library. Those were the two selectors with
":contains": "h1[id]:contains(Selectors)" and
"p:contains(selectors)". In the various IE's there was a different
issue, "p:nth-child(even/odd)" were wrong in both versions of My
Library, and were significantly slower, too.
The even/odd discrepancy, which did not show up in my (obviously
incomplete) testing, is a legitimate beef. Apparently I managed to make
those both slow and incorrect. It can happen. I'll grab your test page
as an example and fix it when I get a chance. Will also announce that
those two are broken in my forum. As far as I am concerned, the results
are disqualified until those two are fixed.

That's an odd statement. The results still stand.

You didn't understand. I am throwing out results that were _positive_
because I don't consider them valid unless _all_ tests pass.
They have to do
with the code that was available on the day they ran. As things are
fixed and new tests are released, there will be new results. But
MooTools can't disqualify the results because they haven't yet gotten
around to optimizing "tag.class". Nor can you.

You misunderstood what I was saying. It was the exact _opposite_ of an
excuse.
I don't find much practical use for the general "A n + B" syntax, but
the even/odd "2n"/"2n + 1" selectors have been quite helpful to me.

I added those a few days back.
Almost anytime I use selector engines, though, I find myself using
":not" a fair bit. Obviously it's up to you what you want to support,
but I'd urge you to reconsider.

Why would you need to negate? I'll consider it.
I'm surprised the native QSA engines aren't faster at these,
especially at "#id". If an external library can do a quick switch to
getElementById, you'd think the native engines could also do this in
order to speed things up.
Yes.
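
A sketch of the kind of "#id" fast path being discussed (the helper
and engine names here are hypothetical, not any library's actual API):

// Hypothetical: answer bare "#id" selectors with getElementById and
// fall through to the general engine for everything else.
function quickSelect(selector, doc) {
  doc = doc || document;
  var idMatch = /^#([\w-]+)$/.exec(selector);
  if (idMatch) {
    var el = doc.getElementById(idMatch[1]);
    return el ? [el] : [];
  }
  return generalEngine(selector, doc); // placeholder for the full engine
}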


I'm curious as to why you (and others here, I've noted) say this.
I've found queries to be an incredibly useful tool, especially for
dynamic pages. What is the objection?

The objection is that it takes something that is 100% reliable (e.g. DOM
host methods like gEBI and gEBTN) and makes it a crap shoot, even in the
very latest version of the major desktop browsers. See my latest
SlickSpeed test page. It's a horror show and only scratching the
surface. This is not how cross-browser scripting is done. It is just
impossible to write reliable cross-browser scripts if every line has to
rely on the kindness of JS library developers and their wacky selector
engines.
It is more compelling, but more subject to manipulation, I'm afraid.

Yeah, like YUI using delegation instead of attaching 100 listeners.
They still lost by a mile. :)
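
For readers unfamiliar with the technique, a rough sketch of delegation
as described (the element id and handler logic are illustrative only):

// One listener on the container instead of one per row; the handler
// walks up from the click target to the nearest row.
var table = document.getElementById('results'); // illustrative id
table.onclick = function (e) {
  e = e || window.event;
  var target = e.target || e.srcElement;
  while (target && target !== table && target.tagName !== 'TR') {
    target = target.parentNode;
  }
  if (target && target.tagName === 'TR') {
    // handle the row activation here
  }
};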
Still it's worth pursuing, as long as we keep in mind that speed is
only one of a number of important concerns.

Yeah, the other huge concern is that most of those columns are results
based on browser sniffing. Are you familiar with the Mechanical Turk? Same
sort of thing, except it isn't just one midget, but a small army of
mental midgets who are feverishly trying to keep up with the very latest
browsers (and to hell with everything else), logging observations,
twiddling with the design, adding forks, etc. That's what I mean by
"test-driven" development. Cross-browser scripting doesn't work like
_that_ either.
 

David Mark

Scott said:
Also, I am not sure why you guys are using API.getEBCS over and over.
The extra dot operation is unnecessary. The whole reason the (not
recommended for GP use) $ is in there is for tests like these. The
extra dot operation on each call just clouds the issue as you would
normally use something like this:-

var getEBCS = API.getEBCS;

...at the start. The API is not made up of methods in the strict sense.
The API is simply a "namespace" object containing the supported
functions (which vary according to the environment). They are functions
that don't refer to - this - at all.
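
In code, the suggested pattern looks like this (a sketch; any of the
API functions could be aliased the same way):

// Safe because the function never references `this`; the alias does
// one property lookup up front instead of one per call.
var getEBCS = API.getEBCS;
getEBCS('div.example'); // behaves exactly like API.getEBCS('div.example')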

That makes sense, and I'll probably do that in future versions.
Unfortunately, I would have to add that to the mylib.js file itself, as
I don't want to adjust the simple configuration that exists right now,
which is just lines like this in a config file:

[MooTools 1.2.4]
file = "mootools-yui-compressed.js"
function = "$$"

[My Library]
file = "mylib-min.js"
function = "API.getEBCS"

But I am bothered by "The whole reason the (not recommended for GP
use) $ is in there is for tests like these." Having code that you
expect to be used for tests but not for general purposes strikes me as
an uncomfortable sort of pandering to the tests.


No, I explained that. The tests were looking for that $ identifier. It
isn't any faster or better in any way than using a non-$ identifier
(e.g. getEBCS).
 

David Mark

Ivan said:
I've noticed an issue in Firefox 3.6

div, p, a 1337 µs 671 found 1387 µs 671 found 2551 µs 671 found 1832
µs 673 found 1190 µs 673 found 5050 µs 671 found 1256 µs 671 found

Mylib (non-QSA & QSA) found 673 elements, while the other libraries
found 671.

Something is definitely wrong there. But I have fixed several CSS
selector related bugs in the last few days, so it may not be an issue
today. I will check though. I guess I should put up an additional test
page with this other document on my site. Clearly it is exposing some
things that the other document and tests are not.
In Chrome 4:

div, p, a 740 µs 671 found 641 µs 671 found 1232 µs 671 found 1761 µs
671 found 539 µs 671 found 1092 µs 671 found 594 µs 671 found

Other browsers also found 671 elements for all libraries.

Does seem odd that FF3.6 would be special.
 

David Mark

Ivan said:
Ok, thanks to you I found what's causing that. It's a Firefox add-on
called Adblock Plus (found by process of elimination).

Yeah, I didn't think it was possible for My Library to botch that
selector. Glad it turned out to be a plug-in interfering.
Well ... hate to say, but I still have them. :)

Strange. Opera 10.10 worked swimmingly for me.
 

Scott Sauyet

David said:
LOL.  That's because you didn't have nearly a large enough sample set.
See my latest SlickSpeed tests:-

http://www.cinsoft.net/slickspeed.html

...and realize that these things are easy to predict if you have read my
reviews of the various "majors".

Yup, a number of problems there. Interestingly the fastest libraries
in my only test (FF3.6) (JQ1.3.2, MyLib, Dojo1.3.2, JQ1.4.1,
Dojo1.4.0, and PT1.6.1, in that order) were the ones that also seemed
to have the fewest problems. Is that due to the leveling effect of
QSA?

You didn't understand.  I am throwing out results that were _positive_
because I don't consider them valid unless _all_ tests pass.

You still don't get to disqualify them! :)
You misunderstood what I was saying.  It was the exact _opposite_ of an
excuse.

I did misunderstand. But please do recognize that any performance
test, especially against fast-changing libraries, is always a snapshot
in time. The results show what was available at that time. Tests can
be bungled, misapplied, or poorly conceived. But if they are
appropriate and competently executed, the results demonstrate the
state of things at that time.
I added those a few days back.

I won't have much time until the weekend, but I'll try to throw
together another test then.

Why would you need to negate?  I'll consider it.

Things like this are very convenient:

select("div.navigation a:not(li.special a)");

Sure I can do that with enough code, but that concisely captures a
reasonably complicated bit of work.
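
For comparison, a hand-rolled version of that selector might look
something like this (a sketch with plain DOM methods; the function and
helper names are made up):

function navAnchorsNotSpecial(doc) {
  doc = doc || document;
  var results = [], anchors = doc.getElementsByTagName('a');
  for (var i = 0; i < anchors.length; i++) {
    var a = anchors[i], inNav = false, inSpecial = false;
    // Classify each ancestor of the anchor.
    for (var n = a.parentNode; n && n.nodeType === 1; n = n.parentNode) {
      if (n.tagName === 'DIV' && hasClass(n, 'navigation')) { inNav = true; }
      if (n.tagName === 'LI' && hasClass(n, 'special')) { inSpecial = true; }
    }
    if (inNav && !inSpecial) { results.push(a); } // document order preserved
  }
  return results;
}

function hasClass(el, name) {
  return (' ' + el.className + ' ').indexOf(' ' + name + ' ') !== -1;
}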

The objection is that it takes something that is 100% reliable (e.g. DOM
host methods like gEBI and gEBTN) and makes it a crap shoot, even in the
very latest version of the major desktop browsers.  See my latest
SlickSpeed test page.  It's a horror show and only scratching the
surface.  This is not how cross-browser scripting is done.  It is just
impossible to write reliable cross-browser scripts if every line has to
rely on the kindness of JS library developers and their wacky selector
engines.

I guess because I kept writing more and more complicated element
traversal code, all tending toward something like the CSS query
engines, but without the insight to do it in a generic way, the query
engines seemed incredibly helpful when they first arrived. I've come
to take them for granted at this point. Sure I could code without
them, but I think I'd be less efficient. The example above was a real
selector that I really had to code without the benefit of a selector
engine. I think it was on the order of twenty lines of code for me in
2003. I'd probably be more efficient now, but I don't think it would
be quick and easy.

Of course I'm introducing a dependency that I don't know as intimately
as I do my own code; that's always an issue. But I do that all the
time. I certainly have not written my own HTTP server (ok, so I have,
but that's not the point: I don't *use* it!) or my own encryption
library, or any of a myriad of other tools I use so long as they seem
to work for me. Browser scripting is a bit more problematic because
of the number of distinct platforms I need to support, but as I don't
feel the need to support Lynx or Netscape 4, I can restrict the test
environment to a large, but manageable subset.

Yeah, the other huge concern is that most of those columns are results
based on browser sniffing.  Are you familiar with the Mechanical Turk?  Same
sort of thing, except it isn't just one midget, but a small army of
mental midgets who are feverishly trying to keep up with the very latest
browsers (and to hell with everything else), logging observations,
twiddling with the design, adding forks, etc.  That's what I mean by
"test-driven" development.  Cross-browser scripting doesn't work like
_that_ either.

Clearly browser sniffing is, to say the least, well past its prime. I
can't agree that it invalidates the libraries or their test results.
But as long as the libraries are depending upon it, they will forever
be scrambling unless some very clear and nearly universal
DOM-scripting standards are available.

Test-driven development means to me writing specifications in terms of
tests that need to be passed. It has little to do with twiddling
except that every regression ever created should have its own test.

-- Scott
 

Scott Sauyet

David said:
Scott Sauyet wrote:

No, I explained that.  The tests were looking for that $ identifier.  It
isn't any faster or better in any way than using a non-$ identifier
(e.g. getEBCS).

That's why I was suggesting hosting with PHP. The configuration file
has entries that look like this:

[MooTools 1.2.4]
file = "mootools-yui-compressed.js"
function = "$$"

[My Library]
file = "mylib-min.js"
function = "API.getEBCS"

Throwing the library scripts into the frameworks directory, updating
this configuration file, and, if you want, changing the list of
selectors, is all that you need to do to have your own version
running. The tool doesn't need "$". Dojo uses "dojo.query"; YUI uses
"Y.all". So long as the framework has a function that accepts a
string selector and evaluates it as a CSS selector in the context of
the document, it should work.

-- Scott
 

David Mark

Scott said:
Yup, a number of problems there. Interestingly the fastest libraries
in my only test (FF3.6)

FF3.6 should be considered the Snoopy run (for children and the
disabled). Try - for example - Opera < 9 or FF1 (or any version of IE).
(JQ1.3.2, MyLib, Dojo1.3.2, JQ1.4.1,
Dojo1.4.0, and PT1.6.1, in that order) were the ones that also seemed
to have the fewest problems. Is that due to the leveling effect of
QSA?

Sure it is. Now, go back a version (library or browser) and watch
things fall apart. This is what I've been saying. Just because they
come out with a new library every six months does not mean that every
site will suddenly upgrade (quite the contrary as they _always_ break
compatibility). Furthermore, just because some clueless developer
decides that IE6 - for example - doesn't matter anymore, doesn't mean
that people are not still using it. Testing just the latest browsers
with QSA-enabled libraries is _almost_ meaningless. Of course, QSA is a
whole new playing field for them to tear up. Near as I can tell, they
are going at it with the same sluggish apathy as they did DOM and XPath.
The results are predictable and really very sad as such endless
futility will likely kill browser scripting for good. :(
You still don't get to disqualify them! :)

I mean I am giving the medal back as I don't feel it is justified to
call something the fastest if it doesn't pass all of the (supported)
tests. :)
I did misunderstand. But please do recognize that any performance
test, especially against fast-changing libraries, is always a snapshot
in time.

Fast-changing libraries is the whole problem. They never needed to be
fast-changing. Lots of twits think they are in a race against each
other, so they keep piling on the bullshit (ooh, our "major" library is
cooler than _yours_). :(

And, of course, with browser sniffing you do have to keep maintaining
them forever. It's so completely stupid it makes me want to scream
(especially when I find myself on a site that is wearing out my PC due
to jQuerification).
The results show what was available at that time. Tests can
be bungled, misapplied, or poorly conceived. But if they are
appropriate and competently executed, the results demonstrate the
state of things at that time.

What I meant was that any positive that could be gleaned from the speed
comparisons should be taken with a grain of salt. It failed as far as I
am concerned. One failure in one browser in one configuration for any
of the (supported) tests is unacceptable. Contrast that with something
like jQuery or Dojo. They change their stuff out all the time, yet they
fail all sorts of basic tests, even in the very latest browsers (and
forget anything that came out a few months back). There's a fundamental
difference in philosophy here. They say it is okay to work "well
enough", even if that means taking something that is 100% reliable (e.g.
gEBI, gEBTN, etc.) and turning it into something akin to a coin flip.
And they have to work really, really hard just to tread water at that
level of "proficiency" in just a handful of browsers (what they refer to
as "all browsers" on "all platforms"). I say that is all bullshit. If
you couldn't pull off a CSS query engine, you shouldn't have spent years
trying, failing, deluding developers, proposing patches, pissing off
users, etc.
I won't have much time until the weekend, but I'll try to throw
together another test then.

Cool. The versions on the Downloads page are always the very latest and
are "mirrored" in the repository.
Things like this are very convenient:

select("div.navigation a:not(li.special a)");

Honestly, that looks ludicrous to me. And why on earth would you trust
these things with something so complex when they plainly can't deal with
very basic queries consistently cross-browser (or even in an IE-only
environment)? It's blind faith in people who have earned no faith at
all. On the contrary, after years of failures, everyone should be ready
to either give up on browser scripting or admit that the big open source
projects were not the way.
Sure I can do that with enough code, but that concisely captures a
reasonably complicated bit of work.

Aha, but what makes you think it will be accurate?
I guess because I kept writing more and more complicated element
traversal code, all tending toward something like the CSS query
engines, but without the insight to do it in a generic way, the query
engines seemed incredibly helpful when they first arrived. I've come
to take them for granted at this point.

Major league fucking mistake. I don't know how else to put it. :)
Sure I could code without
them, but I think I'd be less efficient.

Is it efficient to have to re-do things? And how about re-testing? You
can see that these selector query black boxes are not the magical
cross-browser wonders they are marketed as. How could you use these
things and sleep at night unless you had literally tested _every_
browser in use today? And you'd still have to worry about _new_
browsers. It's pure insanity when they can't even get IE right after
they had years to get ready for it. Hell, none of them even bothered
with the attributes stuff, which I first warned would break IE8 all to
hell back in 2007. ;)
The example above was a real
selector that I really had to code without the benefit of a selector
engine. I think it was on the order of twenty lines of code for me in
2003. I'd probably be more efficient now, but I don't think it would
be quick and easy.

The number of lines is a _very_ poor indicator of efficiency. Write a
function once and be done with it. Don't rely on magical query engines
that aren't.
Of course I'm introducing a dependency that I don't know as intimately
as I do my own code; that's always an issue.

Especially when the authors don't know it very well either. :)
But I do that all the
time. I certainly have not written my own HTTP server (ok, so I have,
but that's not the point: I don't *use* it!) or my own encryption
library, or any of a myriad of other tools I use so long as they seem
to work for me.

That's not the point. These are dubious scripts written by half-ass
weekend warrior programmers, not real software. The confusion may come
from the fact that some of these "warriors" have been picked up by real
companies that don't know any better (browser scripting is a bit of a
mystery for most people).
Browser scripting is a bit more problematic because
of the number of distinct platforms I need to support, but as I don't
feel the need to support Lynx or Netscape 4, I can restrict the test
environment to a large, but manageable subset.

You got it straight that browser scripting is more problematic. You
better believe it. And the wares available are of relatively poor
quality. Do the math. ;)
Clearly browser sniffing is, to say the least, well past its prime.

It never had a prime, unless you count the mid-nineties.
I
can't agree that it invalidates the libraries or their test results.

Of course it does. Look at how the results shift from one column to the
next. It's the whole sad history laid out for you. The influence of
outrageous foolishness is unmistakable. Now, look at the third
dimension here (time). Compare - for example - Opera 8, 9 and 10 and
notice that as they moved to "support" the next version, they broke the
previous. Nobody went out and notified all of the users, of course, and
the users don't care what browser scripting nitwits think anyway (they'd
slam the door on them for sure). IE 6, 7, 8 is also an interesting
progression. It becomes quite clear why they keep whining about
"dropping" IE < 8. They are out of their tiny little minds, of course
(IE8 can mimic IE7 with the touch of a button), but their motivations
are quite clear: they are tired of looking stupid. I say smarten up
rather than saying things that add to the perception of stupidity (like
dropping IE < 8). Be fair, they are likely more ignorant than stupid,
but why split hairs?
But as long as the libraries are depending upon it, they will forever
be scrambling unless some very clear and nearly universal DOM-
scripting standards are available.

Yes, scrambling is an apt term for that. As a site owner or Web
developer, you want to steer well clear of their mad scrambling. It's
very bad for business (not to mention morale). I find it very
irritating too, but that's just me. :)
Test-driven development means to me writing specifications in terms of
tests that need to be passed. It has little to do with twiddling
except that every regression ever created should have its own test.

As I mentioned, your test-driven development is not the same as my
"test-driven" design/development. I am referring to stupid shit like
un-declaring all globals because some half-wit (or two) reported that it
made the library "faster" in some version/configuration/simulation of
IE. True story. The jQuery people didn't do that one, but are famous
for similarly bizarre design decisions where dubious test results are
substituted for research and thinking things through. It's like they
are trying to conserve brain cells. In any event, it bears no
resemblance to science (other than the junk variety). ;)
 

David Mark

Scott said:
That's why I was suggesting hosting with PHP. The configuration file
has entries that look like this:

I can do all of that with ASP. Have you seen my builder/test page?
These things are toys by comparison. :)
[MooTools 1.2.4]
file = "mootools-yui-compressed.js"
function = "$$"

[My Library]
file = "mylib-min.js"
function = "API.getEBCS"

Yes, but I don't feel compelled to do anything more to these test pages.
The static pages are fine, save for that one I forgot to update (thanks
for the heads up on that).
Throwing the library scripts into the frameworks directory, updating
this configuration file, and, if you want, changing the list of
selectors, is all that you need to do to have your own version
running. The tool doesn't need "$". Dojo uses "dojo.query"; YUI uses
"Y.all". So long as the framework has a function that accepts a
string selector and evaluates it as a CSS selector in the context of
the document, it should work.

Still, some people actually want to use "$", so I make it available. I
am aware that the tests can be changed to use something else, but I am
tired of fiddling with them. Have you noticed that TaskSpeed's page is
just a sloppy copy of SlickSpeed? The origin cell even says
"selectors". :)
 

David Mark

Stefan said:
Scott said:
That's why I was suggesting hosting with PHP. The configuration file
has an entries that looks like this: [...]
I can do all of that with ASP. Have you seen my builder/test page?
These things are toys by comparison. :)

About that - are you planning on open-sourcing the builder?

Not at this time.
If not, how
do you suggest that the users of your library will build TheirLibrary
when you've lost interest?

I don't plan to lose interest any time soon and certainly won't take the
app down if I do (more likely I would turn it over to somebody else).
Granted, I have let the source in the builder get stale over the past
few weeks (and it has been a very busy few weeks as far as changes and
additions go). I will definitely rectify that shortly. Perhaps as soon
as tomorrow.
Come to think of it, you'd need to open your actual source as well.

The thing is that the source is dynamically generated. So you can
download the full build and change it around, but any changes that are
to be implemented in the builder (and propagated to the files on the
Downloads page) will have to go through me (which I think makes sense
anyway). It will likely never be open in the sense of allowing anyone
to change anything at any time (at least not on my watch).
 

Scott Sauyet

FF3.6 should be considered the Snoopy run (for children and the
disabled).  Try - for example - Opera < 9 or FF1 (or any version of IE).

I've said before that I'm not particularly interested -- for any
current projects or any that I expect to work on soon -- in FF1. I
also don't have any need to support Opera. I know that you find this
quite important, and I applaud you for building a tool that helps with
these older browsers. (How far back do you expect to be able to go?
NN4, IE3?) But for me, that's simply not a major issue.
[ ... ]
 The results are predictable and really very sad as such endless
futility will likely kill browser scripting for good.  :(

Just out of curiosity, for how long have you been predicting its
imminent demise? :)

Fast-changing libraries is the whole problem.  They never needed to be
fast-changing.  Lots of twits think they are in a race against each
other, so they keep piling on the bullshit (ooh, our "major" library is
cooler than _yours_).  :(

Forget the "major" qualifier. Many of these people work on the
libraries for their own edification, and they've become important
players by happenstance. People might still be contributing to them
as they see a need. Most of the libraries were never designed to be
major (Dojo and YUI might be exceptions). Whether they
need to be fast-changing depends upon how people plan on using them.
You are of course entitled to your own opinion. But the makers of
these libraries and their users don't agree.

Honestly, that looks ludicrous to me.  

Okay, it might look that way to you. To me it is a very concise and
intuitive way to say "select all the anchor elements that are inside
divs with a class of navigation but are not inside list items with a
class of special, and by the way, return them in document order,
please." That was the requirement.
And why on earth would you trust
these things with something so complex when they plainly can't deal with
very basic queries consistently cross-browser (or even in an IE-only
environment)?

I understand some of your technical objections to these libraries, but
for me, I have not had a single end-user complaint about JS-related
problems on sites built using these libraries. Nor have I found myself
fighting too much with them to get my job done. I have never used
YUI, but I have worked with dojo, Prototype, MooTools, and jQuery at
various times.
It's blind faith in people who have earned no faith at
all.  On the contrary, after years of failures, everyone should be ready
to either give up on browser scripting or admit that the big open source
projects were not the way.

I simply have not had the bad experience you predict with these
libraries. I do not think they are the be-all and end-all of browser
scripting tools, but they have actually been pretty steady for me.

Aha, but what makes you think it will be accurate?

It has been accurate in every environment I've tested in. There are
of course many untested environments, and there are probably others
that I haven't tested that I should test, but it's hard to argue with
success.

Major league fucking mistake.  I don't know how else to put it.  :)

If I didn't have them, I would almost certainly write my own, or
something structurally similar.

Is it efficient to have to re-do things?  And how about re-testing?  You
can see that these selector query black boxes are not the magical
cross-browser wonders they are marketed as.  How could you use these
things and sleep at night unless you had literally tested _every_
browser in use today?  

Well, I have two things that make that simpler. First, I often work
in a locked-down corporate environment, with a restricted set of
browsers. Second, for the public sites, I have pretty good visitor
statistics. Obviously, I can do nothing about IE9, but I can be
pretty sure that if I haven't seen IE4 in three years, it's not likely
to suddenly start showing up again. Of course it's not certain, but
what in life is?

The number of lines is a _very_ poor indicator of efficiency.  

Of course, but it can stand in as a very rough indicator of effort.

Write a
function once and be done with it.  Don't rely on magical query engines
that aren't.

But how generic should that function be? Should it be able to handle
only that specific case? Or should it work for the equivalent of
this:

div.navigation a:not(dt.special a)

And should the ":not" always be supplied, or should it also be able to
handle just

div.navigation a

And should it be further expanded to... oh hell with it, maybe it
should just handle any CSS selectors I choose to throw at it.

That's not the point.  These are dubious scripts written by half-ass
weekend warrior programmers, not real software.  The confusion may come
from the fact that some of these "warriors" have been picked up by real
companies that don't know any better (browser scripting is a bit of a
mystery for most people).

I've spent a lot of time in the open source world. And it has worked
very well for me. Much of the Java ecosystem where I do most of my
work is open source. What you're describing would apply equally to
many other communities in which I participate. And yet quality
software gets written.

I don't really want to be defending these libraries. They have a
great number of faults. But I simply don't think the situation is as
dire as you paint it.

-- Scott
 

David Mark

Scott said:
I've said before that I'm not particularly interested -- for any
current projects or any that I expect to work on soon -- in FF1.

What about IE?
I
also don't have any need to support Opera.

Intranet apps?
I know that you find this
quite important, and I applaud you for building a tool that helps with
these older browsers. (How far back do you expect to be able to go?
NN4, IE3?) But for me, that's simply not a major issue.

I've tested both NN4 and IE3. The results are posted in my forum.
There is a working example for NN4. IE3 degrades entirely, which is a
successful result. Now, if it just exposed the entire API to IE3
without testing, that would be a problem.

But it isn't about supporting these unused browsers. It's about testing
in the worst sort of environments you can find.
[ ... ]
The results are predictable and really very sad as such endless
futility will likely kill browser scripting for good. :(

Just out of curiosity, for how long have you been predicting its
imminent demise? :)

I didn't say imminent, just demise. It won't happen overnight.
Forget the "major" qualifier. Many of these people work on the
libraries for their own edification, and they've become important
players by happenstance. People might still be contributing to them
as they see a need. Most of the libraries were never designed to be
major (Dojo and YUI might be exceptions). Whether they
need to be fast-changing depends upon how people plan on using them.
You are of course entitled to your own opinion. But the makers of
these libraries and their users don't agree.

I know they don't agree, but a lot of the things I point out about them
are not matters of opinion.
Okay, it might look that way to you. To me it is a very concise and
intuitive way to say "select all the anchor elements that are inside
divs with a class of navigation but are not inside list items with a
class of special, and by the way, return them in document order,
please." That was the requirement.

I just think that it may appear concise and intuitive on the surface,
but there are so many potential issues lurking beneath that it would be
better to sacrifice this apparent conciseness for simplicity and
reliability.
I understand some of your technical objections to these libraries, but
for me, I have not had a single end-user complaint about JS-related
problems on sites built using these libraries.

That doesn't mean there aren't problems. And you may well have managed
to avoid the land mines in enough browsers to make it appear that all is
well. But what happens the next time that either the browsers or the
query engine is upgraded?
Nor have I found myself
fighting too much with them to get my job done. I have never used
YUI, but I have worked with dojo, Prototype, MooTools, and jQuery at
various times.

My condolences on that. I certainly hope that stuff holds up for you.
I simply have not had the bad experience you predict with these
libraries. I do not think they are the be-all and end-all of browser
scripting tools, but they have actually been pretty steady for me.

You may simply lead a charmed life. But it is hard to argue with the
code. Eventually these things will bite you (and may have already
without you knowing it).
It has been accurate in every environment I've tested in. There are
of course many untested environments, and there are probably others
that I haven't tested that I should test, but it's hard to argue with
success.

No, it is trivially easy to argue with success. Observations of
successful results should only reinforce what you already know about the
code. If you are simply trusting that the code is correct, the
observations must be taken with a grain of salt. It only takes one bad
result to invalidate a cross-browser application and you can only test
so many browsers. For instance, IE alone has more configuration
permutations than can be tested in one lifetime. ;)
If I didn't have them, I would almost certainly write my own, or
something structurally similar.

And I implore you to understand that it would be a mistake to do so.
Well, I have two things that make that simpler. First, I often work
in a locked-down corporate environment, with a restricted set of
browsers.

But eventually they will upgrade.
Second, for the public sites, I have pretty good visitor
statistics.

Do not trust those.
Obviously, I can do nothing about IE9, but I can be
pretty sure that if I haven't seen IE4 in three years, it's not likely
to suddenly start showing up again. Of course it's not certain, but
what in life is?

It isn't about supporting IE4. Nobody uses that. It's about the fact
that you can't know just what they are using now (and certainly not what
they will use in the future). What if the locked down corporate
environment decides to leverage your existing app for mobile users?
Of course, but it can stand in as a very rough indicator of effort.

But then add all of the thousand lines that you started with (e.g.
jQuery). :)
But how generic should that function be?

Context, context, context. :)
Should it be able to handle
only that specific case? Or should it work for the equivalent of
this:

div.navigation a:not(dt.special a)

Depends on what you are trying to do.
And should the ":not" always be supplied, or should it also be able to
handle just

div.navigation a

As mentioned, I'd avoid any such things like the plague. Just fetch the
DIVs, filter by class, and then fetch the anchors. Only you know
how general or specific the function needs to be (and only at the time
that you design each application). Patterns will emerge over time and
you will find that you can re-use the functions.
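
Coded directly, that suggestion might look something like this (a
sketch; the class name and function name are illustrative):

// Fetch the DIVs, filter by class, then collect their anchors.
function anchorsInNavDivs(doc) {
  doc = doc || document;
  var out = [], divs = doc.getElementsByTagName('div');
  for (var i = 0; i < divs.length; i++) {
    if ((' ' + divs[i].className + ' ').indexOf(' navigation ') !== -1) {
      var anchors = divs[i].getElementsByTagName('a');
      for (var j = 0; j < anchors.length; j++) {
        out.push(anchors[j]);
      }
    }
  }
  return out;
}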
And should it be further expanded to... oh hell with it, maybe it
should just handle any CSS selectors I choose to throw at it.

No, it should not. That's the main point. CSS selector queries are
diametrically opposed to sound browser scripting practices. The more
general purpose and complicated, the further you move away from
consistent cross-browser results.
I've spent a lot of time in the open source world. And it has worked
very well for me. Much of the Java ecosystem where I do most of my
work is open source.

I don't know a lot about Java or its ecosystem.
What you're describing would apply equally to
many other communities in which I participate. And yet quality
software gets written.

Not so far in this industry.
I don't really want to be defending these libraries. They have a
great number of faults. But I simply don't think the situation is as
dire as you paint it.

You are entitled to your opinion, of course. Stick around a while and
it may well change. ;)
 

Sam Doyle

It seems like the whole internet vs. you! Every blog I seem to visit
lately wants to have a dig at you! Would I be offending you if I
asked... maybe they're right and you're wrong? The numbers are most
definitely against you!
 

David Mark

Sam said:
It seems like the whole internet vs. you! Every blog I seem to visit
lately wants to have a dig at you! Would I be offending you if I
asked... maybe they're right and you're wrong? The numbers are most
definitely against you!

Right about what?
 

Dr J R Stockton

In comp.lang.javascript message
<b20dn5hg9c0d17dqk9d57eb8fsr7cj2u17@4ax.com>, Sat, 13 Feb 2010
13:42:39, Hans-Georg Michna <hans-(e-mail address removed)> posted:
run your tests twice. First you get a raw measurement with, say,
100 iterations. From that you extrapolate the number of
repetitions you need for some desired total time and run the
test again with that number.

This way the new Date() calls will have no effect on your
measurements, because there will be none inside the loop.

To deduct the loop counter and test time, you can determine and
subtract the time needed for an empty loop.


Good.

But:

If this testing is being taken really seriously, it might be wise to do
it on a specially-configured boot of the system such that a minimum of
other processes are being run. The fewer other processes that might
grab the CPU, the more accurate the results are likely to be.

One can, instead of what you suggest, extrapolate the number for 1/Nth
of the total time, and measure N times; but do it contiguously so that
there are only N+1 calls of new Date(). The answer will still be
obtained from the difference between the last and first times; but by
checking that the partial answers do not vary too much, one can protect
against believing an answer that is unusually influenced as in the
previous paragraph.

Choose to use, as much as possible, a system that gives good and
meaningful resolution in new Date().
<URL:http://www.merlyn.demon.co.uk/js-dates.htm#Ress> indicates what I
mean, but needs updating - results for Vista and Windows 7 are lacking.
On my WinXP, Firefox apparently has a resolution of 1 ms, Opera of 1/64
s.
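
For concreteness, a minimal sketch of Hans-Georg's two-pass scheme
(testFn stands for whatever operation is being measured):

function timeIt(testFn, targetMs) {
  var calibration = 100, i, t0, t1;
  // First run: a raw measurement with a small iteration count.
  t0 = new Date().getTime();
  for (i = 0; i < calibration; i++) { testFn(); }
  t1 = new Date().getTime();
  // Extrapolate the count needed for the desired total time.
  var perCall = Math.max((t1 - t0) / calibration, 0.0001);
  var n = Math.ceil(targetMs / perCall);
  // Second run: no new Date() calls inside the loop.
  t0 = new Date().getTime();
  for (i = 0; i < n; i++) { testFn(); }
  t1 = new Date().getTime();
  return (t1 - t0) / n; // mean milliseconds per call
}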
 

Dr J R Stockton

In comp.lang.javascript message
<m4dko5hqj5dno9h9vuf508bdh05uj10kif@4ax.com>, Sun, 28 Feb 2010
10:31:32, Hans-Georg Michna <hans-(e-mail address removed)> posted:
My proposal was directed at avoiding Date() altogether inside
the loop. I guess that something like

for (i = 0; i < 10000; i++) ...

is much faster and less distorting than Date(), which delves
into the operating system and may cause unpredictable delays.

One just has to replace 10000 with a suitable number, and that's
not difficult. For a few tests one could do it manually, by
trial and error. For a more systematic approach one would use a
reasonably low number for a first run and extrapolate the number
for the second, final run from that.

That leaves you at the mercy of interventions by the OS.

On my P4/3GHz, using Firefox 3.0.18, new Date() takes about 2.5 µs; in
Chrome 4.0, it takes 0.4 µs. A few of those, during a timing run taking
an appreciable fraction of a second, will not matter.

Another approach would be to use (cf. above)
for (i = 0; i < imax; i++)
and to start with imax = 1, doubling it until the interval became
adequate and then doing a few tests with that imax, checking that they
are in reasonable agreement.

The time for an empty loop should be carefully measured and subtracted.
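
Sketched in code (the function name is illustrative, and the
empty-loop deduction is subject to the optimization caveat raised
later in the thread):

function calibrateAndTime(testFn, minMs) {
  var imax = 1, elapsed = 0, i, t0;
  // Double imax until the run takes long enough to measure.
  while (elapsed < minMs) {
    imax *= 2;
    t0 = new Date().getTime();
    for (i = 0; i < imax; i++) { testFn(); }
    elapsed = new Date().getTime() - t0;
  }
  // Time an empty loop of the same length and deduct it.
  t0 = new Date().getTime();
  for (i = 0; i < imax; i++) { }
  var overhead = new Date().getTime() - t0;
  return (elapsed - overhead) / imax; // ms per call, loop overhead removed
}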

ECMA should be pressed to add to the language a global function (or ?)
which accesses the finest available time counter of the system. On a
modern PC, a single machine instruction will read a 64-bit CPU since-
boot cycle count, and its nominal frequency is also IIRC available.
Should be easy to implement, if the result is specified as machine-
dependent or 0/false if not available. Only the low 53 bits need be
returned in that case (2^64 at 10GHz is over 60 years, 2^53 is over 10
days).
 

Thomas 'PointedEars' Lahn

Scott said:
Okay, it might look that way to you. To me it is a very concise and
intuitive way to say "select all the anchor elements that are inside
divs with a class of navigation but are not inside list items with a
class of special, and by the way, return them in document order,
please." That was the requirement.

The question remains, though, did the requirement make sense in the first
place? Was it really not possible to solve this problem differently, for
example with event bubbling, with using the CSS cascade, with using
standards-compliant event-handler attributes, or a combination of them?
And then, is the approach that has been chosen to meet the requirement
more reliable than, or at least as reliable as, the alternatives?

The answer to the first question is very likely "no". The answers to the
second and third question are definitely "no".

As for the second one, if you lose the rather unreasonable notion of having
to separate markup and the script written specifically for that markup, of
"JavaScript" needing to be "unobtrusive", everything can be done without
having to add event listeners dynamically, especially with server-side
scripting, or select elements with client-side scripting only to modify
their presentation globally (that is what the CSS cascade is for and you
can, very easily and interoperably, add new stylesheets to override the
[self-imposed] defaults).

And as for the third one, no approach that depends on the availability of
API features to define event listeners in the first place, especially not
on those being proprietary (MSHTML does not support W3C DOM Events to date)
can ever be more reliable than one that uses only or mostly standards-
compliant, well-defined and proven-to-be-interoperable features like event-
handler attributes.


PointedEars
 

Michael Haufe (\TNO\)

The time for an empty loop should be carefully measured and subtracted.

This approach would assume that the implementation in question doesn't
optimize away the loop as a useless construct.
 

Antony Scriven

This approach would assume that the implementation in
question doesn't optimize away the loop as a useless
construct.

How? It's incrementing a variable. --Antony
 

Lasse Reichstein Nielsen

Antony Scriven said:
How? It's incrementing a variable. --Antony

If it's a local variable (and you *really* shouldn't use a global
variable in a loop, or write benchmark code that runs at top level),
and it's not read again afterwards, and it's possible to see that the
loop always terminates, then it's a safe optimization to remove the
entire loop.

I.e.
function test() {
  var x = 42;
  for (var i = 0; i < 1000000; i++) { x = x * 2; }
}

This entire function body can safely be optimized away.
Whether a JavaScript engine does the necessary analysis to determine
that is another question, but it's a possible optimization.

Quite a lot of stupid micro-benchmarks can be entirely optimized away
like this.

/L
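
One common defense is to make the loop's result observable, so the
engine cannot prove the work is dead (a sketch building on the example
above):

var sink = 0; // deliberately global: a side effect the engine must keep
function test() {
  var x = 42;
  for (var i = 0; i < 1000000; i++) { x = (x * 2) % 1000003; } // kept finite
  sink += x; // reading x afterwards keeps the loop live
}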
 
