JS framework

Thomas 'PointedEars' Lahn

kangax said:
Btw, IIRC, IE6 - not understanding multiple classes - will parse this as:

#sidebar .faq-section div.hidden

Good catch, thanks.
But that could be worked around, of course.

Like

#sidebar.toggle .faq-section div.hidden {
display: none;
}

?
This is a very elegant approach, indeed. The only problem is that
clicking on `.toggle` should usually hide *another element*. For
example, the toggle link can be located in an H2, and the H2 might be
followed by a DIV; that DIV is the one that needs to be toggled. I
suppose one can add a "hidden" class to a parent element (or any other
ancestor), but then the relevant CSS rules add up in complexity.

Not necessary:

<h2 class="toggler" id="foo" ...>
<div id="foo-div" ...>...</div>

That said, elements related this way should be nested. The above example is
an exception.
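
For illustration, a minimal click handler along those lines; the id and
class names are only examples, not taken from any real page:

document.onclick = function (e) {
  var target = e ? e.target : window.event.srcElement;

  if (target && /(^|\s)toggler(\s|$)/.test(target.className))
  {
    // "foo" -> "foo-div", as in the markup above
    var related = document.getElementById(target.id + "-div");
    if (related)
    {
      related.className = /(^|\s)hidden(\s|$)/.test(related.className)
        ? related.className.replace(/(^|\s)hidden(\s|$)/, " ")
        : related.className + " hidden";
    }
  }
};

A rule like `div.hidden { display: none; }` then takes care of the
actual hiding.
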
Besides, one would need to constantly look up the corresponding
declarations in the stylesheet when reading/changing this code.

CSS syntax is concise and well-known. It's easy to understand and maintain.
-- kangax, <

PointedEars
 
Garrett Smith

kangax said:
Thomas said:
kangax said:
Garrett Smith wrote:
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance
[than jQuery]
You can't do much with `document.getElementById`. Sometimes you want
to select by class, attribute, descendant or a combination of those.
I agree that more complex selectors are rarely needed (well, except
maybe something like `nth-child("odd")`).

Event delegation is great, but then you might want some kind of
`match` method to determine whether a target element matches
selector. `match` is not necessary, of course, but testing an element
manually adds noise and verbosity.

Iterating over all elements to add event listeners to all target elements
adds much more noise and unreliability. For universally bubbling events,

I never said anything about adding event listeners to elements being
iterated over. I stopped doing that a while ago in favor of event
delegation. Nevertheless, I still do iterate over elements (by class,
descendant, attribute, etc.) to perform certain action on them, such as
toggling visibility.

I suppose manipulating relevant stylesheet rules could achieve similar
result.

How do you do it?
use event bubbling, and add one event listener to a common ancestor. For
the rest, it depends on how many target elements there are: the more there
are, the more likely it is that the listeners should be added dynamically
(to ease maintenance). That said, some of those events, like `mouseover'
on links, do not even require DOM scripting nowadays (not even in bad old
IE 6).
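
A minimal sketch of that, assuming a small helper for attaching
listeners (the `nav' id is only a placeholder):

function addListener(obj, type, listener)
{
  // crude cross-model wrapper; a real one needs more care
  if (obj.addEventListener)
  {
    obj.addEventListener(type, listener, false);
  }
  else if (obj.attachEvent)
  {
    obj.attachEvent("on" + type, function () {
      return listener.call(obj, window.event);
    });
  }
}

var nav = document.getElementById("nav");
if (nav)
{
  // one listener on the common ancestor instead of one per target
  addListener(nav, "click", function (e) {
    var target = e.target || e.srcElement;
    // inspect `target' here and act only on the elements of interest
  });
}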

As for verbosity, I fail to see how that could be a drawback.

Verbosity has a tendency to add noise. Noise makes it hard to
understand, maintain and refactor code. That's all I meant.

Compare:

if (matches(element, '#sidebar .faq-section .toggle')) { ... };

vs. something like:

if (hasClass(element, 'toggle') &&
    hasParentWithClass(element, 'faq-section') &&
    hasParentWithId(element, 'sidebar')) { ... }

CSS syntax is concise and well-known. It's easy to understand and maintain.

I agree. More to that: CSS is easier to understand when it contains
meaningful and specific names.

Regarding the example code posted, that much should never be necessary.
It is a straw man argument there and nothing more.

If the document has more than one type of toggle, then "toggle" is not
specific enough. In that case, the actionable object's class can be
changed to something that describes it more specifically.

var cn = target.className;

if (cn && hasToken(cn, "faq-toggle")) {
  FAQExpander.getByNode(target).toggle();
}

hasToken is useful, BTW, because it allows the className property to be
retrieved once and checked multiple times, for an element that may have
multiple className tokens, where the additional token(s) may represent a
different state, e.g. "faq-toggle-expanded".
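
Such a hasToken can be as simple as the following sketch (it assumes
single-space separators; a RegExp-based version handles arbitrary
whitespace just as easily):

function hasToken(str, token) {
  // whitespace-delimited token check, suitable for className strings;
  // assumes single-space separators between tokens
  return (" " + str + " ").indexOf(" " + token + " ") > -1;
}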

Garrett
 
David Mark

[snip]
As I said to a fellow developer yesterday... "jQuery provides
convenience, not power. Don't confuse the two."

You say a lot of things. Take this:

http://groups.google.com/group/jquery-dev/browse_thread/thread/8d7752dcfedb1c08

"Yes. Avoid the attr() function in jQuery, it's been broken for a long
time."

LOL. Two years too late. What an ineffectual douche bag.

"Wow, why would you want to use jQuery to do this? Instead, try:"

Probably because disingenuous, know-nothing dip-shits have been
promoting it for years as "easier than Javascript." Never mind that
it *is* Javascript or that it introduces far more headaches than it
alleviates.

Or this:

http://groups.google.com/group/jquery-dev/browse_thread/thread/4bab8abc05e60c80#

That's right genius, jQuery sucks at IE, even versions that have been
out for a decade. As you know:

"IE6 has been around for a decade"

...it's like hearing my words coming out of your mouth (two years
later). Odd, as you spent much of those two years bitching about such
criticism of jQuery. Meanwhile you've been patching your own private
jQuery. I don't know if you are delusional, hypocritical or what, but
you are definitely something less than useful.

And pity this poor guy as he probably heard jQuery was "wonderful"
from some "expert" like you.

http://groups.google.com/group/jquery-en/browse_thread/thread/70e8a050f60f8b08#

You couldn't see jQuery had issues with browser sniffing. You
couldn't see that the pivotal ready, attr, etc. methods were pure
fantasy. You couldn't see me writing a better script. You couldn't
see how that script was as simple to use as jQuery. You sure as hell
couldn't see how influential that script would become. Small wonder
you can't see what is happening now. You are just a back-pedaling
schmuck with a big mouth.

And to take a page out of your pathetic playbook: seems nobody wants
to work with you, even your fellow jQuery washouts.

HTH.
 
Matt Kruse


I've only hit this bug once or twice before, because I don't use many
animations in jQuery. For the most part, the problems that jQuery has
with IE haven't affected me. When they do, I talk about it in order to
figure out the best way to handle it.

It's odd to me that you would read the support groups for a script
that you find so repulsive and useless. Seems like a waste of time on
your part.
"IE6 has been around for a decade"
...it's like hearing my words coming out of your mouth (two years
later.)

Your criticisms have certainly taught me some things and probably
affected my attitude towards jQuery in some ways. But you are just one
fish in a sea of opinions, ideas, and thoughts. You are not quite as
important or wise or influential as you seem to think you are. Your
fascination with pointing out how "right you were" two years ago and
how nobody listened to you seems... well... odd to me.
You couldn't see jQuery had issues with browser sniffing.  You
couldn't see that the pivotal ready, attr, etc. methods were pure
fantasy.  You couldn't see me writing a better script.  You couldn't
see how that script was as simple to use as jQuery.  You sure as hell
couldn't see how influential that script would become.  Small wonder
you can't see what is happening now.  You are just a back-pedaling
schmuck with a big mouth.

http://en.wikipedia.org/wiki/Megalomania

Matt Kruse
 
Diego Perini

I would definitely care about 20ms.

As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.

In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.

Why add something that is not necessary?

FWICS, NWMatcher still uses function decompilation:
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...

Garrett

Garrett,
if I were sure nobody had overwritten native functions with broken
replacements, I wouldn't have used function decompilation; I would
probably have replaced that hated line:

(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);

with a simpler:

typeof object[method] == 'function';

You don't see the problem because you haven't faced it, but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
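
To illustrate why the simpler test is not enough (the overwrite below is
made up; it only mimics what a careless script might do):

// a careless third-party script replacing a host method:
document.getElementById = function (id) {
  // ... incomplete emulation ...
};

// a plain capability test can no longer tell the difference:
typeof document.getElementById == "function";   // still true

// whereas String(document.getElementById) no longer contains
// "[native code]", which is what the regular expression above checks for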

Diego Perini
 
Diego Perini

Just to be sure, I checked IE6. It was reporting either 0 or 16ms (with
16ms occurring much more rarely), so I assume it's somewhere in that range.

I'm surprised you would care about those milliseconds. I would worry
about size (48KB original, probably ~30-40 minified and ~15-20 gzipped)
much more than this practically instant initialization.

But then again, most of the time you wouldn't use all of the CSS3
selectors. There's often no need for a fully-compliant CSS3 selector
engine.




Of course. Testing a class is easy. It is descendants and attribute values
(as well as combinations of those) that introduce complexity.




I'll follow up if I can present a good example.




Necessity is a relative concept ;) Whether such an abstraction is to be
used should be determined on a per-project basis, based on the nature of
the application and its requirements. As always, it's all about context
and balance.




That's one of the things I dislike about it. Overall, though, it's much
better than anything else I've seen. I helped its author with some of
the feature tests, so I know it doesn't use sniffing or any nonsense
inference. If you see something suspicious, speak up; the author is
usually very responsive to any kind of feedback.

@kangax,
about the size of NWMatcher: the latest version is currently 15 KB
minified and about 5.5 KB gzipped. There are a lot of comments in the
original sources, and NWMatcher is bigger because it tries to comply
with most of the CSS3 and HTML5 specifications where others just
skipped them.

NWMatcher is not a framework, so if you don't need it, just do not load
it. NWEvents offers delegation without NWMatcher, too; it has a minimal
matcher built in for simple selectors, which are probably the most used
in general. If one needs complex selector support to add behaviors to
big and complex HTML, then there is no other way than to load a capable
CSS engine like NWMatcher.

The strong point of delegation is that it works cross-browser and helps
avoid "onload" problems and other tricky "ready" detections.

NWEvents offers W3C capturing and bubbling emulation on IE to achieve
cross-browser delegation, and it also works with all form events.

Thank you for promoting my humble contribution.

Diego Perini
 
Thomas 'PointedEars' Lahn

Diego said:
You don't see the problem because you haven't faced it,

I don't think so.
but it is a problem and a very frequent one when you have to ensure
compatibility with code written by others not interested in keeping a
clean environment.

The problem arises in the first place by using code written by others not
interested in keeping a clean environment (unmodified). Working competently
avoids the problem.


PointedEars
 
Diego Perini

I don't think so.


The problem arises in the first place by using code written by others not
interested in keeping a clean environment (unmodified).  Working competently
avoids the problem.

PointedEars
--
Anyone who slaps a 'this page is best viewed with Browser X' label on
a Web page appears to be yearning for the bad old days, before the Web,
when you had very little chance of reading a document written on another
computer, another word processor, or another network. -- Tim Berners-Lee

Fortunately I am not in the position to force users to only use code
keeping a clean environment, I would loose my time.

I prefer to write my own code and ensure as much as I can that it
works with other FW or snippet of code.

Actually I use feature testing wherever possible and try not to
pollute the global scope.


Diego Perini
 
Thomas 'PointedEars' Lahn

Diego said:
Thomas said:
Diego said:
You don't see the problem because you haven't faced it,
I don't think so.
but it is a problem and a very frequent one when you have to ensure
compatibility with code written by others not interested in keeping a
clean environment.
The problem arises in the first place by using code written by others
not interested in keeping a clean environment (unmodified). Working
competently avoids the problem. [...]

Please trim your quotes, do not quote signatures.
Fortunately I am not in the position to force users to only use code
keeping a clean environment,

I would not call that "fortunate".
I would loose my time.

You prefer it fastened? ;-)
I prefer to write my own code and ensure as much as I can that it works
with other FW or snippet of code.

By which you promote the proliferation of code of bad quality.
Actually I use feature testing wherever possible and try not to pollute
the global scope.

Good.


PointedEars
 
Garrett Smith

Diego said:
I would definitely care about 20ms.

As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.

In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.

Why add something that is not necessary?

FWICS, NWMatcher still uses function decompilation:
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...

Garrett

Garret,
if I where to be sure nobody had overwritten native functions with
broken replacements I wouldn't had used function decompilation, I
would probably have replaced that hated line:

(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);

with a simpler:

typeof object[method] == 'function';

You don't see the problem because you haven't faced it,

Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?

but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.

Modifying host objects is known to be problematic.

Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.

General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappuccino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around the problematic
design decision of modifying host objects. Wouldn't it make more sense
to just not use those?

Bubbling is fast, can result in smaller, cleaner code, and often
obviates the perceived need for a "document ready" handler. Examining
the target on a bubbled event is usually trivial.

NWMatcher does not seem to provide a cleaner and faster abstraction than
simply examining the target with tagName and a hasClass function. YAGNI.
No reason to need it and no reason for the workaround in it.
Diego Perini

Garrett
 
Diego Perini

Diego said:
Thomas said:
Diego Perini wrote:
You don't see the problem because you haven't faced it,
I don't think so.
but it is a problem and a very frequent one when you have to ensure
compatibility with code written by others not interested in keeping a
clean environment.
The problem arises in the first place by using code written by others
not interested in keeping a clean environment (unmodified).  Working
competently avoids the problem. [...]

Please trim your quotes, do not quote signatures.
Fortunately I am not in the position to force users to only use code
keeping a clean environment,

I would not call that "fortunate".

If it were so easy to do I would have said "unfortunately".
You prefer it fastened? ;-)

No, but you are already doing a good job and I hope some day you will
succeed! ;-)
By which you promote the proliferation of code of bad quality.

I do not promote them in the sense you are underlining, though I like
their intentions even if they didn't fulfill perfection. ;-)

Thanks. Also, you haven't looked at my code; I perfectly understand you
may not have the time nor be interested in it.
 
Diego Perini

Garret,
if I where to be sure nobody had overwritten native functions with
broken replacements I wouldn't had used function decompilation, I
would probably have replaced that hated line:
      (/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
      typeof object[method] == 'function';
You don't see the problem because you haven't faced it,

Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?

Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a


Modifying host objects is known to be problematic.

Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know which library will be chosen by my users, nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.

You correctly use "should", but reality is what I work with. I will be
happy to include APE in my library chooser as soon as possible; still,
I would have to support all the libraries with that shameful habit.
Sure, when there are enough "perfect" libraries I could start to leave
out those that are not perfect.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappucino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around such problematic
design of the modification of host objects. Wouldn't it make more sense
to just not use those?

The extra code is very small and currently worth it; it comes down to
this: I only needed to know that get/hasAttribute were the native
browser implementations and not a third-party attempt to fix them; no
third-party replacement of get/hasAttribute for IE that I know of has
the equivalent capabilities of IE regarding XML namespaced attributes.
Bubbling is fast, can result in smaller, cleaner code, and often
obviates the perceived need for a "document ready" handler. Examining
the target on a bubbled event is usually trivial.

Yes, bubbling is fast and capturing is faster; both are offered,
integrated in an easy set of APIs (that can be changed) for event
management, in NWEvents, which does not need NWMatcher to be loaded for
simple selectors.
NWMatcher does not seem to provide a cleaner and faster abstraction than
simply examining the target with tagName and a hasClass function. YAGNI.
No reason to need it and no reason for the workaround in it.

NWMatcher primarily provides a "match()" method, which still does not
exist in any browser.

Doing the same with native QSA on Safari/Chrome is 4 to 10 times
slower than with NWMatcher.
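
For comparison, emulating match() on top of querySelectorAll usually
looks something like this sketch, which is where the overhead comes
from:

function matchesViaQSA(element, selector) {
  // run the selector against the whole document, then scan the
  // resulting NodeList for the element -- simple, but expensive
  var nodes = document.querySelectorAll(selector),
      i, length;
  for (i = 0, length = nodes.length; i < length; i++) {
    if (nodes[i] === element) {
      return true;
    }
  }
  return false;
}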

If you haven't, I suggest you read a bit about the importance of a
"match()" method in this great article by David Andersson
(Liorean):

http://web-graphics.com/2006/05/12/javascript-and-selectors/

I will try to search the webapi mailing list archive to find out when
that proposal was discarded.
 
Thomas 'PointedEars' Lahn

Diego said:
Thomas said:
Diego said:
Thomas 'PointedEars' Lahn wrote:
Diego Perini wrote:
You don't see the problem because you haven't faced it,
I don't think so.
but it is a problem and a very frequent one when you have to ensure
compatibility with code written by others not interested in keeping a
clean environment.
The problem arises in the first place by using code written by others
not interested in keeping a clean environment (unmodified). Working
competently avoids the problem. [...]
Fortunately I am not in the position to force users to only use code
keeping a clean environment,
I would not call that "fortunate".

If it where so easy to do I would have said "unfortunately".

Your logic module is borken.
No, but your are already doing a good job and I hope some day you will
succeed ! ;-)

I don't follow. Maybe you want to look up "loose"?
I do not promote them in the sense you are underlining,

Yes, you do. If you go to great lengths like this to support inherently
faulty concepts, you are promoting them, whether you want to or not. If
you did not support them, their problems would not be covered up, which
would at least have a chance of leading to better code quality. In
addition, your concept of covering up is faulty, too, so the overall code
quality certainly cannot increase with this approach.
though I like their intentions even if they didn't fullfill perfection. ;-)

Spoken like a true promoter of heavily advertised libraries: clueless and
irresponsible.
Thanks. Also you haven't looked to my code,

It's been reviewed in this very thread already; along with your responses,
that sufficed for me to assess your position on the learning curve. You
have a rather long way to go.
I perfectly understand you may not have the time nor be interested in it.

Rest assured you don't understand anything (about me).


PointedEars
 
Diego Perini

Diego said:
Thomas said:
Diego Perini wrote:
Thomas 'PointedEars' Lahn wrote:
Diego Perini wrote:
You don't see the problem because you haven't faced it,
I don't think so.
but it is a problem and a very frequent one when you have to ensure
compatibility with code written by others not interested in keeping a
clean environment.
The problem arises in the first place by using code written by others
not interested in keeping a clean environment (unmodified).  Working
competently avoids the problem. [...]
Fortunately I am not in the position to force users to only use code
keeping a clean environment,
I would not call that "fortunate".
If it where so easy to do I would have said "unfortunately".

Your logic module is borken.
No, but your are already doing a good job and I hope some day you will
succeed ! ;-)

I don't follow.  Maybe you want to look up "loose"?

I slipped in a double "o"; I meant "lose". Thanks for the correction.
Yes, you do.  If you go to great lengths like this to support inherently
faulty concepts, you are promoting them, if you want it or not.  Because
if you would not support them, their problems would not be covered up, which
has at least a chance to lead to better code quality.  In addition, your
concept of covering up is faulty, too, so the overall code quality certainly
cannot increase with this approach.


Spoken like a true promoter of heavily advertised libraries: clueless and
irresponsible.

I always feel clueless when somebody teaches me something new. This has
not been the case.

I never feel irresponsible, even when I say something wrong; there is
always time to rectify or to change my mind.
It's been reviewed in this very thread already; along with your responses,
that sufficed for me to assess your position in the learning curve.  You
have a rather long way to go.

I will try to do better. I am not competing with any FW; I just have
working events/selectors libraries that perfectly fit my needs. They
were designed and discussed far before these threads, with far more
competent people, and were referenced by others to implement similar
solutions.

OK, OK, we are all culprits. And I can accept that we are not in the
same circle of hell.
Rest assured you don't understand anything (about me).

Really, I didn't come here for that particularly.

Can you finally point me to a better "match()" method and delegation
implementation, just to compare?

Anyway, I still appreciate your taking the time to comment. Sad that it
hasn't yet proved as useful as I would have expected.
 
Garrett Smith

Diego said:
Diego said:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S pisze:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery CSS-style selectors
and Prototype.js augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious :)
I don't think last two really matter. Execution of an entire fully CSS3
compliant selector engine (such as NWMatcher - the best one of them I've
ever seen) doesn't take more than a few milliseconds. It's 2ms in
FF3.0.10. Probably not more than 20-30ms in relatively slow IE6,7. Would
you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation:
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ: http://jibbering.com/faq/
Garret,
if I where to be sure nobody had overwritten native functions with
broken replacements I wouldn't had used function decompilation, I
would probably have replaced that hated line:
(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
typeof object[method] == 'function';
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?

Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a

Modifying host objects is known to be problematic.

Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know what library will my choosen by my users nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.

You correctly use "should" but reality is what I work with. Will be
happy to include APE in my library chooser as soon as possible, still
I would have to support all the libraries with that shaming habit.
Sure when there are enough "perfect" libraries I could start to leave
out those that are not perfect.

APE is AFL, so it would probably not be licensable for most projects.
Having effectively no user base, I can remove things that should not be
there.

Perhaps in the future, I'll go with a less restrictive Mozilla-type
license.

Supporting potential problems in third-party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation-dependent" function
decompilation, which is known to be faulty in at least Opera Mobile[1]
and is specified to return a representation with the syntax of a
FunctionDeclaration, which is not what IE, Gecko, Safari, Chrome, or
Opera produce when Function.prototype.toString is called on an anonymous
function.
The extra code is very small and worth it currently; it comes down I
only needed to know that get/hasAttribute where the native browser
implementations and not a third party attempt to fix them, no third
party extension of get/hasAttribute for IE that I know has the
equivalent capabilities of IE regarding XML namespaced attributes.

If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.

XML Namespaced attributes should be avoided. Why would you want to use that?
Yes bubbling is fast and capturing is faster, both are offered
integrated in an easy set of API (that can be changed) for event
management and are in NWEvents which does not need NWMatcher to be
loaded for simple selectors.

Bubbling is available for free without NWEvents. I can't see any reason
for wanting to try to add a code to simulate capturing in IE.
NWMatcher primarily provides a "match()" method which still does not
exists in any browser.

I am aware of that. I don't need that.
Doing the same with native QSA on Safari /Chrome is 4 to 10 time
slower than with NWMatcher.

QuerySelectorAll was, I think, a design mistake inspired by (a)
javascript libraries.
If you havent, I suggest you read a bit about the importance of a
"match()" method from this great reading from David Andersson
(Liorean):

http://web-graphics.com/2006/05/12/javascript-and-selectors/

I see:
| I think they’re currently about to make several design errors that I
| would prefer to be corrected before people start implementing the
| thing.

He goes on to make some good points, Anne van Kesteren did not comment
on those in his response. None of the points raised stopped
implementations from implementing the thing and none of his ideas got
into the API, as Anne published it. What a surprise.

http://www.mail-archive.com/[email protected]/msg00782.html

Looks like the last of that one and the reason given:-
| Parsing the selector string is will likely be by far the least
| expensive part of the match operation.
- Maciej totally missed the point.

If Selectors had used those ideas, it could have been designed in a way
that was (more) useful, but that is fiction.

Given the problem you have presented to solve with NWMatcher, I can
solve that with the following code:-

- which, as you see, is exactly 0 lines.

The problem of examining a target node is usually trivial. A
well-structured semantic page should be easy to navigate using the DOM.
I've been using bubbling for years. Checking tagName, id, and className,
perhaps other properties is enough. A selectors API is not needed.

Imposing arbitrary structural requirements about where an element must
exist is makes the code more fragile and more brittle. When IDs are used
in the HTML, someone editing the HTML can be expected not to change it
without first ascertaining that doing so will not create a bug. I
mention this because like xpath, a selectors API makes it easy to fall
into this trap of referencing fragments which are arbitrary and whose
existence is insignificant or coincidental. Sort of a "non sequitir"
code assertion where the code looks for something matching an arbitrary
fragment.

What's worse is that the code required for checking selectors would be
checking the attribute values. Attributes are broken in IE. Instead of
trying to "make it work", "don't do that" and just use properties. This
means the hand-rolled Selectors API goes out the window. YAGNI anyway.
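
For example (a sketch; the element lookups are placeholders), reading
properties instead of attributes sidesteps the whole mess:

// in IE <= 7, getAttribute is mapped onto properties and behaves
// surprisingly, so read the properties directly instead
var input = document.getElementsByTagName("input")[0];
if (input && input.disabled) {        // not getAttribute("disabled")
  // ...
}

var label = document.getElementsByTagName("label")[0];
if (label) {
  var forId = label.htmlFor;          // not getAttribute("for") in IE <= 7
}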

There is no reason to add the overhead of anything else to the page.
Less code to maintain means less potential for bugs and faster
downloads. Less is more. The pertinent XP adage: "Do the simplest thing
that could possibly work, but not any simpler". We're not getting paid
by SLOC.

I see where you are coming from with the API. It would be convenient to
have that supported by native code, but that is not the case.

Looking at the code, I think I see a problem in NW.Dom.getChildren:-

| getChildren =
|   function(element) {
|     // childNodes is slower to loop through because it contains
|     // text nodes
|     // empty text nodes could be removed at startup to compensate
|     // this a bit
|     return element[NATIVE_CHILDREN] || element.childNodes;
|   },

The children property is inconsistent across the most common browsers.
It is best left alone.

For Firefox 3.1, there is no "children", so the childNodes property is
used. For childNodes, textNodes and comment nodes will be included and
returned. In Firefox 3.5, and versions of Safari, Opera, and probably
other browsers, "children" is only Elements. In IE, the "children"
property returns a list of "DHTML Objects" and this includes comment
nodes. So, your function returns different results in IE, Firefox <=
3.1, and {Firefox 3.5, Opera, and Safari}.
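
A consistent alternative is to filter explicitly, along these lines (a
sketch, not NWMatcher's actual code):

function getChildElements(element) {
  var result = [],
      node = element.firstChild;
  // keep only element nodes (nodeType 1), so text and comment
  // nodes never show up in the result, in any browser
  while (node) {
    if (node.nodeType === 1) {
      result[result.length] = node;
    }
    node = node.nextSibling;
  }
  return result;
}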

<IE Madness>
MSIE's "DHTML Objects" includes comment nodes, which MSDN calls "Comment
Elements". They have all the methods and properties of other elements,
such as "attachEvent", "style", "offsetTop", a read-only "innerHTML",
and ironically, "children".

Try the innerHTML property of a comment in IE:-
javascript: alert(document.createComment("foo").innerHTML = "test");

This shows an obvious LSP violation. The API design of shoehorning a
Comment interface as a subclass to the Element interface was a mistake
that has persisted into IE8.
</IE Madness>

Regardless of the getChildren method inconsistency, I don't need that.
AISB, YAGNI!

Garrett
 
Thomas 'PointedEars' Lahn

Diego said:
I always feel clueless when somebody teach me something new. This has
not been the case.

I never feel irresponsible even when I say something wrong, there is
always time to rectify or change mind.

That's what I mean. A responsible person would first think, then speak.
I will try to do better, I am not competing with any FW
FW?

I just have working events/selectors libraries that perfectly fit my needs,
it was designed and discussed far before these threads with far more
competent people

People you *assume* to be far more competent.
and was referenced by others to implement similar solutions.

A million flies can't be wrong? I wondered whether that "argument" would
come. You script-kiddies are so very predictable.
Ok. Ok we are all culprit. And I can accept if we are not in the same
hell circle.

We are not even in the same sphere, if you want to employ that figure of speech.
Really, I didn't come here for that particularly.

Then why not stop making assumptions about it?
Can you finally point me to a better "match()" method and delgation
implementation just to compare ?

"Better" would be defined by the (for me) unacceptable conditions that your
approach is based on, therefore no.
Anyway I still appreciate your time to comment. Sad it hasn't yet show
so useful as I would expect.

I am not here to entertain you.


PointedEars
 
Diego Perini

Diego said:
Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S pisze:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery CSS-style selectors
and Prototype.js augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
 something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious :)
I don't think last two really matter. Execution of an entire fully CSS3
compliant selector engine (such as NWMatcher - the best one of them I've
ever seen) doesn't take more than a few milliseconds. It's 2ms in
FF3.0.10. Probably not more than 20-30ms in relatively slow IE6,7. Would
you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation:
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ: http://jibbering.com/faq/
Garret,
if I where to be sure nobody had overwritten native functions with
broken replacements I wouldn't had used function decompilation, I
would probably have replaced that hated line:
      (/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
      typeof object[method] == 'function';
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know what library will my choosen by my users nor
can I force them to choose one or another.
You correctly use "should" but reality is what I work with. Will be
happy to include APE in my library chooser as soon as possible, still
I would have to support all the libraries with that shaming habit.
Sure when there are enough "perfect" libraries I could start to leave
out those that are not perfect.

APE is AFL, so would probably not licenseable for most projects.  Having
effectively no user-base, I can remove things that should not be there.

Perhaps in the future, I'll go with a less restrictive mozilla-type
license.

Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least opera mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
The extra code is very small and worth it currently; it comes down I
only needed to know that get/hasAttribute where the native browser
implementations and not a third party attempt to fix them, no third
party extension of get/hasAttribute for IE that I know has the
equivalent capabilities of IE regarding XML namespaced attributes.

If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.

Correct, in that case it was Prototype messing with those native
functions.
XML Namespaced attributes should be avoided. Why would you want to use that?

NWMatcher tries to support as much as possible of the W3C specifications
(the xml:lang attribute and similar), which are supported by the IE natives.
Bubbling is available for free without NWEvents.  I can't see any reason
for wanting to try to add a code to simulate capturing in IE.

Form events (submit/reset) do not bubble in IE and some other browsers,
and other events do not bubble consistently across browsers; NWEvents
turns this into a cross-browser technique that allows any event to
bubble (or be captured).
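
To give an idea of what delegating submit by hand involves, a sketch
only (this is not how NWEvents does it):

// attach directly to each form once and route to a single handler,
// so the rest of the code can pretend "submit" bubbled up
function delegateSubmit(handler) {
  var forms = document.getElementsByTagName("form");
  for (var i = 0; i < forms.length; i++) {
    forms[i].onsubmit = function () {
      return handler(this);   // `this' is the form being submitted
    };
  }
}

delegateSubmit(function (form) {
  // inspect `form' and decide whether to allow the submission
  return true;
});
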
I am aware of that. I don't need that.

It is OK if you don't need it; assuming that nobody else needs it is
quite presumptuous.
QuerySelectorAll was, I think, a design mistake inspired by (a)
javascript libraries.



I see:
| I think they’re currently about to make several design errors that I
| would prefer to be corrected before people start implementing the
| thing.

He goes on to make some good points, Anne van Kesteren did not comment
on those in his response. None of the points raised stopped
implementations from implementing the thing and none of his ideas got
into the API, as Anne published it. What a surprise.

http://www.mail-archive.com/[email protected]/msg00782.html

Looks like the last of that one and the reason given:-
| Parsing the selector string is will likely be by far the least
| expensive part of the match operation.
- Maciej totally missed the point.

If Selectors had used those ideas, it could have been designed in a way
that was (more) useful, but that is fiction.

NWMatcher tries to fill this gap, and it seems you agree that the idea
was a good one.
Given the problem you have presented to solve with NWMatcher, I can
solve that with the following code:-

- which, as you see, is exactly 0 lines.

The problem of examining a target node is usually trivial. A
well-structured semantic page should be easy to navigate using the DOM.
I've been using bubbling for years. Checking tagName, id, and className,
perhaps other properties is enough. A selectors API is not needed.

I wouldn't rewrite the matching algorithm each time I need a specific
node, and, as kangax already explained, sometimes we need to combine
positional checks (ancestor, descendant) with property checks (id,
class, tag).
Imposing arbitrary structural requirements about where an element must
exist is makes the code more fragile and more brittle. When IDs are used
in the HTML, someone editing the HTML can be expected not to change it
without first ascertaining that doing so will not create a bug. I
mention this because like xpath, a selectors API makes it easy to fall
into this trap of referencing fragments which are arbitrary and whose
existence is insignificant or coincidental. Sort of a "non sequitir"
code assertion where the code looks for something matching an arbitrary
fragment.

By doing the matching manually you will have the same problem: if the
structure or the names (id, class) change, you will have to review your
traversal and name checks and act accordingly. If you insert a comment
node in the wrong place you will run into the same problem too, since
next/previousSibling has changed.
What's worse is that the code required for checking selectors would be
checking the attribute values. Attributes are broken in IE. Instead of
trying to "make it work", "don't do that" and just use properties. This
means the hand-rolled Selectors API goes out the window. YAGNI anyway.

There is no reason to add the overhead of anything else to the page.
Less code to maintain means less potential for bugs and faster to
download. Less is more. The XP pertinent adage: "Do the simplest thing
that could possibly work, but not any simpler". We're not getting paid
by SLOC.

I see where you are coming from with the API. It would be convenient to
have that supported by native code, but that is not the case.

Nice to hear you deem "match()" adequate for a native browser
implementation; it seems Safari/Chrome will have that in future
releases.
Looking at the code, I think I see a problem in NW.Dom.getChildren:-

| getChildren =
|   function(element) {
|     // childNodes is slower to loop through because it contains
|     // text nodes
|     // empty text nodes could be removed at startup to compensate
|     // this a bit
|     return element[NATIVE_CHILDREN] || element.childNodes;
|   },

The children property is inconsistent across the most common browsers.
It is best left alone.

There are no problems there; the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.
For Firefox 3.1, there is no "children", so the childNodes property is
used. For childNodes, textNodes and comment nodes will be included and
returned. In Firefox 3.5, and versions of Safari, Opera, and probably
other browsers, "children" is only Elements. In IE, the "children"
property returns a list of "DHTML Objects" and this includes comment
nodes. So, your function returns different results in IE, Firefox <=
3.1, and {Firefox 3.5, Opera, and Safari}.

<IE Madness>
MSIE's "DHTML Objects" includes comment nodes, which MSDN calls "Comment
Elements". They have all the methods and properties of other elements,
such as "attachEvent", "style", "offsetTop", a read-only "innerHTML",
and ironically, "children".

Try the innerHTML property of a comment in IE:-
javascript: alert(document.createComment("foo").innerHTML = "test");

This shows an obvious LSP violation. The API design of shoehorning a
Comment interface as a subclass to the Element interface was a mistake
that has persisted into IE8.
</IE Madness>

Regardless of the getChildren method inconsistency, I don't need that.
AISB, YAGNI!

You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.

NWMatcher is a compliant CSS3 selector engine; much of the code in
there is to support the CSS3 and HTML5 specifications, including
attribute case-sensitivity awareness, document-ordered result sets and
nested negation pseudo-classes. The cruft there is due to the caching
system and the "select()" method, both of which I added to satisfy
current user requests. The core "match()" method is just 16 KB of
source code.

Thank you for partially reviewing my code; your comments were really
useful. I may try to remove the function decompilation part in the
future, if at all possible.

Diego Perini
 
Garrett Smith

Diego said:
Diego said:
Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S pisze:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery CSS-style selectors
and Prototype.js augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious :)
I don't think last two really matter. Execution of an entire fully CSS3
compliant selector engine (such as NWMatcher - the best one of them I've
ever seen) doesn't take more than a few milliseconds. It's 2ms in
FF3.0.10. Probably not more than 20-30ms in relatively slow IE6,7. Would
you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation:
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ: http://jibbering.com/faq/
Garret,
if I where to be sure nobody had overwritten native functions with
broken replacements I wouldn't had used function decompilation, I
would probably have replaced that hated line:
(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
typeof object[method] == 'function';
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
Modifying host objects is known to be problematic.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know what library will my choosen by my users nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.
You correctly use "should" but reality is what I work with. Will be
happy to include APE in my library chooser as soon as possible, still
I would have to support all the libraries with that shaming habit.
Sure when there are enough "perfect" libraries I could start to leave
out those that are not perfect.
APE is AFL, so would probably not licenseable for most projects. Having
effectively no user-base, I can remove things that should not be there.

Perhaps in the future, I'll go with a less restrictive mozilla-type
license.

Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least opera mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappucino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around such problematic
design of the modification of host objects. Wouldn't it make more sense
to just not use those?
The extra code is very small and worth it currently; it comes down I
only needed to know that get/hasAttribute where the native browser
implementations and not a third party attempt to fix them, no third
party extension of get/hasAttribute for IE that I know has the
equivalent capabilities of IE regarding XML namespaced attributes.
If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.

Correct, in that case it was Prototype messing with those native
functions.

Sounds like a great reason not to use Prototype.
NWMatcher try to support as much as possible the W3C specifications
(xml:lang attribute and similar) which are suported by the IE natives.

Why on earth do you need to match xml:lang? That would only be supported
by using "getAttribute('xml:lang')" where IE would not interpret a
namespace, but would instead use a nonstandard approach. The colon
character is not valid in attributes.
Form events do not bubble in IE and other browsers (submit/reset),
other events do not bubble consistently across browsers, NWEvents
turns this into a cross-browser technique that allows any event to
bubble (or be captured).


It is OK if you don't need it, assuming nobody else is needing it is a
big pretention.

Really? Please show me where NWMatcher is needed.

Wasn't there something I wrote about being able to solve your problem
with 0 lines of code?
NWMatcher tries to fill this gap and seems you agrre that the idea was
a good one.

Filling the gap is overkill and not a good idea.

The Selectors API had potential that the author did not realize. It was
probably based on the jQuery selectors, which used "$", probably
inspired by, and used to trump, Prototype.js (a pissing match).

Code based on the premise "but it really should work" is best answered
by "just don't do that".

But it should work's:
* document.getElementById in IE < 8
* document.getElementsByName in IE < 8
* "capturing" event phase in IE
* reading a cascaded style in the DOM (like IE's "currentStyle")

I am completely guilty of that last one. Unfortunately, I did not step
back and say "don't do that". I didn't recognize (or did not admit) the
absurdity of it until far too late.

None of those are necessary. The answers to those problems are, in
respective order:
* do not give an element an ID where there is another element that has
the same value for NAME.
* Ditto.
* YAGNI.
* YAGNI. Use getComputedStyle and provide an adapter for IE (see the
sketch below).
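
Such an adapter might look roughly like this (a sketch; property-name
differences such as "cssFloat" vs. "styleFloat" are glossed over):

function getStyle(element, property) {
  // standards path first; IE's currentStyle as the fallback
  if (document.defaultView && document.defaultView.getComputedStyle) {
    return document.defaultView.getComputedStyle(element, null)[property];
  }
  if (element.currentStyle) {
    return element.currentStyle[property];  // cascaded, not computed
  }
  return element.style[property];
}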


Still stands. 0 LOC to solve the problem you've presented.
I wouldn't rewrite the matching algorithm each time I need a specific
node, and as kangax already explained sometime we need to combine
positional (ancestor, descendant) and properties checking (id, class,
tag).

Where did I advocate "rewriting the matching algorithm"?

In most cases, tagName and className are enough. I've said that I don't
know how many times now. By having a few functions that are reliable,
the whole idea of trying to solve every possible "matching" context goes
out the window.

What kangax said, taken out of context, was followed by:-
| Event delegation is great, but then you might want some kind of
| `match` method to determine whether a target element matches selector.

The arguments I have posted follow from that. A match method would be
nice, but it is unnecessary. It is a lot less code, and a lot simpler, to
use:-

if (target.checked) {
  // code here.
}

- than what NWMatcher would do.
By doing the matching manually you will be in the same problem, if the
structure or the names (id, class) change you will have to review your
traversal and names checking and act accordingly. If you insert a
comment node in the wrong place you will fall in the same problem too
since next/previousSibling has changed.

I can use a previousSiblingElement.

<Element Traversal API Design>
The Traversal API decided to call these properties with "Element" as the
second part of the word, not the last. For example:
"previousElementSibling", not "previousSiblingElement". This seems like
a mistake.

We can also see a "childElementCount", but no property for
"childElements". Given that a common reason for having a
"childElementCount" would be to iterate over a "childElements"
collection, shouldn't there be one? I mentioned that on the list, and
John Resig made mention of it on his blog. That didn't stop Doug
Schepers from sticking to his API design, which is now a TR.

Only five properties and they're all screwed up.
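
In the meantime, a trivial helper does the job (a sketch):

function previousSiblingElement(node) {
  // walk back over text and comment nodes until an element is found
  do {
    node = node.previousSibling;
  } while (node && node.nodeType !== 1);
  return node;
}
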
Nice to hear you deem "match()" adequate for a browser native
implementation, it seems Safari/Chrome will be having that in future
releases.

A boolean "matches(selectorText)" method is a neat idea for native code.

If it is implemented widely enough, say, by 2011, then maybe by 2016 we
may be able to use that. A fallback for hand-rolled "matches" is
overkill. A better approach is to script semantic, structured, valid markup.
Looking at the code, I think I see a problem in NW.Dom.getChildren:-

| getChildren =
| function(element) {
| // childNodes is slower to loop through because it contains
| // text nodes
| // empty text nodes could be removed at startup to compensate
| // this a bit
| return element[NATIVE_CHILDREN] || element.childNodes;
| },

The children property is inconsistent across the most common browsers.
It is best left alone.
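
A filtered walk over childNodes sidesteps the inconsistency entirely
(a sketch, not NWMatcher code; getElementChildren is a made-up name):

function getElementChildren(parent) {
  var result = [];
  for (var node = parent.firstChild; node; node = node.nextSibling) {
    if (node.nodeType == 1) {
      // Keep only element nodes; skip text and comment nodes.
      result.push(node);
    }
  }
  return result;
}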

There are no problems there, the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.

That can be easily enough demonstrated.

There is no filtering shown in the code above. None. Where is the test
case of NWMatcher.DOM.getChildren?

[snip explanation of children/childNodes]
You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.

There are a couple of things wrong with that paragraph.

1) I am *not* wrong. The source code for NW.Dom.getChildren, as posted
above, taken from NWMatcher source on github[2], indicates otherwise.

2) Who mentioned gEBTN? The problem explained was with the "children"
property.

Test Results:-

Firefox 3.0.11:
kids.length = 3

Opera 9.64:
kids.length = 0

IE 7:
kids.length = 1

Test code:
<!doctype html>
<html>
<head>
<title></title>
<script type="text/javascript" src="../../jslib/nwmatcher.js"></script>
</head>
<body>
<p id="t">
<!-- test comment -->
</p>
<script type="text/javascript">
var t = document.getElementById("t");
var kids = NW.Dom.getChildren(t);
document.write("kids.length = " + kids.length);
</script>
</body>
</html>

AISB, NW.DOM.getChildren() will return inconsistent results. You (not I)
touched upon a similar problem in IE with gEBTN. That problem is not so
surprising when it is realized that comments are Elements in IE.
NWMatcher is a compliant CSS3 selector engine; much of the code in
there is to support the CSS3 and HTML5 specifications, including
attribute case-sensitivity awareness, document-ordered result sets and
nested negation pseudo-classes. The cruft there is due to the caching
system and the "select()" method, both of which I added to satisfy
current user requests. The core "match()" method is just 16 KB of
source code.

I don't buy one word of that.

* NWMatcher cannot be CSS3 selector compliant and work in IE <= 7
because attributes are broken in IE. Instead of trying to "make it
work", "don't do that" and just use properties.

* 16k is a lot, but NWMatcher is nearly 50k[2].

* What is the situation you're finding yourself in that you need
"negation pseudo classes"?

Chances are, someone has encountered that problem before and has figured
out a way that does not require matching "negation pseudo classes". It
may be that you have a unique problem. That warrants exposition.
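
For example, skipping SCRIPT, STYLE, OBJECT and APPLET elements needs
nothing more than a lookup table (a sketch, not library code):

var SKIP = { SCRIPT: 1, STYLE: 1, OBJECT: 1, APPLET: 1 };
var all = document.getElementsByTagName("*");
for (var i = 0; i < all.length; i++) {
  // The nodeType check guards against IE returning comment nodes
  // from getElementsByTagName("*").
  if (all[i].nodeType == 1 && !SKIP[all[i].nodeName]) {
    // Act on the element here.
  }
}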
Thank you for partially reviewing my code, your comments were really
useful; I may try to remove the function decompilation part in the
future if at all possible.

NWMatcher isn't something I would ever want or need and so I don't see
much reason to get into the details of code review. The problems with
IE's broken attributes make using attribute selectors a bad choice.
"Popular" libraries may blur the distinction between properties and
attributes but the problem exists nonetheless.
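
To illustrate the distinction with a generic example (the "optin" id
is made up):

var box = document.getElementById("optin");

// The property reflects the current, live state; use it.
if (box.checked) {
  // React to the current state here.
}

// The attribute reflects only the markup default, and IE < 8
// conflates it with the property anyway; avoid relying on it.
var wasCheckedByDefault = box.getAttribute("checked") != null;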

I see also:-
| // WebKit case sensitivity bug with className (when no DOCTYPE)

How about just don't do that?

Nothing can be expected of quirks mode. Ian Hickson trying to
standardize quirks mode does not make it any more reliable or advisable.

Validation is an important part of solving script-related problems,
which is why it is mentioned in: http://jibbering.com/faq/#postCode

Garrett
 
D

Diego Perini

Diego said:
Diego Perini wrote:
Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S writes:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery CSS-style selectors
and Prototype.js augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious :)
I don't think the last two really matter. Execution of an entire fully
CSS3 compliant selector engine (such as NWMatcher - the best one of
them I've ever seen) doesn't take more than a few milliseconds. It's
2ms in FF3.0.10. Probably not more than 20-30ms in relatively slow
IE6/7. Would you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation.
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ: http://jibbering.com/faq/
Garrett,
if I were sure nobody had overwritten native functions with broken
replacements I wouldn't have used function decompilation; I would
probably have replaced that hated line:
      (/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
      typeof object[method] == 'function';
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
Modifying host objects is known to be problematic.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know which library will be chosen by my users, nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken,
fix it. The code should not be broken in the first place, and certainly
not in the way that your workaround caters to.
You correctly use "should" but reality is what I work with. I will be
happy to include APE in my library chooser as soon as possible; still,
I would have to support all the libraries with that shameful habit.
Sure, when there are enough "perfect" libraries I could start to leave
out those that are not perfect.
APE is AFL, so it would probably not be licensable for most projects.
Having effectively no user base, I can remove things that should not be
there. Perhaps in the future, I'll go with a less restrictive
Mozilla-type license.
Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least Opera Mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappuccino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around such problematic
design of the modification of host objects. Wouldn't it make more sense
to just not use those?
The extra code is very small and worth it currently; it comes down to
this: I only needed to know that get/hasAttribute were the native
browser implementations and not a third-party attempt to fix them. No
third-party extension of get/hasAttribute for IE that I know of has the
equivalent capabilities of IE regarding XML namespaced attributes.
If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.
Correct, in that case it was Prototype messing with those native
functions.

Sounds like a great reason not to use Prototype.
NWMatcher tries to support as much as possible of the W3C
specifications (the xml:lang attribute and similar) which are supported
by the IE natives.

Why on earth do you need to match xml:lang? That would only be supported
by using "getAttribute('xml:lang')" where IE would not interpret a
namespace, but would instead use a nonstandard approach. The colon
character is not valid in attributes.

I don't have much use for it myself either, but that doesn't mean I
have to constrain my project to my own needs. I have bug-tracking tools
that I try to follow, and where possible and allowed by the specs I try
to have it considered. Is that a problem?

The "xml:lang" property is mentioned in all the CSS >= 2 specs I have
read and it hasn't changed in the CSS Selectors Level 3, read first
6.3.1 and then follow links to 6.6.3 of latest CSS Selector Level 3
draft March/2009:

http://www.w3.org/TR/css3-selectors/#attribute-selectors

Though you may still say that my implementation is failing in some
way. No problem with that; my objective is to learn and improve.

Really? Please show me where NWMatcher is needed.

I would like to do it OT if you have spare time and you really wish!

The code I am going to show already uses delegation with some tricky
and possibly highly populated "trees" in smaller overlay windows; the
task is to replace "manual" delegation spread throughout the code with
a centralized event manager that can handle delegation using CSS
selectors. This is already handled very well, since hundreds of events
are handled by a few elements that act as event dispatchers (the
bubbling).

Writing English at this "level" is a bit difficult for me but I am
willing to try hard.
Wasn't there something I wrote about being able to solve your problem
with 0 lines of code?

It is not clear what problem you solve with 0 lines of code! No lines
of code means no problem, so be real.

As an example I can write:

NW.Event.appendDelegate( "p > a", "click", handler_function );
// anchors that are direct children of a paragraph

and I add some needed action to all the links in the page with no need
to wait for an onload event, thus ensuring a cross-browser experience
and an early-activated interface. You may question the lengthy
namespace or names; yes, the code is free, just use yours...

Sure, you could write the above snippet of code shorter and better, but
then you write it once and can forget about it, reusing it as needed.

Now suppose those "links" were also changing, pulled in by some HTTP
request: the event will still be there, no need to redeclare anything.

Add this same technique to some more complex page with several FORMs
and validation as a requirement and you will probably see some benefit
in having all form events also bubble up the document (submit/reset/
focus/blur and more).
Filling the gap is overkill and not a good idea.

It was a very good idea for me and for others using it; obviously, if
you can teach me how to do that in 0 lines I am all ears.
The selectors API had potential that the author did not realize. It
was probably based on the jQuery selectors, which used "$", probably
inspired by Prototype.js and used to trump it (a pissing match).

Code based on the premise "but it really should work", is best answered
by "just don't do that".

It works; NWMatcher passes the few test suites available to me
(Prototype and jQuery). If you wish one added, please suggest it.

I have some tests of my own, obviously.
The "but it really should work" cases:
  * document.getElementById in IE < 8
  * document.getElementsByName in IE < 8
  * "capturing" event phase in IE
  * reading a cascaded style in the DOM (like IE's "currentStyle")

I am completely guilty of that last one. Unfortunately, I did not step
back and say "don't do that". I didn't recognize (or did not admit) the
absurdity of it until too long later.

None of those are necessary. The answers to those problems are, in
respective order:
* do not give an element an ID where there is another element that has
the same value for NAME.
* Ditto.
* YAGNI.
* YAGNI. Use computedStyle and provide an adapter for IE.

I know element IDs are unique. Thanks.

The IDs are in many cases generated by modules in the page; we have no
control over uniqueness between modules' output, but we may solve the
problem by giving different IFRAMEs to each module to ensure that.
Still stands. 0 LOC to solve the problem you've presented.

Well, to achieve that with CSS and HTML in a site one should be
prepared to produce and maintain hundreds or thousands of different
manually edited static pages; we are miles away from that approach.
Wasn't scripting introduced to solve that too? Or are there some
restrictions that impose the "just effects" web? Isn't jQuery or APE
then enough for that?

I mean there are other markets and other needs; is it so difficult to
realize that?
Where did I advocate "rewriting the matching algorithm?"

Well, "algorithm" was indeed too big a word; I meant the mix of
different conditional comparisons (id, tag, class, prev/next, etc.) you
would have to do each time the structure of your DOM changes (excluding
the 0 LOC trick).
In most cases, tagName and className are enough. I've said that I don't
know how many times now. By having a few functions that are reliable,
the whole idea of trying to solve every possible "matching" context goes
out the window.

I agree that in MOST cases, surely the majority, this code overhead is
not necessary. What about the rest of the cases?
What kangax said, taken out of context, was followed by:-
| Event delegation is great, but then you might want some kind of
| `match` method to determine whether a target element matches selector.

The arguments I have posted follow from that. A match method would be
nice, but is unnecessary. It is a lot less code and a lot simpler to use:-

if(target.checked) {
   // code here.

}

- than what NWMatcher would do.

NWMatcher will parse and compile the passed selector to a
corresponding resolver function in JavaScript. From what I understand
it's exactly what you do manually, as simple as that; the compiled
function is then invoked and saved for later use, and no successive
parsing is done for the same selector, to boost performance. I left in
a function to see the results of the process for demo purposes, in case
you are curious about the outcome: use NW.Dom.compile(selector) and
print the resulting string to the console.
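
For illustration only, and not the actual compiled output: a resolver
for a simple selector such as "p > a" boils down to a hand-written
boolean function of the element, roughly like:

function matchesParagraphChildAnchor(e) {
  // True for A elements that are direct children of a P element.
  return !!e && e.nodeType == 1 &&
         e.nodeName == "A" &&
         e.parentNode && e.parentNode.nodeName == "P";
}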
I can use a previousSiblingElement.

Not all browsers have that API extension; only the very latest
browsers have it.
<Element Traversal API Design>
The Traversal API decided to call these properties with "Element" as the
second part of the word, not the last. For example:
"previousElementSibling", not "previousSiblingElement". This seems like
a mistake.

We can also see a "childElementCount", but no property for
"childElements". Given that common reason for having a
"childElementCount" would be to iterate over a "childElement"
collection, shouldn't there be one? I mentioned that on the list and
John Resig made mention of that one on his blog. That didn't stop Doug
Schepers from sticking to his API design, which is now a TR.

Only five properties and they're all screwed up.
</Element Traversal API Design>

I have heard it said that when QSA first appeared in browsers more
than a year ago, you said yourself QSA was mistakenly designed. Until
this is fixed, things like NWMatcher will still be needed, and I am
speaking about the newest and most advanced implementors, WebKit/
Chrome; what about IE6/IE7? Maybe in a few years. NWMatcher will last
some more time for these reasons, be assured.
A boolean "matches(selectorText)" method is a neat idea for native code.

Well, I implemented that in JavaScript... why do you have such doubts
then?
If it is implemented widely enough, say, by 2011, then maybe by 2016 we
may be able to use that. A fallback for hand-rolled "matches" is
overkill. A better approach is to script semantic, structured, valid markup.

I really look forward to seeing that happen too. I am not in a hurry!

Technology can be improved by developers by making it easier and
simpler, not by teaching difficult actions or hard-to-remember
procedures.
Looking at the code, I think I see a problem in NW.Dom.getChildren:-
| getChildren =
|   function(element) {
|     // childNodes is slower to loop through because it contains
|     // text nodes
|     // empty text nodes could be removed at startup to compensate
|     // this a bit
|     return element[NATIVE_CHILDREN] || element.childNodes;
|   },
The children property is inconsistent across the most common browsers.
It is best left alone.
There are no problems there, the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.

That can be easily enough demonstrated.

There is no filtering shown in the code above. None. Where is the test
case of NWMatcher.DOM.getChildren?

This is the relevant code string that is wrapped in during the
function build and does what you are looking for:

  // fix for IE gEBTN('*') returning collection with comment nodes
  SKIP_COMMENTS = BUGGY_GEBTN ? 'if(e.nodeType!=1){continue;}' : '',


Nowhere did I say that the method serves the purpose you are trying to
give it. You just guessed it!

That may be partly my fault too, for having exposed it as a public
method. ;-)

I was talking about "match()" and "select()"; these are the only two
methods meant to be used. Sorry if it is unclear in the code/comments.
[snip explanation of children/childNodes]


You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.

There are a couple of things wrong with that paragraph.

1) I am *not* wrong. The source code for NW.Dom.getChildren, as posted
above, taken from NWMatcher source on github[2], indicates otherwise.

As I said, I am not trying to have a unified cross-browser
"getChildren"; it is a helper used by the compiled functions. I could
have completely avoided having that function be independent; it was
there to improve speed on IE by quickly discarding text nodes.
2) Who mentioned gEBTN? The problem explained was with the "children"
property.

Test Results:-

Firefox 3.0.11:
kids.length = 3

Opera 9.64:
kids.length = 0

IE 7:
kids.length = 1

Test code:
<!doctype html>
<html>
<head>
<title></title>
<script type="text/javascript" src="../../jslib/nwmatcher.js"></script>
</head>
<body>
<p id="t">
  <!-- test comment -->
</p>
<script type="text/javascript">
  var t = document.getElementById("t");
  var kids = NW.Dom.getChildren(t);
  document.write("kids.length = " + kids.length);
</script>
</body>
</html>

AISB, NW.DOM.getChildren() will return inconsistent results. You (not I)
touched upon a similar problem in IE with gEBTN. That problem is not so
surprising when it is realized that comments are Elements in IE.

No, let's see it this way: getChildren() is there to get the fastest
collection available. For the record, it didn't improve things so
incredibly, but there was a gain.
I don't buy one word of that.

I have never thought it was to be sold. Incentives come in various
forms!
* NWMatcher cannot be CSS3 selector compliant and work in IE <= 7
because attributes are broken in IE. Instead of trying to "make it
work", "don't do that" and just use properties.

Works for me and for the hundreds of tests it passes.

But I agree that the problem you talk about has been greatly
underestimated by several related working groups. Don't blame me or my
code for those errors.
* 16k is a lot, but NWMatcher is nearly 50k[2].

The "match()" method is the length I said, no more than 16kbytes
source code, the rest is for the "select()" method (I have no use for
the "select()" while everybody else uses only it) the caching code and
the type of checks that you said I should have leaved out.-
* What is the situation you're finding yourself in that you need
"negation pseudo classes"?

Mostly scraping external text content (hope the term is correct).
Also, in general, when the list of things to do is much bigger than the
list of things not to do (while attaching events).
Chances are, someone has encountered that problem before and has figured
out a way that does not require matching "negation pseudo classes".  It
may be that you have a unique problem. That warrants exposition.

Let's say I want all the elements but not SCRIPTs and/or STYLESHEETs
and/or OBJECTs/APPLETs... does that sound familiar and useful as a task?

However, it is also part of the CSS3 specification; they may be able to
give you other ideas about that in their docs.
NWMatcher isn't something I would ever want or need and so I don't see
much reason to get into the details of code review. The problems with
IE's broken attributes make using attribute selectors a bad choice.
"Popular" libraries may blur the distinction between properties and
attributes but the problem exists nonetheless.

Yeah, you repeated it a few times, you have no use for it... I see, I
will not blame you for that.

However, I get no errors nor warnings in the console using these
helpers, and the results are correct AFAIK.

This should already be a good enough reason to start using them and
try something new.
I see also:-
| // WebKit case sensitivity bug with className (when no DOCTYPE)

How about just don't do that?

I could; you haven't given me a reason not to do it, but I will
carefully ponder any related/motivated suggestion.
Nothing can be expected of quirks mode. Ian Hickson trying to
standardize quirks mode does not make it any more reliable or advisable.

It's a shame few follow; rules are easier to follow and don't require
big efforts, just open minds.
Validation is an important part of solving script-related problems,
which is why it is mentioned in: http://jibbering.com/faq/#postCode

Validation is a big goal for both NWEvents and NWMatcher. You should
try it.

Thank you for scrutinizing. I will have to address some of the concerns
you raised, like removing some public methods to avoid confusing devs.


Diego Perini
 
D

Diego Perini

[snip re-quoted thread]

I went back to check (some) of my statements above just to verify
sizes and functionality and be able to show them.

Online demo/test with the slimmed-down version of NWMatcher containing
just the needed "match()" method:

http://javascript.nwbox.com/cljs-071809/nwapi/nwapi_test.html

The demo should work in any desktop browser: type some text in the
input boxes or leave them empty to see the validation example, click on
the cells or the unordered list items, move the mouse around on the
elements and then look at the console (you can enable strict mode if
you like). Everything is passed through lint and shouldn't show any
errors.

You can download a complete archive of all the source used to build
this demo here (with minified and compressed examples):

http://javascript.nwbox.com/cljs-071809/nwapi-demo-cljs.tgz

Dropping support for the "bads" you pointed out completely avoids
using the hated isNative() method; I also completely removed the
getChildren() method that could have tricked users.

None of the features/capabilities and fixes are lost; only the
"select()" method and its dependencies were removed.

NWMatcher match() method = 16 KB source, minimized 8 KB, gzipped 3,456
bytes

NWApi = NWEvents + NWMatcher = 32 KB source, minimized 15 KB, gzipped
5,856 bytes

With the "match()" method is still possible to build a generic "select
()" method if needed at all (also slower).

Thank you for your suggestions; they were mostly appropriate, but I
will also have to keep the complete version until this is well
understood. People are still relying on "select()" (QSA or XPath) to
achieve this kind of functionality and implementors are still pondering
their ways around it. IE browsers are far away; I see WebKit/Chrome
developers as potential implementors for this.
 
