Diego said:
Diego Perini wrote:
Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S wrote:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery's CSS-style selectors
and Prototype.js's augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7; even
jQuery is faster.
Knowing how to use document.getElementById and event bubbling (also
called "delegation") would offer far superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant, or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches a selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
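The delegation pattern under discussion can be sketched without any selector engine at all. The following is a minimal, illustrative helper; the name `delegate` and its arguments are mine, not from any library, and the predicate argument plays the role of the `match` method mentioned above:

```javascript
// Minimal event-delegation helper (illustrative, not from any library).
// One listener on a root element dispatches to a handler when
// event.target, or one of its ancestors below the root, satisfies the
// `matches` predicate; the predicate stands in for a `match` method.
function delegate(root, matches, handler) {
  return function (event) {
    var node = event.target;
    // Walk up from the target toward the root, testing each node.
    while (node && node !== root) {
      if (matches(node)) {
        handler.call(node, event);
        return;
      }
      node = node.parentNode;
    }
  };
}
```

In a browser this might be wired up as `document.addEventListener("click", delegate(document, isLink, onLinkClick), false)`, where `isLink` could be as simple as `function (el) { return el.tagName === "A"; }`; that simple predicate is exactly the "manual testing" being weighed against a selector-based `match`.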
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious
I don't think the last two really matter. Execution of an entire fully CSS3
compliant selector engine (such as NWMatcher, the best one of them I've
ever seen) doesn't take more than a few milliseconds. It's 2ms in
FF3.0.10, and probably not more than 20-30ms in the relatively slow IE6/7.
Would you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation.
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ:
http://jibbering.com/faq/
Garrett,
if I were sure nobody had overwritten native functions with
broken replacements, I wouldn't have used function decompilation. I
would probably have replaced that hated line:
(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
typeof object[method] == 'function';
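For comparison, the two checks can be put side by side as runnable functions. The function names are mine; the regular expression is the one quoted above, and note that it relies on implementation-dependent `Function.prototype.toString` output, which is the very objection raised in this thread:

```javascript
// The decompilation-based check (as quoted above): coerces
// object[method] to a string via Function.prototype.toString and
// looks for the "[native code]" marker most engines emit for
// built-ins. Implementation-dependent, which is the point of debate.
function isNativeMethod(object, method) {
  return (/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/)
    .test(object[method]);
}

// The simpler check preferred in a clean environment: it only tells
// you a function exists, not whether it is the native one.
function isFunction(object, method) {
  return typeof object[method] == 'function';
}
```

The trade-off is exactly as stated: `isFunction` cannot distinguish a native `getAttribute` from a third-party replacement defined on the element, while `isNativeMethod` can, at the cost of depending on unspecified `toString` behavior.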
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
Modifying host objects is known to be problematic.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know which library will be chosen by my users, nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.
You correctly use "should", but reality is what I work with. I will be
happy to include APE in my library chooser as soon as possible; still,
I would have to support all the libraries with that shameful habit.
Sure, when there are enough "perfect" libraries I could start to leave
out those that are not perfect.
APE is AFL, so it would probably not be licensable for most projects. Having
effectively no user base, I can remove things that should not be there.
Perhaps in the future, I'll go with a less restrictive Mozilla-type
license.
Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least Opera Mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappucino, and Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around such problematic
design, i.e. the modification of host objects. Wouldn't it make more sense
to just not use those?
The extra code is very small and currently worth it; it comes down to this: I
only needed to know that get/hasAttribute were the native browser
implementations and not a third party attempt to fix them. No third
party extension of get/hasAttribute for IE that I know of has the
equivalent capabilities of IE regarding XML namespaced attributes.
If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.
Correct, in that case it was Prototype messing with those native
functions.
Sounds like a great reason not to use Prototype.
NWMatcher tries to support, as much as possible, the W3C specifications
(the xml:lang attribute and similar) which are supported by the IE natives.
Why on earth do you need to match xml:lang? That would only be supported
by using "getAttribute('xml:lang')" where IE would not interpret a
namespace, but would instead use a nonstandard approach. The colon
character is not valid in attributes.
I don't have much use for it myself either, but that doesn't mean I have to
constrain my project to my own needs. I have bug tracking tools that I
try to follow, and where possible and covered by specs I try to have it
considered. Is that a problem?
The "xml:lang" property is mentioned in all the CSS >= 2 specs I have
read, and it hasn't changed in CSS Selectors Level 3; read first
6.3.1 and then follow the links to 6.6.3 of the latest CSS Selectors Level 3
draft of March 2009:
http://www.w3.org/TR/css3-selectors/#attribute-selectors
Though you may still say that my implementation is failing in some
way. No problem with that; my objective is to learn and improve.
Really? Please show me where NWMatcher is needed.
I would like to do it OT if you have spare time and you really wish!
The code I am going to show already uses delegation with some tricky
and possibly highly populated "trees" in smaller overlay windows; the
task is to replace "manual" delegation spread throughout the code with
a centralized event manager that can handle delegation using CSS
selectors. This is already handled very well, since hundreds of events
are handled by a few elements that act as event dispatchers (the
bubbling).
Writing English at this "level" is a bit difficult for me, but I am
willing to try hard.
Wasn't there something I wrote about being able to solve your problem
with 0 lines of code?
It is not clear what problem you solve with 0 lines of code! No lines
of code means no problem, so be real.
As an example I can write:
NW.Event.appendDelegate( "p > a", "click", handler_function ); //
links that are direct children of a paragraph
and I add some needed action to all the links in the page with no need
to wait for an onload event, thus ensuring a cross-browser experience
and an early activated interface. You may question the lengthy
namespace or names; yes, the code is free, just use yours...
Sure, you can write the above snippet of code shorter and better, but
then you write it once and forget about reusing it.
Now suppose those "links" were also changing, pulled in by some HTTP
request; the event will still be there, no need to redeclare anything.
Add this same technique to some more complex page with several FORMs
and validation as a requirement, and you will probably see some benefit
in having all form events also bubble up the document (submit/reset/
focus/blur and more).
Filling the gap is overkill and not a good idea.
It was a very good idea for me and for others using it; obviously, if you
can teach me how to do that in 0 lines, I am all ears.
The selectors API had potential that the author did not realize. It was
probably based on the jQuery selectors, which used "$", probably
inspired by, and used to trump, Prototype.js (pissing match).
Code based on the premise "but it really should work", is best answered
by "just don't do that".
It works. NWMatcher passes the only few test suites available to me
(Prototype's and jQuery's); if you wish one added, please suggest it.
I have some tests of my own, obviously.
The "but it should work"s:
* document.getElementById in IE < 8
* document.getElementsByName in IE < 8
* "capturing" event phase in IE
* reading a cascaded style in the DOM (like IE's "currentStyle")
I am completely guilty of that last one. Unfortunately, I did not step
back and say "don't do that". I didn't recognize (or did not admit) the
absurdity of it until too long later.
None of those are necessary. The answers to those problems are, in
respective order:
* do not give an element an ID where there is another element that has
the same value for NAME.
* Ditto.
* YAGNI.
* YAGNI. Use computedStyle and provide an adapter for IE.
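The adapter approach suggested in that last bullet might look something like the sketch below. The name `getCascadedStyle` and the injected `view` parameter are illustrative, not from APE or NWMatcher; note also that IE's `currentStyle` returns the cascaded value, not the computed value, so the two branches are not strictly equivalent (one may yield "50%" where the other yields a pixel length):

```javascript
// Illustrative computed-style adapter: use the standard
// getComputedStyle where the view (window/document.defaultView)
// provides it, otherwise fall back to IE's proprietary currentStyle.
// The view is passed in so the helper stays testable outside a browser.
function getCascadedStyle(element, property, view) {
  if (view && typeof view.getComputedStyle == 'function') {
    // Standards path: computed value.
    return view.getComputedStyle(element, null)[property];
  }
  if (element.currentStyle) {
    // IE < 9 path: cascaded value, which may differ in units.
    return element.currentStyle[property];
  }
  return null;
}
```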
I know element IDs are unique. Thanks.
The IDs are in many cases generated by modules in the page; we have no
control over uniqueness between modules' output, but we may solve the
problem by giving different IFRAMEs to each module to ensure that.
Still stands. 0 LOC to solve the problem you've presented.
Well, to achieve that with CSS and HTML in a site, one should be
prepared to produce and maintain hundreds or thousands of different
manually edited static pages; we are miles away from that approach.
Wasn't scripting introduced to solve that too? Or are there some
restrictions that impose the "just effects" web? Isn't jQuery or
APE enough for that, then?
I mean, there are other markets and other needs; is it so difficult to
realize that?
Where did I advocate "rewriting the matching algorithm?"
Well, "algorithm" was indeed too big a word. I meant the mix of different
conditional comparisons (id, tag, class, prev/next, etc.) you would have
to do each time the structure of your DOM changes (excluding the 0 LOC
trick).
In most cases, tagName and className are enough. I've said that I don't
know how many times now. By having a few functions that are reliable,
the whole idea of trying to solve every possible "matching" context goes
out the window.
I agree that in MOST cases, surely the majority, this code overhead is
not necessary. What about the other part of the "MOST" cases?
What kangax said, taken out of context, was followed by:-
| Event delegation is great, but then you might want some kind of
| `match` method to determine whether a target element matches selector.
The arguments I have posted follow from that. A match method would be
nice, but is unnecessary. It is a lot less code and a lot simpler to use:-
if(target.checked) {
  // code here.
}
- than what NWMatcher would do.
NWMatcher will parse and compile the passed selector into a
corresponding resolver function in javascript. From what I understand,
it's exactly what you do manually, as simple as that; the compiled
function is then invoked and saved for later use, and no successive
parsing is done for the same selector, to boost performance. I left
in a function to see the results of the process for demo purposes, in
case you are curious about the outcome: use NW.Dom.compile(selector)
and print the resulting string to the console.
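As a toy illustration of the compile-and-cache idea described above (this is not NWMatcher's actual code; it handles only a `tag` or `tag.class` subset, but shows the same shape: selector string in, generated matcher function out, cached for reuse):

```javascript
// Toy selector compiler: turns "tag" or "tag.class" into a generated
// function of one element, built once via the Function constructor and
// cached so repeated calls with the same selector skip re-parsing.
var cache = {};

function compileSelector(selector) {
  if (cache[selector]) return cache[selector];
  var parts = selector.split('.'),
      tag = parts[0].toUpperCase(),
      cls = parts[1],
      source = 'return e.nodeName=="' + tag + '"';
  if (cls) {
    // Whitespace-padded indexOf test for a single class token.
    source += '&&(" "+e.className+" ").indexOf(" ' + cls + ' ")>-1';
  }
  return (cache[selector] = new Function('e', source));
}
```

For example, `compileSelector('p.intro')` builds a function whose body is `return e.nodeName=="P"&&(" "+e.className+" ").indexOf(" intro ")>-1`, which is essentially the hand-written tagName/className check, just generated instead of typed.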
I can use a previousSiblingElement.
Not all browsers have that API extension; only the very latest
browsers have it.
<Element Traversal API Design>
The Traversal API decided to call these properties with "Element" as the
second part of the word, not the last. For example:
"previousElementSibling", not "previousSiblingElement". This seems like
a mistake.
We can also see a "childElementCount", but no property for
"childElements". Given that common reason for having a
"childElementCount" would be to iterate over a "childElement"
collection, shouldn't there be one? I mentioned that on the list and
John Resig made mention of that one on his blog. That didn't stop Doug
Schepers from sticking to his API design, which is now a TR.
Only five properties and they're all screwed up.
</Element Traversal API Design>
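Where the Element Traversal property is missing, an equivalent can be had by walking the `previousSibling` chain and skipping non-element nodes. A minimal sketch, using the spec's word order for the name (which the paragraph above argues is a mistake):

```javascript
// Fallback for browsers without the Element Traversal API: walk
// previousSibling until an element node (nodeType 1) is found, or
// return null when the chain is exhausted.
function previousElementSibling(node) {
  var sibling = node.previousSibling;
  while (sibling && sibling.nodeType !== 1) {
    sibling = sibling.previousSibling;
  }
  return sibling || null;
}
```

The same walk over `nextSibling`/`firstChild` yields the other missing properties, including the "childElements" collection the API itself never defined.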
I have heard it said that when QSA first appeared in browsers more
than a year ago, you said yourself QSA was mistakenly designed. Until
this is fixed, things like NWMatcher will still be needed, and I am
speaking about the newest and most advanced implementors, WebKit/
Chrome; what about IE6/IE7? Maybe in a few years. NWMatcher will last
some more time for these reasons, be assured.
A boolean "matches(selectorText)" method is a neat idea for native code.
Well, I implemented that in javascript... why do you have such doubts,
then?
If it is implemented widely enough, say, by 2011, then maybe by 2016 we
may be able to use that. A fallback for hand-rolled "matches" is
overkill. A better approach is to script semantic, structured, valid markup.
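For reference, the feature-detect-plus-fallback shape being debated here looks roughly like the sketch below. This is an illustration, not anyone's library code: the vendor-prefixed method names existed in browsers of this era, and the `fallback` parameter (a hand-rolled matcher such as NWMatcher's `match`) is the part Garrett calls overkill:

```javascript
// Prefer whatever native "matches" implementation the element exposes
// (standard or vendor-prefixed), and only defer to a caller-supplied
// hand-rolled matcher when no native method exists.
function matchesSelector(element, selector, fallback) {
  var nativeMatch = element.matches ||
                    element.webkitMatchesSelector ||
                    element.mozMatchesSelector ||
                    element.msMatchesSelector;
  if (nativeMatch) {
    return nativeMatch.call(element, selector);
  }
  return fallback ? fallback(element, selector) : false;
}
```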
I really look forward to seeing that happen too. I am not in a hurry!
Technology can be improved by developers by making it easier and
simpler, not by teaching difficult actions or hard-to-remember
procedures.
Looking at the code, I think I see a problem in NW.Dom.getChildren:-
| getChildren =
| function(element) {
| // childNodes is slower to loop through because it contains
| // text nodes
| // empty text nodes could be removed at startup to compensate
| // this a bit
| return element[NATIVE_CHILDREN] || element.childNodes;
| },
The children property is inconsistent across the most common browsers.
It is best left alone.
There are no problems there, the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.
That can be easily enough demonstrated.
There is no filtering shown in the code above. None. Where is the test
case of NWMatcher.DOM.getChildren?
This is the relevant code string that is wrapped around during the
function build and that does what you are looking for:
// fix for IE gEBTN('*') returning collection with comment nodes
SKIP_COMMENTS = BUGGY_GEBTN ? 'if(e.nodeType!=1){continue;}' : '',
Nowhere did I say that the method serves the purpose you are
trying to give it. You just guessed it!
That may be partly my fault too, for having exposed it as a public
method. ;-)
I was talking about "match()" and "select()"; these are the only two
methods meant to be used. Sorry if that is unclear in the code/comments.
[snip explanation of children/childNodes]
You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.
There are a couple of things wrong with that paragraph.
1) I am *not* wrong. The source code for NW.Dom.getChildren, as posted
above, taken from NWMatcher source on github[2], indicates otherwise.
As I said, I am not trying to have a unified cross-browser
"getChildren"; it is a helper used by the compiled functions. I could
have completely avoided exposing that function independently; it was
there to improve speed on IE by quickly discarding text nodes.
2) Who mentioned gEBTN? The problem explained was with the "children"
property.
Test Results:-
Firefox 3.0.11:
kids.length = 3
Opera 9.64:
kids.length = 0
IE 7:
kids.length = 1
Test code:
<!doctype html>
<html>
<head>
<title></title>
<script type="text/javascript" src="../../jslib/nwmatcher.js"></script>
</head>
<body>
<p id="t">
<!-- test comment -->
</p>
<script type="text/javascript">
var t = document.getElementById("t");
var kids = NW.Dom.getChildren(t);
document.write("kids.length = " + kids.length);
</script>
</body>
</html>
AISB, NW.DOM.getChildren() will return inconsistent results. You (not I)
touched upon a similar problem in IE with gEBTN. That problem is not so
surprising when it is realized that comments are Elements in IE.
No, let's see it this way: getChildren() is there to get the fastest
collection available. It didn't improve things so incredibly, for the
record, but there was a gain.
I don't buy one word of that.
I have never thought it was to be sold. Incentives come in various
forms!
* NWMatcher cannot be CSS3 selector compliant and work in IE <= 7
because attributes are broken in IE. Instead of trying to "make it
work", "don't do that" and just use properties.
Works for me and for the hundreds of tests it passes.
But I agree that the problem you talk about has been greatly
underestimated by several related working groups. Don't blame me or my
code for those errors.
* 16k is a lot, but NWMatcher is nearly 50k[2].
The "match()" method is the length I said, no more than 16 kbytes of
source code; the rest is for the "select()" method (I have no use for
"select()", while everybody else uses only it), the caching code, and
the type of checks that you said I should have left out.
* What is the situation you're finding yourself in that you need
"negation pseudo classes"?
Scraping external text content, mostly (hope the term is correct). Also,
in general, when the list of things to do is much bigger than the list
of things not to do (while attaching events).
Chances are, someone has encountered that problem before and has figured
out a way that does not require matching "negation pseudo classes". It
may be that you have a unique problem. That warrants exposition.
Let's say I want all the elements but not SCRIPTs and/or STYLESHEETs
and/or OBJECTs/APPLETs... sounds familiar and useful as a task?
However, it is also in the CSS3 specification; they may be able to give
you other ideas about that in their docs.
NWMatcher isn't something I would ever want or need and so I don't see
much reason to get into the details of code review. The problems with
IE's broken attributes make using attribute selectors a bad choice.
"Popular" libraries may blur the distinction between properties and
attributes but the problem exists nonetheless.
Yeah, you repeated it a few times: you have no use for it... I see, and
I will not blame you for that.
However, I have no errors nor warnings in the console using these
helpers, and the results are correct AFAIK.
That should already be a good enough reason to start using them and
try something new.
I see also:-
| // WebKit case sensitivity bug with className (when no DOCTYPE)
How about just don't do that?
I could; you haven't given me a reason not to do it, but I will
carefully ponder any related/motivated suggestion.
Nothing can be expected of quirks mode. Ian Hickson trying to
standardize quirks mode does not make it any more reliable or advisable.
It's a shame few follow; rules are easier to follow and don't
require big effort, just open minds.
Validation is an important part of solving script-related problems,
which is why it is mentioned in:
http://jibbering.com/faq/#postCode
Validation is a big target for both NWEvents and NWMatcher. You should
try it.
Thank you for scrutinizing. I have to address some of the concerns you
raised, like removing some public methods to avoid confusing devs.
Diego Perini