Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S wrote:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery's CSS-style selectors
and Prototype.js's augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById and event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should download faster, be interpreted faster, and have a smaller
memory footprint.
That much is... obvious
I don't think the last two really matter. Execution of an entire fully
CSS3-compliant selector engine (such as NWMatcher - the best one of
them I've ever seen) doesn't take more than a few milliseconds. It's
2ms in FF3.0.10, and probably not more than 20-30ms in the relatively
slow IE6/7. Would you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation.
http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ:
http://jibbering.com/faq/
Garrett,
if I were sure nobody had overwritten native functions with broken
replacements, I wouldn't have used function decompilation; I would
probably have replaced that hated line:
(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
typeof object[method] == 'function';
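A fuller form of that typeof test, as often sketched on this group,
would also have to allow for host methods that do not report
"function"; the extra branches here are assumptions about older IE:

function isHostMethod(object, method) {
  var t = typeof object[method];
  return t == 'function' ||
         (t == 'object' && !!object[method]) || // host methods in older IE
         t == 'unknown';                        // ActiveX methods in IE
}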
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
Modifying host objects is known to be problematic.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know which library will be chosen by my users, nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.
You correctly use "should", but reality is what I work with. I will be
happy to include APE in my library chooser as soon as possible; still,
I would have to support all the libraries with that shameful habit.
Sure, when there are enough "perfect" libraries I could start to leave
out those that are not.
APE is under the AFL, so it would probably not be licensable for most
projects. Having effectively no user base, I can remove things that
should not be there. Perhaps in the future, I'll go with a less
restrictive Mozilla-type license.
Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least Opera Mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappuccino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created to work around the problematic design
of those host object modifications. Wouldn't it make more sense to just
not use those?
The extra code is very small and worth it currently; it comes down to
this: I only needed to know that get/hasAttribute were the native
browser implementations and not a third-party attempt to fix them. No
third-party extension of get/hasAttribute for IE that I know of has the
equivalent capabilities of IE regarding XML namespaced attributes.
If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.
Correct, in that case it was Prototype messing with those native
functions.
Sounds like a great reason not to use Prototype.
XML Namespaced attributes should be avoided. Why would you want to use that?
NWMatcher tries to support as much as possible of the W3C
specifications (the xml:lang attribute and similar), which are
supported by the IE natives.
Why on earth do you need to match xml:lang? That would only be supported
by using "getAttribute('xml:lang')" where IE would not interpret a
namespace, but would instead use a nonstandard approach. The colon
character is not valid in attributes.
I don't have much use for it myself either, but that doesn't mean I
have to constrain my project to my own needs. I have bug tracking tools
that I try to follow, and where possible and allowed by the specs I try
to have such requests considered. Is that a problem?
The "xml:lang" property is mentioned in all the CSS >= 2 specs I have
read and it hasn't changed in the CSS Selectors Level 3, read first
6.3.1 and then follow links to 6.6.3 of latest CSS Selector Level 3
draft March/2009:
http://www.w3.org/TR/css3-selectors/#attribute-selectors
Though you may still say that my implementation is failing in some
way. No problem with that; my objective is to learn and improve.
Not modifying host objects would be an improvement.
| d.isCaching
Mutation events. I remember in Gecko, DOMAttrModified would fire when a
textarea's |value| property was changed and that bug was justified on a
bugzilla ticket. I'm also concerned with the reliability of
"DOMNodeRemoved" when setting innerHTML or textContent.
I would like to do it OT if you have spare time and you really wish!
It is not "off topic". Quite the contrary: It is the essence of not
having a problem that draws the very point I am trying to make: YAGNI
[YAGNI].
The code I am going to show already uses delegation with some tricky
and possibly highly populated "trees" in smaller overlay windows; the
task is to replace "manual" delegation spread throughout the code with
a centralized event manager that can handle delegation using CSS
selectors. This is already handled very well, since hundreds of events
are handled by a few elements that act as event dispatchers (the
bubbling).
Link?
Writing English at this "level" is a bit difficult for me but I am
willing to try hard.
Yes, please do try and I will do my best to understand and where I do
not understand, I will try and make that clear.
It is not clear what problem you solve with 0 lines of code! No lines
of code means no problem, so be real.
Yes, that was my point; no problem to solve.
As an example I can write:
NW.Event.appendDelegate("p > a", "click", handler_function); // only first link in each paragraph
That constrains the position of the "activating" link in the paragraph.
That, as is, is solvable by checking tagName, parentNode, and
parentNode.getElementsByTagName("a")[0] === target. However, that
strategy in and of itself is arbitrary and fragile. If the markup were to have a
case for including a non-activating link as the first link in a <p>, the
script would fail. It is rigid for the same reason. You can't change the
order. I've been advocating this whole time that it is best to use
semantic markup. The |class| attribute could be used here.
addCallback(baseNode, handlePanelActuatorClick);
- and then in "handlePanelActuatorClick", do the checking to see if the
target has class "panelActuator".
function checkLinks(ev) {
  var target = getTarget(ev);
  var isPanelActuator = hasClass(target, "panelActuator");
  if (isPanelActuator) {
    alert("winner");
  }
}
The side effects to that strategy are:
* callback does the checking.
- reduces function calls
- debugging is straightforward
- does not require an extra call to a Selector lookup function
* NWMatcher is not required
- less code overall
- no extra non-standard API to learn
* Encourages the authoring of semantic HTML
- makes automation testing easier (for the same reasons it makes
assertions in the callback easier).
- behavior is deliberately and consistently applied (irrelevant
structural changes won't cause problems)
Drawbacks:
* requires user-defined hasClass(el, klass) and getTarget(ev) methods.
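Hedged sketches of those two user-defined helpers (assuming single,
space-separated class names are enough):

function getTarget(ev) {
  ev = ev || window.event;
  return ev.target || ev.srcElement; // W3C vs IE event models
}

function hasClass(el, klass) {
  // whole-word match inside the space-separated class attribute
  return (' ' + el.className + ' ').indexOf(' ' + klass + ' ') > -1;
}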
and I add some needed action to all the links in the page with no need
to wait for an onload event, thus ensuring a cross-browser experience
and an early-activated interface. You may question the lengthy
namespace or names; yes, the code is free, just use yours...
Can I not question the need for the code itself?
Sure, you can write the above snippet of code shorter and better, but
then you write it once and forget about reusing it.
I would not reuse the implementation. I /would/ reuse the
getNextSiblingElement or hasClass methods. I'd organize those methods
where they tend to get reused together, or where those methods have
common functionality, such as some shared hidden (scope) variables.
Checking an element's "checked" or "tagName" property is trivial. It
would be pointless to create an abstraction for that.
OTOH, reading an element's checked *attribute* seems pointless. Why
would a program care? A CSS3-compliant selector API would be required
to have that feature, but is it needed by a program?
Now suppose those "links" were also changing, pulled in by some HTTP
request; the event handler will still be there, no need to redeclare
anything.
Yet another benefit to using bubbling. However, NWEvents is not needed
for that. Bubbling comes for free.
One-up to that is to reuse and cache an object decorator/wrapper that
was created lazily, on a bubbled event. That is possible when the
decorating object does not hold a reference to the element, but to an
ID (string).
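One reading of that idea, as a sketch; the cache and accessor names
are hypothetical:

var wrapperCache = {};

function getWrapper(el) {
  var id = el.id; // assumes the element carries a unique ID
  if (!wrapperCache[id]) {
    wrapperCache[id] = {
      id: id,
      // no element reference is kept; re-resolve lazily by ID
      getElement: function() {
        return document.getElementById(this.id);
      }
    };
  }
  return wrapperCache[id];
}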
Add this same technique to some more complex page with several FORMs
and validation as a requirement and you will probably see some benefit
in having all form events also bubble up the document (submit/reset/
focus/blur and more).
Submit does not bubble in IE, though. You could try and capture bubbled
"Enter" key and (submit) button clicks, but that also requires more
aggressive interrogation of the target (to make sure it is not readonly,
not disabled, etc).
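A hedged sketch of that fallback (not taken from NWEvents; it reuses
the getTarget helper sketched earlier, and the type checks are
assumptions):

function onDocumentClick(ev) {
  var target = getTarget(ev);
  if (target.type === 'submit' && !target.disabled && target.form) {
    // treat as an imminent submit of target.form
  }
}

function onDocumentKeypress(ev) {
  var target = getTarget(ev),
      key = ev.keyCode || ev.which;
  // Enter in an enabled, writable text control usually submits its form
  if (key === 13 && target.form && target.type === 'text' &&
      !target.disabled && !target.readOnly) {
    // treat as an imminent submit of target.form
  }
}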
The "but it should work" situations usually make me think about trying a
different approach.
I don't care about reset events, but I'm curious about handling the
bubbled submit. Would you like to post up some code, or a link to the
relevant place? Maybe another thread for that would be better, so that
discussion stays focused on that.
[Selectors API design discussion]
It was a very good idea for me and for others using it; obviously, if
you can teach me how to do that in 0 lines, I am all ears.
A solution to a problem cannot be critiqued if there is no problem
provided.
Code that does not meet the requirements fails on the first criterion.
So, if an assessment is to be made of NWMatcher, doesn't it sound right
and proper to show NWMatcher being used to solve a problem? Given a
problem P, in context, a comparison of P solved with NWMatcher vs P
solved with something else.
It looks like NWMatcher is adapted for jQuery and Prototype, right? If
so, I wonder what those users' good idea(s) were.
It works; NWMatcher passes the few test suites available to me
(Prototype and jQuery). If you wish one added, please suggest it.
I have some tests of my own, obviously.
Depending on the browser, a checkbox' "checked" attribute may be a
boolean, null, or a string value.
A CSS3 compliant selector API would have to take all that into account.
I can't see a good reason for wanting to read the checked *attribute*.
Why would a script care about that?
The checked *property* is what should be of concern. A textarea's
"value" property, or another property, such as IE's "unselectable",
might be things a program would be concerned about. How would you match
those using CSS3 selectors?
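To make the property/attribute distinction concrete, a tiny sketch
(the checkbox is hypothetical):

var box = document.getElementById('subscribe'); // hypothetical checkbox
var state = box.checked;                  // property: the live boolean state
var markup = box.getAttribute('checked'); // attribute: boolean, null, or a
                                          // string, depending on the browser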
I know element IDs are unique. Thanks.
That wasn't my point. document.getElementById and getElementsByName are
broken in IE[1]. The reason I mentioned that is that it is a similar
type of thinking: "X should work", followed by trying to make it work.
A workaround for the IE bug is to replace document.getElementById with a
hand-rolled version. The hand rolled version checks to see if the
non-null element has an "id" property that is the same value. The
workaround is avoidable by not giving one element an ID that matches a
NAME of another (and expecting that nobody else will do that).
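A sketch of that hand-rolled check, under the stated assumption that a
same-value "id" property proves a real match:

function getElementWithId(id) {
  var el = document.getElementById(id);
  // guard against IE returning an element whose NAME matches the id
  return el && el.id === id ? el : null;
}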
I was just trying to illustrate a point of not trying to patch all
browser bugs. There are way too many of them and they can often be
avoided by just being aware of the problem and not triggering it.
The IDs are in many cases generated by modules in the page; we have no
control over uniqueness between modules' output, but we may solve the
problem by giving a different IFRAME to each module to ensure that.
I don't know what you are referring to. It sounds like you are
describing a mashup.
Well, to achieve that with CSS and HTML on a site, one should be
prepared to produce and maintain hundreds or thousands of different
manually edited static pages; we are miles away from that approach.
Wasn't scripting introduced to solve that too? Or are there some
restrictions that impose the "just effects" web? Isn't jQuery or APE
then enough for that?
Brendan Eich would be able to provide a better answer on why scripting
was introduced. I'm not even sure I know the correct answer. I only know
what is going on for the past 10 years.
I proposed a History page for ecmascript.org, sent as email to one of
the es-discuss maintainers. I'm not expecting a 911 response, but it
would be nice to see such page.
I mean there are other markets and other needs; is it so difficult to
realize that?
That sounds like something Martin Fowler calls "speculative generality".
Well, "algorithm" was indeed too big a word; I meant the mix of
different conditional comparisons (id, tag, class, prev/next etc.) you
would have to redo each time the structure of your DOM changes
(excluding the 0 LOC trick).
There are cases where order matters and cases where an element's
position in the source order is arbitrary, and can often be enforced by
using valid HTML (only <li> inside a list, for example). The markup can
give big hints at what is arbitrary and what is not. The author is
responsible for that. The class or ID usually is not arbitrary.
What I would do is write semantic markup, make it as simple and logical
and obvious as possible, and then code for that. Changes to things that
are arbitrary won't affect the script.
I agree that in MOST cases, surely the majority, this code overhead is
not necessary. What about the other part of the "MOST" cases?
NWMatcher will parse and compile the passed selector to a
corresponding resolver function in javascript. From what I understand,
it's exactly what you do manually, as simple as that; the compiled
function is then invoked and saved for later use, and no successive
parsing is done for the same selector, to boost performance. I left in
a function to see the results of the process, for demo purposes, in
case you are curious about the outcome: use NW.Dom.compile(selector)
and print the resulting string to the console.
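So, going by that description, something like this should print the
compiled resolver source (assuming a console, e.g. Firebug, is
available):

console.log(NW.Dom.compile('p > a'));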
Selector API is overkill. The only time I can see needing that is for a
StyleSheet-related application. I made a styleSheet editor about four
years ago and used a Selectors API to match all the nodes in the
document based on selector text found in the styleSheet.
Not all browsers have that API extension, only the very latest
browsers have that.
Nobody has previousSiblingElement; that is a user defined function (I
miscommunicated that). "previousElementSibling" is Doug Schepers' choice
of name for the property (as I previously stated below).
I have heard it said that when QSA first appeared in browsers more
than a year ago, you said yourself that QSA was mistakenly designed.
Until this is fixed, things like NWMatcher will still be needed, and I
am speaking about the newest and most advanced implementors,
WebKit/Chrome; what about IE6/IE7? Maybe in a few years. NWMatcher will
last some more time for these reasons, be assured.
That sounds like something I might say, though I don't recall
specifically.
Well, I implemented that in javascript... why do you have such doubt
then?
I really look forward to seeing that happen too. I am not in a hurry!
Technology can be improved by developers by making it easier and
simpler, not by teaching difficult actions or hard-to-remember
procedures.
Can you explain a little more? What do you mean by "technology" and
"teaching difficult actions"?
Looking at the code, I think I see a problem in NW.Dom.getChildren:-
| getChildren =
|   function(element) {
|     // childNodes is slower to loop through because it contains
|     // text nodes; empty text nodes could be removed at startup
|     // to compensate for this a bit
|     return element[NATIVE_CHILDREN] || element.childNodes;
|   },
The children property is inconsistent across the most common browsers.
It is best left alone.
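A demonstration sketch of the inconsistency (the browser notes are
from memory of the browsers of this era):

var div = document.createElement('div');
div.innerHTML = '<!-- note --><span>x</span>';
// IE6/7: div.children.length is 2 (the comment node is included)
// most other browsers: 1 (element children only)
// older Gecko has no |children| at all, hence the childNodes fallback
// above (and childNodes includes text nodes everywhere)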
There are no problems there, the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.
That can be easily enough demonstrated.
There is no filtering shown in the code above. None. Where is the test
case of NW.Dom.getChildren?
This is the relevant code string that is wrapped around during the
function build and that does what you are looking for:
// fix for IE gEBTN('*') returning collection with comment nodes
SKIP_COMMENTS = BUGGY_GEBTN ? 'if(e.nodeType!=1){continue;}' : '',
Nowhere did I say that the method serves the purpose you are trying to
give it. You just guessed it!
No, I did not guess. I looked and found an inconsistency.
If someone were going to guess what that method is for (I would not), he
might read the code comment:-
| // retrieve all children elements
| getChildren: getChildren,
- and make a fair guess that it returns child elements.
A fair /expectation/ would be that the method would not return
inconsistent results across browsers.
That may be partly my fault too, by having exposed it as a public
method. ;-)
I was talking about "match()" and "select()", these are the only two
methods meant to be used. Sorry if it is unclear in the code/comments.
Why would you expose other methods if they are not intended to be used?
[snip explanation of children/childNodes]
Regardless of the getChildren method inconsistency, I don't need that.
AISB, YAGNI!
You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.
There are a couple of things wrong with that paragraph.
1) I am *not* wrong. The source code for NW.Dom.getChildren, as posted
above, taken from NWMatcher source on github[2], indicates otherwise.
As I said, I am not trying to have a unified cross-browser
"getChildren"; it is a helper used by the compiled functions. I could
have completely avoided having that function stand independent; it was
there to improve speed on IE by quickly discarding text nodes.
Why not just use the NATIVE_CHILDREN variable? Providing inconsistent
results to the caller imposes a responsibility that the caller has to
know about, despite the method name "getChildren" and the comment above it.
The caller must perform a few steps:
1) call getChildren
2) filter out comments, text nodes.
You can probably get away with it if you own all the code, but such code
would not fly where code sharing is common.
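That is, every caller ends up repeating something like this sketch:

var nodes = NW.Dom.getChildren(element), elements = [], i;
for (i = 0; i < nodes.length; i++) {
  if (nodes[i].nodeType === 1) { // keep element nodes only
    elements.push(nodes[i]);
  }
}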
I suggest renaming NATIVE_CHILDREN to something less incorrect, maybe
CHILDREN_OR_CHILDNODES. NATIVE_CHILDREN is incorrect because the value
can be "childNodes", which is not children (seems confusing).
[snip getChildren example]
No, let's see it this way: getChildren() is there to get the fastest
collection available. It didn't improve things so incredibly, for the
record, but there was a gain.
Using === instead of == to compare nodeType might also make a comparable
improvement of performance. Probably only measurable in extreme cases.
Not referencing the |arguments| object would also help. This has been
discussed here at length.
I have never thought it was to be sold. Incentives come in various
formats!
Yes, they do.
Works for me and for the hundreds of tests it passes.
I wonder, can the following be expected to match:
input[checked]
input[checked=checked]
input:checked:not([disabled])
How do I select an input that is checked (input.checked == true) and not
disabled?
But I agree that the problem you talk about has been greatly
underestimated by several related working groups. Don't blame me or my
code for those errors.
* 16k is a lot, but NWMatcher is nearly 50k[2].
The "match()" method is the length I said, no more than 16kbytes
source code, the rest is for the "select()" method (I have no use for
the "select()" while everybody else uses only it) the caching code and
the type of checks that you said I should have leaved out.-
* What is the situation you're finding yourself in that you need
"negation pseudo classes"?
Scraping external text content, mostly (hope the term is correct).
Also, in general, when the list of things to do is much bigger than the
list of things not to do (while attaching events).
Chances are, someone has encountered that problem before and has figured
out a way that does not require matching "negation pseudo classes". It
may be that you have a unique problem. That warrants exposition.
Let's say I want all the elements but not SCRIPTs and/or STYLESHEETs
and/or OBJECTs/APPLETs... sound familiar and useful as a task?
No, I can't say that I've ever been in such a situation, where I needed
all elements but not SCRIPT, LINK, OBJECT, APPLET.
I'm familiar with page scraping, though I've never made a mashup.
Code could filter those elements.
var tagsExcluded = /^(SCRIPT|LINK|OBJECT|APPLET|!)$/;
for (...) {
  if (!tagsExcluded.test(el.tagName)) {
    // process el
  }
}
How is that expressed using a "negation pseudo class" selector?
That is:
input:not([checked])
- would match inputs that do not have a checked attribute (not property).
I don't see how you'd use :not() to match elements.
http://www.w3.org/TR/css3-selectors/#negation
| The following selector matches all button elements in an HTML
| document that are not disabled.
|
| button:not([DISABLED])
That example is wrong. The button could have been disabled, but the tag
does not have the disabled attribute declared.
However, it is also part of the CSS3 specification; they may be able to
give you other ideas about that in their docs.
Sure, good stuff to study, along with ARIA, which I've been meaning to read more of.
Yeah, you repeated it a few times: you have no use for it... I see, I
will not blame you for that.
However, I get no errors or warnings in the console using these
helpers, and the results are correct AFAIK.
That should already be a good enough reason to start using them and try
something new.
Honestly, it doesn't do something I need. I think some use-cases would
help show areas that are unused or problematic. If there's a part you
want code-reviewed, post it up.
I could; you haven't given me a reason not to, but I will carefully
ponder any related/motivated suggestion.
By not using quirks mode, the script is less complicated.
It's a shame few follow; rules are easy to follow and don't require big
efforts, just open minds.
Validation of HTML can be enforced on any project. Just do simple buddy
checks/code reviews. It shouldn't take that long to catch on and pretty
soon everybody validates their code.
Validation is a big target both for NWEvents and NWMatcher. You should
try it.
Try validation?
I validate ruthlessly and have been a big proponent of validating
everywhere I go, often to the annoyance of other developers. I made a
point of adding that to the FAQ. Earlier versions did not mention HTML
validation at all.
Or are you again suggesting me to try NWMatcher? If that is so, then I
feel like I failed to explain my reasons for needing to see
justification for it. I've included some links to "the simplest thing
that could possibly work, but not any simpler," and "YAGNI". I don't
know of a write up for "don't do that".
Thank you for scrutinizing. I have to address some of the concerns you
raised, like removing some public methods to avoid confusing devs.
Do you want more code review? Where? Post a link.
Garrett
[YAGNI]
http://groups.google.com/group/comp.software-eng/msg/f3882fbbb48b80cd?dmode=source
YAGNI on Wikipedia:
http://en.wikipedia.org/wiki/You_Ain't_Gonna_Need_It
[DoTheSimplestThingThatCouldPossiblyWork]
http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.html
[Speculative Generality]
http://foozle.berkeley.edu/projects/streek/agile/bad-smells-in-code.html#Speculative+Generality