Help Jquery: unable to register a ready function

D

David Mark

The point that you are still missing, after all these months I have tried
to explain it to you, is that your theory is flawed.

Using Conditional Comments does not break anything because when it is not
regarded as a special processing instruction (as by MSHTML, or a
deliberately broken UA if it can't handle it properly) it is regarded as an
SGML/HTML/XML comment and goes *ignored*, plain and simple.  IOW, when a CC
fails you do *not* do anything.
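
For illustration, a minimal downlevel-hidden CC shows the fallback in
practice:

<!--[if IE]>
<p>MSHTML evaluates the condition and renders this paragraph; any
parser that does not recognise CCs sees one ordinary comment here
and ignores it, exactly as described above.</p>
<![endif]-->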

Much in contrast, using browser sniffing and sniffing to the wrong effect is
inevitably harmful because you end up *doing* things that you did not
intend to do in a UA that you did not know was spoofing you successfully
yet.  And you will have to adapt your code each time it happens to handle
that case, *after* some kind soul *might* have told you that your code
messed with their UA.  I hope you can agree that being aware of those issues
and continuing to sniff anyway (unless it is *absolutely required*) is just
plain stupid.

As mentioned, he knows this. Seems he just wants to be different.
 
D

David Mark

If you restrict sniffing to only looking at the user-agent, then I
suppose you are right.


I agree that it is more reliable in practice, but not in theory.


So you're going to assume it's a safe tactic because, according to
your knowledge, no browser is currently doing it?

Your question has already been answered in this very thread. I won't
bother repeating it.
What if I wrote a browser that used IE's parsing/js engine but my own

Using IE's parsing engine? Do tell.
CSS logic? It might evaluate conditional comments, but not follow the
CSS quirks that you are inferring. The point is, you are making an

1. Nobody, but nobody, would use your browser, even if you could
somehow create it (and I am convinced you cannot.)

2. Wouldn't hurt a thing anyway as using a transparent GIF equivalent
to whatever fancy translucent PNG's you used with your widget(s) can
never be harmful. Not in this lifetime.
assumption about some behavior (css quirks) based on something
entirely different (support for conditional comments evaluation).

You are fantasizing.
In my experience, I've never come across a browser that faked its UA
string and caused a problem for the user. So by your logic, can't I

Then you are extremely inexperienced (or lying.)
assume it is a safe practice?

You can take it from me (and many others) that it is not. It wasn't
safe a decade ago. It's just plain pitiful to design a browser script
that is hinged on sniffing the UA string (and therefore sure to blow
up when trying to apply a DirectX filter in any agent that is not IE.)

What are your other examples of scripts that absolutely *must* sniff
the browser? Opacity, getAttribute, etc.? We've heard all of this
before (from clearly dubious sources) and it is just as much nonsense
today as it has been in the past. You know all of this as you have
been involved in near identical threads in the past. Same with IE
conditional comments. Why do we have to keep revisiting these
topics? It only serves to confuse new users.

[snip]
 
M

Matt Kruse

Using Conditional Comments does not break anything because when it is not
regarded as a special processing instruction (as by MSHTML, or a
deliberately broken UA if it can't handle it properly) it is regarded as an
SGML/HTML/XML comment and goes *ignored*, plain and simple.  IOW, when a CC
fails you do *not* do anything.

I know how CC's work, I'm not stupid.

My point is simple - a browser other than IE could implement CC's,
process them correctly, and behave as if it is IE6/7/8. If you are
using CC to determine the capabilities of the browser and inferring
its CSS quirks, then you would apply incorrect logic to "fix" such a
browser that isn't broken.

In the same way, the user-agent string of IE6/7/8 is predictable. If a
browser fakes it and appears to be IE6/7/8, then your browser-sniffing
code may "fix" quirks that aren't really broken.

Both approaches are making an assumption about X based on some other
unrelated Y.

Granted, user-agent sniffing is more error-prone than CC. But I would
hope you would realize that both strategies have some room for
failure.

In my experience, both approaches should be used as a last-resort. And
I would use CC before browser sniffing. But also in my experience,
I've never come across a problem (in many years) caused by incorrect
browser-sniffing when used as a last resort. So it's not something I'm
afraid of, either.
Much in contrast, using browser sniffing and sniffing to the wrong effect is
inevitably harmful because you end up *doing* things that you did not
intend to do in a UA that you did not know was spoofing you successfully
yet.

And as explained, using CC is vulnerable to the same thing.

And further on this point, I personally don't care about browsers that
are trying to spoof me and pretend to be IE or whatever. If they want
to proclaim to the world that they are IE, then they get treated as IE
in cases when it's necessary to differentiate. If users don't like it,
they should use a browser that doesn't pretend to be something it's
not.
 And you will have to adapt your code each time it happens to handle
that case, *after* some kind soul *might* have told you that your code
messed with their UA.  I hope you can agree that being aware of those issues
and continuing to sniff anyway (unless it is *absolutely required*) is just
plain stupid.

It is stupid to sniff when it is not the most efficient way to
accomplish a task. And in most cases, it is the wrong strategy.

Matt Kruse
 
M

Matt Kruse

You have no point at all.  You can't feature test the color of the
shirt the user is wearing either.  Sheesh.

What about the need for a fix for control z-index bleed-through (i.e., select
boxes showing above popup divs, etc.)?
I use CC to apply the fix for that. I've not come across a good test
for it.

Matt Kruse
 
E

Eric B. Bednarz

David Mark said:
[…] If you mean conditional
compilation, wrong again. I would never put that into the library as
(for one) it screws up the YUI minifier.

Only clueless idiots use minifiers (what else is new :).
 
D

David Mark

David Mark said:
[…] If you mean conditional
compilation, wrong again.  I would never put that into the library as
(for one) it screws up the YUI minifier.

Only clueless idiots use minifiers (what else is new :).

Oh brother. Who told you that?
 
D

David Mark

What about the need for a fix for control z-index bleed-through (i.e., select
boxes showing above popup divs, etc.)?

There are numerous ways to design that out of the system. If you
refuse to do that, then you will have to either deal with it the same
way in all browsers (e.g. hide the selects when popping up a div)
or...
I use CC to apply the fix for that. I've not come across a good test
for it.

There is nothing inherently wrong with CC. If you find yourself using
it for more than this and perhaps two other things I can think of, you
are probably using it as a crutch. In any event, CC is not the same
thing as "detecting" the user agent.
 
R

RobG

In my experience, I've never come across a browser that faked its UA
string and caused a problem for the user. So by your logic, can't I
assume it is a safe practice?

I think there are two bigger issues.

1. When a new version of a browser fixes[1] whatever quirk the sniff
was directed at, either the browser continues to get the "assume it's
broken" fork when it should not or the code authors have to add a
version-specific sniff.

2. Developers become lazy and start the old "this site must be viewed
in browser X" crap when browser X is actually perfectly capable of
viewing the site.

My ISP continues to deliver different content to Safari users based on
a UA sniff, despite the fact that they could very easily have used a
feature test and the issue was fixed in Safari around version 1.2 or so.
I change my UA string to mimic Firefox and everything is fine.


1. Where "fixes" can mean conforms to whatever norm is expected, it
need not actually be a bug or missing feature.
 
E

Eric B. Bednarz

David Mark said:
David Mark said:
[…] If you mean conditional
compilation, wrong again.  I would never put that into the library as
(for one) it screws up the YUI minifier.

Only clueless idiots use minifiers (what else is new :).

Oh brother. Who told you that?

The same back-end guy who told me that creative mangling^Woptimizing
of source code will introduce new bugs, and that if your toolbox is
broken you fix your toolbox without making too much noise. Not that it’s
really relevant, I rather enjoy shooting without aiming, just like you.
 
D

David Mark

I know how CC's work, I'm not stupid.

That is a debate for another time.
My point is simple - a browser other than IE could implement CC's,

A browser could be made out of cake too. Highly unlikely, but
theoretically possible. IE could stop implementing them in a future
version as well (also highly unlikely.)
process them correctly, and behave as if it is IE6/7/8. If you are
using CC to determine the capabilities of the browser and inferring
its CSS quirks, then you would apply incorrect logic to "fix" such a
browser that isn't broken.

The browser you described is broken as designed. But regardless,
consider the case at hand, which comes up over and over: how to deal
with the fact that IE6 cannot render transparent PNGs properly.

Solution #1

Use CC's to include an additional style sheet for IE6. As discussed,
other browsers ignore comments (as they must!) The images will look
slightly less impressive (if they have any translucent pixels that is)
in IE6 (or any hypothetical browser that chooses to interpret comments
as directives.) No script required. No chance of degrading the user
experience in any way (other than maybe a slightly less impressive
graphic.)
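
A minimal sketch of that approach (the file and selector names here are
hypothetical): the conditional comment pulls in one extra sheet for IE6
and lower, and that sheet swaps the translucent PNG for a flat GIF.

<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6.css">
<![endif]-->

/* ie6.css - override the translucent PNG with a GIF equivalent */
#fancyWidget {
    background-image: url(widget.gif);
}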

Sounds hard to beat doesn't it? Inexplicably, many script developers
endeavor to create scripted solutions for problems that will never be
perfectly solved by script.

Solution #2

Sniff the user agent string. Set a global variable to true to
indicate when the string "MSIE" is contained therein. If this
variable is set, call a proprietary DirectX hack that is sure to throw
an exception in any agent other than IE. For those who don't know,
lots of agents have "MSIE" in their user agent strings (e.g. mobile
devices, old versions of Opera, FF with the UA string changed to
thwart browser sniffing scripts, etc.)
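
A sketch of the pattern being described (illustrative names only, not
anyone's actual library code) shows where it blows up:

// Sniff-and-filter, as described above. "MSIE" appears in many non-IE
// user agent strings, so this flag is a guess from the start.
var isMSIE = navigator.userAgent.indexOf('MSIE') != -1;

function pngFix(img) {
    if (isMSIE) {
        // The filters collection exists only in MSHTML. In any agent that
        // merely claims to be IE (mobile UAs, old Opera, a spoofed FF),
        // img.filters is undefined and the next line throws.
        if (img.filters.length === 0) {
            img.style.filter =
                "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" +
                img.src + "', sizingMethod='image')";
            img.src = 'blank.gif'; // hypothetical transparent placeholder
        }
    }
}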

The inevitable, ridiculous argument that comes back from these library
developers and their proponents is that they "did what they had to do"
to make it "work." If, for example, a PNG correction routine is
present in Prototype or jQuery, it almost certainly uses browser
sniffing to "work", so clearly the feature should have been left out
altogether (it is better solved in other ways.) Ask why they didn't
opt for the obvious and the answer will be that it wouldn't have been
"cool."

And for those that don't know, in jQuery and Prototype, you can find
the sloppy fingerprints of this sort of handiwork throughout.
Really. Tangled through it all, this sort of "logic" waits to
explode on anyone foolish enough to browse your site with something
other than the latest versions of FF, IE, Safari or Opera. I guess a
disclaimer wouldn't have been "cool" either.
In the same way, the user-agent string of IE6/7/8 is predictable. If a
browser fakes it and appears to be IE6/7/8, then your browser-sniffing
code may "fix" quirks that aren't really broken.

Whose browser sniffing code? This is an "apples vs. oranges" argument
anyway.
Both approaches are making an assumption about X based on some other
unrelated Y.

The assumption that browsers will treat comments as comments is far
better than any inference you can make from the user agent string.
Granted, user-agent sniffing is more error-prone than CC. But I would

Apples and oranges.
hope you would realize that both strategies have some room for
failure.

One has virtually zero chance of failure and for the other, failure is
a virtual certainty. Take your pick.
In my experience, both approaches should be used as a last-resort. And

One should be used as a last resort to serve proprietary rules to
IE6. The other should never be used for anything. There just aren't
any parallels here.
I would use CC before browser sniffing. But also in my experience,

I would use Notepad before Outlook Express.
I've never come across a problem (in many years) caused by incorrect
browser-sniffing when used as a last resort. So it's not something I'm
afraid of, either.

Your clients should be scared to death though, particularly if you
built public sites for them.
And as explained, using CC is vulnerable to the same thing.

Notepad and OE both crash.
And further on this point, I personally don't care about browsers that
are trying to spoof me and pretend to be IE or whatever. If they want

Oh, Christ on a crutch, here we go with the "I don't care" argument.
You are not your users.
to proclaim to the world that they are IE, then they get treated as IE

Are you really this clueless or just trying to make conversation here?
in cases when it's necessary to differentiate. If users don't like it,
they should use a browser that doesn't pretend to be something it's
not.

There is the classic and idiotic assumption that the user knows what
browser and configuration they are using and/or has any means to
change these circumstances. Blame the user for "pretending" to cover
up for your own shortcomings as a Web developer.
It is stupid to sniff when it is not the most efficient way to
accomplish a task. And in most cases, it is the wrong strategy.

I am still waiting (after ten odd years) to hear of a single case
where it is the right strategy.
 
D

David Mark

David Mark said:
[…] If you mean conditional
compilation, wrong again.  I would never put that into the library as
(for one) it screws up the YUI minifier.
Only clueless idiots use minifiers (what else is new :).
Oh brother.  Who told you that?

The same back-end guy who told me that creative mangling^Woptimizing
of source code will introduce new bugs, and that if your toolbox is
broken you fix your toolbox without making too much noise. Not that it’s
really relevant, I rather enjoy shooting without aiming, just like you.

Never me.
 
E

Eric B. Bednarz

David Mark said:
On Nov 4, 8:35 pm, Eric B. Bednarz <[email protected]>
^^^
I *do* have one good thing to say about jQuery: I hate G2 much more.
[…] I rather enjoy shooting without aiming, just like you.

Never me.

Not? The only thing that keeps me from considering you and Thomas Lahn
to be jQuery’s most effective – albeit somewhat unintentional –
ambassadors in this NG is having read its source code myself;
accidental readers are unlikely to share this advantage.
 
D

David Mark

                                          ^^^
I *do* have one good thing to say about jQuery: I hate G2 much more.

You hate something I have never heard of more than jQuery. How does
that promote jQuery?
[…] I rather enjoy shooting without aiming, just like you.
Never me.

Not? The only thing that keeps me from considering you and Thomas Lahn

What does he have to do with it? Do you think he is the only other
member of the group to question the competence of the jQuery project?
to be jQuery’s most effective – albeit somewhat unintentional –
ambassadors in this NG is having read its source code myself;

Perhaps you are reading a different newsgroup?
accidental readers are unlikely to share this advantage.

I don't follow the logic. You don't have to read all of the code.
Just read a few choice excerpts that have been posted here repeatedly.
 
D

dhtml

Conrad said:
That's not an answer. Until you actually show me a better solution, I'm
just going to assume that you don't have one.

In the example I've posted, I'm checking whether I should use the
IE-proprietary way of calling createElement, so it's perfectly
acceptable to use a proprietary property in the check.

No you are not. You are checking to see if creating a named control does
not result in a lower-case "name=" in an outerHTML property. Does your
application really care what the outerHTML string looks like?

The easiest way to avoid the problem of not being able to find an
element by name is simply not to give the element a - name - and to use
an ID instead.

createElement(invalid_string) is supposed to raise a DOMException

If you used that, and you got a DOMException, it would be entirely your
fault; that is exactly what should happen in that case. A browser that
had the same problem with creating named form controls would not
necessarily be one in which createElement(invalid_string) triggers that
exception.

If creating an element and setting a name creates a problem, the problem
should be identified clearly in a feature test.

The pseudo code might be:-

create a named anchor
check to see if the element is found in the way you are looking for it
(getElementsByName).
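
A minimal sketch of that test (assuming the document body is available;
the test name is a throwaway):

// Create a named element, append it, and check whether getElementsByName
// can actually find it; remove it again afterwards.
function canFindCreatedElementByName(doc) {
    var el = doc.createElement('a');
    var found;
    el.name = 'tempTestName'; // hypothetical throwaway name
    doc.body.appendChild(el);
    found = doc.getElementsByName('tempTestName').length > 0;
    doc.body.removeChild(el);
    return found;
}

IE 6 and 7 are generally reported to fail this for dynamically created
elements, which is the quirk the proprietary createElement('<a name="...">')
form works around.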

Again, what's _your_ solution?
[...] The simplest solution is to provide GIF
equivalents and override the PNGs in the inevitable IE6-specific
style sheets. And those are typically hidden with IE conditional
comments.

That's not an answer either. My point was that it couldn't be feature
tested, and you say use GIFs and stylesheets. That's not a test, it's a
workaround, and an ugly one at that, given that GIFs don't support alpha
transparency.

A device may support PNG without supporting the alpha channel. It's
optional.
http://www.w3.org/TR/PNG/

When a PNG is desired, but cannot be displayed, the grey fuzzy junk is
not acceptable. There should be a way to have a fallback.

The scope of the problem is larger than IE. I don't have the answer.

There are better and more reliable ways to test for IE versions. And you
know it - you're using conditional evaluation in your own library...


I've seen much worse. Probably anyone who's had a job has. Search random
websites and view the source.

Garrett
 
M

Matt Kruse

There are numerous ways to design that out of the system.  

Are you aware of every system?
Sounds like "I can't solve this problem, so I'll just avoid it
instead."
That works well in some cases, and not so well in others.
If you
refuse to do that, then you will have to either deal with it the same
way in all browsers (e.g. hide the selects when popping up a div)

Terrible approach. Especially for browsers that don't exhibit the
problem.
There is nothing inherently wrong with CC.  If you find yourself using
it for more than this and perhaps two other things I can think of, you
are probably using it as a crutch.  In any event, CC is not the same
thing as "detecting" the user agent.

But it kind of is "detecting" the user agent. You can use tags to
check against the OS, browser version, etc.

To use CC's, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

When you use sniffing, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

The point is valid that #1 is more reliable for CC than it is for
sniffing. True in practice. But not necessarily so. CC's could be
spoofed, just as user agent strings can be spoofed.

Point #2 is a necessary evil for both, because there are some things
you simply can't reliably test for. The GOAL is to handle as many
cases as possible and offer users the best possible experience.

A point that seems to get lost is that I'm not justifying the browser
sniffing in jQuery at all. It's unnecessary and amateur-ish. But just
because it exists doesn't invalidate the rest of the code for me. And
because it works consistently, reliably, and conveniently for me in
every situation I choose to use it in, I find value in it. It's far
from perfect and it has flaws, but I can accept that.

Matt Kruse
 
M

Matt Kruse

[snip detailed analysis]

Richard, as always your analysis is detailed, analytical, accurate,
and insightful (and long).
I wish I had someone at my disposal to review my work in such detail.
Surely you recognize that there are few people who would be capable of
such an analysis.
John Resig is very keen to speak of the 3 or 4
recent desktop web browsers that JQuery actually does support as "all
browsers", and when you are fully supporting "all browsers" the result
must then be cross-browser.

I think the most valid non-technical criticism of jQuery is that it is
promoted as being a generalized solution that is applicable to most
(or all) public web sites. It is not. But must we continually repeat
that fact as if it completely invalidates it as a tool for other
purposes?
Libraries such as JQuery allow people with little or no experience in web
development to create sites that, on a limited number of web browsers, in
their default configurations, appear to 'work' and present a (more or
less) impressive presentation. But these people have no understanding of
the consequences of their actions (indeed mostly seem unaware that there
are any issues arising from their decisions at all).

This is true of many things in life, not just jQuery. You just happen
to be an expert in the area that jQuery addresses and can see its
faults. Technical perfection is NOT the only measure of success or
evaluation criteria.

Take a retail situation as an example. A store may not be able to
accept all forms of payment, thereby limiting their potential
customers. They may not be open at convenient hours, further limiting
potential customers. They may not have adequate parking. Maybe some of
their products are out of reach of some customers. Maybe they have
shopping carts that fit most people, but some find inadequate so they
leave. Maybe their store layout is too confusing so some people leave
because they can't find what they want. Maybe their sales tactics are
obnoxious and they annoy customers so much that they walk out.

BUT WAIT! Don't they know that each of these things is a design
decision that could prevent a small percentage of potential customers
from giving them money?! Why don't they fix them all and optimize it
so every single person can use their store and buy as much as
possible?

Why? Because it's made by humans, and humans aren't perfect. Sometimes
you have to say "close enough" and get on with it. If a site "appears
to work" for most people, and the customer finds it acceptable, and
even if they lose out on a small percentage of potential sales, maybe
that is truly "good enough". Maybe they can't afford to do it exactly
right. Maybe the analysis and perfection of the system just isn't
warranted, because with just a little effort they can get a lot of
business, and that more than makes up for the potential losses of
customers. That's the real world.
This, oft repeated, assertion that the only alternative to using a third
party general purpose library is to write everything from scratch for
yourself is total nonsense.

It's not black-and-white to you, but to others in different situations
it just might be, because they don't have the same options as you do.
In-between those two alternatives lies a
whole spectrum of code re-use opportunities

You are in a position to determine that because you have a good
understanding of the technology. You criticize jQuery because it
doesn't handle all cases and has design flaws that may impact a public
site that is using it, because its developers are too naive to know
the pitfalls of the library. But then you expect these same naive
developers to understand how to develop, test, modularize, and re-use
their own code as a better option?
From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments).

You realize that this kind of example shows that you are at the very
extreme edge of js development? You are very clearly not the audience
for jQuery or any other popular library approach. I suspect you have a
hard time relating to the average developer who is struggling just to
do simple animations or toggle divs.
And last week QA started testing that application on IE 8.

And do you know how many projects even _have_ QA departments? Consider
yourself lucky to work in such an environment! You are the exception,
not the norm.
Now that is "Future Proof" javascript, and low maintenance javascript.

Planned, designed, and written by an expert in the field. We just need
everyone to be experts and all this discussion would go away! :)
While you may conclude that the reaction to the self proclaimed
"JavaScript Ninja" on this group is biased and personal, in reality it
is mostly a direct reaction to what he writes/does, and a reaction
informed by pertinent knowledge and experience in the field of browser
scripting.

From my perspective, John Resig is clearly not as knowledgeable and
experienced as you are with regards to Javascript. But, whether you
like it or not, he has much more knowledge than the vast majority of
people attempting to write Javascript. He's not perfect (who is?) but
he's out there, sticking his neck out, showing his cards, sharing what
he knows (or thinks he knows). That takes a lot of guts, and I'm sure
he's learning along the way. If everyone needs to be an expert at your
level before they can share anything they know, no one would be
learning!

The developer world needs people like him, and libraries like jQuery,
to bridge the gap between the average user who is lost and confused,
and the expert developer such as yourself. You may not think that
jQuery is a positive thing in the scripting world, but countless other
people disagree with you, and things are moving in that direction
whether you like it or not. Even Microsoft (not your biggest measure
of success, I'm sure) has adopted jQuery into its development
platform. Surely this doesn't affect how you do your work. But you
need to recognize how it changes the game for the rest of the
developer world.

It seems that your perspective prevents you from understanding the
needs of people who are in a very different situation from yourself.
And if you want to have a more positive impact on the scripting world,
you need to learn from people like John Resig just as much as he needs
to learn from you.

All IMO, of course.

Matt Kruse
 
T

Thomas 'PointedEars' Lahn

Eric said:
[...] The only thing that keeps me from considering you and Thomas Lahn
to be jQuery’s most effective – albeit somewhat unintentional –
ambassadors in this NG is having read its source code myself;
accidental readers are unlikely to share this advantage.

That argument is fallacious as it is based on the false assumption that
discussing the shortcomings of a piece of software attracts a majority of
relevant users to exactly that software.


PointedEars
 
D

David Mark

Are you aware of every system?
Sounds like "I can't solve this problem, so I'll just avoid it
instead."
That works well in some cases, and not so well in others.


Terrible approach. Especially for browsers that don't exhibit the
problem.


But it kind of is "detecting" the user agent. You can use tags to
check against the OS, browser version, etc.

To use CC's, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

When you use sniffing, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

The point is valid that #1 is more reliable for CC than it is for
sniffing. True in practice. But not necessarily so. CC's could be
spoofed, just as user agent strings can be spoofed.

We've been over that.
Point #2 is a necessary evil for both, because there are some things

It isn't necessary. That is my whole point. When somebody asks the
jQuery or Prototype "teams" to implement (for example) a PNG fix
script, they should simply refuse as it is impossible to detect the
condition. Perhaps they could even recommend the obvious non-script
solutions. But no, they are in a perceived arms race to create the
biggest general-purpose script ever. It is just ridiculous.
you simply can't reliably test for. The GOAL is to handle as many
cases as possible and offer users the best possible experience.

A point that seems to get lost is that I'm not justifying the browser
sniffing in jQuery at all. It's unnecessary and amateur-ish. But just

I know you aren't and I know you tried to convince the jQuery support
group to see the light on this. I also know they didn't see the light
and therefore their users are still in the dark.
because it exists doesn't invalidate the rest of the code for me. And
because it works consistently, reliably, and conveniently for me in
every situation I choose to use it in, I find value in it. It's far
from perfect and it has flaws, but I can accept that.

I think you know that I don't really care if you use it for your
private application. It is just that the library is "marketed" (often
with religious zeal) to Web developers, most of whom work on the
public Internet. That is upsetting for a number of reasons, not the
least of which is that I can't browse the Web without tripping over
incompetent scripts (can't turn script off either of course.) Not all
of them use jQuery of course (lots do), but the whole culture of Web
development seems to have taken a major step backwards in the last few
years.

So I would just like to see some disclaimers when people talk about
it. For example:

jQuery Rules!!!!!!!!!*

* Provided you are using the default configuration of a handful of
modern browsers, then, well, it still doesn't rule per se, but it is
there. Warning: may explode in the middle of "chained" calls, leaving
the document in an unexpected (and possibly non-working) state.
 
M

Matt Kruse

So I would just like to see some disclaimers when people talk about
it.  For example:

jQuery Rules!!!!!!!!!*

* Provided you are using the default configuration of a handful of
modern browsers, then, well, it still doesn't rule per se, but it is
there.  Warning: may explode in the middle of "chained" calls, leaving
the document in an unexpected (and possibly non-working) state.

Heh. Now that I agree with.

Matt Kruse
 
D

David Mark

On 2008-10-29 20:43, David Mark wrote:
Actually, I thought it was quite interesting.

I found it quite informative, but on its author rather than its subject.
He wrote that scripting libraries like JQuery or Dojo are
used, among other things, to "pave over" browser bugs and
incompatibilities,

'Plaster over' rather than "pave over". In solving a small subset of the

Exactly. Their unit tests throw an exception and somebody comes up
with a half-baked plan to patch it with yet another browser sniff.
Prototype looks like a structure ready to collapse as so many hacks
have been piled on top of other hacks. They are reduced to testing
minor version numbers at one point. Of course, nobody seems to use
Prototype anymore (other than with those Rails "helper" things.)
jQuery has been widely mistaken as a viable alternative.

[snip]
| example:
|
| if ( elem.getAttribute ) {
|     // will die in Internet Explorer
| }
| That line will cause problems as Internet Explorer attempts to
| execute the getAttribute function with no arguments (which is
| invalid). (The obvious solution is to use
| "typeof elem.getAttribute == 'undefined'" instead.)

Now I know that that is pure BS because I have been using tests along

You are correct. Pure and unadulterated BS and demonstrably so.
the lines of - if(elem.getAttribute){ .... } - on Element nodes for
years (at least 5) and have never seen them fail, and I do test/use that

IIRC, it happens only with XML elements (appears they are ActiveX
objects under the hood.) No surprise that jQuery attempts to deal
with such objects. Clearly Resig bumped into my "unknown" bug and
typically jumped to the wrong conclusion, based on guesswork,
meditation, etc.
code in (lots of versions of) IE. But still, let's make it easy for
everyone to test the proposition for themselves and so verify its
veracity. A simple test page might be:-

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
  Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
    <head>
        <title></title>
<script type="text/javascript">
window.onload = function(){
    if(document.body.getAttribute){
        alert(
            'document.body.getAttribute = '+
            document.body.getAttribute
        );
    }};

</script>
    </head>
    <body>
    </body>
</html>

Here is a slightly modified demonstration that shows how another
property can explode in similar fashion (and how simple it is to test
this case.) This property isn't even a method, so it proves that
Resig's theory about accidentally calling methods by type conversion
is nonsense.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
  Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
    <head>
        <title></title>
<script type="text/javascript">
window.onload = function(){
    var el = document.createElement('div');
    document.body.appendChild(el);

    document.body.innerHTML = '';

    if(typeof el.offsetParent == 'unknown'){
        alert('Warning. About to explode...');
    }
    alert('offsetParent = ' + el.offsetParent);
};

</script>
    </head>
    <body>

    </body>
</html>

Clearly a competently designed and written Web application will never
run into this. However, since MS can change the rules at any time, I
advocate using the typeof operator to test all host object methods.
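
A sketch of such a test (along the lines often posted in this group; the
name and exact shape here are illustrative rather than anyone's published
implementation):

// typeof never invokes or type-converts the property, so 'unknown'
// (ActiveX-backed) members are detected without the explosion shown above.
function isHostMethod(obj, name) {
    var t = typeof obj[name];
    return t == 'function' ||
           (t == 'object' && obj[name] !== null) || // IE reports 'object' for many DOM methods
           t == 'unknown';                          // ActiveX-backed methods in IE
}

// Example guard:
// if (isHostMethod(elem, 'getAttribute')) { var id = elem.getAttribute('id'); }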
Now to start with we need to know what IE's - getAttribute - would do if
it were executed with no arguments. The DOM spec is not that clear,
except that it says no exceptions will be thrown. It is easy enough to
test, and it turns out that:-

alert(''+document.body.getAttribute());

- alert "null", so the method call returns null. So, if -
elem.getAttribute - calls the method with no arguments the result is
likely be null. If the result is null then in the above test page the -
if(document.body.getAttribute){ -  test will be false, the - if - branch
will not be entered and the alert will not be shown. But on every IE
version where I have tried the above test the alert is shown, plus the
alert shows "document.body.getAttribute = function
getAttribute(){[native code]}", not the "null" that would be the result of
calling the method with no arguments.

Clearly Resig prefers voodoo to scientific methods (like testing) or
he would never have published his article.
Can you find an IE version that does exhibit this 'issue'? I am pretty
certain that I have tested code that used this test on IE versions from
4 to 8 without issue, so I doubt it.

Not the issue as such (clearly it is being mistaken for another
issue.)
Given that this assertion is demonstrably BS, my attitude toward the
previous "demonstrated" issue on Safari, in light of my test, leans
heavily toward dismissing that as also being BS.

My feeling exactly. I have never finished that article. It is
sickening to think how many people have visited this Fantasyland over
the years and gone on to spread such "wisdom" in the real world. If
there is one guy in the world who should not be talking about feature
testing, it is this guy. Ironic that he is near constantly blithering
about the subject.
So what is this article? If it is a reasoned analysis of future

A delusion masquerading as a reasoned argument. As you mentioned, the
argument serves only to justify his own incompetence vis-a-vis feature
detection. One of the Prototype twits authored a similar call to
browser sniffing back when that library was the flavor of the day.
People have a right to their own ignorant opinions, but what compels
these people to spread them like they are gospel?

[snip]
One more quote from Resig's article:-

| The point of these examples isn't to rag on Safari or Internet Explorer
| in particular, but to point out that rendering-engine checks can end up
| becoming very convoluted - and thusly, more vulnerable to future
| changes within a browser.

And thus, John Resig is both incompetent and incoherent.
- which can be summarised as 'complexity equals vulnerability'.

Interesting take.
Certainly complexity will relate to vulnerability, in a very general
sense, but in the context of feature detection if a test is the correct
test, no matter how complex it may be, it will not be vulnerable to
updates in the browsers because significant changes in the browsers will
directly impact the outcomes of those tests, which is the point of
feature detection.

From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments). The total number of changes in the scripts
needed to accommodate IE 7 was zero. When Chrome was released the total
number of changes necessary to accommodate that was also zero. And last
week QA started testing that application on IE 8. They may take another
week to run through all their tests but so far they have found no
issues, and I have a realistic expectation that they will not (as pretty
much everything that would be likely to be problematic would inevitably
get used in the first day or so).

Yes. My experiences with Chrome, Windows Safari, FF3, Opera, etc.
have been similar. Do things right the first time and you don't have
to re-do them.
Now that is "Future Proof" javascript, and low maintenance javascript.

The title of that article is ironic indeed. Even worse, jQuery's two
main selling points are reduced maintenance and that it is "fast." We
know browser sniffing only creates future maintenance headaches and,
of course, there is no slower way to do anything in browser scripting
than to use jQuery (the design ensures that.) He can rewrite his
silly CSS selector queries until the end of the earth but it won't put
a dent in the overall inefficiency (he is looking at a total rewrite
from scratch to accomplish that.)
<snip>

Chrome and JQuery made an interesting point about User Agent string based
browser detection; Chrome works (more or less) with JQuery because its
authors pragmatically gave it a UA string that would result in most
current browser sniffing scripts identifying it as Safari, and treating
Chrome as Safari is probably the best thing to do if you are going to
script in that style at all. But this means that UA string based browser
sniffing was effective in this case not because it enabled the accurate
determination of the browser and its version, but instead was effective
precisely because it misidentified the browser and its version.

Of course. And the purveyors of jQuery, Prototype, etc. ran their
unit tests through it and rejoiced that their code still "worked."
Somehow they see the coincidental circumstances that led to this
latest "success" as validation of their baseless methods. People like
that should not be writing software. Period. They certainly should
not be degrading Web documents with their collective delusions.
I think you have missed the point. He observed a weird behaviour in IE,

He did.
applied an obviously faulty process to the analysis of that behaviour and
that then resulted in his coming to a series of utterly false
conclusions about what the observed behaviour was. Joking about
applications for that behaviour became irrelevant as, having
misidentified the behaviour in the first place, any proposed applications
of that misidentified behaviour would be worthless even if taken
seriously.

And you are likely to ask; his mistake was using an - alert - in the
test. Generally, alerts aren't much use in examining pointing device
interactions in web browsers because they tend to block script execution
(and so real-time event handling) and they can shift focus (keyboard
'enter' goes to the button on the box). Most people learn this lesson
quite early on as they learn browser scripting, as a consequence of
trying things out for themselves and trying to examine how they work.
The BS examples on the 'future proof javascript' page might suggest that
John Resig is not someone who goes in for trying things out for himself
that often.

Or paying the slightest attention to anything critical of his work.
His mantra seems to be "stop hating me!" Last time I pointed out an
obvious mistake to him, he responded with the age-old "argument" of
library authors: "where is your way-cool cross-browser library?" Now
that that "platform" has collapsed, he is predictably absent from all
such discussions.
Here is an alternative test page for the IE setTimeout quirk:-

[snip]


While you may conclude that the reaction to the self proclaimed
"JavaScript Ninja" on this group is biased and personal, in reality it

And who on earth would take technical advice from a self-described
"JavaScript Ninja?" I wonder if he has a JScript belt too?
is mostly a direct reaction to what he writes/does, and a reaction
informed by pertinent knowledge and experience in the field of browser
scripting.

I've had the displeasure of talking to him briefly (here) and came to
the conclusion that he is a few sandwiches short of a picnic. But it
is his code, books, blogs, etc. that irk me as he is spreading
outrageous misconceptions.
 
