I am preparing an introductory course on JavaScript for the place I work
at. As part of the course, I would like to present some of the common
knowledge and generally accepted advice of this group, since it is not
commonly known and it should guide them in the right direction.
(A small) part of this is on general purpose libraries. I've exported
the slides as a set of html documents which can be found here:
<http://higher-order.net/courses/05-js-libraries.html>
I would appreciate feedback from the group. Remember that this
is an introductory course, so it should be kept to basics and
generally accepted statements.
Slide img2.html
"The goal for JavaScript libraries is to present an API which
- is uniform and more high-level, yet efficient.
- works around known bugs
- supports a wide range of user agents."
The "supports a wide range of user agents" is demonstrably false in
almost all cases. Most JavaScript libraries aim to support a fixed set
of user agents, and quite a restricted set at that.
slide img3.html
"Most libraries provide
* ... , Keyboard normalization ..."
Mostly they don't. Keyboard normalisation (that is, across event
models and key event behaviour, hardware variations, OSs (which must
include non-desktop OSs) and language-related keyboard layouts) is a
complex subject that is rarely more than superficially addressed in
library code.
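To illustrate one small corner of that complexity, here is a minimal
sketch (the function name and event shapes are mine, not from the
slides) of reconciling just the character-code split between the
legacy event models; real layouts, hardware and non-desktop OSs add
many more cases than this:

```javascript
// A sketch of one corner of keyboard "normalization": extracting a
// character from a key event across event models. Modern DOM events
// carry `key` as a string; older models put a character code in
// `charCode` (DOM keypress) or `keyCode`/`which` (IE and early
// browsers). This is illustrative only - it ignores layouts,
// modifiers and non-character keys entirely.
function getKeyChar(e) {
  if (typeof e.key === 'string' && e.key.length === 1) {
    return e.key; // modern DOM: already a character
  }
  var code = e.charCode || e.which || e.keyCode || 0;
  return code ? String.fromCharCode(code) : '';
}
```

Even this toy version has to guess which of three properties is
meaningful, which is exactly why the subject is rarely handled well.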
Slide img5.html
"Be vary of augmenting HTML Elements" probably contains a typo.
Slide img7.html
"Least common denominator, e.g., event capture"
Shouldn't that be event bubbling, as it's capturing that IE doesn't do?
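The point can be made concrete with the usual listener-attaching
wrapper (the function name `addListener` and the mock-friendly shape
are mine): W3C's addEventListener exposes a capture-phase flag, while
IE's attachEvent only ever registers for the bubbling phase, so
bubbling is the common ground a library can rely on.

```javascript
// The "least common denominator" in event registration: old IE's
// attachEvent has no capture phase, so cross-browser code registers
// in the bubbling phase everywhere. `el` is any object exposing one
// of the two interfaces; the return value just reports which branch
// was taken.
function addListener(el, type, fn) {
  if (el.addEventListener) {
    // false = bubbling phase; capturing (true) is what IE lacks
    el.addEventListener(type, fn, false);
    return 'addEventListener';
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, fn);
    return 'attachEvent';
  }
  return 'none';
}
```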
Slide img8.html
"This is considered bad practice, a hack, by many.
- Unreliable
- Restricted accessibility
- Maintenance"
I don't think that "accessibility" has much direct relevance to
browser sniffing. It can be restricted as much without it as with it.
As this relates to UA string based browser sniffing, why doesn't the
list of reasons for considering it "bad practice, a hack" (which is a
considerable understatement in many cases) include the observation
that it has no technical foundations? That is, there are no technical
grounds for believing that it should be possible to determine browser
type or version from any User Agent header/string, given that the
pertinent standard (HTTP 1.1) does not even require that a user agent
send the same sequence of characters in its User Agent header for two
consecutive requests (let alone that the header should contain
anything specific, let alone anything specific to the browser or its
version).
The relevance of this is highlighted by 'arguments' in favour of
browser sniffing such as:-
<URL:http://blog.davglass.com/2009/01/browser-sniffing-vs-object-detection/>
| That would be like me changing the information on my license
| plates and then telling the officer: "You should have checked
| the VIN instead of the license plate, I didn’t like what it
| said so I changed it".
- where that analogy implies some sort of 'law' being broken. While
the applicable 'law' actually says the UA string can be anything
anyone wants, and doesn't even have to be the same thing from one
occasion to the next. The real automobile analogy for the UA header/
string is not with a licence plate, but rather with something like a
bumper sticker; if you don't like it you may change it at your own
discretion, and no "officer" has any reason to question you about it
(at least assuming it is not offensive, an incitement to violence/
crime, etc.).
The observations that there is no technical basis for the belief that
you can discriminate between web browsers/browser versions using the
User Agent header/string, that web browser default User Agent headers
have been observed as being indistinguishable from those of other
browsers, and that User Agent headers can, in some circumstances, be
modified by users (and third party software) should be enough to
convince the rational that "bad practice, a hack" is an extremely mild
label to attach to the folly of browser sniffing.
Slide img10.html
The - typeof el.childNodes // 'function' - example for Safari is
probably inappropriate in context as the childNodes collection can be
called in that environment and so is a function, making the behaviour
fully conforming with the ECMAScript behaviour for a native function.
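The conformance point can be demonstrated without Safari: ECMAScript
requires - typeof - to yield "function" for any native object that is
callable, which a stand-in for a callable collection shows (this
factory function is my own illustration, not Safari's actual
implementation):

```javascript
// A stand-in for a callable collection such as Safari's childNodes,
// where el.childNodes(0) returned the first node. Because the object
// implements [[Call]], typeof reporting 'function' for it is fully
// conforming ECMAScript behaviour.
function makeCallableCollection(items) {
  var collection = function (i) { return items[i]; }; // callable
  collection.item = function (i) { return items[i]; }; // DOM-style access
  return collection;
}
```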
Slide img15.html
The "Potential cons" list:-
- Does not mention that the libraries are rarely actually cross-
browser (but merely support a limited set of browsers, so are actually
little more than an elaboration of the "both browsers" scripts from
the end of the last century).
- Does not mention that any 'community' is no more than the sum of the
people who participate in it, and that if the users of a library are
doing so because they don't know enough to do anything else (or
better) then their potential to offer help may be severely
constrained. (For example, while it existed the jQuery 'community' on
Google Groups did not even answer between a quarter and a third of the
questions asked there, which was of zero 'help' to the people asking
those questions.)
- Does not mention that the quality of library documentation can be
very poor, especially when the authors of the documentation either do
not understand what their code actually does, and/or believe that
what it does is obvious.
(There is a general documentation dilemma where the people who
understand cannot easily think themselves into the position of those
that don't, and so cannot see all of what needs to be put across, and
the people who don't understand can see what they would need to be
told, but cannot tell it.)
- is it accurate (enough for an introductory course)?
It would be very difficult to tell without the actual text.
I didn't like any of the feature test examples. There didn't seem to
be a statement of the basic feature testing principle that wherever
possible you design a test that has the closest possible relationship
with the thing that you need to know; preferably a one to one
relationship. Rather than demonstrating this principle in action,
some of the tests were pushing object inference.
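The principle in action might look like this (a sketch of my own,
using String.prototype.trim as the subject): the test exercises the
exact method that will be used, and even verifies its behaviour,
instead of inferring its presence from some unrelated object.

```javascript
// One-to-one feature testing: to use String.prototype.trim, test
// trim itself and check that it does what we need, rather than
// inferring support from the presence of some other object.
var canTrim = (function () {
  var s = '  x  ';
  return typeof s.trim === 'function' && s.trim() === 'x';
})();

// Choose the implementation once, at load time, based on the test.
var trim = canTrim ?
  function (s) { return s.trim(); } :
  function (s) { return s.replace(/^\s+|\s+$/g, ''); };
```

The fallback branch keeps working in environments where the test
fails, which is the whole point of testing the feature directly.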
- is something important missing?
Additional questions on attribution:
- Did Cornford or Crockford invent the module pattern?
Invent (which, in principle, many people may do independently), invent
first, or publish first?
Douglas Crockford has never claimed to have invented the "module
pattern" (and has sufficient intellectual integrity that he never
will).
All attributions to him are indirect, third party, and not based on
any actual knowledge.
To the best of my knowledge, I published the first example of the
archetypal "module pattern" (the specific example from the YUI blog
article), having previously published numerous variations on the
theme, most of which would generally be agreed to be examples of the
'module pattern' in the wider sense (though many of them were things
that others have since re-invented and given other names to as
derivatives of the "module pattern").
It is possible that one of the other people developing/expanding on
previous examples of mine actually hit the archetypal "module pattern"
first. Finding out would probably take working through the entire
archive for the group between May and August 2003.
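For reference, the archetypal pattern under discussion is, as it is
generally presented (this particular rendering is mine): a function
expression executed immediately, whose closure holds private state,
returning an object of methods that can reach that state.

```javascript
// The archetypal "module pattern": an immediately executed function
// expression whose closure holds private state, returning an object
// whose ("privileged") methods can reach that state.
var counter = (function () {
  var count = 0; // private: reachable only through the closure

  return {
    increment: function () { return ++count; },
    current: function () { return count; }
  };
})();
```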
- Who created the initial "clone/object/beget/Object.create":
Cornford, Crockford or Reichstein Nielsen?
If you mean a pattern where an object is assigned to the - prototype -
property of an empty function and then that function is used to
construct a new object as a 'clone' of the original object, then
Reichstein Nielsen published the first example that I noticed.
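That pattern, as described above, reduces to a few lines (the name
`clone` here is mine; the idea is what ES5 later standardised as
Object.create, without the second argument):

```javascript
// Assign an object to the `prototype` property of an empty function,
// then construct with that function: the result delegates to the
// original object, i.e. it is a 'clone' in the prototypal sense.
function clone(obj) {
  function F() {} // empty constructor
  F.prototype = obj;
  return new F();
}
```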
Richard.