Recommendations for JavaScript drop-down menu code


David Mark

Brian said:
Brian Adkins said the following on 9/29/2007 12:51 PM:
[...]
I'm not familiar with the phrase "soup tag HTML", but it doesn't sound
good. Which browsers process XHTML as "soup tag HTML" ?
The correct term is "tag soup HTML", which is the way most (if not all)
parsers in browsers work. If they do not support an HTML feature or encounter
code that is not Valid, they perform error correction. That is behavior not
allowed for XML documents such as XHTML documents; they have to be well-formed
(they need not be Valid, although due to the definition of validating parsers
that is strongly recommended).

Interesting information. I do validate my pages, and I'm not planning
on sending as 'application/xhtml+xml' anytime soon, but it certainly

Then why are you writing XHTML?
may be that I assumed there were advantages to XHTML 1.0 Strict after
reading various information sources (i.e. jumped on a bandwagon). Some
Yep.

of the proposed benefits of XHTML I've heard are:

* future proofing web pages

Somewhat true, though many will argue that HTML5 is the real future.
But if you are going to serve your pages as HTML, you should write
them as HTML.
* allowing aural screen readers to more easily consume it

Hardly. Most screen readers rely on browsers to parse your tag soup,
so there is no benefit there. Moreover, I know of no aural browsers
with XML parsers.
* it's becoming the language of choice for mobile devices

Not really. I imagine you are thinking of XHTML Basic, but that has
little to do with serving XHTML 1.0 as HTML.
* existing data is more easily transformed into XHTML than HTML

Backwards. It is easier to retrieve data from XHTML.
* there's not going to be an HTML 5, the new standard is XHTML 1.0

XHTML 1.0 is an old standard and it doesn't look like HTML 5 is going
away. Regardless, the two are not mutually exclusive.
* <br> seems wrong compared to <br />

It is wrong for XHTML, but correct for HTML.
* etc.

For what it's worth, here are the doctypes of 49 sites that I checked
in mid July:

http://lojic.com/blog/2007/07/12/which-doctypes-are-being-used/

Summary is: none = 10, html = 20, xhtml = 19 (only 1 using XHTML 1.1)

That one person using XHTML 1.1 is nuts and I suspect the 19 using
XHTML are doing what you are doing (serving tag soup HTML.)
You've given me some things to think about. If anyone is aware of
advantages of XHTML 1.1 Strict over HTML 4.01 Strict, feel free to
chime in.

Zero advantages and lots of pitfalls. XHTML 1.1 is not appropriate
for public Web sites.
 

David Mark

Oops. The instantiation should look more like this:

var displayCheck;
el = doc.getElementById('myMenubar');
if (el) {
  el.style.visibility = 'visible';
  displayCheck = el.style && typeof(el.style.display) == 'string' &&
    typeof(el.style.position) == 'string';
  if (el.parentNode && displayCheck) {
    initializeMenu(el);
  }
}

el = doc.getElementById('myMenubarVertical');
if (el && el.style) { el.style.visibility = 'visible'; }
if (el && el.parentNode && displayCheck) {
  initializeMenu(el, null, 'right');
}

el = doc.getElementById('myPopupMenu');
elButton = doc.getElementById('testPopupMenu');
if (el && el.style) { el.style.visibility = 'visible'; }
if (elButton && elButton.style) { elButton.style.visibility = 'visible'; }
if (el && el.parentNode && displayCheck) {
  initializeMenuPopup(el);
  if (elButton) {
    elButton.disabled = false;
    attachPopupActivator(elButton, el);
  }
}

I tangled up the visibility/display/positioning detection. Also, the
two references to document.onclick should technically be doc.onclick.
 
D

David Mark

[snip]

One last thought. It would be nice to allow keyboard users to quickly
close the menus without tabbing back to the root (or the activator for
a popup.)

Add the first function and modify the other two.

function attachDocumentKeyHandler(list, root) {
  if (typeof(root) == 'undefined') { root = list; }
  var onkeypressOld = doc.onkeypress;
  var onkeypressNew = function(e) {
    e = e || global.event;
    var key = e.which || e.keyCode;
    if (key == 27 && activeMenu &&
        (isAncestor(activeMenu, list) || activeMenu == list)) {
      hideActiveMenu(null, root);
    }
  };
  doc.onkeypress = (onkeypressOld) ?
    function(e) { onkeypressOld(e); onkeypressNew(e); } :
    onkeypressNew;
}

function initializeMenu(list, className, initialSide) {
  list.className = className || 'menubar';
  list.style.position = 'relative';
  initializeChildMenus(list, list, initialSide);
  attachDocumentClickHandler(list);
  attachDocumentKeyHandler(list);
}

function initializeMenuPopup(list, className) {
  list.className = className || 'menuPopup';
  list._isRootless = true;
  initializeChildMenus(list, null);
  list.style.position = 'absolute';
  list.style.display = 'none';
  attachDocumentClickHandler(list, null);
  attachDocumentKeyHandler(list, null);
}
 

Peter Michaux

Brian Adkins said the following on 9/29/2007 6:44 PM:


Brian Adkins wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Brian Adkins wrote:
Brian Adkins said the following on 9/29/2007 12:51 PM:
I also forgot to list one requirement. I'm using XHTML 1.0 Strict, so
the menu would need to be compatible with that.
Why are you using something that isn't understood by 90% of the web and
would end up being processed as soup tag HTML and thus a true XHTML
script wouldn't work with it?
I wasn't aware that "90% of the web" doesn't understand it. Can you
provide documentation for that statistic?
He is referring to Microsoft Internet Explorer not using a built-in XML
parser for XHTML by default, and not supporting the proper media type for
XHTML, application/xhtml+xml, ...
I see. I'm not serving it as 'application/xhtml+xml',
But as what?
and haven't had any issues with IE so far. I guess the 'text/html' vs.
'application/xhtml+xml' accounts for the discrepancy between "90% of
the web doesn't understand it" and what I'm seeing.
It merely accounts for the possibility of your not understanding
what you are doing, see <[email protected]>.
You could be right. I should probably hear your perspective on the
discrepancy between someone's statement that XHTML 1.0 Strict won't
work on "90% of the web", and the fact that I haven't uncovered any
problems (except for losing the target attribute on <a>) on 5 browsers
running on 3 operating systems covering well over 90% market share.

If you don't serve it as XHTML then it isn't XHTML. And no DTD (or any
other element/tag/code in the page) will make the browser interpret it
as XHTML. Even your Firefox is interpreting it as HTML. Serve it with a
proper MIME type and see what IE does with it.

I'd think these quotations would be justification enough.

http://www.webdevout.net/articles/beware-of-xhtml#quotes

Peter
 

Brian Adkins

Brian Adkins said the following on 9/29/2007 6:44 PM:
On Sep 29, 5:51 pm, Thomas 'PointedEars' Lahn <[email protected]>
wrote:
Brian Adkins wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Brian Adkins wrote:
Brian Adkins said the following on 9/29/2007 12:51 PM:
I also forgot to list one requirement. I'm using XHTML 1.0 Strict, so
the menu would need to be compatible with that.
Why are you using something that isn't understood by 90% of the web and
would end up being processed as soup tag HTML and thus a true XHTML
script wouldn't work with it?
I wasn't aware that "90% of the web" doesn't understand it. Can you
provide documentation for that statistic?
He is referring to Microsoft Internet Explorer not using a built-in XML
parser for XHTML by default, and not supporting the proper media type for
XHTML, application/xhtml+xml, ...
I see. I'm not serving it as 'application/xhtml+xml',
But as what?
and haven't had any issues with IE so far. I guess the 'text/html' vs.
'application/xhtml+xml' accounts for the discrepancy between "90% of
the web doesn't understand it" and what I'm seeing.
It merely accounts for the possibility of your not understanding
what you are doing, see <[email protected]>.
You could be right. I should probably hear your perspective on the
discrepancy between someone's statement that XHTML 1.0 Strict won't
work on "90% of the web", and the fact that I haven't uncovered any
problems (except for losing the target attribute on <a>) on 5 browsers
running on 3 operating systems covering well over 90% market share.
If you don't serve it as XHTML then it isn't XHTML. And no DTD (or any
other element/tag/code in the page) will make the browser interpret it
as XHTML. Even your Firefox is interpreting it as HTML. Serve it with a
proper MIME type and see what IE does with it.

I'd think these quotations would be justification enough.

http://www.webdevout.net/articles/beware-of-xhtml#quotes

Peter

Yes, I find those quotes significant.

I see what Randy was trying to say in his initial post - the subtlety
was lost on me. I was ignorant of the fact that my carefully crafted
XHTML 1.0 Strict code was being handled as HTML. I think I figured
the 'text/html' content type was simply to placate IE and that the
XHTML capable browsers would obey the doctype. From my research today,
it's clear that this is an all too common misconception. Bad book
authors :)

This is certainly a heavily debated topic, but I've personally been
unable to find enough evidence to justify serving XHTML up as HTML, so
unless I turn up something significant in the next few days, I think
I'll switch to HTML 4.01 Strict and continue to code in an XHTML
style. At least I feel better informed now.

There certainly seems to be a strong trend in moving to XHTML with a
'text/html' content type, so it appears that either a lot of major
site operators are misinformed, or I've yet to get all the relevant
facts about this.
 

David Mark

On Sep 29, 7:37 pm, David Mark <[email protected]> wrote:
[snip]

Took a moment to try this in IE6 and it looked a lot like IE7 quirks
mode. That was surprising as there isn't usually a correlation
between IE6 standards and IE7 quirks. I had forgotten how
unpredictable lists were in IE.

In short, IE6 requires widths for the list items (which sucks.) I
gave them all the same widths and tweaked a few other things with
conditional comments. The result is a lot uglier, but at least there
are no parse-related hacks. The rendering is the same as before in
everything but IE6. Other than the fixed widths, IE6 looks the same
as everything else. Quirks mode is definitely out, but not
necessarily IE5.5 as there are no box model issues. I suspect Mac IE
will have some problems, but I couldn't care less at this point.

Also note that one of the menus in the original is too wide for 6em.
I didn't bother to add a specific rule to accommodate it, so that menu
will look weird in IE6 unless the captions are shortened.

<style type="text/css" media="all">
ul.menubar { list-style-type:none;padding:0;margin:0 }
ul.menubar a, ul.menuPopup a { padding: 0 .25em 0 .25em }
ul.menubar li { margin:0;padding:0;display:inline }

ul.menubar li a:link, ul.menubar li a:visited { text-decoration:none }
ul.menubar li a:hover { background-color:#0000DD;color:white }
ul.menuPopup { list-style-type:none;padding:0;margin:0;border:outset 2px;
  background-color:threedface;color:windowtext;z-index:1 }
ul.menuPopup li { white-space:nowrap;display:block;width:auto }
ul.menuPopup li a { width:auto }
ul.menuPopup li a:link, ul.menuPopup li a:visited { text-decoration:none }
ul.menuPopup li a:hover { background-color:#0000DD;color:white }
ul.menuPopup li a.popup:after { content: "\0020\00BB"; }

#myMenubarVertical.menubar { width:6em;margin-bottom:1em }
#myMenubarVertical.menubar li { display:block;width:auto }
#myMenubarVertical.menubar li a { width:auto }

#testPopupMenu { display:block; margin-top:1em }

body { font-family:sans-serif;background-color:threedface }
</style>
<!--[if !IE]>-->
<style type="text/css" media="all">
#myMenubarVertical.menubar li a, ul.menuPopup li a { display:block }
</style>
<!--<![endif]-->
<!--[if IE]>
<style type="text/css" media="all">
/* Adds layout to fix IE relative positioning bug */
ul.menubar { display:inline-block }
</style>
<![endif]-->
<!--[if gt IE 6]>
<style type="text/css" media="all">
#myMenubarVertical.menubar li a, ul.menuPopup li a { display:block }
</style>
<![endif]-->
<!--[if lt IE 7]>
<style type="text/css" media="all">
ul.menubar li, ul.menubar li a, ul.menuPopup li, ul.menuPopup li a { height:1% }
ul.menuPopup li, ul.menuPopup li a, #myMenubarVertical.menubar li,
#myMenubarVertical.menubar li a { width:6em;zoom:1 }
</style>
<![endif]-->
 

Richard Cornford

Brian said:
Yes, I find those quotes significant.

I see what Randy was trying to say in his initial post - the
subtlety was lost on me. I was ignorant of the fact that my
carefully crafted XHTML 1.0 Strict code was being handled
as HTML. I think I figured the 'text/html' content type was
simply to placate IE and that the XHTML capable browsers would
obey the doctype.

Which means that you were also not aware that there is a distinction
between HTML DOMs and XHTML DOMs. If you are scripting a DOM absolutely
the last thing you would want is to be scripting one type of DOM in one
browser and another type in the next.
From my research today, it's clear that this is an all too
common misconception.

All too common, and such that sufferers from the misconception are
extremely resistant to being corrected.
Bad book authors :)

Yes, but not only. Bad web page authors (directly, and those writing
pages on writing web pages) are probably more guilty (by weight of
numbers), along with the participants in small circulation web forums,
mailing lists and other web development 'communities'.
This is certainly a heavily debated topic, but I've personally
been unable to find enough evidence to justify serving XHTML
up as HTML, so unless I turn up something significant in the
next few days, I think I'll switch to HTML 4.01 Strict and
continue to code in an XHTML style. At least I feel better
informed now.

While you are judging the 'significance' of what you find consider the
consequence of scripting such a document. Since almost no non-trivial
scripts will operate successfully with both an HTML DOM and an XHTML
DOM, and a document served as text/html will result in the browser
exposing an HTML DOM to be scripted, will it ever make sense to be
marking up a document as XHTML and then scripting it with the
pre-requisite that it will _never_ be interpreted as an XHTML document
by a web browser?
There certainly seems to be a strong trend in moving to XHTML
with a 'text/html' content type,

There certainly is a strong trend towards seeing increasing quantities
of XHTML-style mark-up in documents (whether they be XHTML or not). It
is most obvious that fundamental misconceptions are driving this trend
when you observe the number of documents where <br> appears alongside <br />.
so it appears that either a lot of major site operators are
misinformed, or I've yet to get all the relevant facts about
this.

The general standard of technical understanding, even on 'major sites',
is so low that 'misinformed' is probably pushing things too far. To be
misinformed you need to be (in some sense) 'informed' in the first
place. Plain old endemic ignorance is a much better explanation; these
people just don't know why they are doing what they are doing.

Much of the time when we get into this XHTML/HTML discussion here it
quickly becomes obvious that the individuals being asked why they are
using XHTML-style mark-up while scripting an HTML DOM not only don't know
that there is a distinction between the types of DOM, but don't actually
know what an HTTP content-type header is. There is no informed decision
making in what they do, just the random outcome of the aggregation of an
extended sequence of 'learnt' mystical incantations.

The depth, and pervasive nature of, that endemic ignorance is best
illustrated by the current set of 'popular' javascript libraries. Where
people who don't know any better are importing the ignorance of others
into their own projects.

On Monday morning I have been asked to analyse why an 'AJAX' web
application written by one of our subsidiary companies runs so badly on
IE6 as to be non-viable. I have not seen the code yet and the only
things I know about it are that it was written by experienced Java
programmers (so I am expecting them to have made all the mistakes I made
when moving from Java to javascript) and that they have used to
'popular' dojo library. In preparation I thought it would be a good idea
to have a look at the dojo library code, so I spent a few hours
yesterday doing that, to discover (as I expected I would) that its
authors were not particularly knowledgeable about javascript or browser
scripting. To illustrate:-

From a file called "dojo.js" (re-wrapped for newsgroup posting):-

| document.write("<scr"+"ipt type='text/javascript' src='"+
| spath+
| "'></scr"+"ipt>"
| );

In the mark-up string being presented to the - document.write - method
you will see two string concatenation operations being used to
'split-up' the SCRIPT opening and closing tags. This is a mystical
incantation that ignorant script authors chant in the face of a real
issue. The real issue is that when an HTML parser encounters an opening
SCRIPT tag it must determine how much (if any) of the following material
is script code that should be sent to a script engine for processing and
when it should resume processing the input stream as HTML mark-up. The
obvious signal for the end of contents of a SCRIPT element would be the
closing SCRIPT tag. However, the HTML parser has no means of seeing the
characters in the input stream as anything other than characters so if a
javascript string contains the character sequence '</script>' the HTML
parser is going to see that as the character sequence it is interested
in; the end of the contents of the SCRIPT element. Resulting in an
unterminated string literal error in the javascript source and gibberish
text content in the HTML document.

There is also a formal issue that differs slightly from the real issue.
The HTML specification clearly states that CDATA contents of an element
(SCRIPT element contents are CDATA in HTML) may be terminated by the
first occurrence of the character sequence '</'. In practice no browsers
are known to follow this aspect of the HTML specification to the letter,
but a knowledgeable HTML author would have no excuse for complaint if a
future browser did terminate CDATA sections at the point of first
encountering the character sequence '</'.

So the things that make the above code a mystical incantation are:-

1. The string concatenation operation in the opening SCRIPT
tag is a needless runtime overhead to do something that
has no relationship to either the real issue or the formal
issue. It just should never have appeared in any code.
2. The string concatenation operation in the closing SCRIPT
element may deal with the real issue but it does not address
the formal issue. While any approach that did address the
formal issue by breaking up the '</' sequence would also
break up the '</script>' sequence.
3. A concatenation operation is a poor approach to this issue
as the HTML parser is only able to see the raw source text
characters. Breaking up the problematic character sequences
with escape (backslash) characters would be just as effective
at concealing them from the HTML parser but would do so in a
way that had no consequences beyond the point where the
string literal was converted into a string primitive value
(during the compiling of the script into an executable). That
is, there is no need for the runtime overhead of two (or
four in this case) string concatenation operations. The
recommended approach is to turn the sequence '</script>' in
string literals into the sequence '<\/script>' and so address
the real and formal issues without any runtime overhead (a sketch
follows this list).
4. The code is actually in an external javascript resource and
so will never be presented to an HTML parser for examination.
Neither the real nor the formal issues apply to this code at
all.
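
To make point 3 concrete, here is a minimal sketch of the escaped form
described there, reusing the - spath - variable from the dojo snippet
quoted above:

// The source text never presents an HTML parser with a '</' sequence,
// yet the compiled string still reads '</script>', and no runtime
// concatenation is needed for the escaping itself.
document.write("<script type='text/javascript' src='" + spath +
    "'><\/script>");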

Another illustration is to be found in dojo's 'dom.js':-

| if(
| elem == null ||
| ((elem == undefined)&&(typeof elem == "undefined"))
| ){
| dojo.raise("No element given to dojo.dom.setAttributeNS");
| }

Using the type-converting equality operator (- == -) there are precisely
two values that are equal to null. They are null (unsurprisingly) and
the undefined value. In the above tests whenever - elem - is null or
undefined the - elem == null - expression is true and the - ((elem ==
undefined)&&(typeof elem == "undefined")) - expression is not evaluated.
So whenever the - ((elem == undefined)&&(typeof elem == "undefined")) -
expression is evaluated the value of - elem - must be neither null nor
undefined. But if - elem - must be neither null nor undefined then -
(elem == undefined) - must always be false (as only undefined and null
equal (by type-converting equality) undefined), and as - (elem ==
undefined) - must be false the - (typeof elem == "undefined") - can
*never* be evaluated.

We are looking at code where the author has written the test
expression - elem == null - without understanding the operation being
performed and made that ignorance self-evident by following it with a
test that can only have one outcome in its context, and a third test
that can never be evaluated (though if it were evaluated the result
would be as predictably false as the - (elem == undefined) -
expression).

The annoying part of this nonsense is that in its normal use, when -
elem - will be a reference to a DOM Element, that - (elem ==
undefined) - is going to be evaluated, and it is going to produce its
predictably false result. Just another avoidable runtime overhead,
included for no reason other than ignorance.
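
In other words, everything that guard is trying to do collapses into its
first comparison. A minimal sketch:

// (elem == null) is true exactly when elem is null or undefined, so the
// extra tests quoted above add nothing but runtime cost.
if (elem == null) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}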

It is unlikely that dojo is the work of a single individual, but we can
be certain that of everyone involved the individual with the greatest
knowledge of the subject does not know javascript well enough to
understand how the code written is going to behave (or enough to
distinguish between chanting mystical incantations and browser
scripting). However, you find that the authors of these 'popular'
libraries acquire a strange status in the eyes of (presumably even more
ignorant) others, get invited to speak at conferences, feel themselves
qualified to instruct others on how they should be writing javascript,
and so on.

So yes we do live in a world where the operators of 'major sites' are
misinformed, and likely to stay that way because the odds are that the
next person to 'inform' them will likely be as misinformed themselves.

Richard.
 

Peter Michaux

On Sep 30, 11:31 am, "Richard Cornford" <[email protected]>
wrote:

[snip examples of less than optimal code in dojo]
So yes we do live in a world where the operators of 'major sites' are
misinformed, and likely to stay that way because the odds are that the
next person to 'inform' them will likely be as misinformed themselves.

Would you agree that most niches in the programming world are filled
with people that are misinformed and making decisions that are randomly
good or bad? I would say mediocrity and satisfaction with just enough
information to avoid being fired extends to some fraction of
individuals in every intellectual field with which I have had contact.

N.B. I'm not claiming I'm anything other than average. I have no
justification to do so.

Peter
 

Richard Cornford

Peter said:
On Sep 30, 11:31 am, Richard Cornford wrote:

[snip examples of less than optimal code in dojo]

"Less then optimal"? That code was well into the range of b***dy stupid.
Would you agree that most niches in the programming world are
filled with people that are misinformed and making decisions
that are randomly good or bad?

Not in my experience.
I would say mediocrity and satisfaction with just enough
information to avoid being fired extends to some fraction of
individuals in every intellectual field with which I have
had contact.
<snip>

There is a big difference between "most" and "some".

But what is your point? Does an inept programmer become competent when
the person sitting at the next desk is worse?

Richard.
 

Brian Adkins

Which means that you were also not aware that there is a distinction
between HTML DOMs and XHTML DOMs.
Correct.


While you are judging the 'significance' of what you find consider the
consequence of scripting such a document.

Just to clarify. What I meant by 'coding in an XHTML style' is things
like using lowercase attribute names with quotations, using closing
tags even if they're optional, etc. such that the markup is valid HTML
resulting in an HTML DOM.
The depth, and pervasive nature of, that endemic ignorance is best
illustrated by the current set of 'popular' javascript libraries. Where
people who don't know any better are importing the ignorance of others
into their own projects.

Could it be that people are simply doing their best to try and find a
library that is the lesser of evils to avoid the disadvantages of
writing everything themselves? In my short time on c.l.j, I have seen
many criticisms of JavaScript libraries but few recommendations. It
could be that they simply got lost in the noise.

Are there any JavaScript libraries that you can recommend over
reinventing wheels? I checked the FAQ and didn't see anything.

I'm also curious if the folks criticizing the 'popular' JavaScript
libraries (or their authors) have attempted to improve the code -
either by direct contribution or via educating the authors. Or are
they beyond hope?
 

Peter Michaux

Just to clarify. What I meant by 'coding in an XHTML style' is things
like using lowercase attribute names with quotations, using closing
tags even if they're optional, etc. such that the markup is valid HTML
resulting in an HTML DOM.

Lower case attribute names and quoted values are fine ideas. I
certainly find it easiest to read HTML written this way.

Could it be that people are simply doing their best to try and find a
library that is the lesser of evils to avoid the disadvantages of
writing everything themselves?

Many developers, particularly server-side developers, don't like UI
work and really hate dealing with browser bugs. They are happy to have
someone else do the work and if it works in the majority of their
target browser market that is good enough. These days the "good
enough" market is usually IE6/IE6/FF2/O9/S2 with JavaScript, ActiveX,
CSS and images all enabled.

I think many times on comp.lang.javascript the business goals of a
project are forgotten in favor of technical perfection. I wouldn't
want c.l.j to be any other way; focusing on business goals and
"good enough" has led to such a heap of released libraries that really
aren't ready for production.
In my short time on c.l.j, I have seen
many criticisms of JavaScript libraries but few recommendations. It
could be that they simply got lost in the noise.

Quite a few c.l.j regulars think libraries are a bad idea in general.
They do admit to reusing code of their own, so in some sense they do
use libraries.

A big issue with libraries is what is considered bloat. For a
particular page, how much library code sent to the browser is never
used? If this code is cached, is it used on some other page of the
site? It is a balance of library granularity, page load times and
caching.
Are there any JavaScript libraries that you can recommend over
reinventing wheels? I checked the FAQ and didn't see anything.

I've never had the opportunity to use a pre-made high-level library
widget component (eg data table, in-window popup, accordion, tabbed
pane, etc). I always need to write custom widgets. These have features
no general purpose library widget would have and are usually something
like a tenth the size of some roughly similar library widget. What I
do find useful are the low-level libraries: Event, Ajax(XHR), DOM
searching/creating. Having most of the low-level browser bugs
normalized with these libraries makes writing custom widgets quite
easy and fast.

I'm also curious if the folks criticizing the 'popular' JavaScript
libraries (or their authors) have attempted to improve the code -
either by direct contribution or via educating the authors. Or are
they beyond hope?

On various lists for the popular libraries and in posts on c.l.j, I've
seen c.l.j regulars offer very good advice. Take the Prototype
developers as an example. They have had advice hurled at them in
various forms for years. The usual response is something along the
lines of "that just isn't cool"; however, a few months or a year down
the road they realize the wisdom of the advice and do make the change.
The Prototype library is big (~3400 lines) but it is small enough that
someone who is both familiar with the library and a knowledgeable JavaScript
programmer could sit down and rewrite it in under a month and remove a
slew of bugs and poor design decisions. This would require a big API
change which "just isn't cool"...yet.

I think the majority of c.l.j regulars prefer to roll their own. Then
if something needs changing they can just change it without going
through some long political process to have a patch accepted.

Peter
 

Brian Adkins

If you observe more closely, you will see that XHTML cannot be made fully
HTML-compatible this way, at least because IE renders <br></br> as *two*
lines. And XHTML 1.0, Appendix C (which is not normative BTW), fails to
recognize that in a non-tagsoup HTML parser `<br />' equals `<br>>'.

My goal is *not* to make XHTML fully HTML-compatible. I stated "such
that the markup is valid HTML" and referred to "optional" closing
tags, not "forbidden" ones, yet you show <br> with an end tag which is
forbidden (as it is for <img>, <meta>, etc.) according to the spec
here:

http://www.w3.org/TR/html401/index/elements.html

On the other hand, the closing tag for <p> is optional (as is <body>,
<li>, etc.), but in keeping with an XHTML "style" I would choose to
include the closing tag when optional.

I appreciate your enthusiasm, but maybe you could channel this extra
energy into recommending a JavaScript library "that doesn't suck" :)
 

Peter Michaux

recommending a JavaScript library "that doesn't suck" :)

You realize how dangerous it is for someone to make such a
recommendation on c.l.j? If the library isn't perfect then it "sucks".

I'll stick my neck out and say I think mine doesn't suck. I need to
make a few changes which are mostly stylistic.

<URL: http://forkjavascript.org/>

I've never seen another library tested so widely.

<URL: http://forkjavascript.org/welcome/browser_support>

Peter
 

Thomas 'PointedEars' Lahn

Brian said:
My goal is *not* to make XHTML fully HTML-compatible.

But that has to be your goal or you have neither Valid XHTML nor Valid HTML
markup that can be used to create a document tree in the DOM.
I stated "such that the markup is valid HTML" and referred to "optional"
closing tags, not "forbidden" ones, yet you show <br> with an end tag which
is forbidden (as it is for <img>, <meta>, etc.) according to the spec
here:

http://www.w3.org/TR/html401/index/elements.html

Utter nonsense. As you can observe in the specification, all HTML elements
with an empty content model, including the `br', `img' and `meta' elements,
have an *optional* end tag. That goes for HTML 3.2, HTML 4.01 Transitional,
Frameset, and Strict, and (so) even ISO HTML. It is that property of HTML
that can make XHTML 1.0 HTML-compatible generally, if it were not for the
faulty Trident.
On the other hand, the closing tag for <p> is optional

As is ` said:
[...]
I appreciate your enthusiasm, but maybe you could channel this extra
energy into recommending a JavaScript library "that doesn't suck" :)

Since I never had the need for a library for the feature you are looking for
(as the posted links should have proven already), I can not recommend one.
In fact, any script library that would be required for that (in contrast to
a little behavior-.htc) can safely be recommended against on the Web as it
will not degrade gracefully and so it will not be interoperable and the
outcome will not conform to accessibility guidelines.


PointedEars
 

Thomas 'PointedEars' Lahn

Thomas said:
Brian said:
I stated "such that the markup is valid HTML" and referred to "optional"
closing tags, not "forbidden" ones, yet you show <br> with an end tag which
is forbidden (as it is for <img>, <meta>, etc.) according to the spec
here:

http://www.w3.org/TR/html401/index/elements.html

Utter nonsense. As you can observe in the specification, all HTML elements
with an empty content model, including the `br', `img' and `meta' elements,
have an *optional* end tag. [...]

I can see now how and why you got the impression that the end tags would be
forbidden. You will have to ignore that column of this non-normative index
where there is an "F" for "Forbidden" (as you will have to ignore many
non-normative examples in W3C specifications that propose bad practice).
There is exactly nothing in the DTD that forbids using an/the optional end
tag for those elements (or any other element), nor is there anything in the
corresponding normative sections of the specification that says so. For
example, the declaration for the HTML `br' element in HTML 4.01 Strict is as
follows:

| <!ELEMENT BR - O EMPTY -- forced line break -->
| <!ATTLIST BR
| %coreattrs; -- id, class, style, title --
| >

You will observe the `-' after the element type identifier `BR' that means
the start tag of the element is _not_ optional. You will also observe the
following `O' that means the end tag of the element *is indeed* optional
(and not at all forbidden), and the `EMPTY' which says that the content
model of the element is empty, i.e. it must not have any content (e.g.
`<br>foo</br>' is indeed forbidden).


HTH

PointedEars
 

John G Harris

Thomas said:
Brian said:
I stated "such that the markup is valid HTML" and referred to "optional"
closing tags, not "forbidden" ones, yet you show <br> with an end tag which
is forbidden (as it is for <img>, <meta>, etc.) according to the spec
here:

http://www.w3.org/TR/html401/index/elements.html

Utter nonsense. As you can observe in the specification, all HTML elements
with an empty content model, including the `br', `img' and `meta' elements,
have an *optional* end tag. [...]

I can see now how and why you got the impression that the end tags would be
forbidden. You will have to ignore that column of this non-normative index
where there is an "F" for "Forbidden" (as you will have to ignore many
non-normative examples in W3C specifications that propose bad practice).
There is exactly nothing in the DTD that forbids using an/the optional end
tag for those elements (or any other element), nor is there anything in the
corresponding normative sections of the specification that says so. For
example, the declaration for the HTML `br' element in HTML 4.01 Strict is as
follows:

| <!ELEMENT BR - O EMPTY -- forced line break -->
| <!ATTLIST BR
| %coreattrs; -- id, class, style, title --
| >

You will observe the `-' after the element type identifier `BR' that means
the start tag of the element is _not_ optional. You will also observe the
following `O' that means the end tag of the element *is indeed* optional
(and not at all forbidden), and the `EMPTY' which says that the content
model of the element is empty, i.e. it must not have any content (e.g.
`<br>foo</br>' is indeed forbidden).

I'm afraid you haven't consulted section 3.3.3 of the HTML 4.01
standard. This says that:

- `-' means the end tag is compulsory;
- `O' means the end tag is optional;
- `O EMPTY' means the end tag is forbidden.

(Note the DTD language uses two bits to index three states).

One possibility is that section 3.3.3 is normative (i.e you are required
to obey it). On the other hand it might be non-normative (i.e it's a
serving suggestion), but then it's an accurate translation of a
normative part of the SGML standard. Either way, you'd better believe
it.

John
 

Bart Van der Donck

Thomas said:
Whether or not it [MSIE] still has a 90% market share or not (142%
of all Web statistics are flawed) is irrelevant

As with all statistics, they are not irrelevant on condition that the
measuring methods are acceptable.

Statistically every person has one breast.
 

Richard Cornford

Just to clarify. What I meant by 'coding in an XHTML style' is
things like using lowercase attribute names with quotations,
using closing tags even if they're optional, etc. such that
the markup is valid HTML resulting in an HTML DOM.

Self imposed discipline. That is usually a good idea in the absence of
externally imposed discipline (and perhaps regardless of it).
Could it be that people are simply doing their best to try and
find a library that is the lesser of evils to avoid the
disadvantages of writing everything themselves?

What are the "disadvantages" of writing everything yourself? Whatever
they may be on the plus side if you write something yourself you will
(or should) understand it, and there is a great deal to be said for
understanding the scripts that you use.

There is a commonly asserted pre-supposition that the only alternatives
available are facilitating code re-use with the creation of large
general purpose libraries or re-writing everything from scratch each
time you do anything new. That polarized perception should be
self-evidently nonsense to anyone who has copied an existing function
from one piece of existing code to a new one, which will be pretty much
everyone who has got past pure copy-n-paste scripting.

Most of the bias in favour of large scale libraries comes with people
approaching javascript from other programming languages where having
large reservoirs of pre-created code always available to the programmer
makes perfect sense. Indeed so much sense that it becomes difficult to
see how that may not be true for all programming tasks. Which even
extends to the point where, when asked for justifications for using
large general purpose libraries some will not even consider answering
the question as a worthwhile exercise, even though articulating the
justifications would help to make it clear why the normal practice in
other programming environments does not necessarily extend well to
javascript.

There is not much thought given to the issues that follow from
broadcasting all the source code to the user and compiling it each time
it is executed. It should be fairly obvious that if it was necessary
to transmit over the internet, and then compile, all of the source code
for all the standard Java libraries, plus anything application
specific, each time you wanted to execute any Java program then that
would make Java non-viable. But still that is the inevitable end point
of creating ever more capable (and so ever larger) general purpose
javascript libraries.

There also appears to be a tendency with the authors of such libraries
to react to criticism of the download size by seeking out code
compression strategies. This is something Dojo is attempting, and where
Dojo illustrates the folly of the exercise. In the 0.9.0 version the
file for distribution is 'dojo.js', which is 'compressed' (and actively
decompressed after it loads). The same code is available in full and
commented as 'dojo.js.uncompressed.js'. If you consider that HTTP 1.1
UAs tend to support compressed HTTP transmission it is significant to
consider how javascript source files will compress when considering
download size. When I zip compress 'dojo.js' the result is 25,903 bytes,
while if I remove the comments from 'dojo.js.uncompressed.js' and
compress it the result comes out at 25,862 (fractionally smaller). That
means that the 'compression' technique used in Dojo actually hinders zip
compression and so potentially increases download size, while its need
to de-compress on the client (with javascript) means that the total time
before the result is available to the user is increased by the process.

In truth code re-use is facilitated by any rendering of the specific
more general, from actions as simple as replacing inline code with
parameterised function calls. Given a huge spectrum of possible code
re-use strategies, with the large scale, highly capable, interdependent,
monolithic, general purpose javascript library being no more than a
point at one end of the spectrum, it is probably unwise to fixate on
that one strategy as being the only sensible option without being able
to articulate some pretty robust justifications for that position.
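
As a trivial illustration of that first step (the element id and the
function name here are purely illustrative):

// Inline, page-specific code, repeated wherever it is needed:
//   document.getElementById('myMenubar').style.visibility = 'visible';

// The same action rendered more general as a parameterised function:
function setVisibility(el, visible) {
    if (el && el.style) {
        el.style.visibility = visible ? 'visible' : 'hidden';
    }
}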

One of the issues faced by the author of a general purpose library is
the need to be truly general. This is well illustrated with one of the
much discussed browser scripting problems; the acquisition of accurate
position and dimension information relating to displayed DOM elements. A
general algorithm would take an element as input and determine the page
relative coordinates of its upper left corner and its width and height,
in some sense, as this is a description of some sort of 'containing
box', which does not necessarily have to be any specific box (in the
sense that CSS talks of boxes) but must be the same box for all
elements, and presumably a useful box to know about.

In the simplest case a DOM element will have offsetTop/Left/Width/Height
and an offsetParent, and its position is the sum of all the
offsetTop/Lefts for all its offsetParents and its width/height is just
its offsetWidth/Height. But that is for CSS 'block' elements (as opposed
to inline, list-item, run-in, compact, marker, table, inline-table,
table-row-group, table-header-group, table-footer-group, table-row,
table-column-group, table-column, table-cell and table-caption) with no
borders or padding on the element and any of its offset parents, where
none of the offset parents have scrolling overflow, on browsers that
provide those dimension properties, and quite a bit else besides.
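
A minimal sketch of that simplest case, and only that case (it assumes
block elements with no borders, padding or scrolling overflow anywhere
in the offsetParent chain):

// Walk the offsetParent chain, summing offsets; width and height are
// taken directly from the element itself.
function getSimpleElementBox(el) {
    var left = 0, top = 0;
    var width = el.offsetWidth, height = el.offsetHeight;
    while (el) {
        left += el.offsetLeft;
        top += el.offsetTop;
        el = el.offsetParent;
    }
    return { left: left, top: top, width: width, height: height };
}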

The general algorithm has never been worked out, though it is a
possibility and there are at least a few individuals on the planet that
could work it out and implement it. The reason that none of them have is
that they know that the result would be big (2000 plus statements),
complex, and far too slow to be of any practical use.

This leaves the general purpose library with a problem. It should have
element position and location reporting facilities, but if they are to
be truly general they will inevitably be non-viable because of their
performance and seriously contribute to the library's download bulk.

The best the general purpose library can do is provide a facility that is
'good enough' for some set of common cases; a compromise. Which then
means that it will be insufficient for less common cases (leaving anyone
using the library with no choice but to add their own code for those
tasks) and at the same time the code is over the top for the simplest
cases, risking sub-optimal performance for no good reason.

A less browser related example might be a 'safe' hash table
implementation. A very capable implementation may reproduce, say, all of
the Java HashTable class in javascript, with all of its methods and the
ability to have multiple live Iterators/Enumerators, while the simplest
may just facilitate the storing and retrieval of values using arbitrary
string keys. If a general purpose library is going to include such a
thing then the odds are it will tend toward the more capable end of the
range of possibilities, while the individual using it may only need the
minimum (making the runtime overheads of supporting live Enumerators
actively undesirable).
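
A sketch of the simplest end of that range (the key prefix is only there
so that keys such as 'toString' cannot collide with Object.prototype
properties):

// Minimal 'safe' storage and retrieval by arbitrary string key; no
// iterators or live enumerators, just put/get/contains.
function SimpleHash() {
    this.entries = {};
}
SimpleHash.prototype.put = function(key, value) {
    this.entries['#' + key] = value;
};
SimpleHash.prototype.get = function(key) {
    return this.entries['#' + key];
};
SimpleHash.prototype.contains = function(key) {
    return ('#' + key) in this.entries;
};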

A third example of how the difference between the general and the
specific impacts on the general purpose library is the question of
framesets, and where any particular code is to be located in any
possible frame structure. You will often code testing - constructor -
properties against built-in constructor functions, or using -
instanceof - with the same subjects. That is all fine if you are working
with a single global object, but as soon as anyone is attempting to pass
objects about in a frameset structure such tests are invalid. There is
also the question of creating new DOM elements, where using the -
createElement - method of the wrong - document - object will be
disastrous in at least some common browsers (including IE). So your
general purpose library has two choices; either assume a single global
object, and be insufficient for contexts where framesets are employed,
or do all the extra work to keep track of multiple frame contexts and so
be over the top whenever it is used in a single page site.
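
As a small sketch of the second choice, code that keeps track of which
document it is working against (the function name is illustrative only):

// Create and append an element using the document that actually owns
// the parent node, rather than assuming the single global - document -.
function appendChildElement(parent, tagName) {
    var ownerDoc = parent.ownerDocument || parent.document;
    var el = ownerDoc.createElement(tagName);
    parent.appendChild(el);
    return el;
}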

One of the characteristics of browser scripting is that it has become a
very diverse activity with many contexts of use; Intranet
sites/applications, web applications, e-commerce, promotional web sites,
public information services, entertainment, and so on. Some design
criteria for any one context do not necessarily even come into the
picture in some other contexts. And the starting point for design
decision making should be the purpose of the system being planned,
without any arbitrary a priori restrictions. And this is itself an issue
with general purpose libraries. Dojo, for example, only works (more or
less) with a few versions of half a dozen browsers (and will really fall
apart if exposed to anything else). That is too few for a public
information service in any jurisdiction that requires such services to
be accessible by law (as the fact that it will fall apart when it fails
will deny the possibility of clean degradation) but it may also be far
too many for a private web application (which may suffer from all the
branching inside the code in order to accommodate browsers that are just
not relevant in the application's context).

One of the arguments suggested in favour of general purpose libraries
(and also used to criticise them) is that learning the library avoids
the need to learn the details of handling web browsers directly. Once
you realise that the compromises that the general purpose libraries must
make (to be as capable as is realistic (for their authors) but no more)
mean that any single general purpose library cannot sensibly be used in
all application contexts you see the need is not to spend time learning
a single general purpose library but instead potentially a whole range
of such libraries, and the bigger and more capable any single example is
the more work is involved in learning to use it. And it can
make a lot more sense to spend time learning to script web browsers
directly than to learn the APIs for a series of libraries that may still
not be suitable for all the applications that may come up. (This is
particularly true when standardisation of browser object models means
that 80-odd% of what could be learnt would then be applicable to most
scriptable browsers).


So what is wanted is a code re-use strategy (as we will all agree that
writing everything from scratch for each project is insane) that
maximises the proportion of code being re-used, produces an easily
maintainable and reliable end result and is sufficiently flexible to
produce appropriate code for any given application context without
pre-imposing arbitrary restrictions on the design process or being over
the top in the less complex contexts.

Inevitably there is some disagreement as to how best to achieve this
outcome, but it is fairly obvious that larger-scale general purpose
libraries will not satisfy those considerations (with their overriding
emphasis on code re-use at the cost of seemingly all else).

My preferred strategy is to build code from a large collection of
relatively small interchangeable modules designed around interface
definitions, where any single interface may have numerous
implementations. The resulting architectures start out with a lowest
level that is a layer of modules that abstract out the differences
between browsers by handling them internally. Above that are more layers
of modules that depend upon the interfaces provided by the previous
layer and expose their own interfaces for more complex and task specific
actions, and above that some number of similar layers ending in the
application specific control logic code that must be unique to each
specific action.

The lowest layer includes only items from the collection of interface
implementations that are employed in the context, sufficient for the
context and no more, and usually very well tested. Given a particular
task, say the reporting of view port dimensions and scroll values, a
single interface is used, but any number of objects may exist to
implement that interface. So while a cross-browser version may attempt
to provide that information wherever it is available, in a context where
only a limited set of known browsers are to be used a much simpler
version exists to be used in its place. Meanwhile, any code that employs the
interface does not need to care about any more than getting a reference
to an object that implements the interface, and so does not care about
the specifics of how that is done in the context.
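
A sketch of that idea using the viewport example (the function names and
the specific property checks are assumptions for illustration, not code
from any particular collection):

// One interface: a function returning { width, height } for the view port.

// Implementation sufficient where only standards-mode documents in known
// browsers need to be handled:
function getViewportSizeSimple() {
    var el = document.documentElement;
    return { width: el.clientWidth, height: el.clientHeight };
}

// A more general implementation for cross-browser contexts:
function getViewportSizeGeneral() {
    var el = (document.compatMode == 'CSS1Compat') ?
        document.documentElement : document.body;
    if (el && typeof el.clientWidth == 'number') {
        return { width: el.clientWidth, height: el.clientHeight };
    }
    return { width: window.innerWidth || 0, height: window.innerHeight || 0 };
}

// The rest of the code only ever asks for 'the object implementing the
// view port interface' and does not care which version was chosen.
var getViewportSize = getViewportSizeGeneral;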

This strategy allows issues like the unreasonable complexity of the
truly general element position reporting algorithm to be avoided. In any
real context it is possible to know enough about which positioning
information is required and why it is required to sidestep most of the
complexity of the general problem. If no elements of relevance are to
have scrollable contents, or borders, or be anything but block elements
the task goes from the complex back to the quick and simple, and indeed
enough can be known about the context that many optimisations can be
implemented inside the object providing the element position reporting
interface. It may be the case that a theoretically huge number of such
implementations would be necessary to accommodate all the permutations
but in practice if you start by only implementing the ones that are
needed when they become needed you end up implementing the most
recurrent requirements first (and so creating the most re-useable
objects) and may never actually encounter a real world situation where
the more involved position reporting problems need to be addressed.

Consider what happens when re-design results in maintenance issues. For
the positioning problem; suppose someone re-designs the presentation and
ends up adding elements with scrolling overflow where they had not
previously existed. The object implementing the position reporting
interface can no longer cope as it was never designed to do so. But
either the collection of objects implementing that interface already
contains one that can cope, or a new implementation can be created and
added to that collection. The problem is solved by swapping one object
for another that implements the same interface (but takes more into
account internally) and all of the rest of the code is unaffected by the
change.

The collection of such interchangeable modular interface implementations
from which actual projects are built may be regarded as being a library
(in some sense) but it is not something that can be presented to the
wider world as a library because it is inherently incomplete by design.
The design work, the intellectual effort, goes into designing interfaces
that can sit on top of varying implementations and usefully participate
in flexible hierarchical structures of code. The actual creation of the
objects implementing the interfaces is on an 'as needed' basis, and
while the expectation is that those objects created should then be very
re-useable (in similar contexts, with the likely re-occurring contexts
also being those likely to occur early in the process), the objects for
the more unusual situations may never be needed by any individual, and
so never be created and added to the collection.
In my short time on c.l.j, I have seen many criticisms of
JavaScript libraries

Yes, it can be very easy.
but few recommendations. It could be that they simply got
lost in the noise.

No, there are few recommendations, and the few people making such
recommendations tend not to be doing so on an informed basis.
Are there any JavaScript libraries that you can recommend over
reinventing wheels?

You are certain that those are the only two alternatives?
I checked the FAQ and didn't see anything.

I'm also curious if the folks criticizing the 'popular'
JavaScript libraries (or their authors) have attempted
to improve the code - either by direct contribution or
via educating the authors.

If the people making the criticism are of the opinion that 'improvement'
would involve a significant re-thinking of the concepts underlying the
entire library design, any such attempts to 'improve' would be very likely
to be disregarded by the authors of those libraries.

On the other hand "the folks criticizing" have already done a great deal
to improve the code in these libraries, though maybe not that directly
(or with that specific intention). In my own case, inventing what is
apparently destined to be called "Crockford's module pattern" in early
2003, and then participating, with other regular contributors to this
group (and also "folks criticizing ... "), in the development of its
applications over the following two years, has had a very visible impact
on the code in any recent library you could name, and mostly for the
better.

(It is interesting watching the wider world re-inventing the wheel when
it comes to the "Crockford's module pattern". For example, having
started with the singleton implementation used in the YUI blog to
demonstrate the idea (which is an application of the idea from August
2003)-

<URL:
http://nefariousdesigns.co.uk/archive/2007/08/javascript-module-pattern-variations/ >

- shows someone re-tracing the evolution of the scheme and ending up
back with a single function interface, which parallels in structure the
very first examples of non-'Class' related modules I published in May
2003).
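
For readers unfamiliar with the term, a minimal example of the pattern
being discussed (not code taken from any of the cited articles):

// A function expression is called once; the variables it declares stay
// private inside the closure, and only the returned interface is public.
var idGenerator = (function() {
    var count = 0;                      // private state
    return {
        next: function() { return ++count; },
        reset: function() { count = 0; }
    };
})();

// idGenerator.next() returns 1, 2, 3, ...; count itself is inaccessible.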

Then there is the growth of the understanding of, and subsequent use of,
javascript's closures in general. A trend that is also evident in recent
library code and a trend in the wider world that can almost entirely be
traced back to my 2004 article on javascript closures written for the
group's FAQ.

Over the years various contributors to this group have written probably
the equivalent of sizable books on the subject of javascript. Mostly
directed towards the better understanding of the language and better
design in its applications. And this has all been done in public (and
archived) in a context where anyone who wants to is free to participate.
If the authors of those 'popular' libraries have preferred to ignore
that and hide away in their own little worlds then that is hardly the
fault of anyone who criticises their code here.
Or are they beyond hope?

They are beyond hope if the people responsible have their egos so
heavily invested in their creations that they can never recognise their
mistakes.

But to some extent these things become the victims of their own success
as once they have been 'released' and acquired a user base, addressing
fundamental design faults becomes very difficult. Given the limited
technical understanding of javascript on the part of their authors, as
demonstrated in the code they write, in most cases it might have been
better if the authors had held off 'releasing' anything until they had
taken the time to learn the language, and gain some experience using it,
because during that time they may have learnt enough about browser
script design issues, and been exposed to more debate related to the
subject, to have started their libraries on a better footing (or in some
cases not started them at all).

Richard.
 

Peter Michaux

Richard said:
What are the "disadvantages" of writing everything yourself?

Not accounting for certain browser bugs/incompatibilities due to not
having a long enough history working with browsers to know about
particular bugs. Testing would show the bugs but testing in currently
rare browsers (eg. IE4, Sunrise, Lobo) may not be considered a wise
cost by the business. Using a prepackaged library that already works
in all cases would be much cheaper and more attractive to a project
manager even with a certain amount of increased download time. Project
timelines and idealism clash on occasion.

There is not much thought given to the issues that follow from
broadcasting all the source code to the user and compiling it each time
it is executed. It should be fairly obvious that if it was necessary
to transmit over the internet, and then compile, all of the source code
for all the standard Java libraries, plus anything application
specific, each time you wanted to execute any Java program then that
would make Java non-viable. But still that is the inevitable end point
of creating ever more capable (and so ever larger) general purpose
javascript libraries.

That may be true of a library like Prototype that is distributed as a
single file. Libraries like Dojo and YUI are distributed in multiple
files, so even as the library gains capabilities the whole library does
not have to be downloaded for each page.

It seems granularity of a library's distributed source code is a core
source of contention. Should a library be distributed as one file, like
Prototype and jQuery, or distributed as many files, like YUI, to be
combined as needed by a developer working on a particular page? If
multiple files, how small should each file be? If multiple files, how
should they be combined for a particular page?

This leaves the general purpose library with a problem. It should have
element position and location reporting facilities, but if they are to
be truly general they will inevitably be non-viable because of their
performance and seriously contribute to the library's download bulk.

The best the general purpose library can do is provide a facility that
is 'good enough' for some set of common cases; a compromise. Which then
means that it will be insufficient for less common cases (leaving anyone
using the library with no choice but to add their own code for those
tasks) and at the same time the code is over the top for the simplest
cases, risking sub-optimal performance for no good reason.

I think there is another way to look at what the library is providing
to the user. In the terms you use below, the library is providing an
interface and a base implementation of that interface. In the simple
cases where it is excessively complex, I can see that some developers
would just say "we will never agree how simple the simplest
implementation of this interface should be." It may make sense to some
people that the simplest interface should at least work in the popular
80% of the browsers and having a simpler version for particular
browsers is just more code to maintain. Although performance and
download times are important it is also necessary to retain a small
code base and so a medium complexity implementation of the interface
as the base implementation may be better to some people than having
many simpler ones kicking around the hard drive.

So the library provides a medium complexity base implementation and
when it is insufficient for a less common case the developer writes
code to handle the less common case. This is very similar/identical to
what you are suggesting below.

A third example of how the difference between the general and the
specific impacts on the general purpose library is the question of
framesets, and where any particular code is to be located in any
possible frame structure. You will often see code testing - constructor -
properties against built-in constructor functions, or using -
instanceof - with the same subjects. That is all fine if you are working
with a single global object, but as soon as anyone is attempting to pass
objects about in a frameset structure such tests are invalid. There is
also the question of creating new DOM elements, where using the -
createElement - method of the wrong - document - object will be
disastrous in at least some common browsers (including IE). So your
general purpose library has two choices; either assume a single global
object, and be insufficient for contexts where framesets are employed,
or do all the extra work to keep track of multiple frame contexts and so
be over the top whenever it is used in a single page site.
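
To illustrate (a sketch only; the frame name and the - makeArray -
function are invented for this example, not taken from any library):

// Assume a page with a child frame named 'child' that defines a
// global function makeArray returning a new (empty) array.
var childWin = window.frames['child'];
var arr = childWin.makeArray();

// False in the parent page: the array was built with the child
// frame's Array constructor, not the parent's.
var isArray = (arr instanceof Array);

// Creating an element with the wrong document object:
var el = document.createElement('div');          // parent's document
// childWin.document.body.appendChild(el);       // risky in IE, among others

// Safer: create the element with the document that will contain it.
var el2 = childWin.document.createElement('div');
childWin.document.body.appendChild(el2);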

Two implementations of the same library interface. It is not a mistake
to have either or both versions and, at a larger granularity, is
consistent with your ideas below about multiple implementations of a
common interface. Suppose there are two popular libraries and one is
for the single global object and the other is for framesets. The
mistake is that both groups of developers will hype their library as
the one true implementation and create a religious war. And then there
will be backlash against the idea of libraries because of the hype.

So what is wanted is a code re-use strategy (as we will all agree that
writing everything from scratch for each project is insane) that
maximises the proportion of code being re-used, produces an easily
maintainable and reliable end result, and is sufficiently flexible to
produce appropriate code for any given application context without
pre-imposing arbitrary restrictions on the design process or being over
the top in the less complex contexts.

This would be ideal.

Inevitably there is some disagreement as to how best to achieve this
outcome, but it is fairly obvious that larger-scale general purpose
libraries will not satisfy those considerations (with their overriding
emphasis on code re-use at the cost of seemingly all else).

When you write "larger-scale" what do you mean? Do you mean
distributed in a single file or a low number of multiple files? Or do
you mean just many lines of code?

My preferred strategy is to build code from a large collection of
relatively small interchangeable modules designed around interface
definitions, where any single interface may have numerous
implementations. The resulting architectures start out with a lowest
level that is a layer of modules that abstract out the differences
between browsers by handling them internally. Above that are more layers
of modules that depend upon the interfaces provided by the previous
layer and expose their own interfaces for more complex and task specific
actions, and above that some number of similar layers ending in the
application specific control logic code that must be unique to each
specific action.

I think this is a great strategy overall. It formalizes the idea of "a
sufficient implementation."

The lowest layer includes only items from the collection of interface
implementations that are employed in the context, sufficient for the
context and no more, and usually very well tested.

Why adhere so strictly to this "and no more" requirement? It is ok
performance-wise or more profitable overall (development dollars vs
net income) in many situations to send 5-10% unused code to the
browser.

Given a particular
task, say the reporting of view port dimensions and scroll values, a
single interface is used, but any number of objects may exist to
implement that interface. So while a cross-browser version may attempt
to provide that information wherever it is available, in a context where
only a limited set of known browsers are to be used a much simpler
version exists to be used in its place. Meanwhile, any code that employs
the interface does not need to care about anything more than getting a
reference to an object that implements the interface, and so does not
care about the specifics of how that is done in the context.
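
A sketch of that idea (the function names are invented here for
illustration): higher layers only ever call - getViewportDimensions -,
and the context decides which implementation sits behind that name.

// Sufficient where only browsers exposing clientWidth/clientHeight on
// document.documentElement (standards mode) are known to be in use.
function getViewportDimensionsSimple(){
    var docEl = document.documentElement;
    return { width: docEl.clientWidth, height: docEl.clientHeight };
}

// A more general implementation, falling back through the commonly
// available properties for a wider range of browsers and modes.
function getViewportDimensionsGeneral(){
    var docEl = document.documentElement, body = document.body;
    if(docEl && docEl.clientWidth){
        return { width: docEl.clientWidth, height: docEl.clientHeight };
    }
    if(body && body.clientWidth){
        return { width: body.clientWidth, height: body.clientHeight };
    }
    return { width: window.innerWidth || 0,
             height: window.innerHeight || 0 };
}

// The lowest layer binds one implementation to the interface name;
// code above this layer does not know or care which one it got.
var getViewportDimensions = getViewportDimensionsGeneral;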

This strategy allows issues like the unreasonable complexity of the
truly general element position reporting algorithm to be avoided. In any
real context it is possible to know enough about which positioning
information is required and why it is required to sidestep most of the
complexity of the general problem. If no elements of relevance are to
have scrollable contents, or borders, or be anything but block elements
the task goes from the complex back to the quick and simple, and indeed
enough can be known about the context that many optimisations can be
implemented inside the object providing the element position reporting
interface. It may be the case that a theoretically huge number of such
implementations would be necessary to accommodate all the permutations
but in practice if you start by only implementing the ones that are
needed when they become needed you end up implementing the most
recurrent requirements first (and so creating the most re-useable
objects) and may never actually encounter a real world situation where
the more involved position reporting problems need to be addressed.

Consider what happens when re-design results in maintenance issues. For
the positioning problem; suppose someone re-designs the presentation and
ends up adding elements with scrolling overflow where they had not
previously existed. The object implementing the position reporting
interface can no longer cope as it was never designed to do so. But
either the collection of objects implementing that interface already
contains one that can cope, or a new implementation can be created and
added to that collection. The problem is solved by swapping one object
for another that implements the same interface (but takes more into
account internally) and all of the rest of the code is unaffected by the
change.
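
To make that concrete (again a sketch with invented names, not code
from any particular library):

// Adequate while no relevant ancestor scrolls its contents and the
// offsetParent chain is simple.
function getElementPositionSimple(el){
    var x = 0, y = 0;
    while(el){
        x += el.offsetLeft;
        y += el.offsetTop;
        el = el.offsetParent;
    }
    return { x: x, y: y };
}

// A replacement implementation that also subtracts the scroll values
// of ancestor elements, for the re-designed page with scrolling
// overflow containers.
function getElementPositionScrolled(el){
    var pos = getElementPositionSimple(el);
    var parent = el.parentNode;
    while(parent && parent.nodeType == 1){
        pos.x -= parent.scrollLeft || 0;
        pos.y -= parent.scrollTop || 0;
        parent = parent.parentNode;
    }
    return pos;
}

// The rest of the code continues to call getElementPosition; only the
// binding changes when the design changes.
var getElementPosition = getElementPositionScrolled;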

This seems to be an argument against having the absolute simplest
implementations in the collections of objects. By having a medium-
complexity interface implementation as the base implementation, when a
CSS re-design occurs there _may_ be no need to swap out the simplest
implementation for a slightly more complex implementation. This saves
developer time, which is more expensive than production server time.

The collection of such interchangeable modular interface implementations
from which actual projects are built may be regarded as being a library
(in some sense) but it is not something that can be presented to the
wider world as a library because it is inherently incomplete by design.
The design work, the intellectual effort, goes into designing interfaces
that can sit on top of varying implementations and usefully participate
in flexible hierarchical structures of code. The actual creation of the
objects implementing the interfaces is on an 'as needed' basis, and
while the expectation is that those objects created should then be very
re-useable (in similar contexts, with the likely re-occurring contexts
also being those likely to occur early in the process), the objects for
the more unusual situations may never be needed by any individual, and
so never be created and added to the collection.

It seems that, to implement your strategy of multiple interface
implementations, multiple source files grouped in a directory for
each interface would be a practical approach. Some sort of
configuration file and build process could concatenate the various
appropriate files together for a particular page. Is that similar to
the approach you take?

Peter
 
