Various DOM-related wrappers (Code Worth Recommending Project)


Peter Michaux

Peter said:
[...] Peter Michaux [...] wrote:
[...] David Mark [...] wrote:
[...] Peter Michaux [...] wrote:
[...] David Mark [...] wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Peter Michaux wrote:
[...]
There is no need to check for the existence of "document" as no
browser, NN4+ or IE4+, is missing "document".
There is no need to check that document.getElementById is callable
since there has never been a known implementation in which
document.getElementById is not callable.
If these are the guidelines for your project, to produce code that is not
interoperable and inherently unreliable, I don't want to contribute.
I somewhat agree with that sentiment,
I would say your code says otherwise and that you agree that a line
for feature testing should be drawn somewhere.
I was talking about the issue he was referring to (gEBTN for document
vs. element.)

JFTR: I was not referring to that at all.
[...] I think I meant the following.
if (document.getElementById &&
    typeof getAnElement != 'undefined' &&
    getAnElement().getElementById) {
  var getEBI = function(id, d) {
    return (d||document).getElementById(id);
  };
}

The nonsense gets worse.

I don't see what is wrong with the code above.

No, it is not.

I think it is. The portion of the spec that you quote below shows it
is specified separately in the Document and the Element interfaces.
I don't know which DOM 2 Spec you have been reading,
but the one I have been reading specifies only:

,-<http://www.w3.org/TR/DOM-Level-2-Core/core.html#i-Document>
|
| [...]
| interface Document : Node {
| [...]
| // Introduced in DOM Level 2:
| Element getElementById(in DOMString elementId);
| };

,-<http://www.w3.org/TR/DOM-Level-2-Core/core.html#ID-745549614>
|
| [...]
| interface Element : Node {
|   readonly attribute DOMString tagName;
|   DOMString getAttribute(in DOMString name);
|   void setAttribute(in DOMString name,
|                     in DOMString value)
|                     raises(DOMException);
|   void removeAttribute(in DOMString name)
|                        raises(DOMException);
|   Attr getAttributeNode(in DOMString name);
|   Attr setAttributeNode(in Attr newAttr)
|                         raises(DOMException);
|   Attr removeAttributeNode(in Attr oldAttr)
|                            raises(DOMException);
|   NodeList getElementsByTagName(in DOMString name);
|   // Introduced in DOM Level 2:
|   DOMString getAttributeNS(in DOMString namespaceURI,
|                            in DOMString localName);
|   // Introduced in DOM Level 2:
|   void setAttributeNS(in DOMString namespaceURI,
|                       in DOMString qualifiedName,
|                       in DOMString value)
|                       raises(DOMException);
|   // Introduced in DOM Level 2:
|   void removeAttributeNS(in DOMString namespaceURI,
|                          in DOMString localName)
|                          raises(DOMException);
|   // Introduced in DOM Level 2:
|   Attr getAttributeNodeNS(in DOMString namespaceURI,
|                           in DOMString localName);
|   // Introduced in DOM Level 2:
|   Attr setAttributeNodeNS(in Attr newAttr)
|                           raises(DOMException);
|   // Introduced in DOM Level 2:
|   NodeList getElementsByTagNameNS(in DOMString namespaceURI,
|                                   in DOMString localName);
|   // Introduced in DOM Level 2:
|   boolean hasAttribute(in DOMString name);
|   // Introduced in DOM Level 2:
|   boolean hasAttributeNS(in DOMString namespaceURI,
|                          in DOMString localName);
| };

The item on the left is the *return type* of the method, not its
(additional) owner.
A feature test for one does not assure the other is present.
A test for both is necessary if the getEBI
function is to be documented as suitable for both.

If what you stated above were the case,

I still think it is the case.
a test on an arbitrary element
object would not be sufficient. But it is not the case, so that point
is rather moot.

I don't understand what you are saying.

If the spec has getElementById in two places then it should be feature
tested in both places. An implementation may have one, the other, none
or both.
 

David Mark

I don't see what is wrong with the code above.

As I had never before tried to call gEBI on anything but document
nodes, I was unsure if it was even supported on element nodes. A
quick test confirmed it is not. As is typical, "PointedEars" couldn't
be expected to just say that as such, but instead posted the DOM
phonebook. I propose that in the interest of time management, his
often indecipherable scrutiny should be ignored. He has said he won't
contribute and so far he hasn't. Worse still, he is impeding progress
by constantly confusing the issues at hand.
 

Thomas 'PointedEars' Lahn

[Sorry for misquoting you before, it is corrected below.]

Peter said:
Peter said:
[...] I think I meant the following.
if (document.getElementById
    && typeof getAnElement != 'undefined'
    && getAnElement().getElementById)
{
  var getEBI = function(id, d)
  {
    return (d||document).getElementById(id);
  };
}
The nonsense gets worse.

I don't see what is wrong with the code above.

I explained that below.
I think it is. The portion of the spec that you quote below shows it is
specified separately in the Document and the Element interfaces.

It shows the exact opposite. Watch closely:
,-<http://www.w3.org/TR/DOM-Level-2-Core/core.html#i-Document>
|
| [...]
| interface Document : Node {
| [...]
| // Introduced in DOM Level 2:
| Element getElementById(in DOMString elementId);
| };

[...]
The item on the left is the *return type* of the method, not its
(additional) owner.
a test on an arbitrary element object would not be sufficient. But it
is not the case, so that point is rather moot.

I don't understand what you are saying.

If the spec has getElementById in two places then it should be feature
tested in both places. An implementation may have one, the other, none or
both.

I don't know what anElement() returns, but ISTM you are drawing the false
conclusion that because one object implements the interface the passed one
also has to.


PointedEars
 

Peter Michaux

[Sorry for misquoting you before, it is corrected below.]



Peter said:
Peter Michaux wrote:
[...] I think I meant the following.
if (document.getElementById
    && typeof getAnElement != 'undefined'
    && getAnElement().getElementById)
{
  var getEBI = function(id, d)
  {
    return (d||document).getElementById(id);
  };
}
The nonsense gets worse.
I don't see what is wrong with the code above.

I explained that below.
I think it is. The portion of the spec that you quote below shows it is
specified separately in the Document and the Element interfaces.

It shows the exact opposite. Watch closely:


,-<http://www.w3.org/TR/DOM-Level-2-Core/core.html#i-Document>
|
| [...]
| interface Document : Node {
| [...]
| // Introduced in DOM Level 2:
| Element getElementById(in DOMString elementId);
| };
[...]
The item on the left is the *return type* of the method, not its
(additional) owner.
a test on an arbitrary element object would not be sufficient. But it
is not the case, so that point is rather moot.
I don't understand what you are saying.
If the spec has getElementById in two places then it should be feature
tested in both places. An implementation may have one, the other, none or
both.

I don't know what anElement() returns,

Some element. In any modern browser it will return
document.documentElement
but ISTM you are drawing the false
conclusion that because one object implements the interface the passed one
also has to.

Sorry I've been reading this thread too fast. (I've actually been busy
having more fun making a testing framework for the repository.)

There is no Element.getElementById in the spec. My function needs to
revert to

if (document.getElementById) {
  var getEBI = function(id, d) {
    return (d||document).getElementById(id);
  };
}

// id is a string
// d is some optional node that implements the Document interface.

Now, what is wrong with that?
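
As a usage sketch only (the element ids and the frame lookup are
invented for illustration, not part of the proposal):

// Hypothetical usage of the wrapper above.
if (typeof getEBI != 'undefined') {
  var header = getEBI('page-header');  // look up in the current document
  // With the second argument, the lookup runs against another document,
  // e.g. the document of the first frame on the page (if any).
  if (window.frames && window.frames.length) {
    var cell = getEBI('cell-3', window.frames[0].document);
  }
}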
 

Peter Michaux

[snip]
As is typical, "PointedEars" couldn't
be expected to just say that as such, but instead posted the DOM
phonebook. I propose that in the interest of time management,

There is no timeline.
his
often indecipherable scrutiny should be ignored.

I think the project must be inclusive to all participation.
He has said he won't
contribute and so far he hasn't.

I think he has. We have five and a half agreements with him so far.
Worse still, he is impeding progress
by constantly confusing the issues at hand.

I'm guilty of that too.
 

Thomas 'PointedEars' Lahn

Peter said:
Peter said:
[...] Thomas 'PointedEars' Lahn [...] wrote:
a test on an arbitrary element object would not be sufficient. But it
is not the case, so that point is rather moot.
I don't understand what you are saying.
If the spec has getElementById in two places then it should be feature
tested in both places. An implementation may have one, the other, none or
both.
I don't know what anElement() returns,

Some element.

Some element *object*, if that.
In any modern browser it will return document.documentElement

Which would confirm my suspicion below, though. BTW, what will it return in
any not so modern browser?
but ISTM you are drawing the false conclusion that because one object
implements the interface the passed one also has to.

[...] My function needs to revert to

if (document.getElementById) {
  var getEBI = function(id, d) {
    return (d||document).getElementById(id);
  };
}

// id is a string
// d is some optional node that implements the Document interface.

Now, what is wrong with that?

Besides the fact that your code is still lacking the necessary feature test,
you falsely assume that, because `document.getElementById' yields a
true-value, `d.getElementById' has to be callable.
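
One loose sketch of a way to address that objection (whether a per-call
check like this or a one-off feature test is preferable is exactly what
is being debated here; the structure is illustrative only):

var getEBI = function(id, d) {
  d = d || document;
  // Check the method on the document object that will actually be used,
  // not only on the global document.
  if (d.getElementById) {
    return d.getElementById(id);
  }
  return null;
};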


Please, trim your quotes as explained in the FAQ and FAQ Notes. The entire
part about the DOM2 Spec was irrelevant to your followup, or at least could
have been shortened to very few lines.


PointedEars
 

Peter Michaux

Peter Michaux wrote:
[snip]
if (document.getElementById) {
  var getEBI = function(id, d) {
    return (d||document).getElementById(id);
  };
}
// id is a string
// d is some optional node that implements the Document interface.
Now, what is wrong with that?

Besides the fact that your code is still lacking the necessary feature test,
you falsely assume that, because `document.getElementById' yields a
true-value, `d.getElementById' has to be callable.

Have you ever seen a host where document.getElementById is not
callable?

If you are concerned that someone might send a "d" that is not a
document, then they will have violated the documentation, which will
say that "d" must be a document.
 

VK

I think the project must be inclusive to all participation.

So far it is mostly going "wild on the party" :) as the major
participants keep making monstrous, paranoid, all-in-one, don't-trust-
anyone snippets of code, where each snippet is intended to accomplish a
single very primitive task. If the Code Worth project ever gets big,
it will be an army of soldiers where each one carries his own armor,
food for the whole campaign, all necessary items, 200 pounds of old
family albums and beloved books. And before asking another soldier for
a light, each one first builds a full-scale fortification around
himself, then takes the lighter through a small hole in the wall after
a triple check. Evidently the one who asked does exactly the same
before handing the lighter back: so in order for two of them to light
their cigars, the army has to stop for a day or two each time.

Such snippets may have some limited usability for bookmarklets and
thingies (inline anonymous functions for element event handlers).
God forbid making a _library_ of all these things together. If one
at least pretends to be on the OOP path, then one should start from the
basics: from modularity. OOP is not about C out of B out of A out of
ground bottom - it is just a possible but not necessary consequence
of the OOP approach. OOP is about everything doing its own thing and
that thing only.
And what an anancastic (obsessive-compulsive) sub is that so far? Are
you trying to make the resulting lib able to run only on Cray
clusters? First of all, the whole problem is totally artificial,
because as of the end of 2007 the options are:
1) use document.getElementById
2) drop the sick sh** someone is using to run your script on.

For a paranoiac, there could be 3 options:
1) use document.getElementById
2) use document.all if not 1)
3) drop the sick sh** someone is using to run your script on if
neither 1) nor 2)

So the entire - already paranoid for practical usage - code is:

if (document.getElementById) {
  $ = function(id){
    return document.getElementById(id);
  };
}
else if (document.all) {
  $ = function(id){return document.all(id);};
}
else {
  $ = function(id){/* undefined */};
}

And then in the Help file one writes something like:
com.something.utils.$(String id)
Takes a single string argument and returns a reference to the DOM
element with the specified ID.
No behavior is specified for arguments other than string type.
If no DOM element with the requested ID exists, returns the null
value.
If the page contains several elements with the same ID, returns the
first such element on the page, going from the top.
If the user agent supports neither standard nor proprietary tools
for DOM element reference retrieval, returns the undefined value.

In other words:
1) The main logic is moved to the specification, not to the runtime
method. Whoever wants a working program will read it; whoever really
wants to crash the program will do it no matter what.
2) Runtime performance is the King; fall-back protection is the sh** -
as long as no security violation is involved.
 

Peter Michaux

So far it is mostly going "wild on the party" :) as the major
participants keep making monstrous, paranoid, all-in-one, don't-trust-
anyone snippets of code, where each snippet is intended to accomplish a
single very primitive task.

That is what foundational functions do.

I think you've missed the point. There are far bigger issues being
discussed about what things should be feature tested and what can be
assumed. Granularity of a code repository is also a major issue.

If the Code Worth project ever gets big,
it will be an army of soldiers where each one carries his own armor,
food for the whole campaign, all necessary items, 200 pounds of old
family albums and beloved books. And before asking another soldier for
a light, each one first builds a full-scale fortification around
himself, then takes the lighter through a small hole in the wall after
a triple check. Evidently the one who asked does exactly the same
before handing the lighter back: so in order for two of them to light
their cigars, the army has to stop for a day or two each time.

I think it would be interesting to meet you, VK.

Such snippets may have some limited usability for bookmarklets and
thingies (inline anonymous functions for element event handlers).

A big piece of code would likely use little pieces. The alternative is
one huge function.

God forbid making a _library_ of all these things together.

The individual pieces need to be of high quality or the whole thing
may as well be tossed.

If one
at least pretends to be on the OOP path, then one should start from the
basics: from modularity. OOP is not about C out of B out of A out of
ground bottom - it is just a possible but not necessary consequence
of the OOP approach. OOP is about everything doing its own thing and
that thing only.

I don't think OOP comes into this.

And what an anancastic (obsessive-compulsive) sub is that so far? Are
you trying to make the resulting lib able to run only on Cray
clusters? First of all, the whole problem is totally artificial,
because as of the end of 2007 the options are:
1) use document.getElementById
2) drop the sick sh** someone is using to run your script on.

Very creative.

For a paranoiac, there could be 3 options:
1) use document.getElementById
2) use document.all if not 1)
3) drop the sick sh** someone is using to run your script on if
neither 1) nor 2)

So the entire - already paranoid for practical usage - code is:

if (document.getElementById) {
  $ = function(id){
    return document.getElementById(id);
  };
}
else if (document.all) {
  $ = function(id){return document.all(id);};
}
else {
  $ = function(id){/* undefined */};
}

This doesn't address the IE id/name attribute issue.
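
For reference, a hedged sketch of one way a gEBI wrapper could guard
against the IE quirk where getElementById may return an element whose
name, rather than id, matches the argument (the structure is
illustrative, not the repository's code):

if (document.getElementById) {
  var getEBI = function(id, d) {
    d = d || document;
    var el = d.getElementById(id);
    // IE may have matched on NAME; verify the id and, if it does not
    // match, scan same-named elements for a genuine id match.
    if (el && el.id != id) {
      el = null;
      if (d.getElementsByName) {
        var named = d.getElementsByName(id);
        for (var i = 0; i < named.length; i++) {
          if (named[i].id == id) {
            el = named[i];
            break;
          }
        }
      }
    }
    return el;
  };
}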

And then in the Help file one writes something like:
com.something.utils.$(String id)

I think you would be the only one in favor of this API.

Takes a single string argument and returns a reference to the DOM
element with the specified ID.
No behavior is specified for arguments other than string type.
If no DOM element with the requested ID exists, returns the null
value.
If the page contains several elements with the same ID, returns the
first such element on the page, going from the top.
If the user agent supports neither standard nor proprietary tools
for DOM element reference retrieval, returns the undefined value.

Even the documentation doesn't address the IE id/name problem.

In other words:
1) The main logic is moved to the specification, not to the runtime
method.

No one has suggested an implementation that makes the
document.getElementById and document.all tests at "runtime".

Whoever wants a working program will read it; whoever really wants to
crash the program will do it no matter what.
2) Runtime performance is the King; fall-back protection is the sh** -
as long as no security violation is involved.

A tangential thinker, eh?
 

Peter Michaux

On Dec 9, 12:02 am, Peter Michaux <[email protected]> wrote:
[snip]
I realize that the task of finalizing, adding and documenting each is
a bottleneck,
Yes. I am working on an automated testing system now and that will
That's interesting. How will that work?

The interesting part is that the testing framework needs to limit how
much client-side scripting is used, as it is testing a client-side
scripting library. This is so that even very old browsers can be tested.

I'm using a series of communications between the client and server.
The server orchestrates everything with a series of redirects from test
page to test page until all test pages have been run.

The client logs test assertions (pass and fail) to the server by
setting (new Image()).src since this is the oldest(?) way to
communicate with the server. There will be extremely old browsers that
can't do this and those browsers will require manual testing if they
are to be tested at all.

The only other requirement is that the browser can set
window.location.

and setTimeout.
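
For illustration, a rough sketch of the image-beacon reporting described
above; the log.php URL and parameter names are assumptions, not the
repository's actual files:

// Report one assertion result to the server by requesting an image.
// escape() rather than encodeURIComponent() keeps the sketch usable in
// very old browsers.
function logAssertion(testName, passed) {
  var beacon = new Image();
  beacon.src = 'log.php?test=' + escape(testName) +
               '&result=' + (passed ? 'pass' : 'fail') +
               '&r=' + (new Date()).getTime(); // cache buster
}

// When a test page is finished, move on to the URL the server chose.
function gotoNextTestPage(url) {
  window.location = url;
}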

The testing framework is in the repository now. It seems to be working
with IE5+ and NN6+. I can't figure out why it doesn't work with IE4
since that browser can do all the things needed. I didn't try NN4.
Regardless, even if it only works on IE5+ and NN6+ it will save a huge
amount of time and a very sore index finger.

There is a file trunk/test/README.txt that should contain enough
information to help someone get the automatic testing going. It uses
PHP and I use Apache.

It is good enough for the time being. Now back to the features...
 

Peter Michaux

[snip]
I didn't make a time test but I think the simulation of
Array.prototype.filter will be very slow. I made a CSS selector

Considering that the use of the filter wrapper only helps IE5 and
under, it does make sense to duplicate its functionality in the gEBTN
wrapper. But that function and other JS1.6 array method wrappers are
useful for other purposes where speed is less of an issue. I think
that we should deal with those shortly.

Prototype.js focuses heavily on these sorts of iterators (e.g. filter,
each, find, etc.). The other code in Prototype.js uses these iterators
and this makes the library difficult to use in pieces. The
foundational code is very large and must be included just for a small
piece of higher level functionality. I want to avoid this type of
interdependence. I think that code should be able to stand on its own
or have very few dependencies.
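
As a point of reference, a hedged sketch of the kind of JS 1.6
array-method wrapper under discussion: use the native filter where it
exists and fall back to a plain loop otherwise (the function name is
illustrative only):

function filterArray(list, callback, thisObj) {
  // Use the native JS 1.6 method when available.
  if (typeof Array.prototype.filter == 'function' && list.filter) {
    return list.filter(callback, thisObj);
  }
  // Fallback loop for older implementations (e.g. IE5 and under).
  // Unlike the native method, this does not skip holes in sparse arrays.
  var result = [];
  for (var i = 0, len = list.length; i < len; i++) {
    if (callback.call(thisObj, list[i], i, list)) {
      result[result.length] = list[i];
    }
  }
  return result;
}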

[snip]
 

Peter Michaux

Even though the two getEBTN wrappers use different fallbacks for IE4
The source of that piece of information was Richard Cornford. The
discussion concerning splitting up addEventListener and gEBTN wrappers
was in a single thread. A search for "addEventListener" and
"getElementsByTagName" in the last two months or so should find the
posts.

I found it.

<URL: http://groups.google.com/group/comp.lang.javascript/browse_frm/thread/9857be01265be432>

I don't think that thread particularly suggests that a gEBTN wrapper
must be split into two functions although I suppose it could be.
 

David Mark

On Dec 9, 12:02 am, Peter Michaux <[email protected]> wrote:
[snip]
I realize that the task of finalizing, adding and documenting each is
a bottleneck,
Yes. I am working on an automated testing system now and that will
That's interesting. How will that work?

The interesting part is that the testing framework needs to limit how
much client-side scripting is used, as it is testing a client-side
scripting library. This is so that even very old browsers can be tested.

I'm using a series of communications between the client and server.
The server orchestrates everything with a series of redirects from test
page to test page until all test pages have been run.

The client logs test assertions (pass and fail) to the server by
setting (new Image()).src since this is the oldest(?) way to
communicate with the server. There will be extremely old browsers that
can't do this and those browsers will require manual testing if they
are to be tested at all.

The only other requirement is that the browser can set
window.location.

I think this is the ultimate in a low tech automated system for
testing.

Sounds good to me. I have always tested manually and it is time-
consuming.

[snip]
What I'm trying to do is focus which lower level functionality goes in
first. Lower level functionality that is required by the higher level
modules should have priority. The lower level functionality
prerequisite to a particular module will necessarily be included
before or at the time the higher level module is included.

Right. I am basing my recommendations for the initial low-level
functionality on what is used by my higher-level modules.

[snip]
I no longer think that is even possible and window.onload is the only
option.

<URL:http://peter.michaux.ca/article/553>

It is impossible to exactly simulate DOMContentLoaded, but that has
never been a goal of mine. I simply stopped using onload by itself
(it is used as a fallback for situations where the DOMContentLoaded
event or its simulation fails to work.) In a nutshell, a timeout set at
the bottom of the page calls the listener (and this is always optional,
as onload is used as a fallback.) As long as a page degrades gracefully
without script, then the split second that the content is visible
without enhancement is not an issue. In cases where the layout
changes dramatically on enhancement, I add style rules to hide certain
content during the page load.
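
A minimal sketch of the arrangement described above (the names are
invented; this is not the actual module code): a script block at the
bottom of the page schedules the enhancement, with window.onload kept
as the fallback.

var pageEnhanced = false;

function enhancePage() {
  if (pageEnhanced) { return; } // guard so the fallback cannot run it twice
  pageEnhanced = true;
  // ... attach listeners, build widgets, reveal content hidden by the
  // load-time style rules, etc. ...
}

// Placed in a script element at the bottom of the page:
setTimeout(enhancePage, 0);

// Fallback for agents where the timeout approach fails:
window.onload = enhancePage;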

[snip]
I can't keep up on more than is going on right now. I didn't think

This automated testing suite should help that. But if only one person
is handling documentation and everybody else is adding and
scrutinizing code, then a bottleneck will always exist.
this project would be so popular!

It will certainly fill a glaring need once the basic foundation is in
place. After that, I'm sure the popularity will increase
exponentially.
 

David Mark

I didn't make a time test but I think the simulation of
Array.prototype.filter will be very slow. I made a CSS selector
Considering that the use of the filter wrapper only helps IE5 and
under, it does make sense to duplicate its functionality in the gEBTN
wrapper. But that function and other JS1.6 array method wrappers are
useful for other purposes where speed is less of an issue. I think
that we should deal with those shortly.

Prototype.js focuses heavily on these sorts of iterators (e.g. filter,
each, find, etc.). The other code in Prototype.js uses these iterators
and this makes the library difficult to use in pieces. The

I tend to avoid using them in low-level modules for performance
reasons, but the code itself is very compact, so I do include it in my
base module. I have considered moving it to its own module, which
would only be required by those higher-level modules that use it.
foundational code is very large and must be included just for a small
piece of higher level functionality. I want to avoid this type of
interdependence. I think that code should be able to stand on its own
or have very few dependencies.

Depends on what it does. All but a couple of my modules require the
base module. Most have no other dependencies, but the highest level
modules and widgets typically depend on two or three lower level
modules. In general, as granularity is maximized, redundancy is
minimized and dependencies increase accordingly.
 

David Mark

I found it.

<URL:http://groups.google.com/group/comp.lang.javascript/browse_frm/thread...>

I don't think that thread particularly suggests that a gEBTN wrapper
must be split into two functions although I suppose it could be.

As I recall, my question was whether it would be excessively paranoid
to do so and the answer was no. As for addEventListener, it was
suggested that that would have to be done in at least two parts if one-
off testing was to be used. I ended up doing it in three parts
(element, document and window.) It didn't result in bloated code as I
used factories to generate the functions. In fact, the base module
that includes add/removeEventListener, gEBTN, gEBI, all of the feature
testing foundation previously discussed, plus iterator code and
several other low level abstractions is less than 10K minified. It
has become an arbitrary rule for me to keep all modules under 10K and
granularity requirements have tended to confirm that as a realistic
limit.
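
A speculative sketch of the factory idea mentioned above (not David's
actual code; the capability checks are reduced to simple property tests
for brevity):

// Build a listener-adding wrapper for a specific object, so element,
// document and window can each be feature-tested once, up front.
function createAddListener(obj) {
  if (obj.addEventListener) {
    return function(target, type, fn) {
      target.addEventListener(type, fn, false);
    };
  }
  if (obj.attachEvent) {
    return function(target, type, fn) {
      target.attachEvent('on' + type, fn);
    };
  }
  return null; // no wrapper, so callers can feature-detect its absence
}

// Hypothetical usage:
var addDocListener = createAddListener(document);
if (addDocListener) {
  addDocListener(document, 'click', function() { /* ... */ });
}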
 

David Mark

So far it is mostly going "wild on the party" :) as the major

Colorful, but what does it mean?
participants keep making monstrous, paranoid, all-in-one, don't-trust-
anyone snippets of code, where each snippet is intended to accomplish a
single very primitive task. If the Code Worth project ever gets big,
it will be an army of soldiers where each one carries his own armor,
food for the whole campaign, all necessary items, 200 pounds of old
family albums and beloved books. And before asking another soldier for
a light, each one first builds a full-scale fortification around
himself, then takes the lighter through a small hole in the wall after
a triple check. Evidently the one who asked does exactly the same
before handing the lighter back: so in order for two of them to light
their cigars, the army has to stop for a day or two each time.

Even more colorful and even less clear.
Such snippets may have some limited usability for bookmarklets and
thingies (inline anonymous functions for element event handlers).

What snippets?
God forbid making a _library_ of all these things together. If one
Really?

at least pretends to be on the OOP path, then one should start from the
basics: from modularity. OOP is not about C out of B out of A out of
ground bottom - it is just a possible but not necessary consequence
of the OOP approach. OOP is about everything doing its own thing and
that thing only.

So far, nothing discussed has involved OOP, though it would be trivial
to add a thin OOP layer over the API functions. It would be a mistake
(often seen in currently popular JS libraries) to encapsulate
everything in objects (or a single object.)
And what an anancastic (obsessive-compulsive) sub is that so far? Are
you trying to make the resulting lib able to run only on Cray
clusters? First of all,

Unsurprisingly, I don't follow you on that either. Are you worried
about code size? If so, then you don't understand the basic concept
of the repository, which includes multiple versions of each function,
from the simplest (and least compatible) to the more complicated LCD
approach, which can ostensibly run on anything.
the whole problem is totally artificial, because as of the end of 2007
the options are:

Which "whole problem?"
1) use document.getElementById
2) drop the sick sh** someone is using to run your script on.

The gEBI wrapper is not the only issue at hand.
For a paranoiac, there could be 3 options:
1) use document.getElementById
2) use document.all if not 1)

Which adds a couple of extra lines to one version of one function.
3) drop the sick sh** someone is using to run your script on if
neither 1) nor 2)

Right. As the only other approach would be to include support for
NS4. The way the API is designed, anyone who needs such support can
augment the existing LCD wrapper to suit.
So the entire - already paranoid for practical usage - code is:

if (document.getElementById) {
  $ = function(id){
    return document.getElementById(id);
  };
}

That is your code. Nobody else is advocating the use of a $
identifier.
else if (document.all) {
  $ = function(id){return document.all(id);};

Though the all object is typically callable, I prefer to reference its
properties.

That is basically what I presented as the LCD approach. You left out
the ID check though.
  $ = function(id){/* undefined */};
}

This branch is nonsense. You don't create a function that does
nothing, which would break the feature detection of higher level
modules and widgets.
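
A two-line sketch of the point: higher-level code detects the wrapper
by its existence, which an empty stub would defeat.

// Hypothetical higher-level module code.
if (typeof $ != 'undefined') {
  // Safe to build functionality that relies on $.
}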
And then in the Help file one writes something like:
com.something.utils.$(String id)

No, one doesn't. For one, we aren't defining a namespace at the moment
and if we were, I doubt it would resemble that. Personally, I use a
relatively flat "namespace" and that is what I recommend for any
future build processes.
Takes a single string argument and returns a reference to the DOM
element with the specified ID.

That is correct.
No behavior is specified for arguments other than string type.

No such behavior has been proposed.
If no DOM element with the requested ID exists, returns the null
value.
Right.

If the page contains several elements with the same ID, returns the
first such element on the page, going from the top.
If the user agent supports neither standard nor proprietary tools
for DOM element reference retrieval, returns the undefined value.

As noted, that would defeat the whole purpose of feature testing.
In other words:
1) The main logic is moved to the specification, not to the runtime
method.

All proposed feature testing so far has been of the one-off variety,
so I don't understand your point.
Whoever wants a working program will read it; whoever really wants to
crash the program will do it no matter what.

That is another issue. You don't see any argument validation and
exception raising in anything proposed so far and I would be against
such excesses.
2) Runtime performance is the King; fall-back protection is the sh** -
as long as no security violation is involved.

Your "technical writing" is far too colorful. I don't understand half
of what you write. Judging by the other half, I don't think I am
missing out on much.
 

Peter Michaux

[snip]
I didn't make a time test but I think the simulation of
Array.prototype.filter will be very slow. I made a CSS selector
Considering that the use of the filter wrapper only helps IE5 and
under, it does make sense to duplicate its functionality in the gEBTN
wrapper. But that function and other JS1.6 array method wrappers are
useful for other purposes where speed is less of an issue. I think
that we should deal with those shortly.
Prototype.js focuses heavily on these sorts of iterators (e.g. filter,
each, find, etc.). The other code in Prototype.js uses these iterators
and this makes the library difficult to use in pieces. The

I tend to avoid using them in low-level modules for performance
reasons, but the code itself is very compact, so I do include it in my
base module.

That is a fair choice but very far removed from my choice. The large
base modules are one of my primary gripes with the popular JavaScript
libraries. I've watched a few projects' base modules grow and it is a
slippery slope that will *definitely* slip when more than one person
wants to add functions to the base module.

I have considered moving it to its own module, which
would only be required by those higher-level modules that use it.


Depends on what it does. All but a couple of my modules require the
base module. Most have no other dependencies, but the highest level
modules and widgets typically depend on two or three lower level
modules. In general, as granularity is maximized, redundancy is
minimized and dependencies increase accordingly.

As granularity is increased the amount of code served to the client
that is not necessary for a given page is also increased. This is the
bloat problem that all of the popular libraries seem to have.
 

David Mark

As granularity is increased the amount of code served to the client
that is not necessary for a given page is also increased. This is the
bloat problem that all of the popular libraries seem to have.

Do you mean when granularity is decreased? By increasing granularity,
I mean breaking the code base up into smaller morsels, thereby
reducing the downloading of unused code.
 

Peter Michaux

I have always tested manually and it is time-consuming.

Very, and especially when testing on a wide set of browsers. I think
the code in this repository should be tested on a wider set than the
following, which took me days to do manually.

<URL: http://forkjavascript.org/welcome/browser_support>

[snip]
This automated testing suite should help that. But if only one person
is handling documentation and everybody else is adding and
scrutinizing code, then a bottleneck will always exist.

I think some pacing is needed anyway. In just a couple of weeks from
now we can have a full Ajax module, and that is a substantial component
of any code base.

Given what we've already discussed in the various threads, I would
like to shoot for the following for the first milestone.

* DHTML
setOpacity
* DOM queries
getEBI
getEBTN
getEBCN (getElementsByClassName)
getEBCS (getElementsByCSSSelector) (a very simple one)
* Ajax
form serialization
XHR
* automated testing
* parsable documentation

We will need to have threads on Ajax and documentation over the next
while when the current threads have died down.

That is sufficient for the first round and enough to reflect on how
the process is going and if things need to be changed.

After that I will certainly make my own build process and start using
the code where I can in my own work.

It will certainly fill a glaring need once the basic foundation is in
place.

I hope so!

I have already learned enough to justify my time invested.

After that, I'm sure the popularity will increase
exponentially.

I'm not so concerned with that, and that requires marketing, which I
have learned I am not interested in doing.

I'm interested to see if "Code Worth Recommending" is deemed to be
code worth recommending by many comp.lang.javascript contributors.
 

Peter Michaux

As I recall, my question was whether it would be excessively paranoid

So you recognized it was at least moderately paranoid ;-)
to do so and the answer was no. As for addEventListener, it was
suggested that that would have to be done in at least two parts if one-
off testing was to be used. I ended up doing it in three parts
(element, document and window.) It didn't result in bloated code as I
used factories to generate the functions. In fact, the base module
that includes add/removeEventListener, gEBTN, gEBI, all of the feature
testing foundation previously discussed, plus iterator code and
several other low level abstractions is less than 10K minified. It
has become an arbitrary rule for me to keep all modules under 10K and
granularity requirements have tended to confirm that as a realistic
limit.

I try to keep all source files under 1K. Programmers have a very
difficult time agreeing on granularity.
 
