Richard Cornford
Peter said: On Jul 20, 5:40 am, Richard Cornford wrote:
[snip]
(but any of those clunky 'bind' methods so beloved of library
authors would do just as well as an example).
Partial application is a common technique in lambda languages
and not looked down upon when used appropriately.
True, and it is a very useful facility to have. But it is only very
occasionally a useful facility to have.
The implementation in JavaScript is not as aesthetic as in some
other languages, partly due to the "this" issue in JavaScript;
however, conceptually the binding of some parameters to one
function and producing another function taking no or fewer
parameters is the same. I don't understand why you would
apparently balk at this concept. The use of a "bind" function
is not clunky, in my opinion.
Take the last version of Prototype.js that I looked at (1.6.0.2).
Internally there are 28 calls to its - bind - method and precisely zero
of those pass more than one argument to the method. The method's code
is:-
| bind: function() {
|     if (arguments.length < 2 && Object.isUndefined(arguments[0]))
|         return this;
|     var __method = this, args = $A(arguments), object = args.shift();
|     return function() {
|         return __method.apply(object, args.concat($A(arguments)));
|     }
| },
- so each of those calls to - bind - tests the length of - arguments -,
finds that it is 1, and so calls - Object.isUndefined -. If the first
argument is not undefined (and in 9 of those calls the argument passed
is - this -, which can never be undefined by definition) it goes on to
call - $A - in order to make an array of the arguments object; it then
renders that array empty by - shift -ing the one argument out of it, and
later it concatenates that empty array onto the array resulting from
another call to - $A -. The availability of a more efficient alternative
seems like a good idea, even if that alternative were only made
available internally. That alternative could be as simple as:-
simpleBind: function(obj) {
    var fnc = this;
    return (function() {
        return fnc.apply(obj, arguments);
    });
}
- and still fully satisfy all of the internal uses of bind that appear
in Prototype.js.
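For example (a sketch only; - simpleBind - is my name for the
hypothetical method, not part of Prototype.js's API), each of the
internal calls of the form - fn.bind(this) - could become:-

var listener = someFunction.simpleBind(this);

- with the resulting function calling - someFunction - in the given
context and passing along whatever arguments it receives, which is all
that any of those 28 internal uses require.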
But it is the very fact that Prototype.js uses - bind - so much and yet
in using it never exploits its full capabilities that demonstrates that
the need for all of those capabilities is not that great. Even in code
as general as Prototype.js the ratio of common cases that don't need
'outer' arguments to cases that do is at least 28 to 1. It might be
argued that the real need is so infrequent that it should be handled
externally by the code that needs it. Or that a two-stage process,
where any 'outer' set of arguments is first associated with one
function object and it is then that function that is passed into a much
simplified - bind - method, would provide a better interface. Something
like:-
simpleBind: function(obj) {
    var fnc = this;
    return (function() {
        return fnc.apply(obj, arguments);
    });
},
complexBind: function(obj) {
    var fnc = this;
    var args = $A(arguments);
    args.shift();
    return (function() {
        return fnc.apply(this, args.concat($A(arguments)));
    }.simpleBind(obj));
}
(untested) - where the common case enters at one point and the much less
common case enters at another. That uncommon case suffers for its
increased demands but usually, and certainly in Prototype.js's internal
use, the benefit outweighs the losses. And if a particular application
can be seen to employ the (normally) uncommon case extensively it can
employ the original process. Except that it cannot, because it is in the
nature of these large-scale, internally interdependent, general-purpose
libraries that people using them cannot play with the API safely
(maintenance and upgrade headaches follow from the attempt) and the
libraries themselves cannot change their external API without breaking
existing code that uses them.
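As a sketch of how the two entry points would be used (untested, and
the method names are mine):-

// Common case: associate a context object only.
var handler = update.simpleBind(widget);

// Uncommon case: associate a context object and 'outer' arguments.
// Only the code that actually needs partial application pays for
// the extra array handling in - complexBind -.
var presetHandler = update.complexBind(widget, 'preset', 42);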
So, yes, 'clunky'. Certainly far short of being well-thought-out or
elegant. Indeed, so far short that the only way out is to petition for a
new - bind - method in the new language versions so that faster native
code can rescue the authors from the consequences of their original
designs.
[snip]
fn.call( el, window.event);
}
el.attachEvent( 'on'+type, el[type+fn])
}
else
el[ 'on'+type] = fn
If this branch is ever good enough it is also always good
enough.
Functionally yes. I think, in this case, the third branch is
unnecessary even if implemented so all three branches have
the same behavior. If someone did think the third branch was
necessary then the first two branches (using addEventListener
or attachEvent) could be justified as performance boosts.
Which assumes that there would be a performance boost. Probably the
calling of event listeners is so fast that measuring a difference would
be difficult. But the extra function call overhead in the -
attachEvent - branch is unlikely to help in that regard.
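That is (an assumed shape, following the quoted fragment above), the -
attachEvent - branch interposes a wrapper, so every dispatch costs one
extra function call:-

if (el.addEventListener) {
    el.addEventListener(type, fn, false); // - fn - called directly
} else if (el.attachEvent) {
    el.attachEvent('on' + type, function() {
        fn.call(el, window.event); // wrapper first, then - fn -
    });
}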
A large portion of the remainder of your message below is
related to keeping code small which is really just for a
performance boost.
Not just: size, and the internal complexity that manifests itself in
size, have consequences for understandability and so for
maintainability.
Do you happen to know of a way to detect the Safari versions
which do not honour calls to preventDefault for click or
double click events when the listener was attached using
addEventListener?
No, it has not yet been an issue for me. But Safari browsers expose many
'odd' properties in their DOMs so I would bet that an object inference
test could be devised even if a direct feature test could not, and the
result would still be far superior to the UA sniffing that seems to go
on at present.
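The shape of such a test would be something like (a sketch only; -
someOddProperty - is a placeholder, as the actual distinguishing
property would have to be found by examining the DOMs of the affected
Safari versions):-

// Object inference: true only when the environment exposes the
// 'odd' property peculiar to the affected versions.
var isAffectedSafari =
            (typeof document.someOddProperty != 'undefined');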
There is a "legacy" workaround using onclick and ondblclick
properties of elements but these are a bit ugly and have some
drawbacks which need documentation. At this point, since those
versions of Safari have been automatically upgraded, I'd rather
put those versions of Safari down the degradation path as though
they didn't have event models at all. I just don't know how to
detect these versions of Safari.
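For reference, that "legacy" workaround looks something like (a sketch;
the handler name is mine):-

el.onclick = function(evt) {
    evt = evt || window.event;
    handleClick(evt); // the real listener (name assumed)
    return false;     // cancels the default action even where
                      // preventDefault is not honoured
};

- with the documented drawbacks including that only one listener per
element per event type can be attached this way.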
[snip]
Recently I have been thinking about how to express what it is about
the attempt to be general that tends to result in code that is bloated
and inefficient.
[snip interesting thoughts]
You and Matt Kruse have faced off many times about this whole
"general" library business. I don't quite see the fuss.
Talking (even arguing) about them is a better way of testing the
veracity of ideas than not.
Matt is being a bit extreme by suggesting that the general
position reporting function should be written
Matt's position has tended to be that someone else (someone other than
him) should write it.
even though you have stated it would be 2000+ statements and
far too slow to be practical. Your multiple implementations
approach seems more appropriate in this situation.
In messages like this one, you on the other hand seem to
eschew things "general" (though I don't think you do so 100%).
Take, for example, the scroll reporting code you wrote in the
FAQ notes
http://www.jibbering.com/faq/faq_notes/not_browser_detect.html#bdScroll
I consider that level of "multi-browserness" sufficiently "general".
That is, I would be comfortable using this type of code on the
unrestricted web.
But it is not general. The only dimension in which it is general is the
browser support dimension (though it should not be too bad in the
programmer understanding dimension). It does not even cover the general
case of wanting to find the degree to which a document has been scrolled
because it has no facility for reporting on any document other than the
one containing the SCRIPT element that contained/loaded its code. Add
support for multi-frame object models and you have something else again.
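A version generalised in that dimension would need to take the document
as a parameter, something like (a sketch, not the FAQ code):-

function getScrollOffsets(doc) {
    doc = doc || document;
    var win = doc.defaultView || doc.parentWindow;
    if (win && typeof win.pageXOffset == 'number') {
        return [win.pageXOffset, win.pageYOffset];
    }
    var root = (doc.compatMode == 'CSS1Compat') ?
                                   doc.documentElement : doc.body;
    return [root.scrollLeft, root.scrollTop];
}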
When you write that an "attempt to be general that tends to results
in code that bloated and inefficient" I think it is worth defining
where you draw the line between bloated and non-bloated code and
efficient and inefficient code.
I am not drawing a line. I am expressing the direction of movement that
results from a cause-and-effect relationship. Understanding the
relationship allows people to draw their own lines at the points which
suit their situations/contexts/applications.
If a "general" event library could be written in 10 lines would
that be acceptable to use in cases where it is more general than
necessary?
Certainly if performance followed size.
What if it was 20 lines? 50 lines? 200 lines? 10000 lines? The
absolute size of the general code does matter to some extent.
Imagine the 200 line version's only drawback was download time
and initial interpretation, with no other runtime penalties.
If that code was already written, would it be worth using in
situations where code of only 50% the size could be used, given that
the smaller code is not already written? Writing 100 lines of event
library code is probably not trivial and requires heavy testing.
I would use the 200 line version as it is ready, tested,
cacheable, not really a download burden for the majority of
today's network connections (even most mobile networks).
In that hypothetical situation, I probably would use the code as well.
I think that the extreme positions for and against "general"
are both faulty.
You are not seeing the question of how 'general' is "general". An event
library, no matter how large/capable is not 'general' in every sense. It
should have no element retrieval methods, no retrieval using CSS
selectors, no built-in GUI widgets, etc. It is (potentially) a task
specific component, even if it comprehensively addresses its task.
When general code has acceptable performance, then creating
and maintaining one version is the winner.
Given the number of dimensions to 'general' it has not yet been
demonstrated that truly general code can be created so any "when general
code ..." statement is irrelevant. Certainly if you can get away with a
single implementation of a task-specific component then there is no need
for an alternative.
I think an event library falls into this category.
Of a component that only needs one good implementation? I doubt that;
in Aaron's code to date we have not even discussed the possibilities
opened up by having multiple frames (where normalising the event to -
window.event - is not necessarily the right thing to do). I would not
like the idea of importing a large-ish single-document component into
every individual frame when a slightly larger multi-frame system could
do the whole job, or of using that larger multi-frame version in a
single-page application.
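For example (a sketch; the helper is mine), a listener attached to an
element in another frame should read - event - from the window
containing that element, not from the attaching code's own global:-

function getWindowOf(el) {
    var doc = el.ownerDocument || el.document;
    return doc.parentWindow || doc.defaultView;
}

// ... so an - attachEvent - wrapper would use:
//     fn.call(el, getWindowOf(el).event);
// rather than normalising to the attaching frame's - window.event -.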
When the general code is unacceptably slow then more optimized
versions must be written. The position reporting
problem falls into this category.
But the position reporting problem also demonstrates that the fact that
actual applications of javascript (web sites/applications, etc) are
specific can be exploited to facilitate problem solving. That is,
because at least some of the possible permutations of applications will
not be present in any specific application those excluded permutations
do not need to be handled by code that will never see them. It is not
just speed; there is also the possibility of getting to cut the Gordian
knot rather than untying it. There is also an advantage in the speed
with which re-usable code can be created, because you do not have to
spend time writing code for situations that you know will not apply. And
there is also a testing advantage, because excluded permutations do not
need to be tested.
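For example (a sketch; the exclusions are assumed), if a specific
application is known to contain no scrolled containers, borders or
other complications in its offset chains, then position reporting
collapses to a short loop instead of the general 2000+ statement
monster:-

function getPageX(el) {
    var x = 0;
    while (el) {
        x += el.offsetLeft;
        el = el.offsetParent;
    }
    return x;
}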
"Premature optimization is the root of all evil" seems to apply
and suggests the strategy of using general code until there is push
back from some human (possibly the programmer, testers, customers)
that the code really is too slow. Then, and only then, fall back
to the multiple-implementations strategy which is, in many regards,
an optimization strategy.
It is not just optimisation. It is not solving problems that are
currently not your problems, it is not spending time writing code that
is not needed, and it is not spending time testing code that will not be
executed in the context of its use.
Richard.