Mostly pessimism ;-) Or optimism?
Ah, though perhaps always better employed in the presence of
supporting pessimetrics/optimetrics ;-)
Various implementations represent closures differently,
so in the first place the claim only makes sense as a rule
of thumb.
Agreed. The variations among implementations show up in many
areas when efficiency is measured, and closures are expected
to be no different in that respect. The only requirement is that the
implementation meet the standard.
Conceptually the contents of a closure consists of the body
expression and the values of the free variables at the
time the closure was created.
The body expression is usually represented as a pointer
to the result of compiling the body. How the values of the
free variables are represented differs among implementations.
Some compilers actually have several different ways of
representing closures, and choose between them based on
a static analysis of the program.
Again agreed, although for an interpreted language, the compiler would
not generally expect to be afforded a generous amount of time for
optimizations.
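To make the "body plus free-variable values" picture above concrete, here is a small JavaScript example (the names are my own, purely for illustration):

```javascript
// makeAdder returns a closure whose body is `x + n` and whose
// sole free variable is `n`, captured when the closure is created.
function makeAdder(n) {
  return function (x) { return x + n; };
}

var add5 = makeAdder(5);   // a closure over n = 5
var add9 = makeAdder(9);   // a distinct closure over n = 9

// Each closure retains the value its free variable had
// at the time the closure was created.
console.log(add5(1)); // 6
console.log(add9(1)); // 10
```

Conceptually, `add5` and `add9` share one compiled body but each carries its own binding for the free variable `n`.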
A common strategy of implementing closures is to use
flat closures. When the closure is created the values
of all free variables are stored in the closure.
The advantage of this is that references to free variables
in the body are fast - no need to search the chain of
activation frames. The disadvantage is of course that closure
creation is slowish.
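As a rough sketch of the flat-closure idea (this is not how any real engine represents closures internally; it just mimics the strategy in JavaScript itself, with names of my own invention):

```javascript
// Flat-closure sketch: at creation time, copy the values of all
// free variables into a record carried alongside the body.
function makeFlatClosure(body, freeVars, env) {
  var captured = {};
  for (var i = 0; i < freeVars.length; i++) {
    captured[freeVars[i]] = env[freeVars[i]]; // copied at creation time
  }
  return function () {
    // Free-variable references hit `captured` directly:
    // no searching a chain of activation frames.
    return body(captured, arguments);
  };
}

// Example: the body computes x + y, where y is free.
var env = { y: 10 };
var addY = makeFlatClosure(
  function (captured, args) { return args[0] + captured.y; },
  ['y'],
  env
);
env.y = 99;           // later mutation is not seen by the flat closure
console.log(addY(1)); // 11
```

The copying loop is exactly the creation-time cost mentioned above, and the direct `captured.y` reference is the fast lookup it buys.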
Not knowing when and how a given JavaScript implementation
determines the set of free variables, I figured it was
better to use objects, since I can calculate the free
variables myself at compile time.
But, if I understand correctly, you would be doing that creation under
Javascript interpretation, versus the closure being created through
execution of fast non-interpreted code (or at least faster interpreted
code, if the interpreter was implemented in Java)?
So in short, it is possible that closures might be as
effective as objects, but given the range of implementation
choices it is hard to rely on.
The unreliability of execution efficiency, because of the varying
underlying implementations, isn't restricted to closures, however. So
for any application, execution time can vary significantly (as per the
parser measures I gave earlier).
In the case of closures, though, my quick tests seem to indicate that
closure creation under Opera is very expensive. So if that's
considered an important environment, it may argue against heavy use
of closures in your application. The other issue is that Javascript
versions may change in efficiency as development progresses.
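For what it's worth, the sort of quick test I mean looks something like the following (the iteration count is arbitrary, and the absolute numbers are meaningless across engines; only the ratio between the two timings is of interest):

```javascript
// Crude micro-benchmark: closure creation vs. plain object creation.
function timeIt(label, iterations, fn) {
  var start = new Date().getTime();
  for (var i = 0; i < iterations; i++) fn(i);
  var elapsed = new Date().getTime() - start;
  console.log(label + ': ' + elapsed + ' ms');
  return elapsed;
}

var N = 100000;

timeIt('closure creation', N, function (i) {
  // Create a fresh closure capturing n on every iteration.
  return (function (n) { return function () { return n; }; })(i);
});

timeIt('object creation', N, function (i) {
  // Create a comparable object holding the same value.
  return { n: i, get: function () { return this.n; } };
});
```

Obviously a micro-benchmark like this measures only creation cost, not the cost of variable references afterwards, so it tells at most half the story.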
I am not sure I had grasped exactly how the generalization would work.
My view, in brief, was that the program tree structure (or sub-
structure for closures) could be represented by objects, with local
variable properties, and outer function variable property inheritance.
So variable value fetch would just be the normal chained lookup
(possibly with caching for long lookups?).
Value assignment, however, would have to be different from the
standard Javascript action, since the assignment would need to be done
at the correct level/location, rather than to the immediate local
object.
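A minimal sketch of that scheme, under my own naming assumptions: frames are objects, inner frames inherit from outer ones so reads are ordinary prototype-chain lookup, and assignment walks the chain to find the frame that actually declares the variable rather than shadowing it on the innermost frame.

```javascript
// Create a new frame whose failed lookups fall through to `outer`.
function makeFrame(outer) {
  function F() {}
  if (outer) F.prototype = outer;
  return new F();
}

// Assign at the correct level: find the frame that owns `name`.
function assignVar(frame, name, value) {
  var f = frame;
  while (f !== null) {
    if (f.hasOwnProperty(name)) { f[name] = value; return; }
    f = Object.getPrototypeOf(f);
  }
  throw new ReferenceError(name + ' is not declared');
}

var outer = makeFrame(null);
outer.x = 1;                 // x declared in the outer frame
var inner = makeFrame(outer);
inner.y = 2;                 // y declared in the inner frame

console.log(inner.x);        // 1: read via the prototype chain
assignVar(inner, 'x', 42);   // write lands on the owning frame
console.log(outer.x);        // 42, not shadowed on `inner`
```

A plain `inner.x = 42` would have created a shadowing property on `inner` and left `outer.x` untouched, which is exactly the deviation from standard Javascript assignment described above.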
Not that I've really thought anything through, or tried to. Compiler
writing isn't really my area.