JQuery button problem


David Mark

Asen said:
They often use the word "magic". That is a marketing strategy. There
are many books which explain jQuery, or better to say, those books are
jQuery propaganda. If you open jquery.com you can see a bold
sentence:

| jQuery is designed to change the way that you write JavaScript.

If I am looking for a framework for a specific application and I see
that sentence, I will hit the close button and go away. If someone
wants to change the language, he is free to implement his own language.
If they try to change my style of programming, that means more time for
development of my project. And of course those changes guarantee
nothing. They do not guarantee real improvements and unfortunately they
do not provide any improvements. They only provide excuses for
unreasonable development. Can anybody explain the meaning of:

jQuery.isPlainObject

What should "plain object" mean? In their documentation is written:

| Description: Check to see if an object is a plain object
| (created using "{}" or "new Object").

LOL. That was isObjectLiteral (which is even goofier) until I brought
it up on review.

That is just a native object for which the next object in the prototype
chain is the object referred to by Object.prototype. Why do they need
to recognize the prototype chain of native objects, especially in this
way? What non-generic operation would they need to perform on an Object
instance that requires recognizing this?
Could someone explain these questions?

In short, jQuery (and other similarly incompetent efforts like Dojo)
relies on untenable "overloading" strategies. The classic case is
requiring discrimination between host objects and Object objects. Taken
a step further in the absurd direction, arrays are often tossed into
that mix (hence the "need" to use an "isArray" function). You just
can't do that reliably in JS and there is never any need to try.
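
For reference, their "plain object" test amounts to roughly the sketch
below (an illustration of the concept only, not jQuery's source, and
the host object case is exactly where it falls apart):

function isPlainObject(o) {
  // A native Object whose [[Prototype]] is Object.prototype
  // (i.e. created via {} or new Object). Results for host objects
  // are not guaranteed, which is part of the problem.
  return !!o &&
    Object.prototype.toString.call(o) == "[object Object]" &&
    (typeof Object.getPrototypeOf != "function" ||
      Object.getPrototypeOf(o) === Object.prototype);
}

isPlainObject({});            // true
isPlainObject(new Object());  // true
isPlainObject([]);            // false
isPlainObject(document.body); // false in conforming browsers; see above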

The long answer is that the jQuery authors are often ignorant about the
concepts they attempt to abstract. They don't even understand the
various methods for measuring element dimensions. You won't get far
with an abstraction that favors one form over another as "better". And
of course, attributes are not properties, properties are not attributes
and neither is "better" than the other. You've got people who don't
know what they are doing trying to provide a layer for others who don't
know what they are doing. That's why they constantly change the thing.
Not just little bits, but wholesale design changes that invalidate
previous applications (not to mention browsers). Of course, the biggest
(and most needed) changes can never happen until they tear down the
whole thing and start over with a design that is realistic for JS and
browser scripting. But that will require people with practical JS and
browser scripting experience (not just those who imagine that
contributing to jQuery or Dojo or whatever counts in that column).
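
A quick illustration of that attribute/property distinction (a minimal
example; the element and values here are arbitrary):

var input = document.createElement("input");
input.setAttribute("value", "initial"); // sets the content attribute
input.value = "typed by user";          // sets the DOM property

input.getAttribute("value"); // "initial" (the attribute is unchanged)
input.value;                 // "typed by user"
// Older IE versions conflate the two, which is exactly why a single
// "attr" abstraction that pretends they are the same runs into trouble.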

I recently saw where the Prototype people are proposing a "new
Prototype". I'm not even sure what that means (blobs of browser
sniffing JS do not a brand make). But if you've got architects who
created a condemned structure, requiring it to be torn down and rebuilt
from scratch, would you really bring them back for an encore? The same
goes for Dojo, YUI, extJS and other diaries of mad browser sniffers.
They had their shot and they muffed it. Nobody needs an old or new
Dojo. Nobody but Yahoo ever needed a YUI and sure as hell you don't
need to pay some slob from "Firebug University" for their bloated widgets.
 

Garrett Smith

David said:
Asen said:
Garrett Smith wrote:
[...]

I recently saw where the Prototype people are proposing a "new
Prototype". I'm not even sure what that means (blobs of browser
sniffing JS do not a brand make). But if you've got architects who
created a condemned structure, requiring it to be torn down and rebuilt
from scratch, would you really bring them back for an encore?

They may have learned from mistakes. There are existing codebases using
prototype. There may be an existing demand and in that case, there is
money to be made.

The same
goes for Dojo, YUI, extJS and other diaries of mad browser sniffers.
They had their shot and they muffed it. Nobody needs an old or new
Dojo. Nobody but Yahoo ever needed a YUI and sure as hell you don't

YUI Test was the most practical and comprehensive javascript unit
testing framework that I have been able to find so far.
 

Garrett Smith

RobG said:
RobG wrote: [...]
where - hasOwnProperty - is a global variable that is a reference to
Object.prototype.hasOwnProperty in Firefox but in IE 6 returns
undefined.
No, `hasOwnProperty` is an identifier in the containing scope. It has the
value of `Object.prototype.hasOwnProperty`:

| // Save a reference to some core methods
| toString = Object.prototype.toString,
| hasOwnProperty = Object.prototype.hasOwnProperty,

My error, those lines aren't indented and I missed the opening
(function... bit, didn't think they'd be *that* sloppy.
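
(For anyone following along, the construct in question, stripped down,
looks something like this; a sketch, not the verbatim jQuery source:)

(function() {
  // Save references to some core methods. Inside this function,
  // hasOwnProperty is a local identifier bound to the saved method,
  // not a global variable and not a lookup on any particular object.
  var toString = Object.prototype.toString,
      hasOwnProperty = Object.prototype.hasOwnProperty;

  function hasOwn(obj, name) {
    return hasOwnProperty.call(obj, name);
  }

  // ... rest of the library ...
})();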

You've not been watching their checkins on github.

You're free to watch them and comment on them.

I comment rarely, though they don't seem so appreciative.

Owning a mistake sucks but you lose respect when you deny it. And
believe me, they did.

If you mention a mistake, and the person fixes it, would you rather hear
them complain about you, or thank you for it?

Reminds me of a recent post where I made notice of a strange construct
in the code. The author retorted with an unsupported and unrelated
negative remark about me.

[...]
Another solution for checking hasOwnProperty is to check, where
supported, the `__proto__` property.

More widely supported is - obj.propertyIsEnumerable - which doesn't go
down the [[prototype]] chain. However, it looks a bit weird doing:

for (var p in obj) {
  if (obj.propertyIsEnumerable(p)) {
    // ...
  }
}

when the value of p should always be enumerable, but the test will
return false if p is on obj's [[prototype]] chain.

That's only viable where `hasOwnProperty` is unavailable and
`propertyIsEnumerable` is available.
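
A sketch of combining the two (the helper names are mine and purely
illustrative):

// Prefer hasOwnProperty; fall back to propertyIsEnumerable only
// where hasOwnProperty is missing (as in some very old browsers).
var hasOwn = Object.prototype.hasOwnProperty ?
  function(obj, name) {
    return Object.prototype.hasOwnProperty.call(obj, name);
  } :
  function(obj, name) {
    return obj.propertyIsEnumerable(name);
  };

function forOwn(obj, callback) {
  for (var p in obj) {
    if (hasOwn(obj, p)) {
      callback(p, obj[p]);
    }
  }
}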
 

David Mark

Garrett said:
David said:
Asen said:
Garrett Smith wrote:
[...]

I recently saw where the Prototype people are proposing a "new
Prototype". I'm not even sure what that means (blobs of browser
sniffing JS do not a brand make). But if you've got architects who
created a condemned structure, requiring it to be torn down and rebuilt
from scratch, would you really bring them back for an encore?

They may have learned from mistakes.

That would signal an about face from previous behavior. Personally, I
wouldn't put a penny on them.
There are existing codebases using
prototype.

So what?
There may be an existing demand and in that case, there is
money to be made.

Existing demand for what? Another script called "Prototype?" I can't
imagine.
The same [...]

YUI Test was the most practical and comprehensive javascript unit
testing framework that I have been able to find so far.

So they wrote a crappy framework, but a splendid testbed. Whatever. I
like the YUI compressor too. Doesn't change my opinion of the YUI
scripts, style sheets, widgets, etc. It's all garbage.
 

David Mark

Garrett said:
RobG said:
RobG wrote: [...]
where - hasOwnProperty - is a global variable that is a reference to
Object.prototype.hasOwnProperty in Firefox but in IE 6 returns
undefined.
No, `hasOwnProperty` is an identifier in the containing scope. It has the
value of `Object.prototype.hasOwnProperty`:

| // Save a reference to some core methods
| toString = Object.prototype.toString,
| hasOwnProperty = Object.prototype.hasOwnProperty,

My error, those lines aren't indented and I missed the opening
(function... bit, didn't think they'd be *that* sloppy.

You've not been watching their checkins on github.

Why, when somewhere in the world there is paint drying?
You're free to watch them and comment on them.

Don't need to watch them to comment. They suck. That hasn't changed in
years.
I comment rarely, though they don't seem so appreciative.

Yes, petulant too. Deluded neophytes don't like you messing with their
fantasies.
Owning a mistake sucks but you lose respect when you deny it. And
believe me, they did.

They probably didn't understand the mistake they made. They just patch
things until they seem to work in their "core" browsers. Give it up.
If you mention a mistake, and the person fixes it, would you rather hear
them complain about you, or thank you for it?

By and large, I'd rather hear them complain. Especially about me.
Reminds me of a recent post where I made notice of a strange construct
in the code. The author retorted with an unsupported and unrelated
negative remark about me.

Yes, well that is a shame.
 

David Mark

Stefan said:
I use the YUI compressor a lot. It's completely independent of the YUI
JS library, and very stable.

Absolutely. I set it up ages ago and it has been a completely
transparent part of my build process since. I've always recommended it,
despite the marketing tie-in with the execrable YUI framework.
I've never had any problems with it, but
I've heard of people who used other compressors, like the Google Closure
compiler, and had their scripts trashed with the default settings. The
Closure compiler may be a better compressor, but as long as the defaults
aren't safe, I won't use it.

That's a good idea. There are a lot of things wrong with the
GoogClosure. The name (another language feature), calling their
compressor a "compiler" and the rotting Dojo code at the core come to
mind. All should be avoided (and thankfully seem to be). One need look
no further than Google's own pages to confirm that they are clueless
when it comes to cross-browser scripting (for one, they _all_ throw
exceptions in Opera, even the latest version).
 

Garrett Smith

Stefan said:
I use the YUI compressor a lot. It's completely independent of the YUI
JS library, and very stable. I've never had any problems with it, but
I've heard of people who used other compressors, like the Google Closure
compiler, and had their scripts trashed with the default settings. The
Closure compiler may be a better compressor, but as long as the defaults
aren't safe, I won't use it.

I first learned of Closure Compiler by searching Google for a scope
analysis tool. When I found Closure Inspector, I found that it had been
developed for Google Closure, and then investigated that. Although it
was not what I was searching for, it caught my interest.

I realized that there was really great potential in such a tool and I
continued to evaluate it, using it to build and minify the source code
for my entire library.

I eventually realized that there were indeed some shortcomings, some of
those being pointed out to me, and so switched back to YUI Compressor.

I submitted my comments and had a very brief discussion with a few on
the team.

<http://groups.google.com/group/closure-compiler-discuss/browse_thread/thread/b6cb9be9e8cd4e2c>

The potential for such a tool is to provide lint information, code
optimizations, and even generated documentation.

The source code is in Java, and so you may enjoy perusing it, as you
also program in Java.
 

David Mark

Stefan said:
...
I eventually realized that there were indeed some shortcomings, some of
those being pointed out to me, and so switched back to YUI Compressor.

I submitted my comments and had a very brief discussion with a few on
the team.

<http://groups.google.com/group/closure-compiler-discuss/browse_thread/thread/b6cb9be9e8cd4e2c>

Choice quotes from that thread:

| You seem to be expecting Closure-Compiler to be a general,
| conservative, code-safe compressor. It is not (nor do I believe it was
| ever intended to be).

| If you don't want to change or annotate your code, then Closure-
| Compiler with advanced optimizations is the wrong compressor for you.

That's exactly the point. What I like about the YUI compressor is that
it works in a very predictable, conservative way. You don't have to
think about it while you code (except to make identifier shortening
easier), and it will just work as expected. The Closure compiler, on the
other hand, requires you to code "for the compiler" - you have to
annotate code and export top-level symbols, for example (as suggested in
the documentation):

window["x"] = x; // where x is a top-level function

LOL. You have to augment a host object to make the thing work?
Without the advanced optimizations, its level of compression is more or
less comparable with the YUI compressor's.

I can understand why Google felt they needed something like this, and
how it can be useful for large projects which require the shortest
possible output.

But such projects would surely not be written with Google's Closure
library. Only Google can get away with such inconceivably shoddy
craftsmanship.
To me, it felt too much like adding a (very young,
unstable, and complex) dependency to our code, with the distinct
possibility of introducing new and hard-to-find bugs.

Sounds like a lot of JS libraries as well (except for the young part).
That's an interesting point. I can see how using it as a validator could
be helpful, and a tool which combines validation, optimization, and doc
generation makes sense.

I don't see Google as capable of producing a tool to optimize large
chunks of Javascript. Seemingly all of their sites throw exceptions in
the latest Opera and they aren't doing anything special with JS (from
what I can see). It's like I asked the Dojo people with regard to their
ludicrous loaders: who needs such a thing? Where are these huge,
envelope-pushing JS apps that require such tools? All I see out there
(and behind firewalls as well) are rickety structures stacked atop
Prototype, jQuery or whatever. You sure can't optimize those. All you
can do is put up with them until they teeter over and then (hopefully)
learn from the mistakes.

As for reducing the byte count, I have found the YUI compressor to be
more than sufficient and reliable. Of course, other than for mobile
devices, it doesn't really matter at all as most clients and servers can
use GZIP (and modems use compression as well). And, also of course,
virtually nothing built on top of the "major" libraries is worth
downloading to a mobile device. JS seems to be all dressed up with
nowhere to go.
 

Garrett Smith

Stefan said:
Stefan Weiss wrote:
[...]
window["x"] = x; // where x is a top-level function

Without the advanced optimizations, its level of compression is more or
less comparable with the YUI compressor's.

CC could avoid all the problems with removing the wrong code, such as
where the developer forgot to export something. It could optimize away
functions that are not called and could inline more function calls.

In order to be able to do that, they'd need to build a scope tree. What
they have now is not that. Look at the source -- they add identifiers to
the containing scope, and if you read the Java source code comment
above Scope, it describes something quite different from scope in
ECMAScript.

The source code for Google's javascript typically relies very heavily on
error correction and interpretation of syntax errors, where a
FunctionDeclaration production appears in places where only Statements
are allowed.

They do this, ironically, all wrapped up in a try/catch.

try {
  function n(){ /* ... */ }
  // ... hundreds more lines of code
} catch (ex) {
  // ...
}

And the problems with that are so well known and have been explained on
this NG in such great detail that I am not going to do it again.
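
For the record, the conforming way to get the same effect, without
relying on that extension, is a function expression assigned to a
variable; a minimal sketch:

var n;
try {
  // A FunctionExpression assigned to a variable is valid wherever a
  // Statement is allowed, unlike a FunctionDeclaration inside a block.
  n = function() { /* ... */ };
  // ... hundreds more lines of code ...
} catch (ex) {
  // handle or report the error
}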

Here is one:
I can understand why Google felt they needed something like this, and
how it can be useful for large projects which require the shortest
possible output. To me, it felt too much like adding a (very young,
unstable, and complex) dependency to our code, with the distinct
possibility of introducing new and hard-to-find bugs.

That might be good for job security.

They would start off with syntactically invalid code and then handle
that by adding the identifier to the containing scope. This approach
greatly overcomplicates the situation.

Not only that, it precludes the biggest advantages of such tools. The
primary goal of that tool is to make the code smaller.
How can you have a lint tool where the source code is invalid? It defies
any sensibility.


Give closure compiler invalid syntax and it tries to handle it:
http://closure-compiler.appspot.com/home

Input:

try {
  function x(){}
  window.external.addFavorite
  y = function(){ /* can use addFavorite */ }
} catch(ex) {
  alert(ex);
}
y();


Output:
var x=function(){},y=function(){};

Warnings (1)
JSC_USELESS_CODE: Suspicious code. This code lacks side-effects. Is
there a bug? at line 3 character 0
window.external.addFavorite;
^

This is a completely different program. In the input, y is assigned only
if evaluating window.external.addFavorite does not throw, and the catch
alerts when it does; in the output, the guard is gone, y is
unconditionally a function, and the alert disappears entirely.

The input has invalid syntax, having what appears to be a
FunctionDeclaration where only Statements are allowed. *That* should be
an ERROR level warning. Any time Rhino parses something like that, it
should generate an error.

It is well known that implementations handle this with
implementation-dependent behavior, employing proprietary syntax extensions.

As stated in that thread, Closure Compiler doesn't seem to follow a
sensible strategy. The strategy employed is shown in part by what Nick
Santos led me to and is demonstrated in part by the examples there and
here. It does not seem to be a sensible strategy.

The tool is coupled to the javascript code it is run on. Both rely on
syntax extensions. The source code for the Closure javascript library
has comments that describe "execution context" as something other than
what execution context means in standard terminology, Google's source
code uses nonstandard syntax, and Google's tool that parses their
javascript code -- Closure Compiler -- builds scope in a nonstandard
way.

It does not seem sensible to me.

With an elaborate setup that relies on invalid syntax, one coming into
the project with a bug to fix might find the task of figuring out why it
doesn't work a bit overwhelming.
That's an interesting point. I can see how using it as a validator could
be helpful, and a tool which combines validation, optimization, and doc
generation makes sense.

The first thing you need is a good parser; one that can build a scope
tree. From there, you can perform code lint checking, you can build a
doc tool (because it knows what is global). You can count references and
can get fancy with code rewriting and optimizations. As a developer, I
would like to have all three, starting with code lint validation.

That would be nice to have.
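
As a rough illustration of what such a scope tree might look like
(purely a sketch, not any existing tool's representation):

// One node per global/function scope; children for nested scopes.
function Scope(parent) {
  this.parent = parent;       // enclosing scope, or null for global
  this.declarations = {};     // identifiers declared in this scope
  this.references = [];       // identifiers used in this scope
  this.children = [];         // nested function scopes
}
Scope.prototype.declare = function(name) {
  this.declarations[name] = true;
};
Scope.prototype.resolve = function(name) {
  // Walk outward to find the declaring scope, the way identifier
  // resolution works along the scope chain.
  for (var s = this; s; s = s.parent) {
    if (Object.prototype.hasOwnProperty.call(s.declarations, name)) {
      return s;
    }
  }
  return null; // undeclared: a lint tool could flag this as a global leak
};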
 

David Mark

Garrett said:
Stefan said:
Stefan Weiss wrote:
[...]
window["x"] = x; // where x is a top-level function

Without the advanced optimizations, its level of compression is more or
less comparable with the YUI compressor's.

CC could avoid all the problems with removing the wrong code, such as
where the developer forgot to export something. It could optimize away
functions that are not called and could inline more function calls.

In order to be able to do that, they'd need to build a scope tree. What
they have now is not that. Look at the source -- they add identifiers to
the containing scope, and if you read the Java source code comment
above Scope, it describes something quite different from scope in
ECMAScript.

The source code for Google's javascript typically relies very heavily on
error correction and interpretation of syntax errors, where a
FunctionDeclaration production appears in places where only Statements
are allowed.

They do this, ironically, all wrapped up in a try/catch.

try {
  function n(){ /* ... */ }
  // ... hundreds more lines of code
} catch (ex) {
  // ...
}

And the problems with that are so well known and have been explained on
this NG in such great detail that I am not going to do it again.

Here is one:
I can understand why Google felt they needed something like this, and
how it can be useful for large projects which require the shortest
possible output. To me, it felt too much like adding a (very young,
unstable, and complex) dependency to our code, with the distinct
possibility of introducing new and hard-to-find bugs.

That might be good for job security.

They would start off with syntactically invalid code and then handle
that by adding the identifier to the containing scope. This approach
greatly overcomplicates the situation.

Not only that, it precludes the biggest advantages of such tools. The
primary goal of that tool is to make the code smaller.
How can you have a lint tool where the source code is invalid? It defies
any sensibility.


Give closure compiler invalid syntax and it tries to handle it:
http://closure-compiler.appspot.com/home

Input:

try {
  function x(){}
  window.external.addFavorite
  y = function(){ /* can use addFavorite */ }
} catch(ex) {
  alert(ex);
}
y();


Output:
var x=function(){},y=function(){};

Warnings (1)
JSC_USELESS_CODE: Suspicious code. This code lacks side-effects. Is
there a bug? at line 3 character 0
window.external.addFavorite;
^

This is a completely different program.

Yes. So not only is the library a dead issue, but this "compiler" thing
plays like a practical joke on programmers. Google's own futility with
JS on the Web makes a hell of a lot of sense after you see their tools.
What were they thinking releasing this crap? And thank God nobody is
using it (My Forum gets more posts than theirs). It's basically the
next Dojo (another huge, complex, thoughtless bunch of JS that people
get caught up with because it has prefabricated widgets). Fitting as
the library is basically a fork of Dojo. One of the "authors" swore to
me that it wasn't but he was just playing with semantics. It's Dojo (I
should know). ;)
The input has invalid syntax, having what appears to be a
FunctionDeclaration where only Statements are allowed. *That* should be
an ERROR level warning. Any time Rhino parses something like that, it
should generate an error.

I'm reminded of a Monty Python sketch involving a very bad English to
Hungarian phrase book, which was clearly written by somebody poorly
versed in at least one of those languages. IIRC, the guilty party
pleaded incompetence.
It is well known that implementations handle this with
implementation-dependent behavior, employing proprietary syntax extensions.

As stated in that thread, Closure Compiler doesn't seem to follow a
sensible strategy. The strategy employed is shown in part by what Nick
Santos led me to and is demonstrated in part by the examples there and
here. It does not seem to be a sensible strategy.

It surely isn't. Nor would it be a sensible strategy to rely on "tools"
like these. If you want giant blobs of incompetent JS and a tool that
is sure to silently change their logic during the critical transition
from development to deployment, why not use Dojo and get it over with?
By "it", I mean the developer or consultant's reputation (and possibly
career if the project is notorious enough).

Did I mention that you should never use Dojo under any circumstances?
If you see that slob pitchman/founder (or his little friend) coming,
ready the boiling oil. He's an incompetent idiot (masquerading as a
"pragmatist") who just wants your money. Everyone involved with that
code knows deep down that it is a lost cause. I already told them (and
I should sure as hell know). Building anything with Dojo is flushing
money down the toilet. Upgrading anything built with older Dojo
versions is throwing good money after bad. And don't forget that
Closure (the library) _is_ an older version of Dojo (with a few futile
flourishes tacked on). I've talked extensively with the people
twiddling with the Closure (old Dojo) library. They haven't got a clue
what they are doing. The whole thing is based on UA sniffing and they
see nothing wrong with that. In 2010. After all of their problems with
JS over the years. They are oblivious to their own mistakes and
therefore doomed to repeat them.
The tool is coupled to the javascript code it is run on. Both rely on
syntax extensions. The source code for the Closure javascript library
has comments that describe "execution context" as something other than
what execution context means in standard terminology, Google's source
code uses nonstandard syntax, and Google's tool that parses their
javascript code -- Closure Compiler -- builds scope in a nonstandard
way.

Yes, forget the library. Like YUI, jQuery, etc. they don't seem to know
what scope means either. Though I think jQuery finally changed theirs
to "context" (not related to execution context I hope). I wonder where
they could have gotten that idea? It sounds like a familiar naming
convention.
It does not seem sensible to me.

It isn't. It reminds me of CSS selector query engines that stack QSA
calls on top of code that can do maybe half of what QSA can do. So
their stuff "works" (by doing almost nothing) in the very latest
browsers and is a coin flip in anything else. But I guess if it is good
enough for Ninjas...
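
The pattern in question, in outline (an illustration only; the fallback
engine named here is hypothetical):

function query(selector, root) {
  root = root || document;
  if (root.querySelectorAll) {
    // The newest browsers take this branch and appear to "work".
    return root.querySelectorAll(selector);
  }
  // Everything else falls through to a hand-rolled engine that
  // handles only a subset of selectors, so results vary by browser.
  return fallbackSelectorEngine(selector, root); // hypothetical fallback
}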
With an elaborate setup that relies on invalid syntax, one coming into
the project with a bug to fix might find the task of figuring out why it
doesn't work a bit overwhelming.

No question. Gives me a headache just thinking about it. Thank God I
can stop thinking about it after I hit send. Build something more than
"Hello World" with Closure or Dojo and you've got a permanent migraine.
The first thing you need is a good parser; one that can build a scope
tree.

JSLint's seems to do the trick. I don't see any need to combine it with
an automatic optimization tool. I simply avoid huge, interdependent GP
libraries in the first place. There are no uncalled functions in my
production apps as I don't put any in. As for docs, I still do them
manually for the most part. There's only so much that an automated tool
can document.
 

Garrett Smith

RobG said:
RobG wrote:
[...]
more widely supported is - obj.propertyIsEnumerable - which doesn't go
down the [[prototype]] chain. However, it looks a bit weird doing:
Safari 2 does not support Object.prototype.propertyIsEnumerable, but
Safari 2.0.4 added Object.prototype.hasOwnProperty.
 
