Using "new function() {...}" as a Singleton

David Mark

Garrett said:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
[...]


But jQuery might be selecting that element, depending on the browser,
its version, the rendering mode, and other CSS rules. The article says
that jQuery finds the elements whose "width is 600px".
They have no idea what they are doing.  That's not news.  But I checked
their latest and they don't even call their own "attr" function in the
query portion, so the height/width thing I was referring to does not
apply.  Apparently you meant that their docs were wrong.  That's not
news either.

Apparently I was referring to an article that the jQuery team tweeted
about. I've stated that several times now. Get with the picture.


That's why you should use my patented avoidance technique.
http://www.cinsoft.net/size.html

That one's obviously no good. If you'd tested it you would probably have
realized that.

Don't be silly. It's quite good and I did test it. No surprises at
all. That doesn't mean it is perfect for every context, but I covered
a couple of common use cases, as well as some rare ones. IIRC, I ran
it through the usual gamut of IE5-8 (quirks and standards modes),
Opera 6-10, FF1-3.6, etc., getting the width/height and then setting
it to make sure it didn't warp the elements. If you managed to
stumble on to something I missed, I would be quite surprised.

It's a simplified version of the same basic logic I've been using for
years. Beats the hell out of relying (solely) on computed styles.
Usually I do some testing to determine if computed styles can be
trusted and fall back to setting, measuring offset*, adjusting and
reseting. I've tried to explain these concepts to you many times, but
you always start reading aloud from the specs.
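
For illustration, a rough sketch of that set/measure/adjust idea (the
names here are illustrative, not the actual My Library code; it assumes
a positioned, non-static element):

function getPixelLeft(el) {
    // Measure, set the measurement back, re-measure; any constant
    // quirk in offsetLeft shows up in both readings and cancels out.
    var measured = el.offsetLeft;
    var oldLeft = el.style.left;
    el.style.left = measured + 'px';
    var error = el.offsetLeft - measured;
    el.style.left = oldLeft; // reset so the element is not disturbed
    return measured - error; // a value that can be set without movement
}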
Where are the unit tests?

That's all you ever say. Where is your understanding of the basic
logic? IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions. At the time,
you seemed to be the only one who didn't get it.
So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does.
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense.  Don't rely on these silly query engines (or
their documentation).
http://www.cinsoft.net/slickspeed.html

Pass on that.

Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.
I think you are a bit confused about how jQuery's Sizzle thing works.

[...]

Are you bluffing?
No.


What jQuery does for attribute selectors is inconsistent with the CSS
2.1-based Selectors API (draft).

I know. I've pointed that out repeatedly over the years. And it
wasn't too long ago that you tried to dismiss such talk as "anti-
jQuery propaganda". So what now, you are on the bandwagon? Great.
jQuery results are inconsistent and
depend on the element, its applied style values, the browser, its
version, the rendering mode, whether or not attributes are quoted,
whether or not the "context" param is present.

Yes, we've been over this ad nauseam. Much of it is illustrated by my
test page (the one you "passed" on).
jQuery wraps QSA in a try-catch that will often, depending on the code,
result in the catch block being entered. At that point the hand-rolled
query engine is used.

No kidding. They should never have dropped QSA on top of their old
nonsense. The two are obviously wildly different. Also demonstrated
on my test page.
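
The pattern, roughly sketched (not jQuery's actual source;
handRolledEngine stands in for Sizzle's non-QSA path):

function query(selector, root) {
    root = root || document;
    if (root.querySelectorAll) {
        try {
            return root.querySelectorAll(selector);
        } catch (e) {
            // Selector text that QSA rejects falls through to the
            // old engine, which may accept it -- so the two paths
            // can disagree for the same input.
        }
    }
    return handRolledEngine(selector, root); // hypothetical fallback
}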
Throwing errors is a performance hit, and one
which I would certainly want to avoid; however, considering the other
inefficiencies in jQuery, I could see how this could seem insignificant
to the author(s).

It's the least of their problems.
The author of that article stated that the attribute selectors provide a
"very accurate" way to "select the elements knowing their attributes and
values".

Yes, he's obviously clueless.
The examples he provided have a comment that contradicts what he says it
does in the article; that: "img[width=600]" matches "all the images
whose width is 600px".

Yes, that was a stupid thing to say.
That selector will do that in certain cases and in certain browsers, but
not consistently across the browsers that are supported. What the
comment says sure doesn't match anything close to native Selector API
(and the query itself is invalid anyway).

Yes, we already went over all of this (in this very thread).
Sizzle matches nodes based on the DOM that the browser builds. The DOM
varies, especially in various versions IE. Trying to accommodate for
those shortcomings is, as I stated earlier, not outweighed by the
complexity entailed by that.

It's a fool's errand. That much is clear (and has been clear for some
time).
Results depend on the selector text used, quotes on attribute values,
and the presence of a `context` arg (the second param for
jQuery(selector, context)), and can seem paradoxical, like a
"Schrödinger's Cat" result.

All old news (even for this post). I've posted a link to the
discussion about the QSA context divergence several times. The funny
thing is that the library authors knew they were screwing up, but went
ahead and dumped QSA on top of their old crap anyway.
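
For reference, the context divergence is easy to reproduce (any browser
with QSA). Given:

<div id="outer"><div id="inner"><p>text</p></div></div>

var inner = document.getElementById('inner');

// Element-rooted QSA evaluates the selector against the whole
// document, then filters to descendants of the context node:
inner.querySelectorAll('div div p').length; // 1 -- the P qualifies
// via an ancestor DIV outside #inner; the old scoped engines would
// look for two DIVs *inside* #inner and find nothing.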
[...]


Are you being silly?  I've always said the (completely optional) query
portion was basically a parody and should not be used.  And the
comparisons between mine and the products of the "many eyes" projects
are quite illuminating.  As was well documented last winter, it didn't
take more than a few weekends to accomplish what they have failed to do
in five years.  You better believe it is superior (in virtually every
way possible).  So if you must, you best use mine.

You wanna know what's silly? Designing an API "to make a mockery of
jQuery".

Why is that silly?
Worse yet, providing an interface with methods "E" and "Q" and
whatever the other one-letter method names are, along with other badly
named abstractions that provide value, such as gEBI or getEBI, etc. Just
use the real method: document.getElementById.

For one, "E" and "Q" are no worse than "$" and "Y". And they aren't
method names at all, but optional constructors for the OO interface,
which sits atop the API. E is for Element, Q is for Query. What is
"$" for?

And the getEBI function does provide added value (as was discussed
back in 2007 when it was first published here). Where have you been?
All this, while claiming to be superior. Where are those unit tests you
keep mentioning, BTW?

At least some of them are right under your nose (if you cared to
look). You should subscribe to My Library GG so you can keep up on
the progress of the project.

Nothing to say? Oh, that's right; you "passed" on those.
[...]
By the way, CSS2.1 states that attribute selectors take a string
(quoted) or identifier, as in img[width="600"], and not img[width=600],
but provided examples showing unquoted values as strings.
That's the least of their worries.

I think you are a bit confused about how jQuery's Sizzle thing works.
LOL.
Now that QSA is here, using a two-level QSA with a side of mish-mash
library is beyond ludicrous.  Of course QSA behaves very differently
than the handmade query engines, which are demonstrably incomplete and

You don't say? So was there something to attribute selectors being
unquoted after all?

You seem to be going around in circles.
Using QSA might make sense in the near future, depending on how widely
it is implemented. It is incompatible with the libraries today.

No kidding.
As to what "they" are thinking, you can either guess or assume. You
could, for example, ask:

| What is the rationale for using Sizzle as a fallback when QSA is
| unsupported? Are you aware that the two are incompatible?

I don't care to ask them. Their rationale for bizarre design
decisions is not my concern.
You could start a new thread on that and even post a link from jQuery
forums (or tweet or blog, etc). Keep in mind that if you resort to
name-calling and insults, it becomes a lot easier to dismiss your
criticism as biased hate.

It's been done to death. Again, where have you been?
 
David Mark

Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
[...]

That one's obviously no good. If you'd tested it you would probably have
realized that.

I meant the following (position) is no good. The previous one (size)
looks OK.

OK?! Quirks mode, standards mode, any units, box-model variations,
etc. It's a rock. Compare and contrast to jQuery's nonsense (as
cited in the page).

Unit tests for a test page? And I tested it to death in a heart-
stopping number of browsers. I did far more tests than are
demonstrated on the page, including fixed positioning and elements
positioned with only a single rule (e.g. right).

Perhaps you don't understand the concept? It is designed to retrieve
a pair (e.g. left/top, right/bottom, right/top, etc.) The caller must
know which pair they are interested in. For example, given an element
positioned like this:-

#myelement {
    top: 20px;
}

...the test function will dutifully fill in two of the three
"blanks" (left and right). Obviously you can't use both of them
together. In this case, you could make use of the top/left or top/
right pairs.

My plan is to eventually replace the more complex
getElementPositionStyle function in My Library (which is limited to
figuring left/top) with this version. For that, I will add an
argument for the caller to specify which pair they want and it will
return null if that pair cannot be determined. Get it?

Beats the hell out of relying on getComputedStyle and the like (for
many reasons which we discussed to death just a month or so ago).
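
In other words, hypothetical usage of that planned variant (not yet in
My Library; the argument form is as described above) would look like:

var pair = getElementPositionStyles(el, 'topleft');
if (pair) {
    // Setting the returned pair back must not move the element
    el.style.top = pair.top + 'px';
    el.style.left = pair.left + 'px';
}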
 
Garrett Smith

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :

[...]

That one's obviously no good. If you'd tested it you would probably have
realized that.

I meant the following (position) is no good. The previous one (size)
looks OK.

OK?! Quirks mode, standards mode, any units, box-model variations,
etc. It's a rock. Compare and contrast to jQuery's nonsense (as
cited in the page).

Unit tests for a test page? And I tested it to death in a heart-
stopping number of browsers. I did far more tests than are
demonstrated on the page, including fixed positioning and elements
positioned with only a single rule (e.g. right).

That is a demo, or a "functional test".

A unit test would check all edge cases and ensure that all code paths
are executed.

Try putting the element in a table, getting the position values from a
child of BODY, from elements with display: inline, and from floats. For
any case that you don't want to support, you could create the test and
annotate it as IGNORE with a comment.

// This case is not supported because [reason]

You'll probably run into edges.

Copy'n'pasting your function from position.html into a simple test page
that we discussed last year, I get different results in different
versions of IE.

I removed 'testinput', had the function called onload, and set ids = ["i7"];

http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.javascript/2009-11/msg00010.html
Perhaps you don't understand the concept? It is designed to retrieve
a pair (e.g. left/top, right/bottom, right/top, etc.) The caller must
know which pair they are interested in. For example, given an element
positioned like this:-

I see what it's supposed to do and what it does.

Anything using offsetTop, offsetLeft, or offsetParent must not be trusted.
#myelement {
top:20px;
}

...the test function will dutifully fill in two of the three
"blanks" (left and right). Obviously you can't use both of them
together. In this case, you could make use of the top/left or top/
right pairs.

My plan is to eventually replace the more complex
getElementPositionStyle function in My Library (which is limited to
figuring left/top) with this version. For that, I will add an
argument for the caller to specify which pair they want and it will
return null if that pair cannot be determined. Get it?

So along the lines of:

getElementPositionStyle(el, "left");

?

Or:

Element.getPositionStyle("left")

?
Beats the hell out of relying on getComputedStyle and the like (for
many reasons which we discussed to death just a month or so ago).

getComputedStyle doesn't work in IE.

What perturbs me about getting styles is there's no good way to get
styles in a particular unit. At least I haven't figured out a way to do
animation in other units, such as EM, for example.

Garrett
 
Garrett Smith

[...]
That one's obviously no good. If you'd tested it you would probably have
realized that.

Don't be silly.

That one looks fine. I meant the other. I did write that in my follow up.
That's all you ever say. Where is your understanding of the basic
logic? IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions. At the time,
you seemed to be the only one who didn't get it.

It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they? Given time, I'd like to
look into them.

My offsetTop knowledge has waned in the last 2 years; the number of
problems with that and friends is too much to retain. However, I know
enough not to trust anything that uses them, not without tests and
testing all the edge cases I mentioned in my other reply. A good test
can provide quicker verification than a demo.
So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does.
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense. Don't rely on these silly query engines (or
their documentation).

Pass on that.

Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.

A little. I'd rather see expected results and failed results; speed
is secondary to correctness.

The first column would be a good place for that. For example:

+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    |    ???           |
+------------------+----------+------------------+

Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.

What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,
other than throw an error? To answer that, you need documentation. The
most obvious place to look for documentation would be the w3c Selectors
API, and that will tell you that an error must be thrown because the
attribute value is neither an identifier nor a string.
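
A minimal check of that requirement (in a browser that implements the
Selectors API):

try {
    document.querySelectorAll('td[colspan!=1]');
} catch (e) {
    // A conforming implementation throws a SYNTAX_ERR for this
    // selector; a library fallback that "handles" it instead is
    // diverging from the specification.
}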

MSDN documentation is wrong, too:

http://msdn.microsoft.com/en-us/library/aa358822(VS.85).aspx
| att Must be either an Identifier or a String.
| val Must be either an Identifier or a String.

`att` should be an attribute name, not an identifier or a string.

In CSS, identifiers (including element names, classes, and IDs in
selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
characters U+00A1 and higher, plus the hyphen (-) and the underscore
(_); they cannot start with a digit, or a hyphen followed by a digit.
Identifiers can also contain escaped characters and any ISO 10646
character as a numeric code (see next item). For instance, the
identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".

(quoted from the CSS2.1 specification).

What happens if jQuery removes support for a particular selector?
Haven't they done that in the past for nth-of-type or attribute style
selectors, XPath, and @attr?

What should a:link match? Should it throw an error? What should be the
expected result set of `null`?

If that is not what you want it to do, that is, if you want something
other than an error, you need to state what and why; you need
documentation.

A better test would be a side-by-side comparison of NodeSelector. A good
test case might be to take the W3C NodeSelector interface and rewrite it
to make sure it fails for all the invalid syntax that is allowed in
these things.

I think it is time to wake up. For you and for all web developers.

You've written a long test that makes assertions about a loosely defined
interface -- the query function, which is specified in documentation that
is incredibly vague, and was aptly titled with an enigmatic identifier
-- the dollar function. That such an interface would be mimicked so many
times over, and with variations, is alarming and I think indicates a
problem with the industry.
That expectation is based on observations of a design bug in jQuery, and
even at that, the query will fail, matching an element whose display is
"none" in those recent browsers that implement getComputedStyle. In IE8
and below, it will not match the element, and so it will do what the
author says it does.
I think you are a bit confused about how jQuery's Sizzle thing works.

[...]

Are you bluffing?

No.

OK. Take a look at ATTR:

ATTR: function(elem, match){
    var name = match[1],
        result = Expr.attrHandle[ name ] ?
            Expr.attrHandle[ name ]( elem ) :
            elem[ name ] != null ?
                elem[ name ] :
                elem.getAttribute( name ),
        value = result + "",
        type = match[2],
        check = match[4];

This line:

elem[ name ] != null ? elem[ name ]

- checks whether the element's property is null or undefined. If it is,
getAttribute is used as a fallback; otherwise the property value is used.

Where the property name is `width`, the property gets its value from how
wide the element actually is. It could be either offsetWidth or
computedStyle width, depending on the browser. It's not specified.

http://www.w3.org/TR/DOM-Level-2-HTML/html.html#ID-13839076

Says that width is "The width of the image in pixels." and refers to
HTML 4.01 for more information. It does not provide any more detail, such
as how that "width" measurement is calculated. The width could be taken
from computedStyle and then rounded (because it is of type long), and it
might include padding or border.

Regardless, that is the definition for the width property, not the width
attribute.

Observation shows that styles that have been rendered affect an img's
`width` property in all tested browsers, and that in IE, if the img has
display: none, then `img.width` reports 0.

img {
    padding: 1px;
}

<div onclick="alert(this.firstChild.width)"><img style="display: none"
width="100" src="http://www.w3.org/Icons/w3c_home">Click here!</div>

<img style="width: 40px" width="100" onclick="alert(this.width)"
src="http://www.w3.org/Icons/w3c_home">

The first img.width reports 100 in most browsers and 0 in IE.

The second, 40 in most browsers, 42 in IE in standards mode, 42 in
quirks mode.

HTML 5 codifies what most browsers do; that is, if the image is not
rendered, it gets the intrinsic width. If it is rendered, then it gets
the rendered width.

All that aside, the point of this is not to figure out how the `width`
property works; focusing on the anomalies would be totally missing the
point.

The width property is not the width attribute. Properties != attributes.
It's as simple as that.
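
The split is visible with the second img from the markup above (a
sketch; document.images[1] assumes those are the only two images):

var img = document.images[1];
img.getAttribute('width');    // "100" -- the attribute: always the
                              // string from the markup
img.width;                    // 40 in most browsers -- the property,
                              // tracking the rendered (CSS) width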

The query img[width=600] will fail to match elements with display "none"
in IE8 and below. That outcome can only be considered a "failure" when
the success has been defined as performing a match and not throwing an
error. That expectation is nonstandard because the Selectors API
requires an error to be thrown. So, since the outcome is nonstandard, it
begs the question: What nonstandard results are you expecting?

If it is expected to match all elements whose width is 600px, does that
mean "intrinsic" width, the "width" defined by CSS, the offsetWidth, the
width property, or the width attribute? If the width attribute is wanted,
then where is that expectation coming from? The expectation would be more
in line with what the Selectors API specifies, but it is already
established that the selector is invalid and that the expectation is
nonstandard behavior, so it cannot be reasonably assumed that other parts
of the specification are expected to be upheld, certainly not in light of
looking at the ATTR function, which does property matching. What is
expected is not defined.

The result of not matching img that have display none can be explained
by looking at the code for ATTR and realizing that it is matching
property values. The algorithm for the value of that property is
unspecified by DOM 2 HTML. It is completely inconsistent with the
Selectors API.

The design of jQuery is not explained by the documentation. It is not
explained in that article. It is explained in the code, but what the
code does matches neither what the jQuery docs state nor what the author
of the article states; not completely, at least. The design of jQuery is
explained in the source code of jQuery.

Anyone who wants to know what anything does can just read the code.
Mine, yours, jQuery; it doesn't matter, really.

Of course, therein lies a potential problem, and that is code
cleanliness. Code that has long and complicated methods, with nested
if-else, typechecking, and variable behavior, tends to be harder to read
and understand.

For one who does not do javascript as his primary task, cross-browser
coding is likely to be less familiar and natural. Such a person is going
to be less likely to read the source code to learn what it does, and for
an API that has methods with too much complexity, even if he does read
the source code, he's probably not going to grasp it quickly. It's
complicated. More likely, he'll learn from examples such as those in the
article.
Yes, he's obviously clueless.

He seems not to know the difference between attributes and properties.
Perhaps he has learned how selectors work by using jQuery.

Does the jQuery team know the difference between attributes and properties?

After all the code review that's been done, I see the jQuery team
advocating this article and it begs the question: Do they know what that
code does? The code does not match attributes, so in that regard, it
does not do what the author says, but then the code does, in some cases,
match elements whose width is 600px, but only by virtue of the rendered
width being reflected in the corresponding property.

jQuery has differing attribute/property accessors. One is in attr --
you've raked that one over and over -- the other is in ATTR, and that
one still resolves properties.

If the only basis for design is retrofitting a public (permanent) API
with workarounds that addressed its initial design, and only in reaction
to having those problems drawn out in public, repeatedly, over and over
and over again, what was in the original API design that made it so
damned attractive that so many library authors wanted to copy it?

Moreover, what does such copying say about the state of the industry?
How is the web doing?
The examples he provided have a comment that contradicts what he says it
does in the article; that: "img[width=600]" matches "all the images
whose width is 600px".

Yes, that was a stupid thing to say.

That's a fine example to illustrate my point. It is time to wake up.
Why is that silly?

It does not serve a practical purpose. In the big picture, it is hardly
an improvement; it misses the point of what "API" stands for.
For one, "E" and "Q" are no worse than "$" and "Y". And they aren't
method names at all, but optional constructors for the OO interface,
which sits atop the API. E is for Element, Q is for Query. What is
"$" for?

E could mean Euler's number, an Event, or an Error. The identifier
doesn't identify.
And the getEBI function does provide added value (as was discussed
back in 2007 when it was first published here). Where have you been?

Added value? What added value? Do you mean the workaround for giving an
element an ID where another element has that for its NAME? Avoiding
doing that is the best workaround.

[...]

Garrett
 
David Mark

On 5/29/2010 11:05 PM, Garrett Smith wrote:
On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
[...]
http://www.cinsoft.net/size.html
That one's obviously no good. If you'd tested it you would probably have
realized that.
I meant the following (position) is no good. The previous one (size)
looks OK.
OK?!  Quirks mode, standards mode, any units, box-model variations,
etc.  It's a rock.  Compare and contrast to jQuery's nonsense (as
cited in the page).
Unit tests for a test page?  And I tested it to death in a heart-
stopping number of browsers.  I did far more tests than are
demonstrated on the page, including fixed positioning and elements
positioned with only a single rule (e.g. right).

That is a demo, or a "functional test".

Semantics. Let's call it something I put on my server in response to
the latest revisiting of undeniably flawed computed styles vs. my
perfect positioning solution.

Remember this thread?

http://groups.google.com/group/comp...a9e69cb88f/e310c3518c91a1bd?#e310c3518c91a1bd

Or perhaps the one before it (or the one before that). I've been
trying to make this point for years and you always pop up to muddy it.
A unit test would check all edge cases and ensure that all code paths
are executed.

Thanks for that. You seem to have it in your head that nothing could
possibly work without unit tests.
Try putting the element in a table, getting the position values from a
child of BODY, elements with display: inline, floats.

You seem to be latching on to calculating absolute offset positions,
which is odd as those have nothing to do with computed styles. The
point of the exercise is to get the computed left/top, right/bottom or
whatever coordinate pairs so that they can be set back without moving
the elements. None of the above matter. The only caveat is that the
position must not be *static* as then the concept of positioning by
coordinates is meaningless.
For any case that
you don't want to support, you could create the test and annotate it as
IGNORE with a comment.

No need to do any of that in this case. The algorithm used (for
years) is basic math and I gave you enough tests to get you started if
you wish to investigate further.
// This case is not supported because [reason]

You'll probably run into edges.

There's no such thing as edge cases in my book. Things work as
documented or they don't. If they don't, you adjust the code or
documentation.
Copy'n'pasting your function from position.html into a simple test page
that we discussed last year, I get different results in different
versions of IE.

That's unsurprising as I'm sure you are confused about what you are
testing.
I removed 'testinput', had the function called onload, and set ids = ["i7"];

http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.javascript/200...

I see something about list items, which seems to confirm confusion on
the part of the tester.
I see what it's supposed to do and what it does.

But you don't understand it. Apparently you don't understand what it
is even supposed to do, which would preclude understanding the
results.
Anything using offsetTop, offsetLeft, or offsetParent must not be trusted.

For one, offsetParent is _not_ used. For two, and we went over this
fifty times in the last thread, the algorithm used factors out the
inconsistencies. That's why it works so consistently, despite
utilizing properties that are inconsistent cross-browser.
So along the lines of:

getElementPositionStyle(el, "left");

You could do that, but more like:-

getElementPositionStyles(el, "topleft");
getElementPositionStyles(el, "bottomright");

etc.

At the moment, it fills in all of the blanks that it can, which has
apparently led to some confusion about what the results mean (at least
I thought that was the source of your confusion). If you get back
top, left *and* right, you can use top/left or top/right (but not all
three).
?

Or:

Element.getPositionStyle("left")

Of course not.
?


getComputedStyle doesn't work in IE.

Groan. That's my line.
What perturbs me about getting styles is there's no good way to get
styles in a particular unit. At least I haven't figured out a way to do
animation in other units, such as EM, for example.

If you are lacking code as described here:-

http://www.cinsoft.net/position.html

...and presuming you wish to support IE and do not wish to use pixels
for everything (which would make your site very unpopular with IE users),
you have no choice but to use other units. They aren't any harder
than pixels though. The letters tacked on to the end of the numbers
are just different (e.g. "px" vs. "em" vs. whatever).
 
David Mark

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
That's why you should use my patented avoidance technique.
http://www.cinsoft.net/size.html
That one's obviously no good. If you'd tested it you would probably have
realized that.
Don't be silly.

That one looks fine. I meant the other. I did write that in my follow up.

Yes. I saw it when I got to it (after I had replied to the
original). And I still contend it is more than fine. Like the
position one, it is the perfect solution to a problem that seems to
afflict virtually every library, framework, widget, etc. on the
market. It baffles me as I've been using similar code since 1998.
Yet, as seen in the last message, I am still having to argue the
points in 2010.
It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they?

Groan. All I want to know is what is the name of the guy on first
base.
Given time, I'd like to
look into them.

You have looked at them. In fact, I put up those two primers partly
for your benefit. And I can't believe they didn't help.
My offsetTop knowledge has waned in the last 2 years; the number of
problems with that and friends is too much to retain,

You don't have to retain *any* of it. Zero. I don't think about
their quirks either. I create equations that account for *any* quirks
by factoring them out of the answer. It's really not that complicated
if you think about it (and read my many examples and previous
explanations).
however I know
enough not to trust anything that uses them, not without tests and
testing all the edge cases I mentioned in my other reply. A good test
can provide quicker verification than a demo.

You would rather trust something you know is broken and/or absent
(e.g. getComputedStyle/currentStyle)? My solutions work whether the
properties in question are broken or not. And I've tested in
virtually every major browser that has come out this century (and a
few from the last century).
So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does..
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense.  Don't rely on these silly query engines (or
their documentation).
http://www.cinsoft.net/slickspeed.html
Pass on that.
Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.

A little. I'd rather see expected results and failed results; speed
is secondary to correctness.

Expected results would be a good addition and I plan to add those.
For now, note that the two non-fantasy columns (e.g. mine), which
query with and without QSA agree in the latest major browsers in all
rendering modes. Then go back one version. Then another. Then
another. As they all agree, you can be pretty damned sure that the
answers are expected as that's what I did before I published them.
Now, look at all of the columns to the left. A horror show, right?
Blacked out squares everywhere. Exceptions thrown in browsers that
just came out a couple of years ago, miscounts in browsers that came
out yesterday. jQuery 1.4 disagrees with 1.3, which disagrees with
1.2. Dojo gets more answers wrong than right in Opera 9. And so on,
and so on... Lots of flailing and destruction and still nothing close
to a solid foundation for the "major" libraries and frameworks.
Somehow those things are seen as an "easier" way to do the most
important task in browser scripting (e.g. read documents).

The first column would be a good place for that. For example:

+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    |    ???           |
+------------------+----------+------------------+

Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.

Understand that CSS selector queries came long before the standard
specification. In fact, there would be no such specification if not
for such scripts. The specification is not retroactive and the tests
in question predate the documentation on the W3C site as well.

What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,

Obviously, the same thing that they would do with quotes around them.
You are focusing on irrelevant minutiae. The point is that jQuery and
the like often trip over such queries (with or without the quotes).
How can that be? Well, see my discussion here with Resig around
Halloween 2007 (or search the archive for about a hundred follow-ups,
usually involving Matt Kruse who spent years swearing such obvious
mistakes were to be expected). That should answer all of the
questions. If only it had for them. :(

other than throw an error? To answer that, you need documentation. The
most obvious place to look for documentation would be the w3c Selectors
API,

Ugh. Chicken and the egg. The code and tests predate the writeup.
Get it?

and that will tell you that an error must be thrown because the
attribute value is neither an identifier nor a string.

This isn't one of those Terminator movies. The W3C can't go back in
time and abort jQuery, SlickSpeed and the like.
MSDN documentation is wrong, too:

Wouldn't shock me.
http://msdn.microsoft.com/en-us/library/aa358822(VS.85).aspx
| att Must be either an Identifier  or a String.
| val Must be either an Identifier or a String.

`att` should be an attribute name, not an identifier or a string.

Depends on what you consider to be right. Clearly you don't have a
grasp on that yet, so I won't bother investigating the alleged mistake
on MSDN. I presume they are documenting their rendition of QSA. Just
as they previously documented their rendition of innerHTML,
offsetHeight, etc. When and if these things get written up as
recommendations by the W3C, it will not render the previous behavior
and documentation "wrong".
In CSS, identifiers (including element names, classes, and IDs in
selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
characters U+00A1 and higher, plus the hyphen (-) and the underscore
(_); they cannot start with a digit, or a hyphen followed by a digit.
Identifiers can also contain escaped characters and any ISO 10646
character as a numeric code (see next item). For instance, the
identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".

(quoted from the CSS2.1 specification).

What happens if jQuery removes support for a particular selector?

They only support about half of them at this point I think. Certainly
they are nowhere near compliance with CSS3 (which they disingenuously
claim to be). And their script is not 24K by any comparison rooted in
reality (i.e. prior to GZIP). They simply feed tripe to those naive
enough to swallow it. Did you know My Library is roughly 42K after
compression? The whole thing (which does 100 times more than jQuery,
including queries). :)
Haven't they done that in the past for nth-of-type or attribute style
selectors, XPath, and @attr?

nth-of-type? No idea what they did with that. IIRC, mine is the only
one I know of that attempts to support those (and as I've recently
realized, I didn't quite nail it). The SlickSpeed tests can be
confusing to the uninitiated as most of the libraries now hand off to
QSA, so it may appear that unsupported selectors work. Of course,
that's a big part of the problem. Neophytes like to test in just the
latest browsers and avoid IE as much as possible. IE8 has QSA. IE9
will likely augment its capabilities in that department. So you can
have applications that appear to work if tested just in IE8 standards
mode and/or IE9 but have no shot at all in Compatibility View or IE <
8. That's why jQuery 1.2.6 (and not 1.2.1 as that was a misprint)
remains on the test page alongside 1.3.x and 1.4.x. That's the last
one that didn't hand off queries to the browser.

What should a:link match? Should it throw an error? What should be the
expected result set of `null`?

What do you think it should match? It won't match anything in any
query engine I know of. No coincidence it is not featured on any of
the test pages.
If that is not what you want it to do; if you want something other than
an error, you need to state what and why; you need documentation.

Who are you talking to? I've said (and demonstrated) from the start
that these query engines are a waste of time. Lack of documentation
is the least of the worries.
A better test would be side-by-side comparison of NodeSelector. A good
test case might be to take the w3c NodeSelector interface and rewrite it
to make sure it fails for all the invalid syntax that is allowed in
these things.

Nope. More wasted time.
I think it is time to wake up. For you and for all web developers.

Me?! You are just parroting my lines years later. Where do you get
the gall?
You've written a long test that makes assertions about a loosely defined
interface -- the query function, which is specified in documentation that
is incredibly vague, and was aptly titled with an enigmatic identifier
-- the dollar function.

Wrong. MooTools wrote that stupid test years ago. And all of those
stupid libraries that it was written to test used the "$" function. I
just added more test cases and gave my getEBCS function a "$" alias.
Where have you been? These are discussions from over two years ago.
That such an interface would be mimicked so many
times over, and with variations, is alarming and I think indicates a
problem with the industry.

Mine is not mimicry. It is parody to make a point. And I think it is
(finally) getting through loud and clear.
That expectation is based on observations of a design bug in jQuery, and
even at that, the query will fail, matching an element whose display is
"none" in those recent browsers that implement getComputedStyle. In IE8
and below, it will not match the element, and so it will do what the
author says it does.
I think you are a bit confused about how jQuery's Sizzle thing works.
[...]
Are you bluffing?

OK. Take a look at ATTR:

Are you really showing jQuery's attribute handling to *me*?!
ATTR: function(elem, match){
    var name = match[1],
        result = Expr.attrHandle[ name ] ?
            Expr.attrHandle[ name ]( elem ) :
            elem[ name ] != null ?
                elem[ name ] :
                elem.getAttribute( name ),
        value = result + "",
        type = match[2],
        check = match[4];

This line:

   elem[ name ] != null ? elem[ name ]

Same as you! Same as YOU! I throw the ball to who. Whoever it is drops
the ball and the guy runs to second. Who picks up the ball and throws
it to What. What throws it to I Don't Know. I Don't Know throws it
back to Tomorrow, Triple play. Another guy gets up and hits a long fly
ball to Because. Why? I don't know! He's on third and I don't give a
darn!
 
David Mark

On 5/29/2010 11:05 PM, Garrett Smith wrote:
On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
[...]
http://www.cinsoft.net/size.html
That one's obviously no good. If you'd tested it you would probably have
realized that.
I meant the following (position) is no good. The previous one (size)
looks OK.
OK?!  Quirks mode, standards mode, any units, box-model variations,
etc.  It's a rock.  Compare and contrast to jQuery's nonsense (as
cited in the page).
Unit tests for a test page?  And I tested it to death in a heart-
stopping number of browsers.  I did far more tests than are
demonstrated on the page, including fixed positioning and elements
positioned with only a single rule (e.g. right).

That is a demo, or a "functional test".

A unit test would check all edge cases and ensure that all code paths
are executed.

Try putting the element in a table, getting the position values from a
child of BODY, from elements with display: inline, and from floats. For
any case that you don't want to support, you could create the test and
annotate it as IGNORE with a comment.

// This case is not supported because [reason]

You'll probably run into edges.

Copy'n'pasting your function from position.html into a simple test page
that we discussed last year, I get different results in different
versions of IE.

And (for crying out loud), of course you can get different numbers
back in different browsers, rendering modes, etc. That's irrelevant
(and completely expected). What matters is whether setting the
retrieved coordinates moves the element. Fails if it does (for
reasons that should be obvious), passes if it does not. Imagine
trying to animate the left/top coordinates without knowing them (in
pixels) in advance. If you are off by even a pixel, the result will
start off with a twitch. Same for positioning by dragging. I
explained all of this in the primer (but it is such an old "problem"
that it should need no introduction).

Just so happens my test case returns the same numbers for all browsers
by design. Though I explained that this was only a convenient
coincidence (which I hoped would make it easier for beginners to
understand). I can just hear them saying "aw, I got 201 in Safari and
202 in IE; that's not consistent!" :)
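
The pass/fail check itself is trivial to sketch (getPositionPair
standing in for the function under test):

var pair = getPositionPair(el); // e.g. { left: 100, top: 200 }
var x = el.offsetLeft, y = el.offsetTop;
el.style.left = pair.left + 'px';
el.style.top = pair.top + 'px';
// Pass only if setting the retrieved values back did not move it
var passed = (el.offsetLeft === x && el.offsetTop === y);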
 
David Mark

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
That's why you should use my patented avoidance technique.
http://www.cinsoft.net/size.html
That one's obviously no good. If you'd tested it you would probably have
realized that.
Don't be silly.

That one looks fine. I meant the other. I did write that in my follow up.


That's all you ever say.  Where is your understanding of the basic
logic?  IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions.  At the time,
you seemed to be the only one who didn't get it.

It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they? Given time, I'd like to
look into them.

My offsetTop knowledge has waned in the last 2 years; the number of
problems with that and friends is too much to retain. However, I know
enough not to trust anything that uses them, not without tests and
testing all the edge cases I mentioned in my other reply. A good test
can provide quicker verification than a demo.


So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does..
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense.  Don't rely on these silly query engines (or
their documentation).
http://www.cinsoft.net/slickspeed.html
Pass on that.
Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.

A little. I'd rather see expected results and failed results; speed
is secondary to correctness.

The first column would be a good place for that. For example:

+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    |    ???           |
+------------------+----------+------------------+

Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.

What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,
other than throw an error? To answer that, you need documentation. The
most obvious place to look for documentation would be the w3c Selectors
API, and that will tell you that an error must be thrown because the
attribute value is neither an identifier nor a string.

MSDN documentation is wrong, too:

http://msdn.microsoft.com/en-us/library/aa358822(VS.85).aspx
| att Must be either an Identifier  or a String.
| val Must be either an Identifier or a String.

`att` should be an attribute name, not an identifier or a string.

In CSS, identifiers (including element names, classes, and IDs in
selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
characters U+00A1 and higher, plus the hyphen (-) and the underscore
(_); they cannot start with a digit, or a hyphen followed by a digit.
Identifiers can also contain escaped characters and any ISO 10646
character as a numeric code (see next item). For instance, the
identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".

(quoted from the CSS2.1 specification).

What happens if jQuery removes support for a particular selector?
Haven't they done that in the past for nth-of-type or attribute style
selectors, XPath, and @attr?

What should a:link match? Should it throw an error? What should be the
expected result set of `null`?

If that is not what you want it to do; if you want something other than
an error, you need to state what and why; you need documentation.

A better test would be side-by-side comparison of NodeSelector. A good
test case might be to take the w3c NodeSelector interface and rewrite it
to make sure it fails for all the invalid syntax that is allowed in
these things.

I think it is time to wake up. For you and for all web developers.

You've written a long test that makes assertions about a loosely defined
interface -- the query function, that is specified in documentation that
is incredibly vague, and was aptly titled with an enigmatic identifier
-- the dollar function. That such an interface would be mimicked so many
times over, and with variations, is alarming and I think indicates a
problem with the industry.


That expectation is based on observations of a design bug in jQuery, and
even at that, the query will fail, matching an element whose display is
"none" in those recent browsers that implement getComputedStyle. In IE8
and below, it will not match the element, and so it will do what the
author says it does.
I think you are a bit confused about how jQuery's Sizzle thing works.
[...]
Are you bluffing?

OK. Take a look at ATTR:

ATTR: function(elem, match){
    var name = match[1],
        result = Expr.attrHandle[ name ] ?
            Expr.attrHandle[ name ]( elem ) :
            elem[ name ] != null ?
                elem[ name ] :
                elem.getAttribute( name ),
        value = result + "",
        type = match[2],
        check = match[4];

This line:

   elem[ name ] != null ? elem[ name ]

- checks whether the element's property is null or undefined. If it is,
getAttribute is used as a fallback; otherwise the property value is used.

Where the property name is `width`, the property gets its value from how
wide the element actually is. It could be either offsetWidth or
computedStyle width, depending on the browser. It's not specified.

http://www.w3.org/TR/DOM-Level-2-HTML/html.html#ID-13839076

Says that width is "The width of the image in pixels." and refers to
HTML 4.01 for more information. It does not provide any more detail, such
as how that "width" measurement is calculated. The width could be taken
from computedStyle and then rounded (because it is of type long), and it
might include padding or border.

Regardless, that is the definition for the width property, not the width
attribute.

Observation shows that styles that have been rendered affect an img's
`width` property in all tested browsers, and that in IE, if the img has
display: none, then `img.width` reports 0.

img {
    padding: 1px;
}

<div onclick="alert(this.firstChild.width)"><img style="display: none"
width="100" src="http://www.w3.org/Icons/w3c_home">Click here!</div>

<img style="width: 40px" width="100" onclick="alert(this.width)"
src="http://www.w3.org/Icons/w3c_home">

The first img.width reports 100 in most browsers and 0 in IE.

The second, 40 in most browsers, 42 in IE in standards mode, 42 in
quirks mode.

HTML 5 codifies what most browsers do; that is, if the image is not
rendered, it gets the intrinsic width. If it is rendered, then it gets
the rendered width.

All that aside, the point of this is not to figure out how the `width`
property works; focusing on the anomalies would be totally missing the
point.

The width property is not the width attribute. Properties != attributes.
It's as simple as that.

The query img[width=600] will fail to match elements with display "none"
in IE8 and below. That outcome can only be considered a "failure" when
the success has been defined as performing a match and not throwing an
error. That expectation is nonstandard because the Selectors API
requires an error to be thrown. So, since the outcome is nonstandard, it
begs the question: What nonstandard results are you expecting?

If it is expected to match all elements whose width is 600px, does that
mean "intrinsic" width, the "width" defined by CSS, the offsetWidth, the
width property, or the width attribute? If the width attribute is wanted,
then where is that expectation coming from? The expectation would be more
in line with what the Selectors API specifies, but it is already
established that the selector is invalid and that the expectation is
nonstandard behavior, so it cannot be reasonably assumed that other parts
of the specification are expected to be upheld, certainly not in light of
looking at the ATTR function, which does property matching. What is
expected is not defined.

The result of not matching img that have display none can be explained
by looking at the code for ATTR and realizing that it is matching
property values. The algorithm for the value of that property is
unspecified by DOM 2 HTML. It is completely inconsistent with the
Selectors API.

The design of jQuery is not explained by the documentation. It is not
explained in that article. It is explained in the code, but what the
code does matches neither what the jQuery docs state nor what the author
of the article states; not completely, at least. The design of jQuery is
explained in the source code of jQuery.

Anyone who wants to know what anything does can just read the code.
Mine, yours, jQuery; it doesn't matter, really.

Of course, therein lies a potential problem, and that is code
cleanliness. Code that has long and complicated methods, with nested
if-else, typechecking, and variable behavior, tends to be harder to read
and understand.

For one who does not do javascript as his primary task, cross-browser
coding is likely to be less familiar and natural. Such a person is going
to be less likely to read the source code to learn what it does, and for
an API that has methods with too much complexity, even if he does read
the source code, he's probably not going to grasp it quickly. It's
complicated. More likely, he'll learn from examples such as those in the
article.


Yes, he's obviously clueless.

He seems not to know the difference between attributes and properties.
Perhaps he has learned how selectors work by using jQuery.

Does the jQuery team know the difference between attributes and properties?

After all the code review that's been done, I see the jQuery team
advocating this article and it begs the question: Do they know what that
code does? The code does not match attributes, so in that regard, it
does not do what the author says, but then the code does, in some cases,
match elements whose width is 600px, but only by virtue of the rendered
width being reflected in the corresponding property.

jQuery has differing attribute/property accessors. One is in attr --
you've raked that one over and over -- the other is in ATTR, and that
one still resolves properties.

If the only basis for design is retrofitting a public (permanent) API
with workarounds that addressed its initial design, and only in reaction
to having those problems drawn out in public, repeatedly, over and over
and over again, what was in the original API design that made it so
damned attractive that so many library authors wanted to copy it?

Moreover, what does such copying say about the state of the industry?
How is the web doing?


The examples he provided have a comment that contradicts what he says it
does in the article; that: "img[width=600]" matches "all the images
whose width is 600px".
Yes, that was a stupid thing to say.

That's a fine example to illustrate my point. It is time to wake up.
Why is that silly?

It does not serve a practical purpose. In the big picture, it is hardly
an improvement; it misses the point of what "API" stands for.

No, you miss the point entirely. My "API" is a great API and is
neither tangled up in syntactic sugar, nor buried in self-imposed
performance penalties. In short, it's a large collection of functions
implemented as properties of an object (appropriately named "API").
The OO crap I stacked on top of it makes a mockery of jQuery (and
turned out to be quite usable at the same time). Some trick, huh?
E could mean Euler's number, an Event, or an Error. The identifier
doesn't identify.

But it doesn't mean any of those things and you know it. If you
don't, it's documented. Furthermore, there's nothing stopping
developers from doing this:-

var MyElement = E;

Of course, they could also eschew my OO examples entirely and put
their own face on the API.

Get it?
Added value? What added value?

The added value discussed back in 2007.
Do you mean the workaround for giving an
element an ID where another element has that for its NAME?

No. That function does not set any ID's. What an odd side effect
that would be.
Avoiding
doing that is the best workaround.

Exactly. And you can't do that unless you use a wrapper that prevents
such cases from silently slipping through (hint). This is the same
exact thing recently complained about in a list of things that were
"broken" in My Library. Of course, as with most of the list, it
turned out to be a misunderstanding. The getEBI function deliberately
returns null in the event that the markup is broken (e.g. an element
*named* with the ID, but lacking such an ID is found by IE). This
behavior was discussed and decided in 2007. I still think it is the
best way to go as it brings the underlying issue (ill-advised markup)
to the developer's attention *immediately*. It would be counter-
productive to silently find the "right" element in document.all (as
neurotically implemented in many of the "major" libraries). IIRC, one
of the Prototypians sensed an opening here and tried to riff on this
"punt" of mine. Of course, he didn't get it either (and quickly
scuttled off, never to be heard from again). Now, apparently, you
missed this entire exchange (which was just a couple of months back).
Hopefully you get it now as I am really tired of revisiting this
subject (among many others).

And furthermore, my getEBI works whether getElementById is present or
not. I know this doesn't matter today, but there it is (and there it
will remain as it is all of two or three lines extra baggage).
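
For the uninitiated, the general shape of such a wrapper is just a few
lines (a sketch of the idea, not the My Library code):

  function getEBI(id, doc) {
    doc = doc || document;
    var el = doc.getElementById ? doc.getElementById(id) :
      doc.all ? doc.all[id] : null;
    // IE can return an element whose NAME (not ID) matches;
    // returning null surfaces the ill-advised markup immediately.
    return (el && el.id == id) ? el : null;
  }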
 
G

Garrett Smith

[...]
It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they?

No tests? I read earlier that you had unit tests. Either you do or you
don't.
Groan. All I want to know is what is the name of the guy on first
base.


You have looked at them. In fact, I put up those two primers partly
for your benefit. And I can't believe they didn't help.

I haven't seen them.

I saw a demo of your function. Is that what you refer to as a "unit
test"? You can call it that, if you like, but don't be surprised if
people who know what a unit test is are puzzled by your flagrantly
deceptive misuse of the term.
You don't have to retain *any* of it. Zero.

I'm not convinced. I'd want to see a test where the element has top:
auto and BODY has a margin.

I've also seen the case where it failed in MSIE in the example from
thread "getComputedStyle where is my LI". It got inconsistent results in
different versions of IE.

If the burden of proof is on you to prove that the code works, you've
failed to do that.

If the burden of proof is on me to prove that it doesn't, I've succeeded
in one case.

I don't think about
their quirks either. I create equations that account for *any* quirks
by factoring them out of the answer. It's really not that complicated
if you think about it (and read my many examples and previous
explanations).


You would rather trust something you know is broken and/or absent
(e.g. getComputedStyle/currentStyle)? My solutions work whether the
properties in question are broken or not. And I've tested in
virtually every major browser that has come out this century (and a
few from the last century).

getComputedStyle is broken in implementations, but the problems are
avoidable by defining left/top values in the stylesheet.

In contrast, offsetTop/Left/Parent have divergent behavior.

The approach I have taken is to follow the specification and write
tests. It is not infallible, as getComputedStyle has problems. I am
somewhat optimistic that the edge cases where it fails -- which are
avoidable by specifying a top and left value in the stylesheet -- are
being fixed.
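
For example (assuming #x is an absolutely positioned element):

  var el = document.getElementById('x');
  var cs = window.getComputedStyle(el, null);

  // With a length declared in the stylesheet, e.g.
  //   #x { position: absolute; left: 2em; }
  // the computed 'left' comes back as a pixel length such as '32px'.
  // With no 'left' declared, implementations diverge and may report
  // 'auto', a percentage, or a used value.
  var left = cs.left;
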
So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does.
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense. Don't rely on these silly query engines (or
their documentation).

Pass on that.
Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.

A demonstration of futility - I agree.
Expected results would be a good addition and I plan to add those.
For now, note that the two non-fantasy columns (i.e. mine), which
query with and without QSA, agree in the latest major browsers in all
rendering modes. Then go back one version. Then another. Then
another. As they all agree, you can be pretty damned sure that the
answers are expected, as that's what I did before I published them.
Now, look at all of the columns to the left. A horror show, right?
Blacked out squares everywhere. Exceptions thrown in browsers that
just came out a couple of years ago, miscounts in browsers that came
out yesterday. jQuery 1.4 disagrees with 1.3, which disagrees with
1.2. Dojo gets more answers wrong than right in Opera 9. And so on,
and so on... Lots of flailing and destruction and still nothing close
to a solid foundation for the "major" libraries and frameworks.
Somehow those things are seen as an "easier" way to do the most
important task in browser scripting (e.g. read documents).

The idea of copying an API without knowing the expected outcome of
its core method is ludicrous. Even worse when the main goal of the API
is popularity.
The first column would be a good place for that. For example:

+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    | ???              |
+------------------+----------+------------------+

Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.

Understand that CSS selector queries came long before the standard
specification. In fact, there would be no such specification if not
for such scripts. The specification is not retroactive and the tests
in question predate the documentation on the W3C site as well.

The Internet *is* like an idiot amplifier, isn't it?

You've still not read the CSS 2.1 specification and your ignorance is
shining.
What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,

Obviously, the same thing that they would do with quotes around them.

No, not obviously; not at all. Unquoted, they're nonstandard and
proprietary.

Read the code and you'll see that unquoted, they resolve properties.
Quoted, they use a standard interface. In jQuery, anyway.
You are focusing on irrelevant minutiae. The point is that jQuery and
the like often trip over such queries (with or without the quotes).
How can that be? Well, see my discussion here with Resig around
Halloween 2007 (or search the archive for about a hundred follow-ups,
usually involving Matt Kruse who spent years swearing such obvious
mistakes were to be expected). That should answer all of the
questions. If only it had for them. :(
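
For anyone who wants to see it first-hand, the gist is this (a
sketch; "!=" is not CSS, so conforming QSA implementations throw):

  try {
    document.querySelectorAll('td[colspan!=1]');
  } catch (e) {
    // SyntaxError (invalid selector) in conforming browsers
  }

  // jQuery supports [attr!=value] as its own extension, so the same
  // string "works" there -- at least until QSA handling changes the
  // code path.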



Ugh. Chicken and the egg. The code and tests predate the writeup.

If you're referring to the W3C Selectors API as "the writeup", then
you're right, however, consider that to be irrelevant here; Selectors
come from CSS2. Get it?

Me get what? Where selectors come from? Or your ignorance regarding
that? It's pretty clear that I get both. How about you?

I've posted links to the CSS 2.1 specification I don't know how many
times. You usually reply by saying it's irrelevant in some colorful form
(e.g. "off in the weeds", "barking up the wrong tree").

Read down below where I wrote "(quoted from the CSS2.1 specification)".

And if you want to know why the section of CSS 2.1 that defines
"identifier" was quoted, the pertinent part of the spec that makes
reference to the term `identifier` in CSS 2.1 is the description of
attribute values.

CSS 2.1[2] states:

| Attribute values must be identifiers or strings.

So now it is necessary to see what an identifier is and I already cited
CSS2.1 definition of identifier.

<http://www.w3.org/TR/CSS2/selector.html#matching-attrs>

So that is a W3C Candidate Recommendation, 08 September 2009. That
means it's still a draft, and so cannot be normatively cited, e.g. "It
is inappropriate to cite this document as other than work in
progress."

However, an official specification that preceded it is CSS 2, dated
1998, which contains the same text, verbatim. You can copy that text
above, go to the 1998 CSS 2 Recommendation, and paste the value into
your browser's "find" feature to see that it has been unchanged since
1998.

This isn't one of those Terminator movies. The W3C can't go back in
time and abort jQuery, SlickSpeed and the like.

I see you are making reference to Terminator movies and the w3c. I don't
see any relevance to anything in this thread.
Wouldn't shock me.

If you don't read it, no, it wouldn't.
Depends on what you consider to be right. Clearly you don't have a
grasp on that yet, so I won't bother investigating the alleged mistake
on MSDN. I presume they are documenting their rendition of QSA. Just
as they previously documented their rendition of innerHTML,
offsetHeight, etc. When and if these things get written up as
recommendations by the W3C, it will not render the previous behavior
and documentation "wrong".

I hear you saying again that I don't understand.

What I understand is that the MSDN documentation referenced above
conflicts with the CSS2.1 specification and its predecessor CSS2, both
cited above.

`att` cannot be, as MSDN states it must, "a String." It must be the name
of the attribute. The MSDN article calls this an HTML feature and goes
on to list nonstandard HTML.

Syntax:
+-------------+------------------------+
| HTML        | [att=val] { sRules }   |
| Scripting   | N/A                    |
+-------------+------------------------+

Although they call this an HTML feature, they really mean it is a CSS
feature. The document is linked from:

<URL:http://msdn.microsoft.com/en-us/library/cc351024(VS.85).aspx#attributeselectors>

I also read that you've made a presumption that the MSDN article is
about IE's implementation of NodeSelector. Do I understand you
correctly?

As stated numerous times on this NG, including threads that you have
replied to, offsetHeight has made it into a w3c recommendation. It is
not a matter of "when and if"; It is called CSSOM. It was a massive f**k
up by Anne van Kesteren. The details have been discussed here and have
involved you, Lasse, and me.
In CSS, identifiers (including element names, classes, and IDs in
selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
characters U+00A1 and higher, plus the hyphen (-) and the underscore
(_); they cannot start with a digit, or a hyphen followed by a digit.
Identifiers can also contain escaped characters and any ISO 10646
character as a numeric code (see next item). For instance, the
identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".

(quoted from the CSS2.1 specification).

Did you read that?
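
For instance, per that rule, querying for a class of "B&W?" requires
those escapes (and a second level of escaping in a script string):

  // Sketch, for browsers with querySelectorAll:
  document.querySelectorAll('.B\\&W\\?');

  // or, equivalently, with character codes:
  document.querySelectorAll('.B\\26 W\\3F');
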
They only support about half of them at this point I think. Certainly
they are nowhere near compliance with CSS3 (which they disingenuously
claim to be). And their script is not 24K by any comparison rooted in
reality (i.e. prior to GZIP). They simply feed tripe to those naive
enough to swallow it. Did you know My Library is roughly 42K after
compression? The whole thing (which does 100 times more than jQuery,
including queries). :)

The script is 166k before minification. jQuery.com claims 155k. And of
course, if they used proper space formatting (not tabs), it would be a
lot larger.
nth-of-type? No idea what they did with that. IIRC, mine is the only
one I know of that attempts to support those (and as I've recently
realized, I didn't quite nail it). The SlickSpeed tests can be
confusing to the uninitiated as most of the libraries now hand off to
QSA, so it may appear that unsupported selectors work. Of course,
that's a big part of the problem. Neophytes like to test in just the
latest browsers and avoid IE as much as possible. IE8 has QSA. IE9
will likely augment its capabilities in that department. So you can
have applications that appear to work if tested just in IE8 standards
mode and/or IE9 but have no shot at all in Compatibility View or
IE < 8. That's why jQuery 1.2.6 (and not 1.2.1 as that was a misprint)
remains on the test page alongside 1.3.x and 1.4.x. That's the last
one that didn't hand off queries to the browser.

IE8 has QSA but not in quirks mode and not in IE7 mode (e.g. by using
EmulateIE7 in a meta tag).
What do you think it should match? It won't match anything in any
query engine I know of. No coincidence it is not featured on any of
the test pages.

Remember that jQuery tries to use QSA; a:link matches links where QSA
does not throw errors.
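
The handoff amounts to this pattern (a simplified sketch;
handRolledEngine stands in for whatever fallback the library ships):

  function query(selector, doc) {
    doc = doc || document;
    if (doc.querySelectorAll) {
      try {
        // Fast path with CSS semantics; here 'a:link' matches
        // unvisited links.
        return doc.querySelectorAll(selector);
      } catch (e) {
        // Invalid or unsupported selector: fall through.
      }
    }
    // Slow path with the library's *own* semantics, so the same
    // selector string can match a different set of elements.
    return handRolledEngine(selector, doc);
  }
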
Who are you talking to? I've said (and demonstrated) from the start
that these query engines are a waste of time. Lack of documentation
is the least of the worries.


Nope. More wasted time.

Testing things against a standard might seem like a waste of time if the
specification is not understood and if reading it is perceived as a
waste of time.

To me, comparing APIs that aren't clearly specified seems like a waste
of time. It misses the point of what an API is for.

Seems our opinions differ here.
Me?! You are just parroting my lines years later. Where do you get
the gall?

I am not parroting your lines.

"My Library" was never a good idea. Start with no unit tests and copy
other library query selectors? That is what all the other libraries are
doing. The only point in that is to try and attract more users, and that
is something you have tried to do, too.

Public APIs are forever. Notice that the one I wrote was AFL for nearly
two years. I made a lot of mistakes and did not want to be tied to a
public (=permanent) API. That is what all of the other libraries did.

In contrast, you did not learn from the others' mistakes. You actually
copied the API approach and then advocated it as superior.
Wrong. MooTools wrote that stupid test years ago. And all of those
stupid libraries that it was written to test used the "$" function. I
just added more test cases and gave my getEBCS function a "$" alias.
Where have you been? These are discussions from over two years ago.

I see on your page: "Additional tests have been added."

Did MooTools add those additional tests or did you?

Mine is not mimicry. It is parody to make a point. And I think it is
(finally) getting through loud and clear.

Copying APIs like that is stupid. Is that your point? Because if it is,
then we agree on that. Not entirely, though; one of my points of
contention is the wild deviation from the W3C specifications published
from 1998-2010, so wild that it appears that none of the library
authors have even read the specifications before writing and
subsequently publishing a public (=permanent) API.

My other point of contention is that publishing public APIs, as the
library authors have done, is not something to be undertaken without
clear goals and understanding, which the library authors do not have.
The APIs that have been published, jQuery, Mootools, Dojo, and most
others, have caused significant and substantial harm to the web. They
have done so by creating inconsistency and instability, but also by
catering to the ignorant developer who is unwilling to read the
pertinent specifications.

Most any library can seem attractive by allowing the developer to
quickly solve problems using familiar-looking CSS selector syntax,
while providing attractive and impressive demos that can be copied,
pasted, and modified.

It sounds attractive, but it is most likely not the right approach for
solving a given set of requirements. Even if the selectors APIs worked
correctly and quickly, they would still be misused as they are today,
often to do things like add an event handler to a list of objects or
toggle the styles of a list of elements instead of using style
cascades and event delegation, both of which are faster and usually
result in much simpler code.

Garrett
 
D

David Mark

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]


Positions too:
http://www.cinsoft.net/position.html
Where are the unit tests?
That's all you ever say.  Where is your understanding of the basic
logic.  IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions.  At the time,
you seemed to be the only one who didn't get it.
It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they?

No tests? I read earlier that you had unit tests. Either you do or you
don't.

You keep chattering about unit tests. I never know what you are
referring to. I remember the recent "you keep talking about unit
tests" comment, but that was my line. I presumed you meant My
Library. This test page we are talking about is not part of My
Library. Just a proving ground for a replacement for
API.getElementPositionStyle, which I will soon be deprecating.
I haven't seen them.

Of course you have. We've been discussing them here for days.
I saw a demo of your function. Is that what you refer to as a "unit
test".

That's one of the primers! And no, I never called it a "unit test".
You are the one that keeps chattering about unit tests, not me.
You can call it that, if you like, but don't be surprised if
people who know what a unit test is are puzzled by your flagrantly
deceptive misuse of the term.

Groan. Back on third base. :)
I'm not convinced. I'd want to see a test where the element has top:
auto and BODY has a margin.

If you had the slightest clue what we were talking about, you'd know
that BODY margin is irrelevant. As for automatic top, left, right,
bottom. That's the whole point. It actually works whether you define
the styles in your CSS or not.
I've also seen the case where it failed in MSIE in the example from
thread "getComputedStyle where is my LI". It got inconsistent results in
different versions of IE.

Groan again. I already replied to that line. Once again, you have no
clue what you are testing or what results to expect. Zero. Of course
it can return different numbers in different browsers, rendering
modes, etc. That doesn't mean the results are wrong. Do you
understand what makes a result right for these functions? I explained
it just an hour ago in response to an identical suggestion.
If the burden of proof is on you to prove that the code works, you've
failed to do that.
LOL.


If the burden of proof is on me to prove that it doesn't, I've succeeded
in one case.

You most assuredly have not. You don't even know what you are trying
to prove.
I don't think about



getComputedStyle is broken in implementations, but the problems are
avoidable by defining left/top values in the stylesheet.

Wrong. That's a very spotty solution and not always possible. For
instance, you may not wish to use pixel units or define all of the
coordinates.
In contrast, offsetTop/Left/Parent have divergent behavior.

Those are fully accounted for by my (very simple) equations. How many
times?!
The approach I have taken is to follow the specification and write
tests.

All together: IE doesn't have getComputedStyle.
It is not infallible, as getComputedStyle has problems.

Does it ever. Including absence in IE.
I am
somewhat optimistic that the edge cases where it fails -- which are
avoidable by specifying a top and left value in the stylesheet -- are
being fixed.

Who cares what is in the process of being fixed in some unspecified
number of browsers? My solution works in anything. Again, basic math
is infallible.
So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does.
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense.  Don't rely on these silly query engines (or
their documentation).
http://www.cinsoft.net/slickspeed.html
Pass on that.
Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.

A demonstration of futility - I agree.

Other than the last two columns of course. :)
The idea of copying an API without knowing the expected outcome of
its core method is ludicrous. Even worse when the main goal of the API
is popularity.

What are you talking about now? I've always recommended *against*
using *any* query engine. I created one to show how simple it is to
do so and therefore silly to rely on futile efforts like jQuery.
The first column would be a good place for that. For example:
+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    |    ???          |
+------------------+----------+------------------+
Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.
Understand that CSS selector queries came long before the standard

The Internet *is* like an idiot amplifier, isn't it?

Yes. God knows.
You've still not read the CSS 2.1 specification and your ignorance is
shining.

We are talking about selector *queries* (e.g. Selectors API).
What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,
Obviously, the same thing that they would do with quotes around them.

No, not obviously; not at all. Unquoted, they're nonstandard and
proprietary.

Again, there was no standard for selector queries when these things
were created.
Read the code and you'll see that unquoted, they resolve properties.
Quoted, they use a standard interface. In jQuery, anyway.

jQuery is botched. They don't even know what their own code does with
attributes. That's one of the things I demonstrated (years ago).
If you're referring to the W3C Selectors API as "the writeup", then
you're right, however, consider that to be irrelevant here; Selectors
come from CSS2. Get it?

I get that you are grasping at straws at this point (on two different
fronts).
Me get what? Where selectors come from? Or your ignorance regarding
that? It's pretty clear that I get both. How about you?

Oh brother. Of course CSS selector queries use CSS selectors. So
what?
I've posted links to the CSS 2.1 specification I don't know how many
times. You usually reply by saying it's irrelevant in some colorful form
(e.g. "off in the weeds", "barking up the wrong tree").

Like with the positioning stuff? Yes, you always resort to quoting
specs when confused.
Read down below where I wrote "(quoted from the CSS2.1 specification)".

And if you want to know why the section of CSS 2.1 that defines
"identifier" was quoted, the pertinent part of the spec that makes
reference to the term `identifier` in CSS 2.1 is the description of
attribute values.

CSS 2.1[2] states:

| Attribute values must be identifiers or strings.

So now it is necessary to see what an identifier is and I already cited
CSS2.1 definition of identifier.

<http://www.w3.org/TR/CSS2/selector.html#matching-attrs>

So that is a W3C Candidate Recommendation, 08 September 2009. That
means it's still a draft, and so cannot be normatively cited, e.g. "It
is inappropriate to cite this document as other than work in
progress."

Indeed. So stop talking about "standards" for query engines. And no,
they don't have to follow CSS2.1 exactly. They never have. Not CSS3
either, despite claims to the contrary.
However, an official specification that preceded it is CSS 2, dated
1998, which contains the same text, verbatim. You can copy that text
above, go to the 1998 CSS 2 Recommendation, and paste the value into
your browser's "find" feature to see that it has been unchanged since
1998.

<http://www.w3.org/TR/2008/REC-CSS2-20080411/selector.html#q10>

What a waste of time.
I see you are making reference to Terminator movies and the w3c. I don't
see any relevance to anything in this thread.





If you don't read it, no, it wouldn't.

That doesn't make any sense.
Depends on what you consider to be right.  Clearly you don't have a
grasp on that yet, so I won't bother investigating the alleged mistake
on MSDN.  I presume they are documenting their rendition of QSA.  Just
as they previously documented their rendition of innerHTML,
offsetHeight, etc.  When and if these things get written up as
recommendations by the W3C, it will not render the previous behavior
and documentation "wrong".

I hear you saying again that I don't understand.

What I understand is that the MSDN documentation referenced above
conflicts with the CSS2.1 specification and its predecessor CSS2, both
cited above.

`att` cannot be, as MSDN states it must, "a String." It must be the name
of the attribute. The MSDN article calls this an HTML feature and goes
on to list nonstandard HTML.

Syntax:
+-------------+------------------------+
| HTML        | [att=val] { sRules }   |
| Scripting   | N/A                    |
+-------------+------------------------+

Although they call this an HTML feature, they really mean it is a CSS
feature. The document is linked from:

<URL:http://msdn.microsoft.com/en-us/library/cc351024(VS.85).aspx#attr...
 >

I also read that you've made a presumption that the MSDN article is
about IE's implementation of NodeSelector. Do I understand you
correctly?

As stated numerous times on this NG, including threads that you have
replied to, offsetHeight has made it into a w3c recommendation. It is
not a matter of "when and if"; It is called CSSOM. It was a massive f**k
up by Anne van Kesteren. The details have been discussed here and have
involved you, Lasse, and me.


In CSS, identifiers (including element names, classes, and IDs in
selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
characters U+00A1 and higher, plus the hyphen (-) and the underscore
(_); they cannot start with a digit, or a hyphen followed by a digit.
Identifiers can also contain escaped characters and any ISO 10646
character as a numeric code (see next item). For instance, the
identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".
(quoted from the CSS2.1 specification).

Did you read that?

I don't care about that. Queries are a bad idea. End of story. My
creation of a query engine over a weekend two and a half years ago
notwithstanding. Get that?
The script is 166k before minification.


That's just as irrelevant as measuring compressed. The bigger the
better even (likely more comments).
jQuery.com claims 155k.

They lie about everything.
And of
course, if they used proper space formatting (not tabs), it would be a
lot larger.

They use tabs?! :(
IE8 has QSA but not in quirks mode and not in IE7 mode (e.g. by using
EmulateIE7 in a meta tag).

No kidding. :) Or you can just say Compatibility View (however it is
invoked).
Remember that jQuery tries to use QSA; a:link matches links where QSA
does not throw errors.

Again, my line. I've been saying it ever since "Sizzle" came out.
What a con to hand off queries to the browser, knowing that the
results would vary wildly from the fall back. Basically, QSA put all
of the "major" query engine in an untenable position. They tried a
big deception and apparently the masses bought it.
Testing things against a standard might seem like a waste of time if the
specification is not understood and if reading it is perceived as a
waste of time.

Nobody should care about queries at this point. I know I don't.
To me, comparing APIs that aren't clearly specified seems like a waste
of time. It misses the point of what an API is for.

Seems our opinions differ here.

No, you just misinterpret everything. It's impossible to carry on a
conversation.
I am not parroting your lines.

Certainly you are.
"My Library" was never a good idea.

LOL. The ideas it promoted were good enough for everyone else to
steal, blog about, etc. Where have you been?
Start with no unit tests

Start with no unit tests? What does that even mean? It most
assuredly has unit tests (and has for some time).
and copy
other library query selectors?

Nope. The (optional and discouraged) query module was never the
point.
That is what all the other libraries are
doing.

All of the other libraries are slowly, painfully evolving to look like
mine (as predicted).
The only point in that is to try and attract more users, and that
is something you have tried to do, too.

What the hell are you talking about now?
Public APIs are forever. Notice that the one I wrote was AFL for nearly
two years. I made a lot of mistakes and did not want to be tied to a
public (=permanent) API. That is what all of the other libraries did.


So it all comes back to your knock-off. Whatever.
In contrast, you did not learn from the others' mistakes. You actually
copied the API approach and then advocated it as superior.

Copied what API approach? My (dynamic) API is nothing like the rest
of them. How could you miss that, as I stressed that point back in
the CWR days (fall of 2007)? Then there are the advanced feature testing
techniques, which were also unheard of at the time. Superior? You
bet. ;)

I see on your page: "Additional tests have been added."

Yes. Truth in advertising. :)
Did MooTools add those additional tests or did you?

See above.
Copying APIs like that is stupid.

Please stop blithering about the query engine.

[...]

I don't have time for any more of this nonsense.
 
G

Garrett Smith

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]



Positions too:

Where are the unit tests?
That's all you ever say. Where is your understanding of the basic
logic. IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions. At the time,
you seemed to be the only one who didn't get it.
It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they?

No tests? I read earlier that you had unit tests. Either you do or you
don't.

You keep chattering about unit tests. I never know what you are
referring to.

You said you had unit tests and now you don't know what that means. I'm
confused.

I started off building a test runner. It is incomplete, but has unit
tests itself, so you can see what exactly it does (and doesn't).
tests" comment, but that was my line. I presumed you meant My
Library. This test page we are talking about is not part of My
Library. Just a proving ground for a replacement for
API.getElementPositionStyle, which I will soon be deprecating.

Got it.
Of course you have. We've been discussing them here for days.

Sorry but I am not following along. Please post a link to what it is
that you are referring to; it isn't clear.
That's one of the primers! And no, I never called it a "unit tests".
You are the one that keeps chattering about unit tests, not me.

So it's not the primer. It might be the slickspeed test. I really don't
like guessing games.

[...]

I'm not sure. I recall having arrived at the conclusion that in certain
cases, offsetTop would be inconsistent with itself in the same
implementation -- and for the same element on which it had been
previously used. I recall specifically for BODY element. Are there
others? I can't remember for sure.
If you had the slightest clue what we were talking about, you'd know
that BODY margin is irrelevant. As for automatic top, left, right,
bottom. That's the whole point. It actually works whether you define
the styles in your CSS or not.

I understand the code. For a tutorial, a diagram could show how the
function moves the element, then reads its position values.

The code could be changed to make it more understandable. The reader has
to go back and forth to see what sides[1] means, for example. The basic
strategy is to find out, for say "style.left", what pixel value setting
would leave its offsetLeft unchanged (i.e. change it by 0).

Here's what I understand the algorithm as doing.

// (GS) Save offsetTop and offsetLeft
   offsetLeft = el.offsetLeft;
   offsetTop = el.offsetTop;

// (GS) Set right and bottom to "auto".
   el.style[sides[2]] = 'auto';
   el.style[sides[3]] = 'auto';

// (GS) Compare saved offsetLeft to current.
// (GS) If there is a difference, it means the element had
// (GS) 'right' positioning. Thus, set result.left to
// (GS) null. Follow accordingly for offsetTop/result.top
   if (offsetLeft != el.offsetLeft) {
     result[sides[0]] = null;
   }
   if (offsetTop != el.offsetTop) {
     result[sides[1]] = null;
   }

// (GS) Store the offsetLeft and offsetTop once again in case
// (GS) they changed when setting style.right/bottom to 'auto'
   offsetLeft = el.offsetLeft;
   offsetTop = el.offsetTop;

// (GS) Set style.left to have offsetLeft and then do accordingly
// (GS) for top.
   el.style[sides[0]] = offsetLeft + 'px';
   el.style[sides[1]] = offsetTop + 'px';

// (GS) (checking to see if right position was not null...)
// (GS) If el.offsetLeft is different, take original offsetLeft,
// (GS) add to that the amount added (offsetLeft again, so
// (GS) 2 * offsetLeft), and then subtract the new found el.offsetLeft.
   if (result[sides[0]] !== null && el.offsetLeft != offsetLeft) {

// (GS) When is sides[0] not "left" ?
     if (sides[0] == 'left') {
       result[sides[0]] = offsetLeft - el.offsetLeft + offsetLeft;
     } else {
       result[sides[0]] = el.offsetLeft;
     }
   }

Nit: Getting top/right is getting in the way of getting left/bottom.

Nit: "right" is easier to recognize than "sides[2]". Mostly because I
can read "left" without having to remember, but also because the code
imposes a proprietary ordering of "left, top, right, bottom". In
contrast, CSS usually uses "top right bottom left" ordering. But order
doesn't matter if property names are used.

Off the top of my head, there are two kinds of cases where that
could fail. They are:
1) where setting style.left does not cause offsetLeft to be immediately
updated.
2) where offsetLeft is inconsistent with itself.

I can't recall ever setting style.left and having offsetLeft not
recalc'd immediately. I'd suspect Safari 2, so that problem might be
ruled out.

I do recall that offsetTop is inconsistent with itself for BODY element,
and so if you are trying to apply this function to get position styles
for BODY, I am pretty sure it will fail.
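
Reduced to the left/top pair, the set-and-measure idea is this (my
reduction for discussion, not the function under review; note that it
writes to the element's style as a side effect):

  function getLeftTopPositionStyle(el) {
    var offsetLeft = el.offsetLeft, offsetTop = el.offsetTop;

    // Pin the element with candidate pixel values...
    el.style.left = offsetLeft + 'px';
    el.style.top = offsetTop + 'px';

    // ...then cancel out whatever shift the assignment caused
    // (offset* may be measured from a different reference point).
    var left = offsetLeft - (el.offsetLeft - offsetLeft);
    var top = offsetTop - (el.offsetTop - offsetTop);

    el.style.left = left + 'px';
    el.style.top = top + 'px';
    return { left: left, top: top };
  }

It relies on offsetLeft/Top being recalculated, and staying
consistent, between writes; both of the failure modes above break
that assumption.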

Groan again. I already replied to that line. Once again, you have no
clue what you are testing or what results to expect. Zero. Of course
it can return different numbers in different browsers, rendering
modes, etc. That doesn't mean the results are wrong. Do you
understand what makes a result right for these functions? I explained
it just an hour ago in response to an identical suggestion.

Notice that the bottom and right values are either 0 or a much larger
number. Using i7.style.bottom = result.bottom results in moving the LI
in IE6 and IE7 (standards mode). It works fine in IE8 and other browsers
I tested.
If you don't take it seriously, why should anyone else?
You most assuredly have not. You don't even know what you are trying
to prove.

No, I can show the function doesn't get bottom styles correct in IE6 and
IE7 -- and by "correct", I mean, that the function returns a value for
bottom that, when applied to el.style.bottom, causes a big shift in the
element.

[...]
Wrong. That's a very spotty solution and not always possible. For
instance, you may not wish to use pixel units or define all of the
coordinates.

getComputedStyle gets pixel units for left, as long as left has been
set to a length.

Otherwise, getComputedStyle can result in a value of "auto" or percentage
values.

Browsers vary on that, but by specifying a length for left, a length in
px will be returned by getComputedStyle.
Those are fully accounted for by my (very simple) equations. How many
times?!

It is accounted for, but only by the expectation that offsetTop values
are consistent with themselves.
All together: IE doesn't have getComputedStyle.


Does it ever. Including absence in IE.

When the values wanted are left/top. The strategy employed in your
function should also be able to get marginLeft and marginTop. Perhaps
the function can be parameterized for a style value and delegate to a
more specific private (hidden) method.

It looks like that function might also be used internally for a getStyle
method.

function getStyle(el, prop) {
  if (leftTopExp.test(prop)) {
    return getLeftTopPositionStyle(el, prop);
  }
}

etc.

[...]

Garrett
 
G

Garrett Smith

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]

So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does.
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense. Don't rely on these silly query engines (or
their documentation).

Pass on that.
Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.

A demonstration of futility - I agree.

Other than the last two columns of course. :)

What? "My Library"? Save the best for last? A mockery of jQuery?
Superior? Heh (remembers SNL church lady)
What are you talking about now? I've always recommended *against*
using *any* query engine. I created one to show how simple it is to
do so and therefore silly to rely on futile efforts like jQuery.

If you've copied jQuery and you're matching attributes for the "bare
words" selector, you've introduced divergent behavior over something
that was badly specified and probably not understood in the first place.

If you're implementing a Selectors API based on CSS2 selectors, then
you're implementing something that is different from that, too. See,
you can't win with this stuff; there is no "superior", only wasted
time on BS that nobody knows what it's supposed to do. The only thing
it is useful for is self-promotion; to impress others that you've done
"something".
The first column would be a good place for that. For example:
+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    | ???              |
+------------------+----------+------------------+
Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.
Understand that CSS selector queries came long before the standard

The Internet *is* like an idiot amplifier, isn't it?

Yes. God knows.

Really?
You've still not read the CSS 2.1 specification and your ignorance is
shining.

We are talking about selector *queries* (e.g. Selectors API).

Oh come on, we all know you don't read the specs. Selectors API does not
define any selectors; it just makes reference to CSS2.1. I've already
cited the spec; it's up to you to RTFM.

http://www.w3.org/TR/selectors-api/

Read the abstract; that'll tell you that it's all about CSS2.1 and
CSS2.1 got its selectors from CSS2, which was a standard in 1998.

[...]
What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,
Obviously, the same thing that they would do with quotes around them.

No, not obviously; not at all. Unquoted, they're nonstandard and
proprietary.

Again, there was no standard for selector queries when these things
were created.

Again, you're making false assertions. Please do us all a favor and go
RTFM. I'm not going over this again.

Selectors: http://www.w3.org/TR/selectors-api/
Based on CSS2.1:
http://www.w3.org/TR/2009/CR-CSS2-20090908/

Based on CSS2:
http://www.w3.org/TR/2008/REC-CSS2-20080411/

It goes all the way back to 1998. Also, the jQuery docs mention that
they "borrow" from CSS1-3, but don't link to any w3c specifications.
Why should they? jQuery selectors are different from and incompatible
with CSS selectors. Besides, I get the sense that jQuery users are
almost expected to *not* read the pertinent specs. It almost seems as
if keeping people in the dark is deliberate.
jQuery is botched. They don't even know what their own code does with
attributes. That's one of the things I demonstrated (years ago).


I get that you are grasping at straws at this point (on two different
fronts).

I RTFM. You didn't. You don't have to read the specs, david, but if you
don't, then you're not going to be in a position where you can argue
about their contents. See?

Oh brother. Of course CSS selector queries use CSS selectors. So
what?

What defines a valid CSS selector? The Selectors API? Nope -- it comes
right out of CSS specs - originally from CSS2.
Like with the positioning stuff? Yes, you always resort to quoting
specs when confused.

I often do end up searching for and reading specs/documentation to
cure misunderstanding.
Read down below where I wrote "(quoted from the CSS2.1 specification)".

And if you want to know why the section of CSS 2.1 that defines
"identifier" was quoted, the pertinent part of the spec that makes
reference to the term `identifier` in CSS 2.1 is the description of
attribute values.

CSS 2.1[2] states:

| Attribute values must be identifiers or strings.

So now it is necessary to see what an identifier is and I already cited
CSS2.1 definition of identifier.

<http://www.w3.org/TR/CSS2/selector.html#matching-attrs>

So that is a W3C Candidate Recommendation, 08 September 2009. That
means it's still a draft, and so cannot be normatively cited, e.g. "It
is inappropriate to cite this document as other than work in
progress."

Indeed. So stop talking about "standards" for query engines. And no,
they don't have to follow CSS2.1 exactly. They never have. Not CSS3
either, despite claims to the contrary.

We're at a disconnect here. There is no way to have CSS selectors
without defining what CSS Selectors are. CSS does that; CSS2 defines
selector syntax.
What a waste of time.

Uh huh. Not surprised to read that coming from you.
That doesn't make any sense.

No, it does; you just didn't get it (and I'm too tired to explain now).

[...]

LOL. The ideas it promoted were good enough for everyone else to
steal, blog about, etc. Where have you been?


Start with no unit tests? What does that even mean? It most
assuredly has unit tests (and has for some time).

Post a link or STFU about your unit tests. Please.
Nope. The (optional and discouraged) query module was never the
point.


All of the other libraries are slowly, painfully evolving to look like
mine (as predicted).


What the hell are you talking about now?

Library authors try to get more users. That is the motivation for them
copying each other.
So it all comes back to your knock-off. Whatever.

No knock off. I'm explaining the concept of what it means to write a
public API. The current library authors have all published APIs, and
to a desperate public who quickly adopted them.

Many of these libraries were designed around, as you know, browser
sniffing and misconceptions about how browsers work and how ecmascript
works. Retrofitting a design to account for the bad practices that
shaped the initial design -- indeed the API itself -- is the problem.

Seeing that, I waited. I was asked by FuseJS authors for my code. I was
asked by John Resig for my code. Not until very recently did I change my
license to BSD. It was AFL for two years. Why? Because if I had done
like the other library authors and published it, it would have been
adopted. I didn't want that.

Except now what's happened is that they've gone and cherry-picked and
changed things without attribution in the comments (which would be a
license violation).

So, they can copy, but I can also stand by not having my name on code
that wasn't definitely ready for public consumption.

Copied what API approach? My (dynamic) API is nothing like the rest
of them. How could you miss that, as I stressed that point back in
the CWR days (fall of 2007)? Then there are the advanced feature testing
techniques, which were also unheard of at the time. Superior? You
bet. ;)

Advanced feature testing? Like what? Appending nodes and checking them?
Did you invent that technique? Now I'd like to know where it came from
first.
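
The technique, as I understand it, is roughly this (a generic sketch,
nobody's library code, with a name of my own invention; run it after
the body is available):

  var computedLeftIsTrustworthy = (function() {
    if (!window.getComputedStyle) { return false; }
    var div = document.createElement('div');
    div.style.position = 'absolute';
    div.style.left = '10px';
    document.body.appendChild(div);
    // Does the implementation report back the length we set?
    var ok = window.getComputedStyle(div, null).left == '10px';
    document.body.removeChild(div);
    return ok;
  })();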

[...]

It came across as "me too".

Garrett
 
D

David Mark

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
Positions too:
http://www.cinsoft.net/position.html
Where are the unit tests?
That's all you ever say.  Where is your understanding of the basic
logic.  IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions.  At the time,
you seemed to be the only one who didn't get it.
It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they?
No tests? I read earlier that you had unit tests. Either you do or you
don't.
You keep chattering about unit tests.  I never know what you are
referring to.

You said you had unit tests and now you don't know what that means. I'm
confused.

You are certainly confused. I meant I don't know what you are asking
about (e.g. unit tests for *what*?)
I started off building a test runner. It is incomplete, but has unit
tests itself, so you can see what exactly it does (and doesn't).

For what?!
Got it.





Sorry but I am not following along. Please post a link to what it is
that you are referring to; it isn't clear.

The size and position primers. I've posted the links repeatedly.
So it's not the primer. It might be the slickspeed test. I really don't
like guessing games.

I don't like going around in circles. :(
[...]



I'm not sure. I recall having arrived at the conclusion that in certain
cases, offsetTop would be inconsistent with itself in the same
implementation -- and for the same element on which it had been
previously used. I recall specifically for BODY element. Are there
others? I can't remember for sure.

I have no idea what that means.
I understand the code. For a tutorial, a diagram could show how the
function moves the element, then reads its position values.

I suppose it would. No such tutorial exists at this time though. I
thought I explained it well enough (about a hundred times!)
The code could be changed to make it more understandable. The reader has
to go back and forth to see what sides[1] means, for example. The basic
strategy is to find out, for say "style.left", what pixel value setting
would leave its offsetLeft unchanged (i.e. change it by 0).

Yes, you got it.
Here's what I understand the algorithm as doing.

// (GS) Save offsetTop and offsetLeft
   offsetLeft = el.offsetLeft;
   offsetTop = el.offsetTop;

// (GS) Set right and bottom to "auto".
   el.style[sides[2]] = 'auto';
   el.style[sides[3]] = 'auto';

// (GS) Compare saved offsetLeft to current.
// (GS) If there is a difference, it means the element had
// (GS) 'right' positioning. Thus, set result.left to
// (GS) null. Follow accordingly for offsetTop/result.top
   if (offsetLeft != el.offsetLeft) {
     result[sides[0]] = null;
   }
   if (offsetTop != el.offsetTop) {
     result[sides[1]] = null;
   }

// (GS) Store the offsetLeft and offsetTop once again in case
// (GS) they changed when setting style.right/bottom to 'auto'
   offsetLeft = el.offsetLeft;
   offsetTop = el.offsetTop;

// (GS) Set style.left to have offsetLeft and then do accordingly
// (GS) for top.
   el.style[sides[0]] = offsetLeft + 'px';
   el.style[sides[1]] = offsetTop + 'px';

// (GS) (checking to see if right position was not null...)
// (GS) If el.offsetLeft is different, take original offsetLeft,
// (GS) add to that the amount added (offsetLeft again, so
// (GS) 2 * offsetLeft), and then subtract the new found el.offsetLeft.
   if (result[sides[0]] !== null && el.offsetLeft != offsetLeft) {

// (GS) When is sides[0] not "left" ?
     if (sides[0] == 'left') {
       result[sides[0]] = offsetLeft - el.offsetLeft + offsetLeft;
     } else {
       result[sides[0]] = el.offsetLeft;
     }
   }

Nit: Getting top/right is getting in the way of getting left/bottom.

Not sure what you mean, but it is just a test function for
demonstration purposes (i.e. you can't specify which pair of
coordinates you want).
Nit: "right" is easier to recognize than "sides[2]". Mostly because I
can read "left" without having to remember, but also because the code
imposes a proprietary ordering of "left, top, right, bottom". In
contrast, CSS usually uses "top right bottom left" ordering. But order
doesn't matter if property names are used.

Yes. The demonstration was posted as a proof of concept. I never
meant it to be a tutorial (and thought we had discussed it more than
enough here in the days leading up to the posting).
Off the top of my head, there are two kinds of cases where that
could fail. They are:
1) where setting style.left does not cause offsetLeft to be immediately
updated.

Anything is possible, but ISTM it will always be updated on being read
back. And that's been my experience (over the last ten years plus).
2) where offsetLeft is inconsistent with itself.

Inconsistent with itself?!
I can't recall ever setting style.left and having offsetLeft not
recalc'd immediately.

Me either and I've tested a hundred browsers/configurations at a
minimum.
I'd suspect Safari 2, so that problem
might be ruled out.
What?


I do recall that offsetTop is inconsistent with itself for BODY element,

What does that mean?
and so if you are trying to apply this function to get position styles
for BODY, I am pretty sure it will fail.

If you are really intent on positioning the BODY and assuming your
memory of offsetLeft "being inconsistent with itself" is accurate (and
has meaning), then perhaps. But who on earth would try to position
the BODY? Where exactly would you position it and how do you figure
it would start out anywhere but its origin (i.e. left and top 0)?
Notice that the bottom and right values are either 0 or a much larger
number. Using i7.style.bottom = result.bottom results in moving the LI
in IE6 and IE7 (standards mode). It works fine in IE8 and other browsers
I tested.

Notice? What's i7, how is it positioned and what left/top/right/
bottom styles did it start with? Also, you apparently failed to tack
on "px", which will indeed cause problems in standards mode in many
browsers. IIRC, quirks mode will let you get away with that.
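
That is, assuming result.bottom is a number of pixels:

  // Standards mode wants units; quirks mode forgives a bare number.
  i7.style.bottom = result.bottom + 'px';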

If you really want me to notice something, post a test page.
If you don't take it seriously, why should anyone else?

I was laughing at the idea that you had somehow topped me in the area
of proving this thing. AFAICT, you haven't proven anything as of yet.
No, I can show the function doesn't get bottom styles correct in IE6 and
IE7 -- and by "correct", I mean, that the function returns a value for
bottom that, when applied to el.style.bottom, causes a big shift in the
element.

I haven't seen you prove anything of the sort.
[...]


Wrong.  That's a very spotty solution and not always possible.  For
instance, you may not wish to use pixel units or define all of the
coordinates.

getComputedStyle gets pixel units for left, as long as left has been
set to a length.

For the umpteenth time, getComputedStyle does not exist in IE. That's
one of the main points of this.
Otherwise, getComputedStyle can result in a value of "auto" or percentage
values.

Browsers vary on that, but by specifying a length for left, a length in
px will be returned by getComputedStyle.

It depends. That method is full of bugs. If you specify a left and a
top, you will likely get away with it. Specify one and not the other
and all bets are off. And, of course, the method doesn't exist in IE
(at least not IE < 9).
It is accounted for, but only by the expectation that offsetTop values
are consistent with themselves.

Again, what is that supposed to mean?
When the values wanted are left/top. The strategy employed in your
function should also be able to get marginLeft and marginTop.

Yes, you can calculate left/top margins in similar fashion. And a
similar technique works like a charm for height/width.
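
Roughly like this (my sketch of the same idea, assuming content-box
sizing; same side-effect caveat as the position version):

  // Find the style.width (px) that preserves the current rendered
  // width, without trusting computed styles.
  function getWidthStyle(el) {
    var offsetWidth = el.offsetWidth;

    // Set a candidate and read back the result...
    el.style.width = offsetWidth + 'px';

    // ...then subtract what the browser added (borders, padding)
    // so the element keeps its rendered width.
    var width = offsetWidth - (el.offsetWidth - offsetWidth);
    el.style.width = width + 'px';
    return width;
  }
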
Perhaps
the function can be parameterized for a style value and delegate to a
more specific private (hidden) method.

It looks like that function might also be used internally for a getStyle
method.

function getStyle(el, prop) {
  if (leftTopExp.test(prop)) {
    return getLeftTopPositionStyle(el, prop);
  }
}

etc.

Yes, but I don't believe in tangling up such low-level functions.
That's why I created higher-level functions to deal with these issues.

Thanks for (some of) the input. I really would be interested to see
an example that fails. I just haven't seen it yet. And I have used
this technique with positioned list items in menu scripts in the past
(and tested every version of IE back to 5.0). Of course, that was
only left/top. The right/bottom bit is a new addition.
 
D

David Mark

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
So, if that element is matched in the author's selector query
"img[width=600]" the query would not be doing what he says it does.
Namely, it would not match "all the images whose width is 600px".
Yes, it's all nonsense.  Don't rely on these silly query engines (or
their documentation).
http://www.cinsoft.net/slickspeed.html
Pass on that.
Whether you pass or not, it is quite a demonstration of the futility
of the query-based libraries.
A demonstration of futility - I agree.
Other than the last two columns of course.  :)

What? "My Library"? Save the best for last? A mockery of jQuery?
Superior? Heh (remembers SNL church lady)

You are babbling. Certainly the one bit that is tested by that page
(queries) is superior to jQuery's catastrophic implementations.
If you've copied jQuery and you're matching attributes for the "bare
words" selector, you've introduced divergent behavior over something
that was badly specified and probably not understood in the first place.

Quotes are optional for the attribute-related queries. And I really
don't care what the specs say about that.
If you're implementing a Selectors API based on CSS2 selectors, then
you're implementing something that is different then that, too. See, you
can't win with this stuff, there is no "superior",

There sure as hell is. :)
only wasting time
with bs that nobody knows what it's supposed to do.

Everybody (except perhaps the authors/users of jQuery, Dojo, YUI,
etc.) knows what a query by attribute is supposed to do. How hard is
it to grasp?
The only thing it is
useful for is self-promotion; to impress others that you've done
"something".

I've done lots of things. What have you done?
The first column would be a good place for that. For example:
+------------------+----------+------------------+
| Selector         | Expected | NWMatcher 1.2.1  |
| div[class^=dia]  | Error    |    ???           |
+------------------+----------+------------------+
Who has any idea what nonstandard selectors should do? Based on what?
jQuery documentation? The libraries all copied the jQuery library and
the jQuery library has always had vague documentation and inconsistent
results across browsers and between versions of jQuery.
Understand that CSS selector queries came long before the standard
The Internet *is* like an idiot amplifier, isn't it?
Yes.  God knows.

Really?

Yes; it's a euphemism for "ain't that the truth".
Oh come on, we all know you don't read the specs.

The royal "we" I presume. And what specs do you know that I don't
read?
Selectors API does not
define any selectors; it just makes reference to CSS2.1.

So what? It also specifies lots of other things that have nothing to
do with the CSS specs.
I've already
cited the spec; it's up to you to RTFM.

No it isn't. I've stated repeatedly that queries should not be used
by anyone, regardless of what specs they do or do not follow.
http://www.w3.org/TR/selectors-api/

Read the abstract; that'll tell you that it's all about CSS2.1 and
CSS2.1 got its selectors from CSS2, which was a standard in 1998.

I've read it (of course) and it is not "all about" CSS2.1. It's about
"standardizing" the loopy CSS selector queries popularized by jQuery
and the like. It got them "wrong" per the established "standards" in
more ways than can be described by citing the CSS2.1 specs. That's
what makes it all the more appalling that Resig and co. dropped them
on top of their old stuff and announced a huge speed boost (neglecting
to warn of the myriad inconsistencies that would arise from this ill-
advised parlor trick).
[...]


What can "td[colspan!=1]" or "div[class!=madeup]" be expected todo,
Obviously, the same thing that they would do with quotes around them.
No, not obviously; not at all. Unquoted, they're nonstandard and
proprietary.
Again, there was no standard for selector queries when these things
were created.

Again, you're making false assertions. Please do us all a favor and go
RTFM.

Us? You are the only one complaining. You've spent more time
complaining about it than it took me to write it. The rest of the
world either shrugged or got the point that queries were poison.
I'm not going over this again.
Good.


Selectors:http://www.w3.org/TR/selectors-api/
Based on CSS2.1:http://www.w3.org/TR/2009/CR-CSS2-20090908/

Based on CSS2: http://www.w3.org/TR/2008/REC-CSS2-20080411/

Goes all the way back to 1998 and also jQuery docs mention that they
"borrow" from CSS1-3.

The jQuery docs are worthless.
Doesn't link to any w3c specifications. Why should
it? jQuery selectors are different from and incompatible with CSS
selectors.

Each version of jQuery is incompatible with the previous and next.
And the authors seem oblivious as they patch and hack their way
through to results that appear to work in a handful of the latest
browsers. What else is new?
Besides, I get the sense that jQuery users are almost
expected to *not* read the pertinent specs.

Judging from most of their writings, it would seem a stretch to assume
they can read anything more advanced than a mystery novel.
It almost seems as if
keeping people in the dark is deliberate.

Of course it is. Why do you think Resig and his shills tell their
users to avoid this newsgroup?
I RTFM. You didn't.

You have no clue what I did or did not do.
You don't have to read the specs, David, but if you
don't, then you're not going to be in a position where you can argue
about their contents. See?

See above.
What defines a valid CSS selector? The Selectors API? Nope -- it comes
right out of CSS specs - originally from CSS2.

Again, so? Who said these query things have ever been based on "valid
CSS selectors" (jQuery's well-documented fibs notwithstanding).
I often do end up searching for and reading specs/documentation to
cure misunderstanding.

But they don't always lead to understanding. Sometimes they have the
opposite effect.
Read down below where I wrote "(quoted from the CSS2.1 specification)".
And if you want to know why the section of CSS 2.1 that defines
"identifier" was quoted: the pertinent part of CSS 2.1 that makes
reference to the term `identifier` is the section describing attribute
values.
CSS 2.1[2] states:
| Attribute values must be identifiers or strings.
So now it is necessary to see what an identifier is, and I already
cited the CSS2.1 definition of identifier.
<http://www.w3.org/TR/CSS2/selector.html#matching-attrs>
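
To make the quoted definition concrete (my examples, using the
Selectors API where available):-

// CSS 2.1: attribute values must be identifiers or strings. "dia" is
// a valid identifier, so the quotes are optional there; "600" starts
// with a digit, so it is not an identifier and is only valid quoted.
document.querySelectorAll('div[class=dia]');    // identifier - valid
document.querySelectorAll('div[class="dia"]');  // string - valid
document.querySelectorAll('img[width="600"]');  // string - valid
// 'img[width=600]' is *invalid* per CSS 2.1 (600 is not an identifier)
// and a conforming implementation will throw on it.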
So that is a W3C Candidate Recommendation (08 September 2009). That
means it's still a draft, and so cannot be normatively cited. e.g. "It
is inappropriate to cite this document as other than work in progress."
Indeed.  So stop talking about "standards" for query engines.  And no,
they don't have to follow CSS2.1 exactly.  They never have.  Not CSS3
either, despite claims to the contrary.

We're at a disconnect here. There is no way to have CSS selectors
without defining what CSS Selectors are. CSS does that; CSS2 defines
selector syntax.

Just forget queries. Then you don't have to worry about them.
Uh huh. Not surprised to read that coming from you.

I was referring to this entire, interminable thread.
No, it does; you just didn't get it (and I'm too tired to explain now).

You are certainly tired.
[...]


LOL.  The ideas it promoted were good enough for everyone else to
steal, blog about, etc.  Where have you been?
Start with no unit tests?  What does that even mean?  It most
assuredly has unit tests (and has for some time).

Post a link or STFU about your unit tests. Please.

OFU. You are clearly blind as a bat. I posted several on my Build
Test page months ago (and I can hear it now that there are too few).
I have several hundred more that I use here and will update the Build
Test whenever I damn well please. Fair enough?
Library authors try to get more users. That is the motivation for them
copying each other.

I'm sure it is. Nobody can accuse me of copying the others though
(it's clearly the other way around).
No knock-off. I'm explaining the concept of what it means to write a
public API. The current library authors have all published an API to a
desperate public, who quickly adopted it.
Great.


Many of these libraries were designed around, as you know, browser
sniffing and misconceptions about how browsers work and how ECMAScript
works.

Yes, I know. Mine was an answer for that.
Retrofitting a design to account for bad practices that shaped
the initial design -- indeed the API itself -- is the problem.

My API is nothing like the rest. It suits the language, does not use
browser sniffing, adapts to the environment, etc., etc. How many
times?
Seeing that, I waited. I was asked by the FuseJS authors for my code.

Authors? I only know of one and who cares if he asked for your code?
Isn't your code posted in public for all to see?
I was
asked by John Resig for my code.

Well, at least he *asked* you. :)
Not until very recently did I change my
license to BSD. It was AFL for two years.

I don't know BSD or AFL from MLB and NFL (and don't really care).

Speaking of not caring. :)
Because if I had done as the other library authors did and published
it, it would have been adopted. I didn't want that.

What are you talking about? APE? It was posted not long after My
Library IIRC.
Except now what's happened is they've gone and cherry-picked and changed
things, without mentioning it in a comment (that would be a license
violation).

I don't care.
So, they can copy, but I can also stand by not having my name on code
that definitely wasn't ready for public consumption.

Join the club. They didn't credit me for any of their gratuitous
mimicry either. Not that I would want my name associated with jQuery
and the like anyway.
Advanced feature testing? Like what?
Groan.

http://www.cinsoft.net/host.html

Appending nodes and checking them?

That's a pretty gross generalization for someone who just now figured
out how the size and position primers work.
Did you invent that technique?

Invent what technique? You will really have to be more specific.
Now I'd like to know where it came from
first.

I suggest you read the above cited article (again).
[...]



It came across as "me too".

No it didn't. The people who actually understood it got it. Others
just changed their tune from "where's your library" to "aw, you are
just jealous coz nobody uses your library".

Perhaps you forgot that for two years after I posted it, I told anyone
and everyone not to use it and to learn browser scripting instead. ;)
 

David Mark

On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
Positions too:
http://www.cinsoft.net/position.html
Where are the unit tests?
That's all you ever say.  Where is your understanding of the basic
logic.  IIRC, that one was posted to refute your assertion that
computed styles should be used to determine positions.  At the time,
you seemed to be the only one who didn't get it.
It would be helpful to look at them to see what was being tested. You
wrote that you had tests, so where are they?
No tests? I read earlier that you had unit tests. Either you do or you
don't.
You keep chattering about unit tests.  I never know what you are
referring to.

You said you had unit tests and now you don't know what that means. I'm
confused.

I started off building a test runner. It is incomplete, but has unit
tests itself, so you can see what exactly it does (and doesn't).
tests" comment, but that was my line.  I presumed you meant My
Library.  This test page we are talking about is not part of My
Library.  Just a proving ground for a replacement for
API.getElementPositionStyle, which I will soon be deprecating.

Got it.


Of course you have.  We've been discussing them here for days.

Sorry but I am not following along. Please post a link to what it is
that you are referring to; it isn't clear.


That's one of the primers!  And no, I never called it a "unit test".
You are the one that keeps chattering about unit tests, not me.

So it's not the primer. It might be the slickspeed test. I really don't
like guessing games.

[...]



I'm not sure. I recall having arrived at the conclusion that in certain
cases, offsetTop would be inconsistent with itself in the same
implementation -- and for the same element on which it had been
previously used. I recall specifically for BODY element. Are there
others? I can't remember for sure.
If you had the slightest clue what we were talking about, you'd know
that BODY margin is irrelevant.  As for automatic top, left, right,
bottom: that's the whole point.  It actually works whether you define
the styles in your CSS or not.

I understand the code. For a tutorial, a diagram could show how the
function moves the element, then reads its position values.

The code could be changed to make it more understandable. The reader has
to go back and forth to see what sides[1] means, for example. The basic
strategy is to find out, for say "style.left", what pixel value setting
would change its offsetLeft by 0.

Here's what I understand the algorithm as doing.

// (GS) Save offsetTop and offsetLeft
   offsetLeft = el.offsetLeft;
   offsetTop = el.offsetTop;

// (GS) Set right and bottom to "auto".
   el.style[sides[2]] = 'auto';
   el.style[sides[3]] = 'auto';

// (GS) Compare saved offsetLeft to current.
// (GS) If there is a difference, it means the element had
// (GS) 'right' positioning. Thus, set result.left to
// (GS) null. Follow accordingly for offsetTop/result.top
   if (offsetLeft != el.offsetLeft) {
     result[sides[0]] = null;
   }
   if (offsetTop != el.offsetTop) {
     result[sides[1]] = null;
   }

// (GS) Store the offsetLeft and offsetTop once again in case
// (GS) they changed when setting style.right/bottom to 'auto'
   offsetLeft = el.offsetLeft;
   offsetTop = el.offsetTop;

// (GS) Set style.left to have offsetLeft and then do accordingly
// (GS) for top.
   el.style[sides[0]] = offsetLeft + 'px';
   el.style[sides[1]] = offsetTop + 'px';

// (GS) (checking to see if right position was not null...)
// (GS) If el.offsetLeft is different, take original offsetLeft,
// (GS) add to that the amount added (offsetLeft again, so
// (GS) 2 * offsetLeft), and then subtract the new found el.offsetLeft.
   if (result[sides[0]] !== null && el.offsetLeft != offsetLeft) {

// (GS) When is sides[0] not "left" ?

When it is "right". Read the rest of the code.

     if (sides[0] == 'left') {
       result[sides[0]] = offsetLeft - el.offsetLeft + offsetLeft;
     } else {
       result[sides[0]] = el.offsetLeft;
     }
   }

Nit: Getting top/right is getting in the way of getting left/bottom.

I take it you mean a case where bottom/left or right/top are both
auto. For clarity, put left/right first (e.g. right/top). As
mentioned, this is only a test function. If it were a real function,
it would let you specify the coordinate pairs.
Nit: "right" is easier to recognize than "sides[2]".

That's internal and if you look closely, you will see that it needs to
be an array.
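
For readers keeping score, a guess at the shape of that array (the
declaration is not shown here, and the flag name is hypothetical):-

// Assumed: `sides` puts the pair being measured first and the pair
// forced to 'auto' last, so the same indices serve both orientations.
var measureRightAndBottom = false; // hypothetical mode flag
var sides = measureRightAndBottom ?
    ['right', 'bottom', 'left', 'top'] :
    ['left', 'top', 'right', 'bottom'];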
Mostly because I
can read "left" without having to remember, but also because the code
imposes a proprietary ordering of "left, top, right, bottom". In
contrast, CSS usually uses "top right bottom left" ordering. But order
doesn't matter if property names are used.

Or for internal arrays.

[...]
Notice that the bottom and right values are either 0 or a much larger
number. Using i7.style.bottom = result.bottom results in moving the LI
in IE6 and IE7 (standards mode). It works fine in IE8 and other browsers
I tested.

As I've said, nothing is guaranteed. You've found a bug in IE6/7
where setting the bottom and/or right style of a relatively positioned
LI does not update the offsetLeft/Top (ever apparently). Such results
are easy enough to detect and disallow. Or I may just add an issues
section (this is the first one in ten years).
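
A minimal sketch of "detect and disallow" (my guess at the approach,
not the actual patch): probe whether style.bottom affects offsetTop at
all before trusting a bottom/right result:-

// Probe for the quirk described above. For a relatively positioned
// element whose top is auto, setting bottom should move it and change
// offsetTop; in the buggy IE6/7 LI case it never does, so any
// bottom/right result derived from offsets must be discarded.
function bottomAffectsOffsetTop(el) {
  var savedBottom = el.style.bottom;
  var savedOffsetTop = el.offsetTop;
  el.style.bottom = '-10px';
  var responded = (el.offsetTop !== savedOffsetTop);
  el.style.bottom = savedBottom;
  return responded;
}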

Thanks for your input.

On a related note, how are you coming on your quest to find out where
my other (somewhat related) inventions (e.g. injecting, manipulating
and measuring DIV's in one-off feature testing) came from?

I can save you some time. Like this stuff, they were first published
here by me at a time when most of the world was transfixed by browser
sniffing libraries.
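
The pattern in question, as a bare sketch (an illustrative test of my
own devising, not one from any primer):-

// One-off feature test: inject a throwaway DIV, manipulate it, measure
// it, remove it, and branch on the result at load time instead of
// parsing navigator.userAgent.
var offsetLeftReflectsStyleLeft = (function() {
  var el, result = false;
  if (document.body && document.createElement) {
    el = document.createElement('div');
    el.style.position = 'absolute';
    el.style.left = '10px';
    el.style.top = '0';
    document.body.insertBefore(el, document.body.firstChild);
    result = (el.offsetLeft === 10);
    document.body.removeChild(el);
  }
  return result;
})();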

I don't know how you forgot as you were around here at the time.
Perhaps during your research you will stumble across some of your own
comments about the techniques. You really seemed to like them and I
see you (like many others) make use of them to this day. You're
welcome. :)

On yet another related note, instead of wasting time trying to prove
that Santa Claus is responsible for my stuff, why not update the FAQ
entry for viewport measurement:-

http://www.cinsoft.net/viewport.asp

Yes, that's another variation on the same theme. Unsurprisingly, I am
pretty good at leveraging my own techniques. But credit the Easter
Bunny if you like. :)
 

David Mark

On 6/9/2010 12:02 PM, David Mark wrote:
On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
Positions too:
http://www.cinsoft.net/position.html
[...]
My offsetTop knowledge has waned in the last 2 years; the problems with
it and friends are too many to retain,
You don't have to retain *any* of it.  Zero.
I'm not sure. I recall having arrived at the conclusion that in certain
cases, offsetTop would be inconsistent with itself in the same
implementation -- and for the same element on which it had been
previously used. I recall specifically for BODY element. Are there
others? I can't remember for sure.
[...]
Nit: Getting top/right is getting in the way of getting left/bottom.

I take it you mean a case where bottom/left or right/top are both
auto.  For clarity, put left/right first (e.g. right/top).  As
mentioned, this is only a test function.  If it were a real function,
it would let you specify the coordinate pairs.

And going back in to detect (and disallow the results from) the IE6/7
relative LI bug, I don't see any problems related to this other
"nit". Everything appears set to turn this into a new
getElementPositionStyle function for My Library, allowing the caller
to specify left/top, right/bottom, left/bottom or right/top while
avoiding computed/cascaded styles entirely.

So what was the "getting in the way of" comment referring to?
 

David Mark

On 6/9/2010 12:02 PM, David Mark wrote:
On 5/25/2010 4:09 AM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 6:57 PM, David Mark wrote:
Garrett Smith wrote:
On 5/24/2010 2:11 PM, David Mark wrote:
Garrett Smith wrote:
On 5/22/2010 1:25 PM, David Mark wrote:
Ry Nohryb wrote:
On 22/05/10 16:22, Johannes Baagoe wrote:
Dmitry A. Soshnikov :
[...]
Groan.  All I want to know is what is the name of the guy on first
base.
Given time, I'd like to
look into them.
You have looked at them.  In fact, I put up those two primers partly
for your benefit.  And I can't believe they didn't help.
I haven't seen them.
Of course you have.  We've been discussing them here for days.
Sorry but I am not following along. Please post a link to what it is
that you are referring to; it isn't clear.
I saw a demo of your function. Is that what you refer to as a "unit
test"?
That's one of the primers!  And no, I never called it a "unit test".
You are the one that keeps chattering about unit tests, not me.
So it's not the primer. It might be the slickspeed test. I really don't
like guessing games.
[...]
I'm not convinced. I'd want to see a test where the element has top:
auto and BODY has a margin.
If you had the slightest clue what we were talking about, you'd know
that BODY margin is irrelevant.  As for automatic top, left, right,
bottom: that's the whole point.  It actually works whether you define
the styles in your CSS or not.
[...]
Nit: Getting top/right is getting in the way of getting left/bottom.
I take it you mean a case where bottom/left or right/top are both
auto.  For clarity, put left/right first (e.g. right/top).  As
mentioned, this is only a test function.  If it were a real function,
it would let you specify the coordinate pairs.

And going back in to detect (and disallow the results from) the IE6/7
relative LI bug, I don't see any problems related to this other
"nit".  Everything appears set to turn this into a new
getElementPositionStyle function for My Library, allowing the caller
to specify left/top, right/bottom, left/bottom or right/top while
avoiding computed/cascaded styles entirely.

For example, something like this should do:-

var getElementPositionStyle = function(el, whichSides) {
  var result = getElementPositionStyles(el);

  switch (whichSides) {
    case 'bottomright':
      result = [result.bottom, result.right];
      break;
    case 'topright':
      result = [result.top, result.right];
      break;
    case 'bottomleft':
      result = [result.bottom, result.left];
      break;
    default:
      result = [result.top, result.left];
  }

  return (result[0] === null || result[1] === null) ? null : result;
};

Why is it necessary to strain the results for a specific pair?
Because if - for example - you had an absolutely positioned element
with no style rules declared, the getElementPositionStyles test
function would fill in all four blanks. Clearly you couldn't use all
four, but would have to pick a pair (e.g. top/left, bottom/left, top/
right or bottom/right).
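
For instance (hypothetical element ID, and pixel-number values assumed
for the returned pair):-

// A null return means at least one requested side was not resolvable.
var el = document.getElementById('myAbsoluteDiv'); // hypothetical ID
var pair = el && getElementPositionStyle(el, 'bottomright');
if (pair) {
  el.style.bottom = pair[0] + 'px';
  el.style.right = pair[1] + 'px';
}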

This higher-level function removes the ambiguity and should be
compatible with the more complex function of the same name found in My
Library. The original only covered top/left and ill-advisedly tried
to use computed/cascaded styles as a first option (see a recently
reported Konqueror issue in the bug tracker for an example).

Will post an updated test page with the IE6/7 relative list item bug
avoided and documented when I have a chance. The two additional lines
of code added are not specific to list items or relative positioning,
so should cover any and all additional cases that fail in the same
way. I doubt there are many as I've been using the same basic
technique since IE5 came out in the late 90's. Contrast that with the
myriad bugs found in getComputedStyle over the years (far too many to
keep track of).
 

Garrett Smith

On 6/9/2010 12:02 PM, David Mark wrote:
[...]


As I've said, nothing is guaranteed. You've found a bug in IE6/7
where setting the bottom and/or right style of a relatively positioned
LI does not update the offsetLeft/Top (ever apparently). Such results
are easy enough to detect and disallow. Or I may just add an issues
section (this is the first one in ten years).

It requires more investigation as to why the result got that way. I know
you don't like me to say it, but writing tests really helps. As stated,
I started on a test runner. The code is pretty simple. You could finish
it off and use it for testing this.
Thanks for your input.

On a related note, how are you coming on your quest to find out where
my other (somewhat related) inventions (e.g. injecting, manipulating
and measuring DIV's in one-off feature testing) came from?

I actually came up with the idea on my own in the "Find Element
Position" thread. You posted the idea, but I'd already started on that.

I may not have been the first who had invented the idea. We covered in
that thread why it is best to use node.insertBefore(x, node.firstChild)
and not appendChild, because that way it avoids the "operation aborted".
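
The pattern, for reference (a sketch of the technique under discussion,
not code from that thread):-

// Insert the test node as the first child of an existing, already
// parsed node, rather than appending to one the parser may not have
// closed; per the above, that is what sidesteps "operation aborted".
var probe = document.createElement('div');
var body = document.body;
body.insertBefore(probe, body.firstChild);
// ...manipulate and measure probe here...
body.removeChild(probe);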

Other attempts to append nodes to documentElement were subsequently
proposed, and, oddly, those seemed to be more popular. Oddly because
they can be expected to throw the DOM exception related to invalid
hierarchy, as specified in DOM 2 Core. And that has been reported in
versions of some browsers (older Mozillas and even recent Konqueror).

http://blogs.msdn.com/b/ie/archive/2010/04/14/same-markup-writing-cross-browser-code.aspx

Gerard Talbot claims that even Konqueror 4 throws errors, and I believe
that. Older versions of Konqueror reportedly did so as well, and
according to DOM 2 Core, the error is to be expected.

Well, that IE blog post is not great, but it's a step in the right
direction and a lot better than what Apple, Google, and Yahoo are doing.
All three of those screen out "invalid" browsers based on userAgent
sniffing. If you don't have the right userAgent, you get either a buggy
page, an error warning, or a redirect to their error page.

I have not had time to reply to all of the other posts, and I started
working on a "recursive" regexp for trying to validate JSON, plus an
article on another topic. So, I'm going to follow up on replies but I
have a lot of things going on, including activities with people; not on
the Internet.

Garrett
 

David Mark

On 6/9/2010 12:02 PM, David Mark wrote:
[...]



As I've said, nothing is guaranteed.  You've found a bug in IE6/7
where setting the bottom and/or right style of a relatively positioned
LI does not update the offsetLeft/Top (ever apparently).  Such results
are easy enough to detect and disallow.  Or I may just add an issues
section (this is the first one in ten years).

It requires more investigation as to why the result got that way. I know
you don't like me to say it, but writing tests really helps. As stated,
I started on a test runner. The code is pretty simple. You could finish
it off and use it for testing this.

I have my own unit testing framework.
I actually came up with the idea on my own in the "Find Element
Position" thread. You posted the idea, but I'd already started on that.

Well, even in a vacuum, it was still late. I posted the menu example
(linked to in the host primer) months before that thread. Further
back still, I had proposed the idea of using such a technique after an
exchange with Richard and Randy Webb concerning the viability of GP
libraries and feature testing.
I may not have been the first who had invented the idea.

Clearly you were not (and certainly not the first to publish practical
examples).
We covered in
that thread why it is best to use node.insertBefore(x, node.firstChild)
and not appendChild, because that way it avoids the "operation aborted".

You can use appendChild if you understand why the "operation aborted"
happens. I've never had any trouble in that area.
Other attempts to append nodes to documentElement were subsequently
proposed, and, oddly, those seemed to be more popular.

The viability of browser scripting code usually has an inverse
relationship to its popularity.
Oddly because
they can be expected to throw the DOM exception related to invalid
hierarchy, as specified in DOM 2 Core. And that has been reported in
versions of some browsers (older Mozillas and even recent Konqueror).

It's clearly a bad idea to create an invalid DOM tree. And there's no
reason to do it as virtually all of my examples back then included
(real) XHTML demos. No invalid operations, innerHTML hacks, etc.
http://blogs.msdn.com/b/ie/archive/2010/04/14/same-markup-writing-cross-browser-code.aspx

Gerard Talbot claims that even Konqueror 4 throws errors, and I believe that.

I wouldn't be shocked.
Older versions of Konqueror reportedly did so as well, and according to
DOM 2 Core, the error is to be expected.

Of course. But the masses have the "show me where it fails" mindset.
If it appears to work in whatever browsers they have handy (and "care
about"), that's good enough for them. They take their cues from such
"luminaries" as the jQuery and YUI developers.
Well, that IE blog post is not great, but it's a step in the right
direction and a lot better than what Apple, Google, and Yahoo are doing.

Yes, I've seen their Websites. Almost anything would be better. It's
like they go out of their way to do everything in the most ill-advised
fashion. But try to point out the massive gaps in their logic and
they fall back to "show me where it fails" or "nobody is perfect".
All three of those screen out "invalid" browsers based on userAgent
sniffing.

That's clearly ridiculous. As I pointed out back around the end of
2007, if the My Library test page could work (or degrade gracefully)
in old and new (X)HTML DOM's, then clearly there is no need to parse
the UA string. Years went by and new browsers didn't break it. I
eventually went back and tested every old browser I could find, which
(not unexpectedly) exposed some gaps in the feature testing logic.
After plugging a few holes, I found that it worked in virtually
everything released this century (and some browsers from last
century). Still, the "major" libraries continue with the browser
sniffing to this day, barely "keeping up" with the latest major desktop
browsers and burning bridges to the old ones.
If you don't have the right userAgent, you get either a buggy
page, an error warning, or a redirect to their error page.

Yes and it is amazing that the developers can't recognize these
failures for what they are. They are about rationalization rather
than reason.
I have not had time to reply to all of the other posts and I started
working on a "recursive" regexp for trying to validate the JSON plus an
article on another topic.

I've been working on a couple of improvements to my keyboard primer,
an OO primer (O is for Object), and a new editor widget (with an
accompanying text range add-on), while trying to find time to wrap up
work on the RC of My Library and the new Cinsoft site. Unfortunately
(or fortunately, depending on perspective), work keeps getting in the
way.

However, the Position primer is updated as of last night and I will
post it shortly.
So, I'm going to follow up on replies but I
have a lot of things going on, including activities with people; not on
the Internet.

Who is not on the Internet these days? Many aren't on Usenet, but I
think virtually everyone uses the Internet now (in one way or another).
 
