var isHostProp = function(o, p) {
  var t = typeof o[p];
  return t != 'undefined' &&
         (t != 'object' || o[p]);
};
The intention of the above is to find a replacement for a
type-converting test, which is the most natural test:
if (doc.all) {
  // use document.all
}
becomes
if (isHostProp(doc, 'all')) {
  // use document.all
}
But we know that type-converting tests aren't useful in a practical
sense, given the behavior of IE and the chance that Microsoft may
change any of their callable host objects to be ActiveXObjects at any
time.
Both the type-converting test and the isHostProp test give false
positives if document.all === true, but is that a practical concern
that exists in some browser?
Suppose we use isHostCollection
if (isHostCollection(doc, 'all')) {
  // use document.all
}
We still get false positives if document.all is some dummy function
or defective object. We get fewer false positives, but we start
getting false negatives for standards-compliant objects. Are there
real-world examples where this price of false negatives is worth
paying?
I can imagine that the following probably doesn't have any real-world
problems. It would still give a false negative for document.all on
Safari, because it seems the WebKit devs want it that way.
var isHostProp2 = function(o, p) {
  return typeof o[p] != 'undefined';
};
The following all work:
isHostProp2(doc, 'all')
isHostProp2(doc, 'createElement')
isHostProp2(el, 'childNodes')
In which browsers do isHostProp or isHostProp2 give false positives
for some property for which isHostMethod/isHostObject/isHostCollection
do not? Which host object property out there is null in some browser?
What I'm starting to dislike about isHostMethod, for example, is the
implicit assumption that there is one way to test that any property
name is a function: check for three different typeof strings. It would
be very easy to design two new browsers that make this kind of test
fail.
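For reference, the isHostMethod under discussion is usually written
something like the following (a sketch; the exact set of typeof
strings accepted may differ from any particular library's version):
var isHostMethod = function(o, p) {
  var t = typeof o[p];
  // 'function' for most callable host objects,
  // 'object' for some callable host objects in IE (guard against null),
  // 'unknown' for ActiveX methods in IE
  return t == 'function' ||
         (!!(t == 'object' && o[p])) ||
         t == 'unknown';
};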
-----------------------------
I'm starting to get a better big picture concept of how all this
feature testing business fits together.
General Goal: Feature testing is to avoid runtime errors. (Not
necessarily just JavaScript runtime errors but errors like rendering
in a browser that doesn't support enough CSS to make a widget "work"
correctly, for example.)
There are various kinds of inference, and (almost) no matter which
type of test we use, we will be operating at some level of inference.
1) navigator.userAgent inference (sniffing)
Since browsers can and do lie in navigator.userAgent, looking at
substrings of this property is pointless. Very likely to give false
positives. This is the worst case of unrelated existence inference.
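For illustration, this is the kind of test being described (a sketch
of the anti-pattern, not a recommendation):
if (navigator.userAgent.indexOf('MSIE') != -1) {
  // infer an entire, unrelated feature set from one substring:
  // document.all, attachEvent, ActiveXObject, IE CSS behavior, ...
}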
2) unrelated existence inference (sniffing)
If one object property exists, then infer that a different object
property will work. Checking for the existence of document.all or
ActiveXObject and then assuming the IE event model is available. Very
likely to give false positives.
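A sketch of that inference, where el and handler are hypothetical
stand-ins for an element and a listener obtained elsewhere:
var handler = function() {};
var el = document.getElementById('example'); // hypothetical id
if (document.all) {
  // the presence of document.all says nothing about the event model,
  // yet the IE model is inferred anyway
  el.attachEvent('onclick', handler);
}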
3) related existence inference (sniffing)
Checking for element.addEventListener and then assuming
event.preventDefault is available. Less likely to give false positives
than unrelated existence inference, but still sniffing. Gives a false
positive in Safari.
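Roughly, with el again a hypothetical element:
var el = document.getElementById('example'); // hypothetical id
if (el.addEventListener) {
  el.addEventListener('click', function(e) {
    // preventDefault is only inferred to work because addEventListener
    // exists; the two are related but not the same feature
    e.preventDefault();
  }, false);
}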
4) exemplar existence inference
Check that a property exists on an exemplar object. If it exists, then
infer that it will work on similar objects. This is the most common
form of feature testing. It should be sufficient. Are there examples
where it is not? Possible false positives.
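A sketch of exemplar existence inference, testing one representative
object up front and inferring that similar objects behave the same
(the choice of exemplar here is an assumption):
// test a single exemplar element at load time
var exemplar = document.createElement('div');
var canGetByTag = !!exemplar.getElementsByTagName;
if (canGetByTag) {
  // from here on, every similar element is assumed to support
  // getElementsByTagName without further checks
}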
5) specific existence inference
This checks for the existence of a property on a particular object but
doesn't assume another similar object will also have that property. If
the feature exists on a particular object then assume it works on that
object. For example, if we test for the existence of
document.documentElement.appendChild, we would not assume any other
DOM element has appendChild. Possible false positives. The practice of
checking both document.getElementsByTagName and
element.getElementsByTagName falls somewhere between exemplar
existence inference and specific existence inference.
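That double check might look like this sketch, run once at load time:
// neither existence is assumed to imply the other, so check both the
// document method and an exemplar element method
var canGetByTagName = !!(document.getElementsByTagName &&
                         document.documentElement &&
                         document.documentElement.getElementsByTagName);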
----
In addition to existence inference, you can analyze the object in a
variety of ways to ensure the object satisfies your expectations about
how it should "look".
6) exemplar typeof inference
Check that a property exists and, if its typeof value is one of the
known values for a particular set of browsers, then infer the feature
will work. Less likely to produce false positives than simple
existence testing. A host object can return any value when used in a
typeof operation, so this can produce false negatives for ECMAScript
compliant host objects that have typeof values you don't know about.
Using typeof inference in addition to existence inference is more
conservative but perhaps without any known benefit over existence
inference alone.
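Layering typeof inference on top of existence inference might look
like this sketch (the set of accepted strings is an assumption about
which values are "known" for the targeted browsers):
var hasKnownTypeof = function(o, p) { // hypothetical name
  var t = typeof o[p];
  // accept only typeof results already observed in targeted browsers;
  // a compliant host object returning some other string becomes a
  // false negative
  return t == 'function' || t == 'object' || t == 'unknown';
};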
7) specific typeof inference
similar
8) exemplar non-bad-value inference
Check that a host object property exists and is not in a particular
set of known bad values, then infer the feature will work. For
example, if a host object property is null, it is defined and exists,
but that value may be no good to you. Reduces the false positives of
existence inference alone.
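A sketch of that, with null as the only known bad value (which values
count as bad is an assumption):
var isNonBadHostProp = function(o, p) { // hypothetical name
  var v = o[p];
  // the property is defined, but filter out values known to be unusable
  return typeof v != 'undefined' && v !== null;
};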
9) specific non-bad-value inference
similar
----
10) exemplar object, exemplar use inference
Check that a property exists on an exemplar object and then try using
it. If it works then infer it will work again on similar objects.
Unlikely to give a false positive.
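A sketch of trying the feature once on exemplar objects and caching
the result (the particular feature exercised here, appendChild on
created elements, is just an illustration):
var canAppendChild = (function() {
  try {
    var parent = document.createElement('div');
    var child = document.createElement('span');
    parent.appendChild(child);
    // it worked once on exemplar objects; infer it will work again
    // on similar objects
    return parent.firstChild === child;
  } catch (e) {
    return false;
  }
})();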
11) specific object, exemplar use inference
similar
12) specific object, specific use *testing*
Check that a property exists on a specific object, use it, and check
that it worked as expected after each use. Does not give false
positives. This is computationally expensive and likely not practical
in many situations. If the feature executes but produces a poor
result, there is no guarantee you can reverse the operation. The
reversal may fail.
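A sketch of use-and-verify around one specific call (the helper name
and the verification criterion are assumptions about what "worked as
expected" means in a given situation):
var appendVerified = function(parent, child) { // hypothetical helper
  try {
    parent.appendChild(child);
  } catch (e) {
    return false;
  }
  if (parent.lastChild !== child) {
    // the call ran but produced a poor result; attempt to reverse it,
    // though the reversal itself may fail
    try {
      parent.removeChild(child);
    } catch (e) {}
    return false;
  }
  return true;
};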
----------
Determining the right level of inference is not an objective decision.
File download size, computational expense, and the chance of false
negatives trade off against more code that is less likely to cause a
runtime error.
Peter