Sure. For the same reason I do not need to read the
Bible in order to be a good Christian.
Judging by your posts to microsoft.public.scripting.jscript under this
subject you are not a good Christian so perhaps you should RTFM.
I suppose one can find many similar ways to explain
what you asked.
And they could all be equally misguided.
I'll tell you the truth tho. I have read the FAQ about
2 years back. Since then so much has happened that I do
not even remember what was in there and what not.
So you read the explanation of the futility of browser detecting and the
recommendations of better alternatives and then went and spent the
intervening years writing scripts based on browser detecting?
Well, the scripts that I have posted are certainly
multi-browser so what you say here is obviously wrong.
You have suffered a deficit in English comprehension.
You think? I can see no reason for assuming that I would have any
emotional response to a hearsay report of an assertion by a third party
who cannot be qualified to judge the validity of their own statement.
Why don't you tell us what else you do well except scripting
That would be irrelevant to a discussion of browser scripting and off
topic for this newsgroup.
(if we assume that you even do that well).
Why make assumptions when you have access to information?
Maybe you can prove that you don't have any ego-related
reason in making this claim.
I can prove that as easily as you can prove that you heard the assertion
in the first place.
Have you ever wondered why the user agent string is there at all?
Why was it introduced and kept in countless versions and builds?
I haven't had to wonder for as long as I have known. But before I knew,
I don't recall caring.
I quote from RFC 2616:
----
14.43 User-Agent
The User-Agent request-header field contains information about the
user agent originating the request. This is for statistical purposes,
the tracing of protocol violations, and automated recognition of user
agents for the sake of tailoring responses to avoid particular user
agent limitations. User agents SHOULD include this field with
requests. The field can contain multiple product tokens (section 3.8)
and comments identifying the agent and any subproducts which form a
significant part of the user agent. By convention, the product tokens
are listed in order of their significance for identifying the
application.
User-Agent = "User-Agent" ":" 1*( product | comment )
Example:
User-Agent: CERN-LineMode/2.15 libwww/2.17b3
RFC 2616 is the HTTP 1.1 standard. Can you point me to a standard that
says that the string used in the user agent header in HTTP 1.1 should be
made available to client-side scripting? No . . . perhaps some caution
should be exercised before citing standards as a justification for
client side browser detecting based on that string.
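(For reference, and a sketch only: where that header string reaches
client-side script at all, it does so as the value of
navigator.userAgent:-

   var ua = navigator.userAgent; // whatever string the browser (or its
                                 // user, or an intervening proxy) has
                                 // chosen to report

- a read-only echo of a header whose contents, as we will see, are
barely constrained.)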
But standards hang heavily upon the words MUST (compulsory), SHOULD
(probably should be treated as compulsory) and MAY (clearly optional
(but possibly recommended)).
In the User Agent specification we find a SHOULD:-
<quote>
User agents SHOULD include this field with requests.
</quote>
And our browsers are complying by sending that header (100% to the
specification). As to the contents of that header we read:-
<quote>
The field can contain multiple product tokens (section 3.8) and comments
identifying the agent and any subproducts which form a significant part
of the user agent. By convention, ...
</quote>
- and the operative word is CAN, not MUST, not SHOULD, not even MAY but
CAN. Clearly the specification leaves the contents of the string up to
the implementer. That is not surprising because this was 1999 and IE was
already spoofing Netscape 4's UA string, so placing any strong
requirements on the content of the string would have been closing the
stable door after the horse had bolted.
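To illustrate (a hypothetical but perfectly conforming example), a
browser that sent only:-

   User-Agent: Mozilla/4.0 (compatible)

- would satisfy both the SHOULD and the 1*( product | comment ) grammar
(one product token, one comment) while discriminating itself from
nothing at all.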
So if a browser sends any user agent header it is conforming to the
letter of RFC 2616. The only way in which it can be considered as not
conforming to the standard that you cited is that it may not follow the
spirit of the standard. However, the User-Agent section of RFC 2616
starts:-
<quote>
The User-Agent request-header field contains information about the user
agent originating the request.
</quote>
- and our browsers' User Agent headers are sending information about
themselves. It may not be very useful information, asserting little more
than that they consider themselves to be user agents, but that is still
information.
RFC 2616 then goes on:-
<quote>
This is for statistical purposes,
</quote>
Which is fine as the information can be used for statistical purposes.
You can use it to count the number of requests from software that
asserts its user agentness.
The spirit of:-
<quote>
the tracing of protocol violations,
</quote>
- can be conformed with by not making protocol violations (so there is
no need to trace them).
And it was the abuse of:-
<quote>
and automated recognition of user agents for the sake of tailoring
responses to avoid particular user agent limitations.
</quote>
- by not using the UA information to _avoid_ particular user agent
limitations, but rather to avoid dealing with those limitations by
blocking unrecognised and unwanted user agents, that caused the User
Agent string to become close to meaningless in the first place. But if
the browser manufacturers do not believe their browser to suffer from
limitations then there is no reason for them to need a response tailored
any differently to some other browser's.
But in all of this I do not see it stated that the User Agent header
MUST (or even SHOULD) contain a discriminating identifier of the user agent
software. The closest RFC 2616 comes to that is to suggest that such an
identifier is an option.
The point is that it should be.
Says who? Apparently not RFC 2616.
On the other hand perhaps your faith that object/property
testing will remain a good way to test is overly inflated.
If it was a personal belief, unsupported by facts, at odds with expert
opinion and contrary to logic then I might question it. Fortunately that
is not the case.
You are not supposed to fake user agent strings. Period.
Again, says who?
This feature has only become available in various browsers
and only because it is very easily implementable.
Ease of implementation is far from the reason (let alone the only
reason) that this has become available. It has happened because browser
detection based on user agent strings (even if all browsers did use a
discriminating string) requires so much data to implement effectively
that it never has been, and the consequence was that unrecognised
browsers were excluded from sites for no better reason than that they
could be distinguished from recognised browsers. And so the
manufacturers of the unrecognised browsers became motivated to make
their browsers indistinguishable from the recognised browsers so that
the misguided script authors had no way of excluding them.
It is browser detecting as a practice that is the problem. It doesn't
matter how it is done; whatever method is used will result in that
method becoming ineffective. It is just the case that userAgent string
based browser detecting became ineffective long ago and the smart
scripters moved on to something better.
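For anyone following this who has not seen that something better, a
minimal sketch of the two approaches side by side (the element ID is
hypothetical):-

   // Browser detecting: infer what the browser can do from the name it
   // chooses (or fakes) in its userAgent string.
   if (navigator.userAgent.indexOf("MSIE") != -1) {
       var el = document.all["example"]; // wrong for any non-IE browser
                                         // sending an IE-style string
   }

   // Feature detecting: ask the environment whether the feature that is
   // about to be used actually exists, and use it only if it does.
   if (document.getElementById) {
       var el = document.getElementById("example");
   } else if (document.all) {
       var el = document.all["example"];
   }
   // else: do nothing; the page degrades cleanly to plain HTML.

The first version breaks as soon as the string lies; the second does not
care what the browser calls itself.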
What percentage do you think? I say it works 99%.
Given that browser usage statistics are notoriously based on a faulty
premise, and that a browser sending a user agent string
indistinguishable from IE's will be reported as IE (making it impossible
to even guesstimate such browsers' usage), I think that the question is
futile. But commonly reported statistics for browsers with JavaScript
unavailable or disabled seem to be in the 8-12% range, so no script
could "work" for more than 88-92% (if you define "work" as successfully
execute). Disabling JavaScript is just one of the many ways in which
browsers can deviate from their default settings (though some default to
having JavaScript disabled in the first place, so enabling it is
non-default). I would imagine that almost everyone plays with their
settings at some point (even if they just screw something up and end up
having to re-set the defaults in the end).
Heh, sorry but this reminds me of Comical Ali.
You see a flaw in the logic?
Yeah but CSS does not always work the way one wants and the
detection is done in JS not CSS.
Go and ask in comp.infosystems.www.authoring.stylesheets and they will
tell you how to use CSS to fix rendering glitches in browsers. They will
also tell you (every last one of them) that you do not _ever_ use
JavaScript to fix CSS problems.
wow, that is tew kewl (and deep).
There is nothing like a well-reasoned argument.
A few points to keep you busy:
* Should every browser preferably have a distinctive user agent
string or not? Don't read the RFC, just use your common sense.
The RFC doesn't say that they have to (only that they send the UA
header), and I can see no reason why they should, as there is no need to
be interested in specific browser types anyway. For much the same
reason, I also can't see why the UA string should be exposed to
client-side scripting at all.
* You are trying to make a rule out of the exception (exception
being the faked strings)
I am trying to demonstrate that the proposition that it is possible to
use the navigator.userAgent string to uniquely identify web browsers is
false.
It is in the nature of logic that a billion instances of circumstances
corresponding with a proposition do not prove that proposition to be
true, yet just one instance (properly verified) of circumstances failing
to correspond with the proposition does _prove_ it to be false. And we
do not have just one instance where browser detection by userAgent
string will fail to identify a browser (or will misidentify it); we have
many.
The proposition that it is possible to use the navigator.userAgent
string to uniquely identify web browsers _is_ false.
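For instance, a typical default IE 6 header (as sent from a Windows XP
installation):-

   Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

- is IE itself asserting that it is Mozilla/4.0, and the identical
string is sent by the many shell browsers built around the IE WebBrowser
control, and by anything else (browser, proxy or firewall) configured to
reproduce it. A script reading navigator.userAgent has no means of
telling any of them apart.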
* I can write a user agent that fakes the existence of certain
JS objects but provides others, while the faked ones have
Shakespeare quotes as the return values of their various methods. I
could name this product Fudgilla (create say 10 variants of it
under different names like Geekzilla), make it freely available
to the geek crowds and then argue that because these products
exist the feature detection method is no good anymore for deciding
what kind of browser we got.
<snip>
That would be your prerogative and feature detecting can cope with that.
It is exactly the sort of situation that it is expected to cope with
anyway. On discovering that your browser does not support the required
features it would cleanly degrade as designed. Your approach, on the
other hand, only offers the options of refusing the browser access (and
encouraging another round of UA string faking) or erroring until it
makes a very ungraceful exit.
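And the sort of test that copes is not expensive. A sketch (the function
name is my own):-

   // Host object methods may report typeof "function", "object" (some
   // IE DOM methods, window.alert for example) or "unknown" (some
   // ActiveX-implemented members), so test for all three rather than
   // trusting a bare comparison with "function".
   function isHostMethod(obj, prop) {
       var t = typeof obj[prop];
       return t == "function" ||
              (t == "object" && obj[prop] != null) ||
              t == "unknown";
   }

   if (isHostMethod(document, "getElementById")) {
       // act on the document
   }
   // else: leave the page alone; it remains usable as plain HTML.

Your Fudgilla could still lie its way past that, but then the breakage
is confined to Fudgilla, inflicted by its author's own choice.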
On the other hand, is it likely that a new browser would be introduced
that deliberately re-defined the DOM to the extent that you describe?
Reality seems to be moving towards standardising around the various W3C
DOM specifications rather than diverging from the existing browser DOMs.
And we have already discussed why browser manufacturers are well
motivated to fake userAgent strings and offer their users a selection of
easy alternative options.
Your scenario, if taken to an extreme, may eventually negatively impact
on feature detecting as a technique. (Though scripts that used the
technique would stand up longer and better than scripts that used your
browser detecting approach.) But it is an unlikely scenario.
My scenario, of browsers offering users choices of userAgent strings
that are indistinguishable from those of other common (usually IE)
browsers (or just using those strings by default), is already a
demonstrated reality.
Richard.