Bart said:
And in the absence of which, the expected encoding is always ISO/IEC
8859-1, which has been the default HTTP charset from the beginning. It is
not common to set a character set here unless, of course, you want
something other than ISO/IEC 8859-1.
That is specified so in HTTP/1.x. However, it is not implemented this way,
as the HTML 4.01 Specification points out:
http://www.w3.org/TR/html401/charset.html#h-5.2.2
But it is not a bad habit nowadays to add it anyhow, especially with regard
to the growing Unicode support and the further internationalization of the
Internet.
Declaring the character encoding was and still is a requirement for
interoperability.
A JavaScript program:
[ snip code ]
It is merely ECMAScript-3-conforming code, or a script or program
consisting of such code.
Then I suggest we rename this group to 'comp.lang.ECMAScript-3-
conforming-code'. [...]
I am not particularly inclined to support that. While it is similar to the
correct name, ECMAScript implementations are *commonly* known as
"JavaScript". And it follows from Usenet practice that the name of a
newsgroup is not always the best or most correct one; technical innovations
introduce new techniques that can be summarized under a common name because
of their similarities. Netscape JavaScript was the first implementation,
and the newsgroup has the greatest chance to be found under that name.
That does not mean it should not be pointed out to the people who actually
subscribe to this newsgroup that we are not dealing with one language named
JavaScript, but (at least) five different language implementations here.
The FAQ mentions it partly, and should emphasize it more.
I am aware that different browsers have different JS engines, yes.
That section was of course not only for you; at least I hoped that what was
written would not be completely new to you. But we are talking not only
browsers, not even only Web user agents. (Did I mention Macromedia/Adobe
ActionScript and Adobe PDF ECMAScript?)
No. Before you call something, you should be pretty sure that it can be
called; especially when you use it as a constructor for an object. It will
cause a runtime error and will make you look incompetent otherwise.
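The precaution can be sketched like this (a minimal illustration;
`createIfCallable` and `Widget` are hypothetical names I introduce here,
and `typeof` remains only a heuristic where host objects are concerned):

```javascript
// Test that a value is callable before using it as a constructor,
// instead of letting a runtime error escape to the user.
function createIfCallable(f) {
  // For native objects, typeof "function" is a reliable callability
  // test; host objects may still misbehave, so this is a heuristic.
  if (typeof f === "function") {
    return new f();
  }
  return null;
}

function Widget() {
  this.ok = true;
}

createIfCallable(Widget);    // a new Widget instance
createIfCallable(undefined); // null, instead of a TypeError
```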
I haven't seen any advisories that recommend it in this case.
Which does not mean that it should not be done. With host objects, all bets
are off *by definition* (ECMAScript Ed. 3, subsection 8.6.2).
The inner working of MSXML is not accessible, so I follow the
guidelines recommended by Microsoft in their documentation. Seems a
reasonable approach to me.
"Microsoft.XMLHTTP" should suffice and select the most recent installed version.
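A common fallback pattern along these lines can be sketched as follows.
`createRequest` is a hypothetical helper; the `env` parameter exists only
so the example can be exercised outside a browser, and the version-specific
MSXML ProgIDs are tried before the version-independent "Microsoft.XMLHTTP":

```javascript
// Create an HTTP request object: native XMLHttpRequest where available,
// otherwise fall back through MSXML ProgIDs via ActiveXObject.
function createRequest(env) {
  env = env || (typeof window !== "undefined" ? window : {});

  if (typeof env.XMLHttpRequest === "function") {
    return new env.XMLHttpRequest();
  }

  if (typeof env.ActiveXObject !== "undefined") {
    var progIDs = [
      "Msxml2.XMLHTTP.6.0",
      "Msxml2.XMLHTTP.3.0",
      "Microsoft.XMLHTTP"   // resolves to the most recent installed version
    ];
    for (var i = 0; i < progIDs.length; i++) {
      try {
        return new env.ActiveXObject(progIDs[i]);
      } catch (e) {
        // ProgID not registered; try the next one.
      }
    }
  }

  return null; // no usable implementation found
}
```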
'true' means that the request must be handled asynchronously. It makes
sense to me to set it explicitly here, for code clarity, to show that we
want async and not sync (even if 'true' were the default).
JFTR: `true' *is* the default.
No, the request will still be valid.
It will not. *Generally* the header value is optional:
,-<http://www.rfc-editor.org/rfc/rfc1945.txt>, 4.2
|
| HTTP-header = field-name ":" [ field-value ] CRLF
However, the "Content-Type" header is explicitly defined as:
,-<http://www.rfc-editor.org/rfc/rfc1945.txt>, 10.5 and 3.6
|
| media-type = type "/" subtype *( ";" parameter )
| type = token
| subtype = token
| [...]
| Content-Type = "Content-Type" ":" media-type
so its field value cannot be empty there. Moreover, the "Content-Type"
header is inappropriate at least for a *GET* request. (HTTP/1.x does not
define such a request header for any request type, only a response
header.[1])
It doesn't matter.
It matters here.
If you use a header like...
abc: 123
...it would still remain a valid request.
True, because of
,-<http://www.rfc-editor.org/rfc/rfc1945.txt>, 4.3
|
| Unrecognized header fields are treated as Entity-Header fields.
,-<http://www.rfc-editor.org/rfc/rfc1945.txt>, 7.1
|
| Entity-Header = Allow ; Section 10.1
| | Content-Encoding ; Section 10.3
| | Content-Length ; Section 10.4
| | Content-Type ; Section 10.5
| | Expires ; Section 10.7
| | Last-Modified ; Section 10.10
| | extension-header
|
| extension-header = HTTP-header
However:
| The extension-header mechanism allows additional Entity-Header fields
| to be defined without changing the protocol, but these fields cannot
| be assumed to be recognizable by the recipient. Unrecognized header
| fields should be ignored by the recipient and forwarded by proxies.
One should not disregard standards unless one has a very good reason
to do so. There is exactly *no* reason for such a thing here.
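The distinction between the grammars quoted above can be made concrete with
two rough regular expressions (a sketch only; the token character class is
abbreviated and parameter whitespace is not modeled):

```javascript
// Approximation of the RFC 1945 "token" production.
var TOKEN = "[!#$%&'*+\\-.^_`|~0-9A-Za-z]+";

// Generic grammar (4.2): HTTP-header = field-name ":" [ field-value ]
// -- the field value is optional here.
var HEADER = new RegExp("^" + TOKEN + ":[ \\t]*[^\\r\\n]*$");

// media-type (3.6): type "/" subtype *( ";" parameter )
// -- this can never be empty.
var MEDIA_TYPE = new RegExp("^" + TOKEN + "/" + TOKEN + "(;.*)?$");

HEADER.test("abc: 123");       // true  -- a valid extension-header
HEADER.test("Content-Type:");  // true by the *generic* grammar ...
MEDIA_TYPE.test("");           // ... but false as a media-type value
MEDIA_TYPE.test("application/x-www-form-urlencoded"); // true
```

So an empty "Content-Type" value slips through the generic header grammar
while violating the specific grammar for that header's value.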
[CMIIW] However, to submit different types of information in POST requests,
HTML 4.01 implicitly specifies that the Content-Type header may also be
used in request messages. The default for submitting forms with POST is
specified as "application/x-www-form-urlencoded", which usually suffices.[2]
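That default encoding is simple enough to sketch; `urlEncodePairs` is a
hypothetical helper (HTML 4.01 encodes spaces as "+" in this enctype):

```javascript
// Serialize name/value pairs as application/x-www-form-urlencoded,
// the default enctype for POST form submission in HTML 4.01.
function urlEncodePairs(pairs) {
  var parts = [];
  for (var name in pairs) {
    if (Object.prototype.hasOwnProperty.call(pairs, name)) {
      parts.push(
        encodeURIComponent(name).replace(/%20/g, "+") + "=" +
        encodeURIComponent(pairs[name]).replace(/%20/g, "+")
      );
    }
  }
  return parts.join("&");
}

urlEncodePairs({ q: "foo bar", lang: "en" }); // "q=foo+bar&lang=en"
```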
True, and that's exactly why it can be left empty in my code. There is no
need to write every <form> as
<form enctype="application/x-www-form-urlencoded"> either.
You send a "Content-Type" header that has an empty value. That is
fundamentally different from sending no "Content-Type" header at all,
or sending it with the default value as specified in HTML 4.01.
I suppose it is quite possible that the body of the response is empty, for
instance when a database is not reachable.
However, an error status code is then indicated instead.
And what about feature testing?
Not necessary here. When readyState == 4, responseText has to exist or the
implementation is FUBAR.
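The point can be sketched as follows; `handleResponse` is a hypothetical
wrapper, shown here so it can be exercised with a stand-in request object:

```javascript
// Once readyState reaches 4 (completed), a conforming implementation
// must provide responseText. An unreachable backend surfaces as a
// non-200 status code, not as a missing property.
function handleResponse(req, onDone) {
  req.onreadystatechange = function () {
    if (req.readyState === 4) {
      onDone(req.status === 200 ? req.responseText : null);
    }
  };
}
```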
Of course the programmer must create an element with the ID 'myField'. Of
course the programmer assumes that 'getElementById' is supported by the
client. Do you really believe that it is necessary to feature-test all
these elementary things? I don't.
I do. Because it always breaks on the client, not in the test cases.
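A feature test for that case is cheap; `getField` is a hypothetical
helper, and "myField" is the ID used in the thread (note that some host
implementations report a typeof other than "function" for host methods,
so this check is itself a judgment call):

```javascript
// Defensive access: test the host method before calling it, so a
// missing getElementById degrades gracefully instead of throwing.
function getField(doc) {
  if (doc && typeof doc.getElementById === "function") {
    return doc.getElementById("myField");
  }
  return null;
}
```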
'send()' is better when using GET.
-v please
Naturally, the intention is that the original poster will not only use
<input>, but will build a complete web page with <form>, <html>, <body>,
etc.
Should then not the most compatible solution be presented in the first place?
That is conceptually better, yes. But do you have any idea what scale we
are talking about here?
Well, *I* do.
Math.random() gives me 15 to 18 digits after the decimal point,
However, that is only the string representation of the IEEE-754 double;
the actual precision is much greater than that.
let's say 16 for easy calculation. A user would have 1 chance in
10,000,000,000,000,000 that the next value is the same. With a 2-second
refresh, that covers a span of 634,195,839 years. The difference is purely
theoretical.
Different numeric values can have the same string representation, so the
chances are much greater than you think.
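This can be demonstrated without Math.random(): the decimal string is only
a round-trippable rendering of the underlying IEEE-754 double (53-bit
significand, roughly 15.95 decimal digits), and two distinct doubles can
share a truncated decimal representation:

```javascript
var x = 0.1 + 0.2;

String(x);                           // "0.30000000000000004"
x === 0.3;                           // false -- two distinct doubles
x.toFixed(15) === (0.3).toFixed(15); // true  -- same 15-digit string
```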
But that does not really matter:
It follows from the laws of probability that any event E that can be the
outcome of a random experiment (i.e., P(E) > 0) may occur again the next
time the experiment is performed, no matter how small the statistical
probability of that event is. It is foolish to ignore that, especially as
we are dealing with machine-generated random numbers here, which implement
only pseudo-randomness.
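The repeat risk is also much larger than the per-draw odds suggest, by the
birthday bound. A back-of-envelope sketch (an approximation; modeling
Math.random() as k = 2^53 equally likely outcomes is an assumption on my
part):

```javascript
// Approximate probability of at least one repeated value among n draws
// from k equally likely outcomes: 1 - exp(-n*(n-1)/(2k)).
function repeatProbability(n, k) {
  return 1 - Math.exp(-n * (n - 1) / (2 * k));
}

// One draw every 2 seconds for one year, against 2^53 distinct doubles:
var draws = 365 * 24 * 3600 / 2; // 15,768,000 draws
repeatProbability(draws, Math.pow(2, 53)); // ~0.0137 -- about 1.4 %
```

So even under ideal randomness, a repeat within a single year is on the
order of a percent, not one in 10^16.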
PointedEars