A site with frequent additions that would be missed by personal, local, or
regional caching of its URL . . .
1) . . . from the archives in Google Groups -> alt.html -> "cache clear":
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
in the head of your HTML doc. Forces the browser not to cache the page
when it first loads.
<<< . . . does this work? . . . >>>
2) . . . from the source code of the W3Schools home page
http://www.w3schools.com/default.asp: <meta http-equiv="pragma"
content="no-cache" />
and <meta http-equiv="cache-control" content="no-cache" />
<<< . . . the ' />' does not pass W3C markup validation in my hands. What
gives? An XHTML thing, evidently. Will it work with a plain '>' in my
CSS/HTML Strict document? . . . >>>
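<<< . . . for my own notes, I think the HTML 4.01 Strict form of those two
tags would simply drop the trailing slash, i.e.:

   <meta http-equiv="pragma" content="no-cache">
   <meta http-equiv="cache-control" content="no-cache">

in the head, though that is only my guess at what the ' />' difference
amounts to . . . >>>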
3) . . . from http://vancouver-webpages.com/META/metatags.detail.html
Pragma
Controls caching in HTTP/1.0. Value must be "no-cache". Issued
by browsers during a Reload request, and in a document prevents Netscape
Navigator from caching a page locally.
. . . and . . .
Expires
Source: HTTP/1.1 (RFC2068)
The date and time after which the document should be considered expired.
Controls caching in HTTP/1.0. In Netscape Navigator, a request for a
document whose expires time has passed will generate a new network request
(possibly with If-Modified-Since). An illegal Expires date, e.g. "0", is
interpreted as "now". Setting Expires to 0 may thus be used to force a
modification check at each visit.
Web robots may delete expired documents from a search engine, or schedule a
revisit.
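<<< . . . if I read that right, forcing a modification check on every visit
would just be something like:

   <meta http-equiv="expires" content="0">

in the head, though I am only assuming a bare "0" behaves the way the page
describes . . . >>>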
. . . and . . .
Cache-Control
Source: HTTP/1.1
Specifies the action of cache agents. Possible values:
* Public - may be cached in public shared caches
* Private - may only be cached in private cache
* no-cache - may not be cached
* no-store - may be cached but not archived
Note that browser action is undefined using these headers as META tags.
. . . and . . .
Robots
Source: Spidering
Controls Web robots on a per-page basis. E.g.
<META NAME="ROBOTS" CONTENT="NOINDEX,FOLLOW">
Robots may traverse this page but not index it.
Altavista supports:
* NOINDEX prevents anything on the page from being indexed.
* NOFOLLOW prevents the crawler from following the links on the page and
indexing the linked pages.
* NOIMAGEINDEX prevents the images on the page from being indexed but
the text on the page can still be indexed.
* NOIMAGECLICK prevents the use of links directly to the images, instead
there will only be a link to the page.
Google supports a NOARCHIVE extension to this scheme to request the Google
search engine from caching pages; see the Google FAQ
See also the /robots.txt exclusion method.
<<< . . . So. Pragma good. Expires bad (some search engines exclude the
site). Cache-Control good. Robots/NOARCHIVE . . . ??? That last sentence
with NOARCHIVE doesn't make grammatical sense (like I should talk). Any
thoughts? . . . >>>
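<<< . . . and if NOARCHIVE is the piece I want, I gather the tag would look
something like:

   <meta name="robots" content="noarchive">

or combined with the indexing values, e.g. content="index,follow,noarchive",
though I am only assuming Google accepts it combined that way . . . >>>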
4) . . . also, from http://vancouver-webpages.com/META/metatags.detail.html
HTTP-EQUIV tags
META tags with an HTTP-EQUIV attribute are equivalent to HTTP headers.
Typically, they control the action of browsers, and may be used to refine
the information provided by the actual headers. Tags using this form should
have an equivalent effect when specified as an HTTP header, and in some
servers may be translated to actual HTTP headers automatically or by a
pre-processing tool.
Note: While HTTP-EQUIV META tags appear to work properly with Netscape
Navigator, other browsers may ignore them, and they are ignored by Web
proxies, which are becoming more widespread. Use of the equivalent HTTP
header, as supported by e.g. Apache server, is more reliable and is
recommended wherever possible.
<<< . . . If http-equiv meta tags are ignored by some browsers, or will be
one day, then what good is all this? Should I investigate whether there is
JavaScript that ensures my site is freshly loaded when a user revisits it?
. . . >>>
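Or, if the real HTTP header is the more reliable route, maybe the answer is
to set the headers at the server instead of in the markup. Something like
this in an .htaccess file, assuming the host runs Apache with mod_headers
enabled and allows overrides (untested on my part):

   Header set Cache-Control "no-cache"
   Header set Pragma "no-cache"
   Header set Expires "0"

Would that be the better-supported approach?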
Thanks in Advance,
Rosco