How can I measure web hits from a web browser / end-user perspective?


Chris Nelson

Hello:
I know that for a web server there are many ways to measure the
"hits" generated when accessing a web page. What I need to be able to
do is determine, when downloading a web page, how many "hits"
make up that page (from an end-user perspective).

Am I correct that an accurate number of hits representing a web page
could be determined by viewing the source of a web page and adding up
the files that are listed in that code? Is there a piece of
code/software that does this?

Basically,
I have a situation where I need to demonstrate to semi-technical
people that a single web page can comprise many hits. (And, in
my case more importantly, hits do not in any way directly
correlate to time spent on an individual web page and/or site.)

Any help would be greatly appreciated.


Chris
 

Malcolm Dew-Jones

Chris Nelson ([email protected]) wrote:
: Hello:
: I know that for a web server there are many ways to measure the
: "hits" generated when accessing a web page. What I need to be able to
: do is determine, when downloading a web page, how many "hits"
: make up that page (from an end-user perspective).

: Am I correct that an accurate number of hits representing a web page
: could be determined by viewing the source of a web page and adding up
: the files that are listed in that code? Is there a piece of
: code/software that does this?

: Basically,
: I have a situation where I need to demonstrate to semi-technical
: people that a single web page can comprise many hits. (And, in
: my case more importantly, hits do not in any way directly
: correlate to time spent on an individual web page and/or site.)

: Any help would be greatly appreciated.


The above shows a great deal of confusion in your mind about web pages.
By "semi-technical people" perhaps you mean yourself.

That would explain why you posted this question in a Perl group instead
of in some group dedicated to web page issues.

However, with that excuse in mind, and guessing somewhat about what you
really mean: perhaps you think that the contents displayed in the window
of a web browser might sometimes require more than one file to be
downloaded from the web. That is true.

To display what looks like a single "web page" will commonly require
downloading multiple files. Perhaps the most notable situation is that
each picture in a web page requires its own file, so if a web page shows
two pictures then there are at least three downloads - the html file, plus
two picture files.
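For instance, a hypothetical page like the following (a sketch, not taken from the thread) would cost four hits to display: one for the HTML file itself, one per image, and one for the stylesheet:

```html
<html>
  <head>
    <!-- one extra download for the stylesheet -->
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <!-- one extra download per image -->
    <img src="pic1.gif">
    <img src="pic2.gif">
  </body>
</html>
```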

As for "time spent", and again I am guessing what it is you are really
trying to ask, no, the time spent looking at a web page has nothing to do
with any of the above.
 

chris-usenet

Chris Nelson said:
I know that for a web server there are many ways to measure the
"hits" generated when accessing a web page. What I need to be able to
do is determine, when downloading a web page, how many "hits"
make up that page (from an end-user perspective).

The more usual approach is to count pages rather than raw hits. To
do that you don't need to know how many hits comprise a page; rather,
you simply assume[1] that each page is represented by a corresponding
single html object and count those.

[1] not necessarily true, but pretty effective unless all your pages
are different framesets
Am I correct that an accurate number of hits representing a web page
could be determined by viewing the source of a web page and adding up
the files that are listed in that code? Is there a piece of
code/software that does this?

You're overloading the word "page" here, but I see what you're trying to
say. Your assumption is correct, but it's a hard way of solving the
problem.
Basically,
I have a situation where I need to demonstrate to semi-technical
people that a single web page can be comprised of many hits.

Create a single instance of a web server, zero out the logs, and load a
single page (containing no images or other embedded items). View the log
file and count the entries. Zero out the logs again and load another
page (containing several images, etc.). View the log file and count
the entries.

Take particular note of the "referer" [sic] field, which indicates the
referring URL for the item being returned.
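The counting step of that demonstration can be scripted. The following is a minimal sketch, not tied to any particular server, that assumes the common NCSA/Apache access-log format and tallies one hit per log entry, grouped by requested path:

```python
import re
from collections import Counter

# Matches the request line of a common/combined-format access log entry,
# e.g. '... "GET /index.html HTTP/1.0" 200 1234'
REQUEST_RE = re.compile(r'"(?:GET|POST|HEAD) (\S+) HTTP/[\d.]+"')

def count_hits(log_lines):
    """Count one hit per log entry, grouped by requested path."""
    hits = Counter()
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# Hypothetical log entries: a page with two images produces three hits.
sample = [
    '1.2.3.4 - - [01/Jan/2004:00:00:00 +0000] "GET /page.html HTTP/1.0" 200 512',
    '1.2.3.4 - - [01/Jan/2004:00:00:01 +0000] "GET /pic1.gif HTTP/1.0" 200 1024',
    '1.2.3.4 - - [01/Jan/2004:00:00:01 +0000] "GET /pic2.gif HTTP/1.0" 200 2048',
]
print(sum(count_hits(sample).values()))  # 3 hits for one "page view"
```

Running it against the two zeroed-out logs from the experiment above makes the one-page/many-hits point concrete.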
(And in
my case more importantly..... hits do not in any way directly
correlate to time spent an individual web page and or site.

View page one, jump to a different web site, then return to view page
two. View the log files.

None of this is Perl-related.
Chris
 

Joe Smith

Chris said:
I know that for a web server there are many ways to measure the
"hits" generated when accessing a web page. What I need to be able to
do is determine, when downloading a web page, how many "hits"
make up that page (from an end-user perspective).

A non-perl answer is to view the page with Mozilla (or latest Netscape),
then select from the menu View -> Page Info -> Media. Each image or
background is a hit to the server. Also include any
<link href="style.css"> or <script src="foo.js"> tags.
Or 'Save As...' and count the number of files MSIE saves.

The Perl answer is to use a web mirroring script, but set the recursion
level to 1 (fetch one URL and any <img src=""> tags, but do not
follow any <a href=""> links).

-Joe
 

Ben Morrow

Quoth (e-mail address removed) (Chris Nelson):
Hello:
I know that for a web server there are many ways to measure the
"hits" generated when accessing a web page. What I need to be able to
do is determine, when downloading a web page, how many "hits"
make up that page (from an end-user perspective).

Am I correct that an accurate number of hits representing a web page
could be determined by viewing the source of a web page and adding up
the files that are listed in that code? Is there a piece of
code/software that does this?

HTML::LinkExtor may do what you need (though I agree with the other
posters that you are almost certainly barking up the wrong tree here...)

Ben
 

Jürgen Exner

Chris said:
I know that for a web server there are many ways to measure the
"hits" generated when accessing a web page. What I need to be able to
do is determine, when downloading a web page, how many "hits"
make up that page (from an end-user perspective).

What does this have to do with Perl?
Am I correct that an accurate number of hits representing a web page
could be determined by viewing the source of a web page and adding up
the files that are listed in that code?

No, that number would be bogus at best: think of frames, client-side
caching, ISP caching, etc.
Basically,
I have a situation where I need to demonstrate to semi-technical
people that a single web page can be comprised of many hits.

Trivial: just write a page with frames and pictures.
(And, in
my case more importantly, hits do not in any way directly
correlate to time spent on an individual web page and/or site.)

You must be kidding. Do they claim that more hits equals more time spent?
Then just write one technically complex page with lots of frames and
picture links but trivial content, and another technically primitive
(single-hit) page with hundreds of lines of plain text.
Which page do they think will generate more hits, and which will be
viewed longer?

jue
 
