Processes with strange behaviour

Markus Franz

Hi.

Today I created a command-line script called load.py, written in Python 2.3.
The script should load as many websites as are given on the command line and
print them to stdout with a separation string. The loading is done in
parallel. (I used processes for this.)

The script was started by the following command:

../load.py en 58746 http://www.python.com

Well, everything was fine. Then I wanted to load a second website and so I
started the script with the following command:

../load.py en 58746 http://www.python.com http://www.linux.org

The behaviour was strange: The last website (http://www.linux.org) was
loaded and printed twice.

Then I started the script for a third time with the following command:

../load.py en 58746 http://www.python.com http://www.linux.org
http://www.suse.com

The result was: the first website was loaded and shown once, the second
website twice, and the third website four times!

(This behaviour occurs with ANY address given to the script...)


Does anybody know an answer to my problem???
Thank you.

Best regards

Markus Franz

(some information about my computer: Python 2.3, SuSE Linux 9.0 Pro with
Kernel 2.4)

--------------------------------------------------
My script:
--------------------------------------------------

#!/usr/bin/python

import urllib2, sys, socket, os, string

# set timeout
socket.setdefaulttimeout(4)

# function for loading and printing a website
def myfunction(url):
    try:
        req = urllib2.Request(url)
        req.add_header('Accept-Language', sys.argv[1])
        req.add_header('User-Agent',
                       'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)')
        f = urllib2.urlopen(req)
        contents = f.read()
        output = "\n---PI001---" + sys.argv[2] + '---PI001---' + \
                 '---PI002-' + sys.argv[2] + '::' + f.geturl() + '::' + \
                 sys.argv[2] + "-PI002---\n" + contents
        print output
        del output
        f.close()
        del contents
        del f
        del req
    except:
        pass

# start processes
for currenturl in sys.argv:
    if currenturl != sys.argv[0] and currenturl != sys.argv[1] and \
       currenturl != sys.argv[2]:
        PID = os.fork()
        if PID == 0:
            myfunction(currenturl)
            exit
 
Peter Otten

Markus said:

> Today I created a command-line script called load.py, written in
> Python 2.3. The script should load as many websites as are given on
> the command line and print them to stdout with a separation string.
> The loading is done in parallel. (I used processes for this.)
>
> The script was started by the following command:
>
> ./load.py en 58746 http://www.python.com
>
> Well, everything was fine. Then I wanted to load a second website and
> so I started the script with the following command:
>
> ./load.py en 58746 http://www.python.com http://www.linux.org
>
> The behaviour was strange: the last website (http://www.linux.org) was
> loaded and printed twice.
>
> Then I started the script for a third time with the following command:
>
> ./load.py en 58746 http://www.python.com http://www.linux.org
> http://www.suse.com
>
> The result was: the first website was loaded and shown once, the
> second website twice, and the third website four times!
>
> (This behaviour occurs with ANY address given to the script...)
>
> Does anybody know an answer to my problem???

I think the structure of your script should be

import os, sys

for arg in sys.argv[1:]:
    pid = os.fork()
    if pid == 0:
        print arg # placeholder for the download routine
        break

As you have omitted the break statement, each child will complete the for
loop and thus continue to fork() for the remaining entries in sys.argv[1:].
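To make the effect concrete, here is a minimal, self-contained sketch of the fork-per-argument pattern Peter describes (the helper name spawn_downloads and the pipe used to collect the children's output are my own additions for illustration, and it is written so it also runs on modern Python). The essential point is that each child must terminate inside the loop, here via os._exit(), instead of falling through and forking again for the remaining arguments:

```python
import os

def spawn_downloads(args):
    """Fork one child per argument; each child handles one item and exits.

    If the child did NOT terminate here, it would continue the for loop
    and fork again for every remaining argument, so later items would be
    processed 2, 4, 8... times -- exactly the doubling seen above.
    """
    r, w = os.pipe()          # pipe stands in for stdout so we can collect output
    for arg in args:
        pid = os.fork()
        if pid == 0:
            # child: handle exactly one item, then stop for good
            os.write(w, (arg + "\n").encode())
            os._exit(0)       # child never re-enters the loop
    os.close(w)               # parent closes its write end
    for _ in args:
        os.wait()             # reap every child
    out = b""
    while True:
        chunk = os.read(r, 4096)
        if not chunk:
            break
        out += chunk
    os.close(r)
    return sorted(out.decode().split())

print(spawn_downloads(["a", "b", "c"]))  # -> ['a', 'b', 'c'], each item exactly once
```

Note that the bare `exit` in the original script is only a name reference, not a call, so it does nothing; the child falls through and keeps looping, which is why a break (or os._exit()) is required.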

Peter
 
