nicholas.petrella
I am currently trying to use the Python logging system as a core
enterprise-level logging solution for our development and production
environments.
The rotating file handler (RotatingFileHandler) seems to be what I am
looking for, as it lets me control the number and size of the log files
written out for each of our tools. I have noticed a few problems with
this handler, though, and wanted to post here to get your impressions and
possibly some ideas about whether these issues can be resolved.
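For reference, here is roughly the configuration I have in mind for each
tool; the logger name, path, and limits below are just placeholders, with
maxBytes and backupCount being the knobs I care about:

import logging
import logging.handlers

# Placeholder logger name and path; 10 MB per file, keep three rotated backups.
logger = logging.getLogger("mytool")
logger.setLevel(logging.INFO)

handler = logging.handlers.RotatingFileHandler(
    "/var/log/mytool/app.log", maxBytes=10 * 1024 * 1024, backupCount=3)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)

logger.info("tool started")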
The first issue is with multiple copies of the same tool trying to log
to the same location. This should not be an issue, as the libraries are
supposed to be thread-safe and therefore should also be safe for
multiple instances of a tool. I have run into two problems with
this...
1.
When a log file is rolled over, occasionally we see the following
traceback in the other instance or instances of the tool:
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/logging/handlers.py", line 62, in emit
    if self.shouldRollover(record):
  File "/usr/local/lib/python2.4/logging/handlers.py", line 132, in shouldRollover
    self.stream.seek(0, 2)  #due to non-posix-compliant Windows feature
ValueError: I/O operation on closed file
As best I can tell, this is caused by instance A closing the log file and
rolling it over while instance B is still trying to use its handle to
that file, which A has replaced during the rollover. A likely solution
would be to handle the exception and reopen the log file. The newer
WatchedFileHandler (http://www.trentm.com/python/dailyhtml/lib/
node414.html) provides the functionality that is needed, but I think
it would be helpful to have that functionality included in
RotatingFileHandler to prevent these errors (I have sketched that idea
just after this list).
2.
I am seeing that, at times, when two instances of a tool are logging,
the log will be rotated twice. It seems that as app.log approaches
the size limit (10 MB in my case), the rollover is triggered in both
instances of the application, causing a small log file to be created:
ls -l
-rw-rw-rw- 1 petrella user 10485641 May 8 16:23 app.log
-rw-rw-rw- 1 petrella user  2758383 May 8 16:22 app.log.1   <---- Small log
-rw-rw-rw- 1 petrella user 10485903 May 8 16:22 app.log.2
-rw-rw-rw- 1 petrella user  2436167 May 8 16:21 app.log.3
It seems that the rollover should also be protected so that the log
file is not rolled twice (see the second sketch after this list).
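To make (1) concrete, something along these lines is what I had in mind;
the class name is made up and I have only sketched it against a current
Python, so treat it as an untested idea rather than a drop-in fix:

import logging.handlers
import os

class ReopeningRotatingFileHandler(logging.handlers.RotatingFileHandler):
    """Sketch only: reopen the log file before writing if another
    instance closed or replaced it during its own rollover -- the same
    kind of check WatchedFileHandler does for a plain FileHandler."""

    def emit(self, record):
        try:
            if self.stream is None or self.stream.closed:
                # Our handle was invalidated; get a fresh one.
                self.stream = self._open()
            elif (os.fstat(self.stream.fileno()).st_ino
                  != os.stat(self.baseFilename).st_ino):
                # The path now points at a different file (another
                # process rotated it); switch over to the new file.
                self.stream.close()
                self.stream = self._open()
        except OSError:
            # baseFilename can briefly be missing mid-rollover.
            self.stream = self._open()
        logging.handlers.RotatingFileHandler.emit(self, record)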
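And for the double rollover in (2), a rough POSIX-only sketch of the kind
of protection I mean, serialising the rollover across processes with an
advisory lock file and re-checking the size once the lock is held (again,
the class name and the .lock convention are made up and untested):

import fcntl
import logging.handlers
import os

class LockedRotatingFileHandler(logging.handlers.RotatingFileHandler):
    """Sketch only: take an advisory lock around rollover so that, of
    two instances that both see the file hit maxBytes, only the first
    one actually rotates it."""

    def doRollover(self):
        lock_path = self.baseFilename + ".lock"
        with open(lock_path, "a") as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)
            try:
                # Another instance may have rotated while we waited for
                # the lock; only rotate if the live file is still too big.
                if os.path.getsize(self.baseFilename) >= self.maxBytes:
                    logging.handlers.RotatingFileHandler.doRollover(self)
                else:
                    # Just pick up the fresh file the other instance made.
                    if self.stream:
                        self.stream.close()
                    self.stream = self._open()
            finally:
                fcntl.flock(lock, fcntl.LOCK_UN)

Even with locking, the rename is not atomic with respect to the other
writers, so I suspect there is still a small window for records to land
in the just-rotated file.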
I also wanted to ask for anyone's thoughts on a better way to implement
Python logging to meet our needs.
The infrastructure in which I work needs the ability to have log files
written to from multiple instances of the same script, and potentially
from hundreds or more different machines.
I know that the documentation suggests using a network logging server,
but I wanted to know if anyone had other solutions that would let us
build on the current Python logging package.
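For what it is worth, the client side of the network approach looks
small; each instance, on any machine, would point a SocketHandler at a
single listener (the host below is a placeholder), and only that
listener process would write and rotate the files, along the lines of
the network logging example in the logging documentation:

import logging
import logging.handlers

# Placeholder host; DEFAULT_TCP_LOGGING_PORT is 9020.
logger = logging.getLogger("mytool")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SocketHandler(
    "loghost.example.com", logging.handlers.DEFAULT_TCP_LOGGING_PORT))

logger.info("this record is pickled and sent to the listener")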
Thanks in advance for any of your responses.
-Nick