Julien Cigar
Hello list,
I have a dbm "database" which needs to be accessed and written to by
multiple processes. At the moment I do something like:
@with_lock
def _save(self):
    f = shelve.open(self.session_file, 'c')
    try:
        f[self.sid] = self.data
    finally:
        f.close()
The with_lock() decorator creates a .lock file which is deleted when the
function exits, so every operation does the following:
- acquire .lock file
- open the dbm file
- do the operation (save, load, ...)
- close the dbm file
- delete the .lock file
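The steps above can be sketched as a decorator like this (a guess at the
implementation, not the real code: LOCK_PATH, the retry interval, and the
save() stand-in are all assumptions):

```python
import functools
import os
import tempfile
import time

# Hypothetical lock-file path; the real code presumably derives it
# from self.session_file.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "session.lock")

def with_lock(func):
    """Create LOCK_PATH before calling func, delete it afterwards."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # O_CREAT | O_EXCL fails atomically if the file already exists,
        # so only one process at a time can hold the lock.
        while True:
            try:
                fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL)
                break
            except FileExistsError:
                time.sleep(0.01)  # another process holds the lock; retry
        try:
            return func(*args, **kwargs)
        finally:
            os.close(fd)
            os.unlink(LOCK_PATH)  # release the lock

    return wrapper

@with_lock
def save(value):
    # stand-in for the real _save(); just returns its argument
    return value
```

The O_CREAT | O_EXCL combination is what makes a lock file usable as a
mutex: the create-if-absent check happens atomically in the kernel, so two
processes cannot both succeed.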
I ran some tests, and according to my results the repeated open() / close()
adds significant overhead (roughly 5 times slower).
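A micro-benchmark along these lines (a sketch with an arbitrary iteration
count, not the exact test) shows where that overhead comes from:

```python
import os
import shelve
import tempfile
import time

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "sessions")
N = 200  # arbitrary number of writes

# Variant 1: re-open the shelf for every write, as the decorated
# _save() does.
t0 = time.perf_counter()
for i in range(N):
    f = shelve.open(path, 'c')
    try:
        f[str(i)] = i
    finally:
        f.close()
reopen = time.perf_counter() - t0

# Variant 2: keep the shelf open across all writes.
t0 = time.perf_counter()
f = shelve.open(path, 'c')
try:
    for i in range(N):
        f[str(i)] = i
finally:
    f.close()
held = time.perf_counter() - t0
```

The exact ratio depends on the dbm backend shelve picks, but the
per-operation open/close is the part the locking scheme forces on every
call.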
I read that the gdbm module should be safe against multiple processes (I
saw "'u' -- Do not lock database." in the docs, so I presume it is locked
by default?). Does that mean two (or more) processes can open the dbm file
and write to it at the same time without
errors/corruption/segfaults/...?
Thanks,
Julien