anuraguniyal
In my application I am trying to access (read) a DB through a thread while
my main thread is adding data to it, and I get the following error(s):
bsddb._db.DBRunRecoveryError: (-30974, 'DB_RUNRECOVERY: Fatal error,
run database recovery -- PANIC: Permission denied')
and sometimes
bsddb._db.DBRunRecoveryError: (-30974, 'DB_RUNRECOVERY: Fatal error,
run database recovery -- PANIC: fatal region error detected; run
recovery')
and sometimes
bsddb._db.DBInvalidArgError: (22, 'Invalid argument -- DB_LOCK->lock_put: Lock is no longer valid')
and sometimes a pure segfault:
Program received signal SIGSEGV, Segmentation fault.
0xb7c1b845 in __bam_adjust () from /usr/lib/libdb-4.4.so
And sometimes memory usage keeps increasing with the CPU at 100% until
it crashes with a memory error.
This doesn't happen every time; it fails in roughly 1 in 10 runs.
If I use a simple Python threaded function instead of a threading.Thread
subclass, it works.
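By a "simple Python threaded function" I mean starting the reader with the
thread module instead of a threading.Thread subclass, roughly like this
(consume is just an illustrative name):

import thread

def consume(queueDB):
    # same endless cursor loop as DocQueueConsumer.run in the script below
    while True:
        queueDB.cursor()

# in crash(), instead of DocQueueConsumer(queueDB).start():
thread.start_new_thread(consume, (queueDB,))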
I have attached a simple script that tries to reproduce the scenario.
Does anybody have a clue what I am doing wrong here?
I assume a bsddb3 DB can be accessed from multiple threads?
Or do I need to explicitly set the DB_THREAD flag? Though with
db.DB_THREAD it hangs on some mutex.
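The DB_THREAD variant I tried is the same script below with the flag
added to both opens, roughly:

aBigEnv.open(path, db.DB_INIT_CDB | db.DB_INIT_MPOOL | db.DB_CREATE | db.DB_THREAD)
queueDB.open('mydb', dbtype=db.DB_RECNO, flags=db.DB_CREATE | db.DB_THREAD)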
Thanks a lot
Anurag
-------
import os
import shutil
import threading

from bsddb3 import db

class DocQueueConsumer(threading.Thread):
    """Reader thread: opens cursors on the shared DB in a tight loop."""
    def __init__(self, queueDB):
        threading.Thread.__init__(self)
        self.queueDB = queueDB
        self.setDaemon(True)

    def run(self):
        while True:
            self.queueDB.cursor()

def crash():
    # Start each run from a fresh environment directory.
    path = "/tmp/test_crash"
    if os.path.exists(path):
        shutil.rmtree(path)
    os.mkdir(path)

    aBigEnv = db.DBEnv()
    aBigEnv.set_cachesize(0, 512*1024*1024)
    aBigEnv.open(path, db.DB_INIT_CDB | db.DB_INIT_MPOOL | db.DB_CREATE)

    queueDB = db.DB(aBigEnv)
    queueDB.open('mydb', dbtype=db.DB_RECNO, flags=db.DB_CREATE)

    # The reader thread and the main (writer) thread share one DB handle.
    DocQueueConsumer(queueDB).start()
    for i in xrange(10**5):
        if i % 1000 == 0:
            print i / 1000
        queueDB.append("something")

crash()
-------