Thank you for the answer.
Do you think the first solution will be slow? I think I must use
locking to avoid problems.
	You may not need your own level of locking, since SQLite has a fairly
complex internal set of locks -- but you /would/ have to include recovery
code for the case where one thread, say, attempts to commit() and gets an
error message that the DB is locked. SQLite allows many reader threads in
parallel, but the first one that attempts to update the DB will: 1)
block any NEW reader thread, and 2) block, itself, until all other
reader threads have exited -- only then will it escalate its lock and
actually write the DB changes.
	So it is best to encapsulate any updates into a short, fast,
self-contained sequence. If you have to do something like: read data,
present to user, get user changes, submit update... you will likely need
to do it as two transactions: read data/commit (unlocking the DB), present
to user, get user changes (hopefully only one record or a group of related
records), then submit an update in which you specify a match against the
old data values (this lets you detect whether some other transaction
slipped in an update), and commit.
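	As a rough sketch of that pattern (the table and column names here
are invented purely for illustration):

def update_item(con, item_id, new_name, new_qty):
    # hypothetical table items(id, name, qty), for illustration only
    cur = con.cursor()

    # transaction 1: read the current values and commit at once,
    # so no lock is held while the user looks at the data
    cur.execute("SELECT name, qty FROM items WHERE id = ?", (item_id,))
    old_name, old_qty = cur.fetchone()
    con.commit()

    # ... in the real flow, the user would now see the old values
    #     and supply the new ones ...

    # transaction 2: update only if the row still matches what we read;
    # rowcount 0 means some other transaction slipped in an update
    cur.execute("UPDATE items SET name = ?, qty = ?"
                " WHERE id = ? AND name = ? AND qty = ?",
                (new_name, new_qty, item_id, old_name, old_qty))
    changed = cur.rowcount
    con.commit()
    return changed == 1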
	It IS a lot of overhead, but I suspect even a full client/server
database [the DBMS controls the data files and clients send operations]
(rather than SQLite's file-server model [each client opens/controls the
same shared files -- which is what M$ JET/Access does]) will require
something similar, as you'd need to maintain the open connection/cursor
between operations.
The second solution seems better and looks like what I wanted to build
in the first place, but maybe a permanently open SQLite database isn't a
good thing?
	It would have been open continuously in your original attempt to
share across threads anyway. The DB access thread would only have to open
(connect) once and create one cursor. Updates will still have to follow
the short, fast pattern described above, in which a read/display is a
separate transaction from an update.
The data model may become more complex... (pseudocode)
import Queue                # named "queue" in Python 3
import sqlite3 as sqlite

DBRequestQ = Queue.Queue()

class Transaction(object):
    def __init__(self):
        self.myQ = Queue.Queue()        # per-transaction reply queue
        self.ops = []
    def addSelect(self, SQL, parms=None):
        self.ops.append(("SELECT", SQL, parms))
    def addUpdate(self, SQL, parms=None):
        self.ops.append(("UPDATE", SQL, parms))
    def addInsert(self, SQL, parms=None):
        self.ops.append(("INSERT", SQL, parms))
    def addShutdown(self):
        self.ops.append(("SHUTDOWN", None, None))
    def submit(self):
        DBRequestQ.put(self)
    def retrieve(self):
        # collect one reply per operation; the COMMIT record ends the set
        myData = []
        while True:
            (op, status, data) = self.myQ.get()
            myData.append((op, status, data))
            if op == "COMMIT":
                return myData

def DBThread():
    con = sqlite.connect("mydata.db")   # placeholder database file name
    cur = con.cursor()
    run = True
    while run:
        transaction = DBRequestQ.get()
        # each Transaction object is processed as one unit of work
        for (op, SQL, parms) in transaction.ops:
            if op == "SHUTDOWN":
                run = False
                break
            if parms:
                cur.execute(SQL, parms)
            else:
                cur.execute(SQL)
            if op == "SELECT":
                data = cur.fetchall()
            else:
                data = None
            # operation status: number of records updated/inserted
            # (-1 for selects)
            status = cur.rowcount
            transaction.myQ.put((op, status, data))
        con.commit()
        # commit status/error recovery is left out here
        transaction.myQ.put(("COMMIT", None, None))
    cur.close()
    con.close()
....
A request sequence would then look something like:
aTrans = Transaction()
aTrans.addSelect("select ...", (itm1, ... itmN))
aTrans.add*(..., ...)
....
aTrans.submit()
results = aTrans.retrieve()
del aTrans
Please note that I did not hack in any time-out or error recovery
logic...
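	If you wanted a crude time-out, Queue.get() already accepts one, so
retrieve() could be extended along these lines (again, just a sketch):

    def retrieve(self, timeout=None):
        # as above, but give up if the DB thread does not answer in time
        myData = []
        while True:
            try:
                (op, status, data) = self.myQ.get(timeout=timeout)
            except Queue.Empty:
                return None     # caller decides how to handle the time-out
            myData.append((op, status, data))
            if op == "COMMIT":
                return myData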
--
Wulfraed Dennis Lee Bieber KD6MOG
HTTP://wlfraed.home.netcom.com/
(Bestiaria Support Staff: (e-mail address removed))
HTTP://www.bestiaria.com/