oliver789
Hello,
my problem is mostly a general database locking problem. Since I use
Java and Hibernate, I'm posting it here. It seems that the plain
database people don't follow once Java or Hibernate comes into play. I
also posted to the Hibernate forum but got no reply, so I'm trying my
luck here...
I have the special problem that there can be races between threads/
servers inserting a new entry into some table. To handle this I have a
locking table with a dedicated entry for every operation where a race
condition can happen. This entry is updated in the same transaction in
which the insert is done. When Hibernate throws an optimistic locking
exception, the losing thread knows it didn't come in first, so it only
needs to reload the data and thus gets the data as inserted by the
first thread. The locking table has a unique constraint to make sure
there can only be one entry per race-prone operation. When the
operation is started, the respective entry is loaded and stored in a
thread-local variable; when the operation has finished, a
session.merge(...) is done on the entry in the thread-local variable.
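To illustrate what I mean, here is a plain-Java sketch of the scheme
(the class and method names are made up for illustration; Hibernate's
real check happens via a versioned UPDATE statement, simulated here
with a compare-and-set):

```java
import java.util.concurrent.atomic.AtomicLong;

class OptimisticLockException extends RuntimeException {}

class LockEntry {
    // Simulates the @Version column Hibernate increments on each update.
    private final AtomicLong version = new AtomicLong(0);

    // Simulates loading the entry at the start of the operation.
    long load() { return version.get(); }

    // Succeeds only if nobody has updated the row since we loaded it,
    // mirroring Hibernate's "UPDATE ... WHERE version = ?" check.
    void merge(long loadedVersion) {
        if (!version.compareAndSet(loadedVersion, loadedVersion + 1)) {
            throw new OptimisticLockException();
        }
    }
}
```

If two threads load the same version, the first merge wins and the
second one gets the optimistic locking exception and knows to reload.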
So far so good. Now comes the problem: it is only a question of time
until the version count of some entry in the locking table reaches its
maximal value, beyond which it cannot be incremented any more because
the field width of the version column would be exceeded. In my
scenario this will take several months or maybe years, but it will
definitely happen one day, so I'm looking for a solution. What would
be nice would be to simply reset the version count to 0. But this
would certainly result in an optimistic locking exception whenever
some thread-local variable still holds a reference to the entry loaded
before the reset.
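How soon the overflow actually hits depends entirely on the field
width of the version column. A quick back-of-the-envelope calculation
(the sustained rate of 100 updates per second is just an assumption
for illustration, not a number from my setup):

```java
public class OverflowHorizon {
    public static void main(String[] args) {
        double updatesPerSecond = 100.0;          // assumed sustained load
        double secondsPerYear = 365.25 * 24 * 3600;

        // Years until an int-sized vs. long-sized version counter overflows.
        double yearsForInt  = Integer.MAX_VALUE / updatesPerSecond / secondsPerYear;
        double yearsForLong = Long.MAX_VALUE / updatesPerSecond / secondsPerYear;

        System.out.printf("int  version: ~%.2f years%n", yearsForInt);
        System.out.printf("long version: ~%.0f years%n", yearsForLong);
    }
}
```

With an int-sized counter at that rate the overflow arrives in well
under a year, while a long-sized counter would take billions of years,
so the "months or years" horizon really comes from the column width.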
The solution I have found so far is too complicated; maybe someone
else has a better idea. What I do is the following: once the version
count has reached the maximal value, another entry is created, which
is then incremented from that point on. The problem here is that there
can be race conditions during the changeover from the initial entry to
the follow-up entry. Now comes the difficult part: how to handle this.
I thought of a third entry that is updated when the second entry is
inserted. Unfortunately, this does not work with the @Transactional
annotation in Spring: I would have to do a flush to get it to work,
and in the past I have only seen session.flush(...) result in
deadlocks. So I have this wild solution I don't like, because it is
too complicated:
1. Let M be the maximal version count and N some number with N << M
(much smaller than M).
2. When the version count has reached M - N, the second entry is
created.
3. From then on, session.merge(...) is done on both entries until M
has been reached; from that moment on, the merge is only done on the
second entry. The idea is that with N sufficiently large, even the
longest operations will have finished by the time M is reached.
4. The first entry is deleted.
5. When the second entry reaches M - N, the same procedure repeats,
resulting in the first entry being created again and the second one
being deleted once M has been reached.
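To make the handover concrete, here is a plain-Java simulation of
steps 1-5 (ROLLOVER_MAX plays the role of M and MARGIN the role of N;
both concrete values are made up for illustration, and the real
version bookkeeping would of course be done by Hibernate, not by hand):

```java
class RolloverLock {
    static final long ROLLOVER_MAX = 1_000;  // M: max version before rollover
    static final long MARGIN = 100;          // N: safety window, N << M

    long firstVersion = 0;
    Long secondVersion = null;               // null until the second entry exists

    // Called in place of session.merge(...) at the end of each operation.
    void merge() {
        if (secondVersion == null) {
            firstVersion++;
            if (firstVersion >= ROLLOVER_MAX - MARGIN) {
                secondVersion = 0L;          // step 2: create the second entry
            }
        } else if (firstVersion < ROLLOVER_MAX) {
            firstVersion++;                  // step 3: merge both entries
            secondVersion++;
        } else {
            secondVersion++;                 // steps 3-4: first entry retired
        }
    }

    // Step 4: once M is reached, the first entry can be deleted.
    boolean firstRetired() {
        return secondVersion != null && firstVersion >= ROLLOVER_MAX;
    }
}
```

During the window where both entries are merged, any in-flight
operation still holding the first entry keeps working; by the time the
first entry is deleted, N merges have passed since the changeover
began.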
The advantage of this approach is that the number of entries never
grows beyond 2. The alternative would be not to delete the former
entry but to keep creating new entries. I don't like this, since for
my taste it is a little "messy".
Thanks to everybody who kept reading this long post up to this point.
Maybe someone out there has a simpler solution. Would be nice.
Cheers, Oliver