Locking objects in an array


Philipp

Hello,
I've come across a threading problem for which I can't find a nice
solution. (SSCCP at end of post)
I have a bidimensional array of objects (view it as a 2D lattice). I
want to make atomic operations on random square 2x2 regions of the
lattice. Thus I want to lock the 4 objects of the region, then perform
the change, then unlock those objects. Several threads should be
allowed to work on the array at the same time; if two threads are
accessing 2x2 regions which do not overlap, they should be able to do
the work in parallel.

How should I design the code to achieve this?

My solution (which may be misguided) so far is that each object of
the lattice contains a Lock and has lock() and unlock() methods which
delegate to the lock.

So the approach is basically:
1. call lock() on each of the 4 objects (always in the same order)
2. make change
3. call unlock() on each of the 4 objects

What I don't like about this is that
a) lock and unlock really have nothing to do in the API of the objects
in the lattice. Would it be possible to achieve the same result
without using an explicit Lock (thus no lock() method), but using the
Java synchronized() mechanism?
b) if an exception is thrown anywhere, e.g. when only a part of the
lattice region has been locked, the cleanup is ugly because you can't
test if a lock is presently locked. You have to rely on unlock() to
throw.
c) An exception may leave the lattice in an inconsistent state.

Any better designs are welcome.

Thanks Phil

PS: In the code, the lattice objects are Counters and the action which
is performed atomically on the region is an increment.


---------------- Lattice.java ------------


import java.util.Random;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Lattice {
/**
* The size of the region of the lattice to increment
* atomically. In this case, 2 x 2.
*/
private static final int FRAGMENT_SIZE = 2;
private static final int LATTICE_X_SIZE = 10;
private static final int LATTICE_Y_SIZE = 6;

private Counter[][] lattice = new Counter[LATTICE_X_SIZE][LATTICE_Y_SIZE];

public Lattice() {
// filling the array
for (int i = 0; i < lattice.length; i++) {
for (int j = 0; j < lattice[0].length; j++) {
lattice[i][j] = new Counter(i, j);
}
}
}


/**
* This method increments the square region of the lattice
* which has the given indexes as top-left corner. If the
* region spans outside of the lattice, wrap around is used.
* This method should be thread-safe. It can be called
* efficiently by several threads at the same time.
* @param x index of top-left corner
* @param y index of top-left corner
*/
public void increment(int x, int y){
// System.out.println("Increm " + x + " " + y);
try{
// locking the 2 x 2 region
for(int i = x; i < x + FRAGMENT_SIZE; i++){
for(int j = y; j < y + FRAGMENT_SIZE; j++){
int indexI = i % LATTICE_X_SIZE; // wrap around
int indexJ = j % LATTICE_Y_SIZE;
lattice[indexI][indexJ].lock(); // locking in increasing order to avoid deadlock
}
}

// Now the fragment is locked
// let's increment it
for(int i = x; i < x + FRAGMENT_SIZE; i++){
for(int j = y; j < y + FRAGMENT_SIZE; j++){
int indexI = i % LATTICE_X_SIZE;
int indexJ = j % LATTICE_Y_SIZE;

// the actual action!
lattice[indexI][indexJ].increment();
}
}

} finally{
// unlock the region
for(int i = x; i < x + FRAGMENT_SIZE; i++){
for(int j = y; j < y + FRAGMENT_SIZE; j++){
int indexI = i % LATTICE_X_SIZE;
int indexJ = j % LATTICE_Y_SIZE;
try{
lattice[indexI][indexJ].unlock(); // unlocking in increasing order
} catch (IllegalMonitorStateException e) {
// was probably never locked (?)
}
}
}
}
}


public void printArray() {
for (int i = 0; i < lattice.length; i++) {
for (int j = 0; j < lattice[0].length; j++) {
System.out.print(lattice[i][j]);
}
System.out.print("\n");
}
}




public static void main(String[] args) throws Exception {
Lattice m = new Lattice();
RegionIncrementer fi1 = new RegionIncrementer(m);
RegionIncrementer fi2 = new RegionIncrementer(m);
RegionIncrementer fi3 = new RegionIncrementer(m);
Thread t1 = new Thread(fi1);
Thread t2 = new Thread(fi2);
Thread t3 = new Thread(fi3);
t1.start();
t2.start();
t3.start();

Thread.sleep(5000);

fi1.stopit();
fi2.stopit();
fi3.stopit();

t1.join();
t2.join();
t3.join();

m.printArray();
System.out.println(fi1);
System.out.println(fi2);
System.out.println(fi3);
}

/**
* A {@link Runnable} which repeatedly chooses
* a random region on the lattice and increments
* the counter in it.
*/
static class RegionIncrementer implements Runnable{
private static final Random rnd = new Random();
private int performedIncrementsCounter;

private final Lattice lattice;
private volatile boolean run = true;
public RegionIncrementer(Lattice lattice) {
this.lattice = lattice;
}

public void run() {
while(run){
lattice.increment(rnd.nextInt(LATTICE_X_SIZE), rnd.nextInt(LATTICE_Y_SIZE));
performedIncrementsCounter++;
}
}

public void stopit(){
run = false;
}

public String toString() {
return "Incremented " + performedIncrementsCounter + " times";
}
}


/**
* The object which are stored in the lattice. In this case a simple
* counter.
*/
static class Counter{
private final int xPos;
private final int yPos;

private int count;

private Lock lock = new ReentrantLock();
private int contentions;

public Counter(int pos, int pos2) {
xPos = pos;
yPos = pos2;
}

public void lock(){
boolean locked = lock.tryLock();
if (!locked) { // this is used just to explicitly show contentions
contentions++;
System.out.println(xPos + " " + yPos + " Contentions: " +
contentions);
lock.lock();
}
}

public void unlock(){
lock.unlock();
}

/**
* Need not be synchronized, because wrapping code
* is synchronizing
*/
// TODO Is this OK? Will lock() and unlock() do a memory barrier?
public void increment(){
count++;
}

public String toString() {
return String.format("%8s", count); // right-align the count in a width of 8
}
}

}
 

Lew

Release locks in the reverse order that you acquire them.

Java 'synchronized' is an explicit lock.

Patricia said:
There is a relevant inconsistency between the problem statement and the
sample code. The code appears to be designed to allow for a block size
other than 2x2 through a simple change in a constant.

If you can bind it to 2x2, you can do a 4-deep nested synchronization:

What she said.
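
For a fixed 2x2 block that could look roughly like this (untested sketch;
it ignores the wrap-around case discussed further down and simply
synchronizes on the four Counter objects in a fixed order):

    // take the four intrinsic locks in a fixed (row, column) order
    Counter a = lattice[x][y];
    Counter b = lattice[x][y + 1];
    Counter c = lattice[x + 1][y];
    Counter d = lattice[x + 1][y + 1];
    synchronized (a) {
        synchronized (b) {
            synchronized (c) {
                synchronized (d) {
                    a.increment();
                    b.increment();
                    c.increment();
                    d.increment();
                }
            }
        }
    }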
 

Philipp

There is a relevant inconsistency between the problem statement and the
sample code. The code appears to be designed to allow for a block size
other than 2x2 through a simple change in a constant.

If you can bind it to 2x2, you can do a 4-deep nested synchronization:

No, the 2x2 was just for the example. My problem will have non-fixed
region sizes (maybe not even square).
Or keep a variable that tracks which locks you have obtained.

You are right. Would this be premature optimization, if exceptions are
really never expected?
This seems to be orthogonal to the synchronization issue. I assume the
counter increment task is just a placeholder for possibly more
complicated work.

If you need to maintain consistency even in the presence of exceptions part
way through the work, you need to do one of two things:

1. Note the old state at the start of a transaction, and rollback to it
in a finally block unless a flag indicates the whole transaction was
successful.

2. Not make any changes to the actual lattice until after the last
possible non-fatal exception. You could, for example, calculate new
values into a 2x2 array, and only copy its values into the lattice on
success.

Either approach would work if you can make the lattice objects
immutable, so that the operation creates new pointers for the four
cells, rather than changing the state of the objects they reference.

However, the first approach could be done even with mutable objects if
they are made transaction aware with commit and rollback. At the start
of the synchronized block you would inform each of the four objects of
the transaction start. In a finally block you would tell each to commit
if a flag indicated success, or rollback to their state at the start of
the transaction if there were a failure.
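
A rough sketch of the second approach for the counter example (untested;
currentValue() and setValue() are made-up stand-ins for whatever accessors
the lattice objects actually provide, and wrap-around is left out):

    // compute into a scratch array first; the lattice is untouched while
    // exceptions can still occur
    int[][] newValues = new int[FRAGMENT_SIZE][FRAGMENT_SIZE];
    for (int i = 0; i < FRAGMENT_SIZE; i++) {
        for (int j = 0; j < FRAGMENT_SIZE; j++) {
            newValues[i][j] = currentValue(x + i, y + j) + 1; // may throw
        }
    }
    // past this point no non-fatal exception is expected
    for (int i = 0; i < FRAGMENT_SIZE; i++) {
        for (int j = 0; j < FRAGMENT_SIZE; j++) {
            setValue(x + i, y + j, newValues[i][j]);
        }
    }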

Thanks for that advice.
Phil
 

Philipp

Patricia said:
[...]
Synchronization with varying block sizes would be more complicated.

     One way to approach it would be with recursion.  I'll illustrate
with a List and an Iterator, although they're by no means the only way
to keep track:

        List<Thing> things = new ...;
        for (int dr = 0;  dr < rspan;  ++dr) {
            for (int dc = 0;  dc < cspan;  ++dc)
                things.add(lattice[r+dr][c+dc]);
        }
        doDirtyDeed(things.iterator(), things);

        ...

        void doDirtyDeed(Iterator<Thing> it, List<Thing> things) {
            if (it.hasNext()) {
                synchronized(it.next()) {
                    doDirtyDeed(it, things);
                }
            }
            else {
                // all the Things' locks are now held
                // do things to Things in "things"
            }
        }

This is a clever approach. Thanks for sharing it.
Phil
 

Daniel Pitts

Philipp said:
Hello,
I've come across a threading problem for which I can't find a nice
solution. (SSCCP at end of post)
I have a bidimensional array of objects (view it as a 2D lattice). I
want to make atomic operations on random square 2x2 regions of the
lattice. Thus I want to lock the 4 objects of the region, then perform
the change, then unlock those objects. Several threads should be
allowed to work on the array at the same time, if each thread is
accessing a 2x2 region which does not overlap with that of another,
they should be capable of doing the work in a parallel way.

How should I design the code to achieve this?

My solution (which may be misguided) so far is that, each object of
the lattice contains a Lock and has a lock() and unlock() method which
delegate to the lock.

So the approach is basically:
1. call lock() on each of the 4 objects (always in the same order)
2. make change
3. call unlock() on each of the 4 objects

What I don't like about this, is that
a) lock and unlock really have nothing to do in the API of the objects
in the lattice. Would it be possible to achieve the same result
without using an explicit Lock (thus no lock() method), but using the
Java synchronized() mechanism?
b) if an exception is thrown anywhere, eg. where only a part of the
lattice region has been locked, the cleanup is ugly because you can't
test if a lock is presently locked. You have to rely on unlock() to
throw.
c) An exception may leave the lattice in an inconsistent state.

Here is the design I would consider:
Note, this is *completely* untested. It is also just a first iteration.
Replacing "boolean[][] locks" with a BitSet may be advantageous,
depending on the size of your data set.

If you really want to ensure exceptions don't leave the lattice in bad
shape, then you might make the objects themselves immutable, and have
Operator.operate return new objects, and have those be replaced into
the array upon success.

package latice;

import java.util.Arrays;

/**
* @author Daniel Pitts
*/
public class Latice<T> {
public static interface Operator<T> {
void operate(T[][] data);
}
private final T[][] data;
private final boolean[][] locks;
public Latice(int width, int height) {
data = allocate(width, height);
locks = new boolean[height][width];
}

public boolean operateOnRegion(int x, int y, int width, int height,
Operator<T> operator, long timeout) throws InterruptedException {
if (!lockRegion(x, y, width, height, timeout)) {
return false;
}
try {
operator.operate(getRegion(x, y, width, height));
return true;
} finally {
unlockRegion(x, y, width, height);
}
}

private void unlockRegion(int x, int y, int width, int height) {
synchronized (locks) {
setLockValue(x, y, width, height, false);
locks.notifyAll();
}
}

private void setLockValue(int x, int y, int width, int height,
boolean lockValue) {
for (int i = 0; i < height; ++i) {
Arrays.fill(locks[y + i], x, x+width, lockValue);
}
}

private T[][] getRegion(int x, int y, int width, int height) {
T[][] region = allocate(width, height);
for (int i = 0; i < height; ++i) {
System.arraycopy(data[i + y], x, region[i], 0, width);
}
return region;
}

private static <T> T[][] allocate(int width, int height) {
@SuppressWarnings({"unchecked", "UnnecessaryLocalVariable"})
final T[][] genericsBroken = (T[][]) new Object[height][width];
return genericsBroken;
}

private boolean lockRegion(int x, int y, int width, int height,
long timeout) throws InterruptedException {
final long endTime = System.currentTimeMillis() + timeout;
synchronized (locks) {
do {
final long timeNow = System.currentTimeMillis();
if (checkLocks(x, y, width, height)) {
break;
}
if (timeout == 0) {
locks.wait();
} else if (timeNow < endTime) {
locks.wait(endTime - timeNow);
} else {
return false;
}
} while (true);
setLockValue(x, y, width, height, true);
}
return true;
}

private boolean checkLocks(int x, int y, int width, int height) {
for (int j = 0; j < height; ++j) {
final boolean[] row = locks[y + j];
for (int i = 0; i < width; ++i) {
if (row[x+i]) {
return false;
}
}
}
return true;
}
}
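
For what it's worth, a hypothetical usage sketch (completely untested; it
assumes the cells were populated beforehand, which the class as posted has
no method for, and reuses the Counter class from the original post):

    Latice<Counter> latice = new Latice<Counter>(10, 6);
    // ... assume the cells have been filled with Counter objects ...
    // (the caller must also handle the InterruptedException)
    boolean done = latice.operateOnRegion(3, 2, 2, 2, new Latice.Operator<Counter>() {
        public void operate(Counter[][] region) {
            for (Counter[] row : region) {
                for (Counter counter : row) {
                    counter.increment();
                }
            }
        }
    }, 1000L); // give up after one second if the region stays locked

Note that getRegion() hands the Operator a copy of the region, so the
Operator can mutate the objects in place but cannot replace the references
stored in the lattice.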
 

Tom Anderson

Release locks in the reverse order that you acquire them.

Really? Why?

I'm not saying it's a bad idea, just that, in this case at least, i can't
see any problem with releasing them in the same, or some arbitrary other,
order.

tom
 

Philipp

Philipp said:
Hello,
I've come across a threading problem for which I can't find a nice
solution. (SSCCP at end of post)
I have a bidimensional array of objects (view it as a 2D lattice). I
want to make atomic operations on random square 2x2 regions of the
lattice. Thus I want to lock the 4 objects of the region, then perform
the change, then unlock those objects. Several threads should be
allowed to work on the array at the same time, if each thread is
accessing a 2x2 region which does not overlap with that of another,
they should be capable of doing the work in a parallel way.
How should I design the code to achieve this?

Here is the design I would consider:
Note, this is *completely* untested. It is also just a first iteration.
  Replacing "boolean[][] locks" with a BitSet may be advantageous,
depending on the size of your data set.

Your approach is very nice. It is reusable and flexible, particularly
through the use of the Operator interface and generics. The only
downside I see is that all threads have a common lock ("locks") they
need to acquire to set/unset the lock table. Although this lock is
held only for short times, I guess that if I increase the number of
threads and the data array size, it may become a contention point. I'm
thinking about arrays 2000x2000 and maybe 8 threads working on it (i.e.
quad core + hyperthreading).

Phil
 

Andreas Leitgeb

Philipp said:
public void increment(int x, int y){
// System.out.println("Increm " + x + " " + y);
try{
// locking the 2 x 2 region
for(int i = x; i < x + FRAGMENT_SIZE; i++){
for(int j = y; j < y + FRAGMENT_SIZE; j++){
int indexI = i % LATTICE_X_SIZE; // wrap around
int indexJ = j % LATTICE_Y_SIZE;

Oh, you've got a wraparound here! Caution, Danger ahead!
Namely: danger of dining philosopher's lockup.
e.g. if x happens to be (LATTICE_X_SIZE-1), then its right
neighbour is actually (0), but it is locked later!
lattice[indexI][indexJ].lock(); // locking in increasing order to avoid deadlock

The way you wrote it, the locks are *not* obtained in increasing order!

You'll have to use two loops, one for those cases where
"x >= LATTICE_X_SIZE - FRAGMENT_SIZE"
that will deal with the wrapped part of the fragment,
and another one for the normal case, clipping rather than
wrapping on overflow.
And ditto for y, of course.
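
An alternative to the two explicit loops (untested sketch): collect the
wrapped indices first and take the locks in ascending order, so no cyclic
wait can occur regardless of where the fragment wraps:

    // the TreeSets sort the wrapped indices, so the locks are always
    // acquired in ascending (row, column) order
    java.util.TreeSet<Integer> rows = new java.util.TreeSet<Integer>();
    java.util.TreeSet<Integer> cols = new java.util.TreeSet<Integer>();
    for (int i = x; i < x + FRAGMENT_SIZE; i++) rows.add(i % LATTICE_X_SIZE);
    for (int j = y; j < y + FRAGMENT_SIZE; j++) cols.add(j % LATTICE_Y_SIZE);
    for (int i : rows) {
        for (int j : cols) {
            lattice[i][j].lock();
        }
    }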

You could also give up locking element-wise, and instead lock on the
inner arrays. It's a compromise between locking the whole lattice in
one, and the effort of locking pow(FRAGMENT_SIZE,2) elements
individually - instead you just lock FRAGMENT_SIZE elements, and get
a somewhat higher frequency of collisions in return (but not as bad
as a big central lock).

PS:
If you wanted to use "synchronized" instead, you'd have to give up
on having FRAGMENT_SIZE configurable. If you think that you'll
never need larger fragments than 2x2, you could hardcode four
variables to point to the respective four lattice elements (or two
for the inner arrays), and synchronize on them.
 

Philipp

Oh, you've got a wraparound here!   Caution, Danger ahead!
Namely: danger of dining philosopher's lockup.

You are absolutely right, thanks for spotting that!
I think the lockup can only happen if the number of threads NT >=
LATTICE_X_SIZE / (FRAGMENT_SIZE-1), or the corresponding expression in
the Y direction, whichever is smaller. If the number of threads is below
this, my proposal should not deadlock (i.e. there's a philosopher missing
at the dinner table).

You'll have to use two loops, one for those cases where
  "x >= LATTICE_X_SIZE - FRAGMENT_SIZE"
that will deal with the wrapped part of the fragment,
and another one for the normal case, clipping rather than
wrapping on overflow.  

Yes, that would be the more general solution.

Phil
 

Andreas Leitgeb

Patricia Shanahan said:
Another option that works in many cases, and reduces conditional
branches, is to make the array a bit bigger than the real size. Add a
strip FRAGMENT_SIZE-1 wide to the right and bottom edges, without
changing LATTICE_X_SIZE or LATTICE_Y_SIZE.

It reduces conditional branches, but it doesn't solve the cyclic
obtaining of locks.
 

Daniel Pitts

Philipp said:
Philipp said:
Hello,
I've come across a threading problem for which I can't find a nice
solution. (SSCCP at end of post)
I have a bidimensional array of objects (view it as a 2D lattice). I
want to make atomic operations on random square 2x2 regions of the
lattice. Thus I want to lock the 4 objects of the region, then perform
the change, then unlock those objects. Several threads should be
allowed to work on the array at the same time, if each thread is
accessing a 2x2 region which does not overlap with that of another,
they should be capable of doing the work in a parallel way.
How should I design the code to achieve this?
Here is the design I would consider:
Note, this is *completely* untested. It is also just a first iteration.
Replacing "boolean[][] locks" with a BitSet may be advantageous,
depending on the size of your data set.

Your approach is very nice. It is reusable and flexible, particularly
through the use of the Operator interface and generics. The only
downside I see, is that all threads have a common lock ("locks") they
need to acquire to set/unset the lock table. Although this lock is
held only for short times, I guess that if I increase the number of
threads and the data array size, it may become a contention point. I'm
thinking about arrays 2000x2000 and maybe 8 threads working on it (ie.
quad core + hyperthreading).

Phil
Don't guess, measure. It *may* be a contention point, but until you
determine that it is, don't try to work around it. If your Operation is
very fast, it *might* be an issue, but if it is "kind of fast" or even
"a little slow", it is likely that the cost of locking is minimal.
 

Andreas Leitgeb

Daniel Pitts said:
Don't guess, measure. It *may* be a contention point, but until you
determine that it is, don't try to work around it. If your Operation is
very fast...

I'd expect one increment per locked element to belong to the "very fast"
category.

But I don't know if this is already the real job, or just a sscce-style
simplification. ;-)
 

Daniel Pitts

Andreas said:
I'd expect one increment per locked element to belong to the "very fast"
category.

But I don't know if this is already the real job, or just a sscce-style
simplification. ;-)
I think I saw in another post of this thread that it was in fact a
simplification.

If it was the actual job, then using AtomicIntegerArray might be the
better approach.
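
Something along these lines, untested (note this only makes the individual
increments atomic, not the whole 2x2 region; "cells" is a made-up field):

    import java.util.concurrent.atomic.AtomicIntegerArray;

    // one flat array of LATTICE_X_SIZE * LATTICE_Y_SIZE counters
    AtomicIntegerArray cells =
        new AtomicIntegerArray(LATTICE_X_SIZE * LATTICE_Y_SIZE);

    void increment(int x, int y) {
        cells.incrementAndGet(x * LATTICE_Y_SIZE + y); // lock-free increment
    }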
 

Daniel Pitts

Philipp said:
Hello,
I've come across a threading problem for which I can't find a nice
solution. (SSCCP at end of post)
I have a bidimensional array of objects (view it as a 2D lattice). I
want to make atomic operations on random square 2x2 regions of the
lattice. Thus I want to lock the 4 objects of the region, then perform
the change, then unlock those objects. Several threads should be
allowed to work on the array at the same time, if each thread is
accessing a 2x2 region which does not overlap with that of another,
they should be capable of doing the work in a parallel way.

How should I design the code to achieve this?

I was thinking about this a little longer, and one might be able to
create an algorithm that takes a Region and blocks as long as any
previously added Region is in use. I'm working on a prototype right
now, I'll post when I've completed it.
 

Philipp

I was thinking about this a little longer, and one might be able to
create an algorithm that takes a Region and blocks as long as any
previously added Region is in use.  I'm working on a prototype right
now, I'll post when I've completed it.

I don't know if this is what you meant, but a possible approach would be
a non-blocking algorithm.
1. Take a snapshot of the objects in the region.
2. Do the work on them
3. Check if the region has not been modified.
4. Atomically commit the change if the region is the same as in step 1.

This might be particularly interesting if contention is not expected
(few threads, big array, small region) because we can limit
synchronization to point 4. Is this a way to follow?
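
Very rough sketch of what I mean (completely untested; it assumes Counter
gets plain get()/set() accessors, and "commitLock" is a made-up field that
is only held for the validate-and-commit step):

    private final Object commitLock = new Object();

    public void incrementOptimistic(int x, int y) {
        while (true) {
            int[][] before = new int[FRAGMENT_SIZE][FRAGMENT_SIZE];
            // 1. snapshot the region (with wrap-around as before)
            for (int i = 0; i < FRAGMENT_SIZE; i++)
                for (int j = 0; j < FRAGMENT_SIZE; j++)
                    before[i][j] = lattice[(x + i) % LATTICE_X_SIZE]
                                          [(y + j) % LATTICE_Y_SIZE].get();
            // 2. do the (possibly expensive) work on the snapshot -- here just +1
            // 3. + 4. validate and commit atomically
            synchronized (commitLock) {
                boolean unchanged = true;
                for (int i = 0; i < FRAGMENT_SIZE && unchanged; i++)
                    for (int j = 0; j < FRAGMENT_SIZE && unchanged; j++)
                        unchanged = before[i][j] ==
                            lattice[(x + i) % LATTICE_X_SIZE]
                                   [(y + j) % LATTICE_Y_SIZE].get();
                if (unchanged) {
                    for (int i = 0; i < FRAGMENT_SIZE; i++)
                        for (int j = 0; j < FRAGMENT_SIZE; j++)
                            lattice[(x + i) % LATTICE_X_SIZE]
                                   [(y + j) % LATTICE_Y_SIZE].set(before[i][j] + 1);
                    return;
                }
            }
            // someone else modified the region in between: retry
        }
    }

One caveat: comparing values cannot tell "unchanged" apart from "changed and
changed back", so this only works if that is acceptable for the real job.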

Phil
 

Daniel Pitts

Philipp said:
I don't know if this is what you meant, but a possible approach would be
a non-blocking algorithm.
1. Take a snapshot of the objects in the region.
2. Do the work on them
3. Check if the region has not been modified.
4. Atomically commit the change if the region is the same as in step 1.

This might be particularly interesting if contention is not expected
(few threads, big array, small region) because we can limit
synchronization to point 4. Is this a way to follow?

Phil
I was thinking more about not having explicit locks per object, but
instead having a "Lock Request" that puts its region into a specific
data structure. That Lock Request would block as long as the current
region intersects any region currently ahead of it in that data
structure. Once work is complete, that Lock Request can be removed from
that data-structure, unblocking other threads waiting for that region.
 

Daniel Pitts

Daniel said:
I was thinking more about not having explicit locks per object, but
instead having a "Lock Request" that puts its region into a specific
data structure. That Lock Request would block as long as the current
region intersects any region currently ahead of it in that data
structure. Once work is complete, that Lock Request can be removed from
that data-structure, unblocking other threads waiting for that region.
Here is my idea put into code. This is *completely* untested. It is also
not verified for correctness. I *think* it is correct, but if someone
can point out flaws, I'd be interested in them.

--- GridLock.java
package net.virtualinfinity.concurrent;

import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

/**
* Locks based on concurrent access to a 2d space.
* @author Daniel Pitts
*/
public class GridLock {
private static final
AtomicReferenceFieldUpdater<Lock, Lock> nextUpdater =
AtomicReferenceFieldUpdater.newUpdater(Lock.class,
Lock.class, "next");
public class Lock {
private final Region region;
private volatile Lock next;
private volatile boolean complete;

private Lock(Region region, Lock next) {
this.region = region;
this.next = next;
}

private boolean compareAndSetNext(Lock oldNext, Lock newNext) {
return nextUpdater.compareAndSet(this, oldNext, newNext);
}

public void release() {
complete = true;
synchronized (this) {
notifyAll();
}
cleanLocks();
}
}

AtomicReference<Lock> head = new AtomicReference<Lock>();

/**
* Locks a region. Blocks until existing locks that intersect
* the region are released.
* @param region the region to lock
* @return a Lock handle.
* @throws InterruptedException if the thread is interrupted
*/
public Lock lock(Region region) throws InterruptedException {
Lock release = null;
try {
while (true) {
final Lock h = head.get();
release = new Lock(region, h);
if (!head.compareAndSet(h, release)) {
continue;
}
Lock cur = release;
Lock next = cur.next;
while (next != null) {
if (next.region.intersects(region)) {
while (!next.complete) {
synchronized (next) {
next.wait();
}
}
Lock nn = next.next;
cur.compareAndSetNext(next, nn);
}
cur = next;
next = cur.next;
}
Lock ret = release;
release = null;
return ret;
}
} finally {
if (release != null) {
release.release();
}
}
}

private void cleanLocks() {
Lock head;
head = this.head.get();
while (head.complete) {
Lock next = head.next;
if (head.complete) {
this.head.compareAndSet(head, next);
}
head = this.head.get();
}
Lock cur = this.head.get();
if (cur == null) {
return;
}
Lock next = cur.next;
while (true) {
if (next == null) {
return;
}
while (next.complete) {
cur.compareAndSetNext(next, next.next);
next = cur.next;
if (next == null) {
return;
}
}
cur = next;
next = cur.next;
}
}

}

--- Region.java
package net.virtualinfinity.concurrent;

/**
* Represents a rectangular region on a grid.
* @author Daniel Pitts
*/
public final class Region {
private final int x1;
private final int y1;
private final int x2;
private final int y2;

private Region(int x1, int y1, int x2, int y2) {
this.x1 = x1;
this.y1 = y1;
this.x2 = x2;
this.y2 = y2;
}

public static Region bySize(int x, int y, int width, int height) {
return new Region(x, y, x+width-1, y+height-1);
}

public static Region byBounds(int x1, int y1, int x2, int y2) {
return new Region(Math.min(x1, x2),
Math.min(y1, y2),
Math.max(x1, x2),
Math.max(y1, y2));
}

public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof Region)) return false;

Region region = (Region) o;

return x1 == region.x1 && x2 == region.x2 &&
y1 == region.y1 && y2 == region.y2;

}

public int hashCode() {
return 31 * (31 * (31 * x1 + y1) + x2) + y2;
}

public boolean intersects(Region other) {
return other.x2 >= x1 && other.y2 >= y1 &&
x2 >= other.x1 && y2 >= other.y1;

}
}
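
A hypothetical usage sketch (untested), guarding work on one 2x2 block of
the lattice from the original post:

    GridLock gridLock = new GridLock();

    GridLock.Lock handle = gridLock.lock(Region.bySize(x, y, 2, 2));
    try {
        // work on lattice[x..x+1][y..y+1]; lock() only blocks while an
        // earlier, still-held Lock intersects this region
    } finally {
        handle.release();
    }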
 

Daniel Pitts

Daniel Pitts wrote:
Important code correction:
notice the "synchronized (next)" section has changed. There was a
possible race condition that would cause the "lock" method to block
indefinitely.

Replace the "lock" method in the previous post with the following version:

/**
* Locks a region. Blocks until existing locks that intersect the
* region are released.
* @param region the region to lock
* @return a Lock handle.
* @throws InterruptedException if the thread is interrupted
*/
public Lock lock(Region region) throws InterruptedException {
Lock release = null;
try {
while (true) {
final Lock h = head.get();
release = new Lock(region, h);
if (!head.compareAndSet(h, release)) {
continue;
}
Lock cur = release;
Lock next = cur.next;
while (next != null) {
if (next.region.intersects(region)) {
if (!next.complete) {
synchronized (next) {
while (!next.complete) {
next.wait();
}
}
}
Lock nn = next.next;
cur.compareAndSetNext(next, nn);
}
cur = next;
next = cur.next;
}
Lock ret = release;
release = null;
return ret;
}
} finally {
if (release != null) {
release.release();
}
}
}
 

Mark Space

Patricia said:
The first and third requests do not overlap, and could run in parallel,
but (1,1) has to block until (0,0) is over, and (2,2) would block
because it overlaps (1,1) which is ahead of it.

I assume he wants to prevent starvation. Depending on the size of the
requests, esp. if larger than 2x2, Daniel's suggestion might be
sensible. If the requests come in a different order, say (0,0), (2,2),
(1,1) then there's less contention; you're just subject to the vagaries of
the request order.
 
