Replace synchronized with AtomicInteger: does this work?

Olli Plough

Hello,

I need to access a shared variable. So I enclose the code that
accesses the shared var with a synchronized block:

synchronized(myLock) {
// access my shared variable.
}

While this works, the overhead of synchronization always applies. Now
I am trying to get the synchronization done without using synchronized,
but with a non-blocking construct where the synchronization overhead
only applies when some other thread is really waiting for entry into
the critical section:

AtomicInteger count = new AtomicInteger(0); // inst var
Semaphore sem = new Semaphore(0); // inst var

if (!count.compareAndSet(0, 1))
    sem.acquire();

// access my shared variable.

if (count.compareAndSet(1, 0))
    sem.release();

My question is whether the code above using an AtomicInteger is
thread-safe and does not violate liveness requirements. The solution
is a bit crude since it can only handle two threads, but that can be
improved. What I first want to assure myself of is whether the whole
thing is correct or not...

Any opinions?
Thanks, Oliver
 
Knute Johnson

Olli said:
Hello,

I need to access a shared variable. So I enclose the code that
accesses the shared var with a synchronized block:

synchronized(myLock) {
// access my shared variable.
}

While this works, the overhead of synchronization always applies. Now
I am trying to get the synchronization done without using synchronized,
but with a non-blocking construct where the synchronization overhead
only applies when some other thread is really waiting for entry into
the critical section:

This is exactly what synchronized does for you. If two threads aren't
attempting to enter the synchronized block at the same time, there is
very little overhead.
AtomicInteger count = new AtomicInteger(0); // inst var
Semaphore sem = new Semaphore(0); // inst var

if (!count.compareAndSet(0, 1))
    sem.acquire();

// access my shared variable.

if (count.compareAndSet(1, 0))
    sem.release();

Just do the above with a mutex (binary) semaphore and skip the
AtomicInteger.
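
Something along these lines, for example (just a sketch; the class and
field names are made up for illustration):

import java.util.concurrent.Semaphore;

// A binary semaphore (one permit) guards the critical section directly,
// with no AtomicInteger. acquire() blocks only while another thread is
// already inside the critical section.
class SharedThing {
    private final Semaphore mutex = new Semaphore(1); // one permit = binary semaphore
    private int sharedVariable;

    void update() throws InterruptedException {
        mutex.acquire();
        try {
            sharedVariable++;  // access my shared variable.
        } finally {
            mutex.release();   // always give the permit back, even on exception
        }
    }
}
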
My question is whether the code above using an AtomicInteger is
thread-safe and does not violate liveness requirements. The solution
is a bit crude since it can only handle two threads, but that can be
improved. What I first want to assure myself of is whether the whole
thing is correct or not...

Any opinions?
Thanks, Oliver

It really isn't clear what you are trying to do. If you just need to
arbitrate access between threads, synchronized is a very simple way to
go about it. If the time spent in the synchronized code is small then
your cost to the other threads will be small.
 
Olli Plough

Knute Johnson said:
It really isn't clear what you are trying to do. If you just need to
arbitrate access between threads, synchronized is a very simple way to
go about it. If the time spent in the synchronized code is small then
your cost to the other threads will be small.

Hi Knute,

it is about synchronized >always< causing some overhead, whether or
not some thread has to wait at the start of the synchronized block
because another thread is already in it. My idea with my little
solution with AtomicInteger is that the synchronization overhead only
accrues when there really is another thread that wants to enter the
critical section while some other thread is already in it. The point
is that AtomicInteger, contrary to synchronized, causes almost no
overhead when there is no other thread around that wants to enter the
critical section.

Regards, Oliver
 
Tom Anderson

Olli said:
I need to access a shared variable. So I enclose the code that accesses
the shared var with a synchronized block:

synchronized(myLock) {
// access my shared variable.
}

While this works, the overhead of synchronization always applies. Now
I am trying to get the synchronization done without using synchronized,
but with a non-blocking construct where the synchronization overhead
only applies when some other thread is really waiting for entry into
the critical section:

Nlek-nerr. You already have this.

Java monitors - the things you use via the synchronized keyword - have
been implemented like this for years now. Acquiring a lock on an unlocked
object takes a handful of instructions. Ditto for re-entering a lock that
the thread already owns. Only trying to lock a lock that is currently held
by another thread is slow.
AtomicInteger count = new AtomicInteger(0); // inst var
Semaphore sem = new Semaphore(0); // inst var

if (!count.compareAndSet(0, 1))
    sem.acquire();

// access my shared variable.

if (count.compareAndSet(1, 0))
    sem.release();

My question is whether the code above using an AtomicInteger is
thread-safe and does not violate liveness requirements.
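
Written out as a lock class, purely so the interleavings below are
easier to follow (the class name is mine, the logic is yours, and it is
the broken version under discussion, not a fix):

import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// The posted construct wrapped as lock()/unlock().
class CasAttempt {
    private final AtomicInteger count = new AtomicInteger(0);
    private final Semaphore sem = new Semaphore(0);

    void lock() throws InterruptedException {
        if (!count.compareAndSet(0, 1))
            sem.acquire();   // only reached when the CAS fails
    }

    void unlock() {
        if (count.compareAndSet(1, 0))
            sem.release();   // can leave a spare permit behind (see second trace)
    }
}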

A: CAS the count; count becomes 1, returns true
A: enters critical section
B: fails to CAS the count; returns false
B: acquires the semaphore; blocks
A: leaves critical section
A: CAS the count; count becomes 0, returns true
A: releases the semaphore; semaphore becomes 1
B: unblocks; semaphore becomes 0
B: enters critical section
B: (fails to get scheduled for a while)
A: (comes back round its loop again)
A: CAS the count; count becomes 1, returns true
A: enters critical section

You now have both threads in the critical section.

Am i missing something?

Actually, isn't this even simpler:

A: CAS the count; count becomes 1, returns true
A: enters critical section
A: leaves critical section
A: CAS the count; count becomes 0, returns true
A: releases the semaphore; semaphore becomes 1
B: CAS the count; count becomes 1, returns true
B: enters critical section
A: fails to CAS the count; returns false
A: acquires the semaphore; semaphore is 1 so carries on
A: enters critical section

But seriously, you should just use a normal monitor.

tom
 
Tom Anderson

Olli said:
it is about synchronized >always< causing some overhead, whether or
not some thread has to wait at the start of the synchronized block
because another thread is already in it. My idea with my little
solution with AtomicInteger is that the synchronization overhead only
accrues when there really is another thread that wants to enter the
critical section while some other thread is already in it. The point
is that AtomicInteger, contrary to synchronized, causes almost no
overhead when there is no other thread around that wants to enter the
critical section.

In the uncontended case, a monitor acquisition *is* an atomic
compare-and-swap. That's how it's implemented by the JVM.
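
As a rough illustration of the shape of that fast path (not the JVM's
actual monitor code), the uncontended acquire of a CAS-based lock is a
single compareAndSet:

import java.util.concurrent.atomic.AtomicBoolean;

// Illustration only, not HotSpot's implementation: a toy spin lock whose
// uncontended acquire is exactly one compare-and-set. A real monitor parks
// the waiting thread in the contended case instead of spinning.
class ToySpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    void lock() {
        while (!held.compareAndSet(false, true)) {
            Thread.yield();   // contended case: back off and retry
        }
    }

    void unlock() {
        held.set(false);      // volatile write makes the release visible
    }
}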

Go and read Dr Bacon's wonderful papers, starting here:

http://www.research.ibm.com/people/d/dfb/papers/Bacon03Retrospective.pdf

tom
 
Eric Sosman

Olli said:
Hi Knute,

it is about synchronized >always< causing some overhead, whether or
not some thread has to wait at the start of the synchronized block
because another thread is already in it.

Have you measured the overhead?
My idea with my little solution with AtomicInteger is that the
synchronization overhead only accrues when there really is another
thread that wants to enter the critical section while some other
thread is already in it. The point is that AtomicInteger, contrary to
synchronized, causes almost no overhead when there is no other thread
around that wants to enter the critical section.

Have you measured the "almost no overhead"?
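
A quick and very crude way to check, with all the usual microbenchmark
caveats (JIT warm-up, dead-code elimination, one JVM, one machine),
might look like this; treat the output as a sanity check, not a result:

import java.util.concurrent.atomic.AtomicInteger;

// Rough single-threaded comparison of an uncontended synchronized block
// against an AtomicInteger CAS.
public class UncontendedOverhead {
    static final Object LOCK = new Object();
    static int plainCounter;
    static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void main(String[] args) {
        final int n = 50000000;

        // warm-up so the JIT has compiled both loops before we time them
        runSynchronized(n);
        runAtomic(n);

        long t0 = System.nanoTime();
        runSynchronized(n);
        long t1 = System.nanoTime();
        runAtomic(n);
        long t2 = System.nanoTime();

        System.out.printf("synchronized: %d ms, AtomicInteger: %d ms%n",
                (t1 - t0) / 1000000, (t2 - t1) / 1000000);
        // print the counters so the loops can't be eliminated as dead code
        System.out.println(plainCounter + " " + atomicCounter.get());
    }

    static void runSynchronized(int n) {
        for (int i = 0; i < n; i++) {
            synchronized (LOCK) { plainCounter++; }
        }
    }

    static void runAtomic(int n) {
        for (int i = 0; i < n; i++) {
            atomicCounter.incrementAndGet();
        }
    }
}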

My late father hated to wait at traffic lights, running
his car's engine and burning gasoline to no purpose. So he
would try to find routes from Here to There that involved as
few traffic lights as possible, driving over little twisty
back roads to avoid the traffic signals that are more prevalent
around larger roads. He'd sometimes drive two or three miles
out of his way to avoid wasting gas at a twenty-second light.

Are you behaving like my father?
 
oliver789

Well, there is a diagram in Goetz' book (Java Concurrency in Practice)
on p.228 that shows throughput for a synchronized LinkedList and a
ConcurrentLinkedQueue: the ConcurrentLinkedQueue has a throughput
better by a factor slightly larger than 2. So there is a point in
thinking about getting round synchronized blocks. Despite that it's
simply fun to think about solutions based on a CAS approach.

Actually, I just realized that the class ReentrantLock does something
in the spirit of what I wrote before (well, without those errors I had
in my solution). I always had in mind that the Locks in
java.util.concurrent worked with synchronized blocks, but they don't;
they came in with java.util.concurrent in JDK 5 and originate from
Doug Lea's earlier util.concurrent classes.
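
For reference, minimal usage looks like this; when the lock is
uncontended, lock() succeeds with a single CAS on the lock's internal
state, and the queueing/parking machinery is only used when another
thread already holds it (class and field names here are just for
illustration):

import java.util.concurrent.locks.ReentrantLock;

// Plain ReentrantLock usage guarding a shared counter.
class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value;

    void increment() {
        lock.lock();
        try {
            value++;          // access the shared variable
        } finally {
            lock.unlock();    // release even if the critical section throws
        }
    }
}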

Regards, Oliver
 
Mark Space

oliver789 said:
Well, there is a diagram in Goetz' book (Java Concurrency in Practice)
on p.228 that shows throughput for a synchronized LinkedList and a
ConcurrentLinkedQueue: the ConcurrentLinkedQueue has a throughput
better by a factor slightly larger than 2.

That's actual measured data, though. Without more info, it's hard to
say whether the OP is justified in the extra execution paths he's creating.

oliver789 said:
So there is a point in thinking about getting round synchronized
blocks. Despite that it's simply fun to think about solutions based on
a CAS approach.

I'm curious what happened to the last thread the OP started on this
exact same subject.
 
