On 11/12/2010 03:12, Joshua Maurice wrote:
Normally I instantiate all my singletons up front
(before threading), but I decided to quickly roll a new
singleton template class just for the fun of it
(a thread-safe Meyers Singleton):
namespace lib
{
    template <typename T>
    class singleton
    {
    public:
        static T& instance()
        {
            if (sInstancePtr != 0)
                return static_cast<T&>(*sInstancePtr);
            { // locked scope
                lib::lock lock1(sLock);
                static T sInstance;
                { // locked scope
                    lib::lock lock2(sLock); // second lock should emit memory barrier here
                    sInstancePtr = &sInstance;
                }
            }
            return static_cast<T&>(*sInstancePtr);
        }

    private:
        static lib::lockable sLock;
        static singleton* sInstancePtr;
    };

    template <typename T>
    lib::lockable singleton<T>::sLock;

    template <typename T>
    singleton<T>* singleton<T>::sInstancePtr;
}
Even though a memory barrier is emitted by a specific
implementation of my lockable class, this obviously still
relies on the C++ compiler not re-ordering stores across
a library function call (acquiring the lock), but it works
fine for me, at least on VC++. I could mention volatile,
but I had better not, as that would start a long argument.
Roll on C++0x.
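(Sketch of what I am hoping C++0x buys here, assuming a
compiler that implements the required thread-safe
initialization of function-local statics; with that
guarantee the whole wrapper collapses back to a plain
Meyers singleton, with no explicit lock or barrier:)

template <typename T>
class singleton
{
public:
    static T& instance()
    {
        // C++0x requires concurrent first calls to block until this
        // initialization has completed, so no extra synchronization is needed.
        static T sInstance;
        return sInstance;
    }
};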
If I'm reading your code right, on the fast path you
don't have a barrier, a lock, or any other kind of
synchronization, right? If so, you realize you've coded
the naive implementation of double-checked locking? You
realize that it's broken, right? Have you even read
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
? To be clear, this has undefined behavior according to
the C++0x standard as well.
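(For reference, a sketch of what that paper's fix looks like
in C++0x terms, using std::atomic and std::mutex in place of
your lib:: types; the names here are mine, not from your code:)

#include <atomic>
#include <mutex>

template <typename T>
class dclp_singleton
{
public:
    static T& instance()
    {
        // Fast path: the acquire load synchronizes with the release store below.
        T* p = sInstancePtr.load(std::memory_order_acquire);
        if (p == 0)
        {
            std::lock_guard<std::mutex> guard(sLock);
            // Re-check under the lock in case another thread got here first.
            p = sInstancePtr.load(std::memory_order_relaxed);
            if (p == 0)
            {
                p = new T;
                // Publish only after construction has completed.
                sInstancePtr.store(p, std::memory_order_release);
            }
        }
        return *p;
    }

private:
    static std::mutex sLock;
    static std::atomic<T*> sInstancePtr;
};

template <typename T> std::mutex dclp_singleton<T>::sLock;
template <typename T> std::atomic<T*> dclp_singleton<T>::sInstancePtr(0);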
I am aware of the double-checked locking pattern, yes, and
this is not the double-checked locking pattern (there is
only one check of the pointer if you look). If a pointer
read/write is atomic it should be fine (on the
implementation I use it is, at least).
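(Expressed with C++0x atomics, that assumption could at
least be stated explicitly rather than relied on
implicitly; sketch only, not the code above:)

#include <atomic>

// Hypothetical illustration: a pointer whose loads and stores are
// guaranteed atomic by std::atomic, independent of the platform.
std::atomic<int*> gPtr(0);

void publish(int* value)
{
    gPtr.store(value);   // atomic write of the pointer
}

int* observe()
{
    // gPtr.is_lock_free() reports whether the implementation achieves
    // this atomicity without an internal lock.
    return gPtr.load();  // atomic read of the pointer
}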