Why is boost::shared_ptr so much slower?


Keith H Duggar

As to 2) well there are at least 3 experts who hold my view:
Brian Goetz, Joshua Bloch, and Chris Thomasson. Here is an

Strike that. It seems Chris refers to boost::shared_ptr as having
"normal" thread-safety so I guess he doesn't agree with those other
two. Apologies, Chris, for misrepresenting what you wrote.

KHD
 

Chris M. Thomasson

Keith H Duggar said:
Strike that. It seems Chris refers to boost::shared_ptr as having
"normal" thread-safety so I guess he doesn't agree with those other
two. Apologies, Chris, for misrepresenting what you wrote.

No problem at all Keith. FWIW, one can make `boost::shared_ptr' strongly
thread-safe by adding some external synchronization. For instance, take this
simple solution to the classic reader/writer problem:
__________________________________________________________________
static boost::shared_ptr<foo> g_foo;


void writers() {
    for (;;) {
        boost::shared_ptr<foo> l_foo(new foo);
        mutex_lock();
        g_foo = l_foo;
        mutex_unlock();
    }
}


void readers() {
    for (;;) {
        mutex_lock();
        boost::shared_ptr<foo> l_foo(g_foo);
        mutex_unlock();
        l_foo->something();
    }
}
__________________________________________________________________




Otherwise, shared_ptr as-is does not have what it takes to do that on its
own. BTW, the scenario above will not result in memory consumption blow up
because of the deterministic nature of reference counting.
 

Chris M. Thomasson

"basic/normal" thread-safety is what is normally meant by thread
safety (e.g. Posix guarantees). "Strong" thread safety is
something more: it means that the object state can be modified
(from the client point of view) from several threads
simultaneously. It can be useful in a few specific cases
(message queues, etc.), but is expensive, and not generally
useful enough to warrant the expense.

FWIW, there are several ways to implement strongly thread-safe smart
pointers without using any mutual exclusion synchronization whatsoever such
that the implementation can be lock-free, or even wait-free.
 

Juha Nieminen

Sam said:
A dynamic_cast from a superclass to a subclass, a derived class, only
works if the superclass has, at least, a virtual destructor.

Since the smart pointer was told what the derived class type is, why
would it do a dynamic_cast? What would be the point?

The only difference between a dynamic_cast and a static_cast in this
case is that the former might return a null pointer. If for whatever
reason the smart pointer was told that the object type is A but in
reality it's an object of different type B, you will get buggy behavior
regardless of whether the smart pointer uses dynamic_cast or
static_cast: In the former case you will get a null pointer access, in
the latter memory trashing (as the member functions of the object are
called with the wrong type of object). Either situation is completely
erroneous.
I thought that the whole point of a shared_ptr is so that the referenced
object may be destroyed at the appropriate time. If you don't want the
object destroyed, you don't need a shared_ptr.

You might not have an option. You could have, for example, a function
which takes a boost::shared_ptr as parameter, and thus you have no other
option but to give it one. However, if you don't want to allocate the
object dynamically just to call that function, but would rather use a
stack-allocated object, you can still do it, as boost::shared_ptr
supports being told not to try to destroy the object.
 

James Kanze

[snip same old arguments, i.e. that some functions are only
safe when called on *different* objects and that it is this
conditional safety that "all the experts" mean when they say
"thread-safe"]
If you'd read the document you cite, it points out quite
clearly that the so-called "strong thread-safety" is a very
naïve meaning for thread safety.
It points out nothing about the notion being "naive". That is
your coloration. It simply points out the likely costs.

It doesn't use the word "naive", no. But that's a more or less
obvious interpretation of what it does say---that requiring the
"strong" guarantee is a more or less naive interpretation of
thread safety.
The crux of our disagreement is two fold 1) you hold that a
function can be "thread-safe" even if it conditionally imposes
some extra requirements such as different objects, buffer
pointers, etc 2) you hold that "all the experts" agree with
you.
As to 1) I simply disagree.

Then practically speaking, thread-safety is a more or less
useless term, except in a few special cases.

As I pointed out, my meaning is the one Posix uses, which is a
pretty good start for "accepted use" of a term.
I think something is "thread-safe" only if it is the naive
sense of "if the class works when there is only a
single-thread and is thread-safe, then it works when there are
multiple threads with *no additional synchronization coding
required* (note this was just a *toy* way of putting it, read
the article I'm about to link for a more useful wording and
discussion of it).

In other words, functions like localtime_r, which Posix
introduced precisely to offer a thread safe variant aren't
thread safe.
As to 2) well there are at least 3 experts who hold my view:
Brian Goetz, Joshua Bloch, and Chris Thomasson. Here is an
article by Brian Goetz that lays out the issues very nicely:

Except for Chris, I've never heard of any of them. But
admittedly, most of my information comes from experts in Posix
threading.
Finally, as to 1) ultimately it is a matter of definition. If
you are right and we all agree to call the "as thread-safe as
a built-in type" just "thread-safe" that would be fine too.
However, as you can see, there is disagreement and my point
here was simply that one should be a bit more careful that
just saying boost::shared_ptr is "thread-safe". Indeed, one
should call it exactly what the Boost document calls it "as
thread-safe as a built-in type" or perhaps "conditionally
thread-safe" so as to be careful and avoid confusion.

It's always worth being more precise, and I agree that when the
standard defines certain functions or objects as "thread-safe",
it should very precisely define what it means by the term---in
the end, it's an expression which in itself doesn't mean much.

Formally speaking, no object or function is required to meet its
contract unless the client code also fulfills its obligations;
formally speaking, an object or function is "thread safe" if it
defines its contractual behavior in a multithreaded environment,
and states what it requires of the client code in such an
environment. Practically speaking, I think that this would
really be the most useful definition of thread-safe as well, but
I think I'm about the only person who sees it that way. The
fact remains, however, that Posix and others do define
thread-safety in a more or less useful form, which is much less
strict than what you seem to be claiming.
And by the way, that last point is not just some pointless
nitpick. Even in the last year and a half at work, I caught
(during review) three cases of thread-unsafe code that was a
result of a boost::shared_ptr instance being shared unsafely
(all were cases of one thread writing and one reading). When I
discussed the review with the coders all three said exactly
the same thing "But I thought boost::shared_ptr was
thread-safe?". Posts like Juha's that say unconditionally
"boost::shared_ptr is thread-safe" continue to help perpetuate
this common (as naive as you might say it is)
misunderstanding.

OK. I can understand your problem, but I don't think that the
problem is with boost::shared_ptr (or even with calling it
thread-safe); the problem is education. In my experience, the
vast majority of programmers don't understand threading issues
in general: I've seen more than a few cases of people putting
locks in functions like std::vector<>::operator[], which return
references, and claiming the strong thread-safe guarantee. Most
of the time, when I hear people equating strong thread-safety
with thread-safety in general, they are more or less at about
this level---and the word naive really does apply. Just telling
them that boost::shared_ptr is not thread safe in this sense is
treating the symptom, not the problem, and will cause problems
further down the road. (Again, IMHO, the best solution would be
to teach them that "thread-safety" means that the class has
documented its requirements with regards to threading somewhere,
and that client code has to respect those requirements, but I
fear that that's a losing battle.)
 

James Kanze

"James Kanze" <[email protected]> wrote in message
On Aug 21, 5:05 pm, "Chris M. Thomasson" <[email protected]>
wrote:
Okay. See, when I used the term overhead, I meant the total
overhead including the pointer to the private counter object
`myImpl' and the amount of memory it takes to create said
object.

I wasn't sure, but the use of "sizeof" suggested very strongly
that you were talking about sizeof, which doesn't include such
overhead.
So, that's 1 pointer + 1 pointer + 1 int. Perhaps I am in
error thinking about it that way.

There's no simple answer, and I can imagine cases where the
extra level of indirection is the cheapest solution. For
something like shared_ptr, it's a trade-off between the cost of
copying (cheaper with the extra level of indirection) and
dereference speed (cheaper with the larger "sizeof"). In most
of my code, pointers are dereferenced a lot more than they are
copied, so the choice is obvious. For my code.

(I doubt that the memory usage of smart pointers is ever much of
an issue.)
 

James Kanze

Chris M. Thomasson writes:

[...]
Also, consider another major design flaw with shared_ptr: a
class method has no way of obtaining a reference to its own
instance. Some method of class A may want to create an
instance of class B that holds a reference to the instance of
A that created it, and, say, return a reference to the newly
created instance of B. That seems to me like a reasonable, and
quite common, thing to do:
ref<B> A::method()
{
    // create an instance of B
    // B contains a reference to an instance of A, namely this object.
    // return the initial reference to B
}

That, of course, is a serious problem, and is the main reason
why I'd tend to avoid using boost::shared_ptr for managing
lifetime. Several hacks have been introduced to work around it,
e.g. enable_shared_from_this, but unless you have the rule that
you never use shared_ptr except if the class derives from
enable_shared_from_this, then you're playing with fire.
 

James Kanze

Juha Nieminen wrote:
If the decrement is atomic (not an atomic CPU instruction, but
atomic in the sense of not tearing and producing a result
that's visible to all threads that use the variable) then this
works just fine. Of course, all the other manipulations of
this variable must also be similarly atomic.

That's fine, but on what machines is the decrement atomic. On
an Intel, only if it's done as a single instruction, preceded by
a lock prefix, and I'm not even sure then (and the compilers I
use don't generate the lock prefix, even if the expression has a
volatile qualified type). On a Sparc (and most other RISC
architectures), decrementation requires several machine
instructions, period, so is not atomic.
 

Keith H Duggar

On Aug 21, 1:08 am, Keith H Duggar <[email protected]> wrote:
So what do you think one "commonly thinks of" when one says a
construct is "thread safe"?
I mean that the entire type interface is "as thread-safe
as a POSIX-thread-safe function".
In other words (quoting the Posix standard): "A function
that may be safely invoked concurrently by multiple
threads." All of the member functions of boost::shared_ptr
meet that requirement.
[snip same old arguments, i.e. that some functions are only
safe when called on *different* objects and that it is this
conditional safety that "all the experts" mean when they say
"thread-safe"]
In other words, it is what N2410
http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2007/n2410.html
proposed to call "strong thread-safe" also as Chris MT has
called in some other posts. However, note that since
"strong thread-safe" is simply the most natural extension
of POSIX "thread-safe" to C++ types, then "thread-safe"
without qualification should mean "strong thread-safe" and
that is consistent with your claim that "the experts in
the field, more or less corresponds to the Posix
definition". It's just I don't know where you got your
definition of POSIX "thread-safe" because that's not what
I recall from the POSIX document?
If you'd read the document you cite, it points out quite
clearly that the so-called "strong thread-safety" is a very
naïve meaning for thread safety.
It points out nothing about the notion being "naive". That is
your coloration. It simply points out the likely costs.

It doesn't use the word "naive", no. But that's a more or less
obvious interpretation of what it does say---that requiring the
"strong" guarantee is a more or less naive interpretation of
thread safety.

Well in one sense it is naive because it is exactly how (in my
experience) a novice programmer interprets the term from simple
common sense alone. On the other hand, it is also used by (some)
experts as a non-naive goal to which they aim; and this results
in useful thinking, coding, etc such as even in this case with
strong thread-safe smart pointers. It's analogous to how aiming
for "immutability" often leads to very useful designs. Obviously
immutability is an even more restrictive concept nevertheless it
is still very useful.
Then practically speaking, thread-safety is a more or less
useless term, except in a few special cases.

Please see above as to whether it is useful.
As I pointed out, my meaning is the one Posix uses, which is a
pretty good start for "accepted use" of a term.

Even after careful consideration of your points, I am still not
convinced that the conditional thread-safety the _r variants give
is the definition of POSIX thread-safe. If the _r implementations
are defining, then it would be; but it's rarely a good idea
for implementations to define concepts.

I suppose you might argue that they had a clear concept and _r are
just examples of that concept. However, in their design rationale
they considered other implementations for the _r variants, namely
dynamic allocation and thread-local storage, both of which would
have provided strong thread-safety. And it seems to me the choice
not to provide the thread-local storage strong solution was simply
incidental rather than fundamental. In other words, some practical
portability issues and not fundamental concepts controlled the
POSIX implementation decision.
In other words, functions like localtime_r, which Posix
introduced precisely to offer a thread safe variant aren't
thread safe.

Correct, the _r variants are "conditionally" thread-safe and I
think the implementation choice was largely incidental to the
concept of "thread-safe".
Except for Chris, I've never heard of any of them. But
admittedly, most of my information comes from experts in Posix
threading.

Can you please tell us some of the experts you have in mind?
It's always worth being more precise, and I agree that when the
standard defines certain functions or objects as "thread-safe",
it should very precisely define what it means by the term---in
the end, it's an expression which in itself doesn't mean much.

Well for novice programmers it seems to mean something clear by
simple default of common sense language: if my use of it works
with a single thread then it works as-is with multiple threads.
Formally speaking, no object or function is required to meet its
contract unless the client code also fulfills its obligations;
formally speaking, an object or function is "thread safe" if it
defines its contractual behavior in a multithreaded environment,
and states what it requires of the client code in such an
environment. Practically speaking, I think that this would
really be the most useful definition of thread-safe as well, but
I think I'm about the only person who sees it that way.

This is an interesting point. However, I think you might agree
there are some common-sense limits to what those contracts can
require. For example, would you consider:

/* thread-safety : foo must be wrapped in a mutex lock/unlock
 * pairing. If this requirement is met then foo is thread-safe.
 */
int foo();

the foo() above to be thread-safe? I wouldn't. And yet clearly
it has a contract that defines its "thread-safety". The POSIX _r
functions have a contract like

/* thread-safety : the memory location *result must not be modified
 * by another thread until localtime_r returns. If this requirement
 * is met then localtime_r is thread-safe.
 */
struct tm *localtime_r(const time_t *restrict timer,
                       struct tm *restrict result);

which is of course more reasonable; but, I'm still thinking this
is "conditionally thread-safe" not just "thread-safe".
The fact remains, however, that Posix and others do define
thread-safety in a more or less useful form, which is much less
strict than what you seem to be claiming.


OK. I can understand your problem, but I don't think that the
problem is with boost::shared_ptr (or even with calling it
thread-safe); the problem is education. In my experience, the

Except that using precise terms helps to educate. So calling it
"conditionally thread-safe" would help to simultaneously educate
while just calling it "thread-safe" helps to introduce bugs.
vast majority of programmers don't understand threading issues
in general: I've seen more than a few cases of people putting
locks in functions like std::vector<>::operator[], which return
references, and claiming the strong thread-safe guarantee. Most
of the time, when I hear people equating strong thread-safety
with thread-safety in general, they are more or less at about
this level---and the word naive really does apply. Just telling
them that boost::shared_ptr is not thread safe in this sense is
treating the symptom, not the problem, and will cause problems
further down the road.

I'm not saying we should tell them boost::shared_ptr is "not
thread-safe" because yes it would cause other problems. I'm saying
we should tell them it is "conditionally thread-safe" or "as thread
safe as a built-in type" which is what the Boost documentation says.
Because that at least encourages a curious one to ask "what are the
conditions?", "what does 'as a built-in type' mean?", etc.

We should reserve "thread-safe" for those structures that are
rock-solid, no-brainer safe with multiple threads, i.e. strongly
thread-safe.
(Again, IMHO, the best solution would be
to teach them that "thread-safety" means that the class has
documented its requirements with regards to threading somewhere,
and that client code has to respect those requirements, but I
fear that that's a losing battle.)

Can you please tell me what kind of limits if any you would place
on client code requirements? Is the foo() I gave earlier "thread-safe"?
Or what about a more complex case where the "contract" required
synchronization between all calls of multiple functions foo(),
bar(), baz(), ...? Because at some point it seems this definition
of thread-safe would become equally useless.

KHD
 

James Kanze

On Aug 23, 4:49 am, James Kanze <[email protected]> wrote:

[...]
Even after careful consideration of your points, I am still
not convinced that the conditional thread-safety the _r
variants give is the definition of POSIX thread-safe. If the
_r implementations are defining then it would be but then it's
rarely a good idea for implementations to define concepts.

I'm not too sure what your point is here. Are you implying that
the current implementations of the _r functions aren't conformant?
Or something else?

I wasn't basing my argument on any specific implementation. In
some ways, I was basing it on common sense---localtime_r
obviously isn't going to work if two threads pass it the same
buffer, for the same reason localtime doesn't work. More
generally, the thread-safety guarantees in Posix consist of
several parts. On one hand, there is the guarantee that "All
functions defined by this volume of IEEE Std 1003.1-2001 shall
be thread-safe, except that the following functions1 need not be
thread-safe. [list of functions]" On the other, there are the
requirements placed on client code, for example "Applications
shall ensure that access to any memory location by more than one
thread of control (threads or processes) is restricted such that
no thread of control can read or modify a memory location while
another thread of control may be modifying it." It seems (to
me, at least) that calling localtime_r should be considered
modifying the memory locations pointed to by the buffer
parameter, in which case, calling the function from different
threads with the same buffer argument violates the requirements,
in the same way as e.g. calling it with a null pointer as the
buffer argument violates the requirements.
I suppose you might argue that they had a clear concept and _r
are just examples of that concept. However, in their design
rationale they considered other implementations for the _r
variants, namely dynamic allocation and thread-local storage,
both of which would have provided strong thread-safety. And it
seems to me the choice not to provide the thread-local storage
strong solution was simply incidental rather than fundamental.
In other words, some practical portability issues and not
fundamental concepts controlled the POSIX implementation
decision.

Are you saying that these functions violate the concepts
otherwise defined in the Posix standard? What about setjmp?

Most functions in the Posix standard place restrictions on their
arguments (e.g. no null pointer, etc.). If the client code
violates those restrictions, either the standard defines a
specific error behavior, or undefined behavior occurs. One of
those restrictions is that "Applications shall ensure that
access to any memory location by more than one thread of control
(threads or processes) is restricted such that no thread of
control can read or modify a memory location while another
thread of control may be modifying it." If the function
modifies memory, then this restriction applies; at least as I
read it, *p = ... isn't the only way of modifying memory.

Note that Posix doesn't really specify this as clearly as it
should, but it seems reasonable to consider that this
restriction only applies to modifications that the application
specifically requests. It applies to the memory pointed to by
the buffer argument of localtime_r, for example, but not to any
memory used internally by localtime_r---that's the
responsibility of the implementation.

Also, there are two interpretations as to how this applies to
C++ objects. I very strongly believe that it means that
external synchronization is only required if the application
requests a modification of the logical value of an object, but
others have argued that the const'ness of a function is
determinate. This has a definite effect for classes like
std::string---does something like:
    if ( s[ 0 ] == t[ 0 ] )
require external synchronization if it occurs in two different
threads, and s is a non-const std::string?
Correct, the _r variants are "conditionally" thread-safe and I
think the implementation choice was largely incidental to the
concept of "thread-safe".

The problem with this point of view is that Posix very
explicitly states that they are thread safe (in §2.9.1). The
only functions Posix defines as conditionally thread-safe are
ctermid(), tmpnam(), wcrtomb() and wcsrtombs(). (In all cases,
they are required to be thread safe unless passed a null
pointer.)
Can you please tell us some of the experts you have in mind?

The authors of the Posix standard, naturally:). It's also the
position taken by the draft C++ standard with regards to
thread-safety. Although as far as I can see, the C++ standard
doesn't actually use the term "thread-safety" in this
regard---given the apparent ambiguity, that seems like a wise
decision.
Well for novice programmers it seems to mean something clear by
simple default of common sense language: if my use of it works
with a single thread then it works as-is with multiple threads.

In this context, I fear "clear and simple" is the equivalent of
"naive". As I said, I've seen code which carefully locks
internal accesses, then returns a reference to internal data. I
don't think we can base much on what "novice programmers" think
with regards to threading.
This is an interesting point. However, I think you might agree
there are some common-sense limits to what those contracts can
require.

I'm not sure. The point is that the code has defined a contract
that it claims to be valid in a threaded context. And IMHO,
that's the most important aspect if I want to use the code in a
multithreaded context---I know what the contract is, and what I,
as a client, have to do.
For example, would you consider:
/* thread-safety : foo must be wrapped in a mutex lock/unlock
 * pairing. If this requirement is met then foo is thread-safe.
 */
int foo();
the foo() above to be thread-safe?

Except that the wording of the guarantee doesn't seem very
precise, yes. The author has considered the issues, and decided
what he wants to contractually guarantee. (As I said, this is
*my* definition; I don't think it's widely shared.)
I wouldn't. And yet clearly it has a contract that defines its
"thread-safety". The POSIX _r functions have a contract like
/* thread-safety : the memory location *result must not be modified
 * by another thread until localtime_r returns. If this requirement
 * is met then localtime_r is thread-safe.
 */
struct tm *localtime_r(const time_t *restrict timer,
                       struct tm *restrict result);
which is of course more reasonable;

The contract is more complex than that. The contract says that
no other thread may access the memory locations defined by
*result, or undefined behavior occurs. And this requirement
holds not only during the call to localtime_r, but until the
pointed to buffer ceases to exist.
but, I'm still thinking this is "conditionally thread-safe"
not just "thread-safe".

That's your right. Just remember that you're using a different
definition than Posix.
Except that using precise terms helps to educate. So calling
it "conditionally thread-safe" would help to simultaneously
educate while just calling it "thread-safe" helps to introduce
bugs.

During the education process, you're obviously going to have to
define precisely what you mean by each term. And systematically
distinguishing between normal/basic thread safety and strong
thread safety might be a good idea---don't use "thread safety"
at all without a modifier.
vast majority of programmers don't understand threading
issues in general: I've seen more than a few cases of people
putting locks in functions like std::vector<>::operator[],
which return references, and claiming the strong thread-safe
guarantee. Most of the time, when I hear people equating
strong thread-safety with thread-safety in general, they are
more or less at about this level---and the word naive really
does apply. Just telling them that boost::shared_ptr is not
thread safe in this sense is treating the symptom, not the
problem, and will cause problems further down the road.
I'm not saying we should tell them boost::shared_ptr is "not
thread-safe" because yes it would cause other problems. I'm
saying we should tell them it is "conditionally thread-safe"
or "as thread safe as a built-in type" which is what the Boost
documentation says. Because that at least encourages a
curious one to ask "what are the conditions?" what does "as a
built-in type mean?" etc.
We should reserve "thread-safe" for those structures that are
rock-solid, no-brainer safe with multi-threads ie strong
thread-safe.

I'd tend to avoid thread-safe completely with novices, given the
confusion surrounding the term. But I'd also treat
"thread-safe" much like "volatile", spending some time
explaining that it doesn't mean what you think it means. (And
that regardless of the definition, just using thread-safe
components everywhere doesn't guarantee thread safety.)
Can you please tell me what kind of limits if any you would
place on client code requirements? Is the foo() I gave earlier
"thread-safe"? Or what about a more complex case where the
"contract" required synchronization between all calls of
multiple functions foo(), bar(), baz(), ...? Because at some
point it seems this definition of thread-safe would become
equally useless.

I don't think so. There are lots of components out there that
don't document anything, and that might have these sort of
requirements. If you know about them, you can take necessary
measures, whatever they might be, and safely use the components
in multithreaded code. If you don't know about them, you can't.

Reasonably, of course, there does occur a point where the
requirements are so restrictive that you won't use the
component.
 

SG

Also, consider another major design flaw with shared_ptr: a
class method has no way of obtaining a reference to its own
instance. [...]

That, of course, is a serious problem, and is the main reason
why I'd tend to avoid using boost::shared_ptr for managing
lifetime.  Several hacks have been introduced to work around it,
e.g. enable_shared_from_this, [...]

You are not suggesting that this is more of a hack than being forced
to derive from some other special class to make it work with an
"intrusive smart pointer" class, are you?

Cheers!
SG
 

Juha Nieminen

Pete said:
To ensure that the conversion is valid.

Then what should, in your opinion, eg. operator-> do if the
dynamic_cast returns a null pointer? Raise an exception? An assertion
failure? Return it as-is (which will invariably cause a null pointer
dereferencing)? Something else?
 

Juha Nieminen

Pete said:
To use C++ terminology: Casting from any base class to a derived class
incurs a penalty...

Could you please tell me exactly what kind of penalty is incurred in
this situation:

class A { int i; };
class B: public A { int j; };

A* basePtr = new A;
B* derivedPtr = static_cast<B*>(basePtr);
 

Chris M. Thomasson

Juha Nieminen said:
Could you please tell me exactly what kind of penalty is incurred in
this situation:

class A { int i; };
class B: public A { int j; };

A* basePtr = new A;
B* derivedPtr = static_cast<B*>(basePtr);

The penalty is that if `B' is used: BAM, you're dead! Perhaps you meant:


class A { int i; };
class B: public A { int j; };

A* basePtr = new B;
B* derivedPtr = static_cast<B*>(basePtr);


?
 

Ian Collins

Juha said:
Then what should, in your opinion, eg. operator-> do if the
dynamic_cast returns a null pointer? Raise an exception? An assertion
failure? Return it as-is (which will invariably cause a null pointer
dereferencing)? Something else?

That's a design decision. With my smart pointer, the action on a null
pointer is a trait, along with thread safety.
 

James Kanze

Also, consider another major design flaw with shared_ptr:
a class method has no way of obtaining a reference to its
own instance. [...]
That, of course, is a serious problem, and is the main
reason why I'd tend to avoid using boost::shared_ptr for
managing lifetime. Several hacks have been introduced to
work around it, e.g. enable_shared_from_this, [...]
You are not suggesting that this is more of a hack than being
forced to derive from some other special class to make it work
with an "intrusive smart pointer" class, are you?

In a way, no, but in a way, yes. In the case of classical
intrusive reference counted pointers, the reference counted
pointer can only point to objects derived from this special base
class; the class defines a dicotomy: classes that are never
managed by the reference counted pointer, and classes that are
always managed by the reference counted pointer.
Boost::shared_ptr doesn't present this dichotomy---it pretends to
be valid with any object, and then requires special handling for
the most common case. The hack isn't so much in the fact that
it requires derivation from a special class, but in the way this
derivation is presented, and the fact that it isn't required
systematically (which is related to the way the derivation is
presented).
 
K

Keith H Duggar

[...]
As I pointed out, my meaning is the one Posix uses, which is
a pretty good start for "accepted use" of a term.
Even after careful consideration of your points, I am still
not convinced that the conditional thread-safety the _r
variants provide is the definition of POSIX thread-safe. If the
_r implementations are taken as defining, then it would be; but
it's rarely a good idea for implementations to define concepts.

I'm not too sure what your point is here. Are you implying that
the current implementations of the _r functions aren't conformant?
Or something else?

I'm saying that the implementations of _r functions could very
easily have ended up strong thread-safe; but did not primarily
because of incidental concerns (portability of efficient malloc
and/or thread-local storage, practical flexibility, etc). It was
not, I believe, because of a fundamental conceptual belief that
"thread safe" == "conditional thread safe".

[snip clear points regarding POSIX]

Yes, finally I'm forced to agree that the POSIX definition of
"thread safe" is "conditional thread safe" and even, it seems
to me, POSIX conforms to your notion of thread safe. I.e., there
is a defined contractual requirement "Applications shall ensure
that ..." and it is in that context that POSIX thread-safe is
defined.
The authors of the Posix standard, naturally:). It's also the
position taken by the draft C++ standard with regards to
thread-safety. Although as far as I can see, the C++ standard
doesn't actually use the term "thread-safety" in this
regard---given the apparent ambiguity, that seems like a wise
decision.

Agreed, seems like a wise decision. So too was it wise for boost
to add the "as a built-in type" to the boost::shared_ptr claims.
In this context, I fear "clear and simple" is the equivalent of
"naive". As I said, I've seen code which carefully locks
internal accesses, then returns a reference to internal data. I
don't think we can base much on what "novice programmers" think
with regards to threading.

However, and I think we agree on this, it is a sign at least
that we should be a bit more careful than just claiming a class
is "thread-safe".
That's your right. Just remember that you're using a different
definition than Posix.

Yes, I'm convinced you are right about POSIX now. Thanks for
the patience and clear points.
During the education process, you're obviously going to have to
define precisely what you mean by each term. And systematically
distinguishing between normal/basic thread safety and strong
thread safety might be a good idea---don't use "thread safety"
at all without a modifier.

Yes, I agree with that entirely and that really is the crux of
my point. And specifically one should not go around saying boost
shared_ptr is just plain unqualified "thread-safe".

KHD
 
J

Juha Nieminen

Pete said:
Meaningless question. You shouldn't get there. If the conversion fails
your object isn't valid, just like any other construction failure.

Which is exactly the reason why it doesn't make sense to use
dynamic_cast in the first place.
 
J

Juha Nieminen

Chris said:
The penalty is that if `B' is used: BAM, you're dead! Perhaps you meant:


class A { int i; };
class B: public A { int j; };

A* basePtr = new B;
B* derivedPtr = static_cast<B*>(basePtr);

Yes, I meant that, but typoed.
 
J

Joshua Maurice

gcc manual, section 5.47, under the description of atomic functions, states
the following:

  In most cases, these builtins are considered a full barrier. That is, no
  memory operand will be moved across the operation, either forward or
  backward. Further, instructions will be issued as necessary to prevent the
  processor from speculating loads across the operation and from queuing
  stores after the operation.

I interpret this as stating that the results of these atomic functions will
be immediately visible to all other threads.

(Now, the GCC manual is not as clear as I'd like, so I'm not the most
comfortable posting this, but I think I'm right. Correct me if I'm
wrong.)

I'm not sure if you're misspeaking or actually misunderstanding. When
the term "barrier" is used in the context of threading, the results
are not immediately visible to other threads, nor even visible on the
next matching barrier. Barriers are conditional visibility. Ex:

//static init
a = 0;
b = 0;

int main()
{ //start thread 1
//start thread 2
}

//thread 1
a = 1;
write_barrier();
b = 2;

//thread 2
cout << b << " ";
read_barrier();
cout << a << endl;

Without the barriers, you may see any of the four possible outputs:
0 0
0 1
2 0
2 1
With the barriers in place, this only removes one possible output,
leaving the three possibilities:
0 0
0 1
2 1

The definition of visibility semantics is effectively: "If a read
before a read_barrier sees a write after a write_barrier, then all
reads after that read_barrier see all writes before that
write_barrier."

To nitpick your quote:
I interpret this as stating that the results of these atomic functions will
be immediately visible to all other threads.
It is not the case that the write will be immediately visible to all
other threads. Moreover, even if the other thread executes the correct
barrier instruction(s), that write may still not be visible.

If you want guaranteed visibility, use mutexes. However, even a mutex
in one thread does not guarantee that the write becomes immediately
visible to other threads. The other threads still need to execute the
matching "mutex lock" instruction(s).

In other words, for portable C++, and for assembly on most modern
desktop processors, to have any guarantee whatsoever of the order of
visibility of writes from one thread to another, *both* threads must
each execute a synchronization primitive. No exceptions.
 
