James Kanze <[email protected]> wrote:
[...]
I am afraid I don't see them as such. I can agree that it is
possible for them to coexist but *completely* orthogonal would
require them to have no effect on one another whatsoever.
Which is largely the case. RAII depends on object lifetime, by
giving objects specific behavior on their death. GC has nothing
to do with object lifetime.
Perhaps the problem is that GC has been "oversold" by Java
fanatics. Even in Java, you have to deal with object lifetime
issues.
Another problem, however, is that too many C++ programmers tend
to insist that every object have a deterministic lifetime. Many
don't, and if memory is managed automatically, most don't.
Without memory management, most destructors would be empty.
Fair enough. It is not an inherent advantage to always treat
all resources the same. If both sides of the interface agree
that different resources are to be treated differently, and
both sides of the interface are designed with this
understanding, this is perfectly fine. This works with Java.
However, when you design with RAII, all resources get treated
the same.
Only if you want them to. Memory is very special as a resource,
for a number of reasons.
Any service code that uses RAII will encapsulate all resources
and clean up in destructors. This is the way RAII code gets
designed.
RAII doesn't handle all resources. It only handles those whose
lifetime can be easily mapped to a scope, so that the resource
can be freed automatically when that scope ends. This isn't
always, or even often, the case with memory. (If the memory maps
to a scope, you just declare it in that scope, and be done with
it.)
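A minimal sketch of the kind of case RAII does handle well (the
FileHandle class here is hypothetical, just a wrapper around a C
FILE*): the resource's lifetime maps directly onto a scope, so the
destructor frees it automatically when the scope ends.

    #include <cstdio>
    #include <stdexcept>

    // Hypothetical RAII wrapper: the FILE* is acquired in the
    // constructor and released in the destructor, so the resource's
    // lifetime maps directly onto the scope of the wrapper object.
    class FileHandle
    {
    public:
        explicit FileHandle(char const* name)
            : file(std::fopen(name, "r"))
        {
            if (file == 0) {
                throw std::runtime_error("cannot open file");
            }
        }
        ~FileHandle() { std::fclose(file); }
        std::FILE* get() const { return file; }

    private:
        std::FILE* file;
        FileHandle(FileHandle const&);            // copying disabled,
        FileHandle& operator=(FileHandle const&); // to avoid a double close
    };

    void readData()
    {
        FileHandle in("data.txt");   // resource acquired here...
        // ... use in.get() ...
    }                                // ...and released here, automatically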
If the service code then gets used by client code that uses
GC, everything breaks, since the client is essentially breaking
the contract.
Why? If the contract says that destructors must be called, then
the client code calls the destructor. I don't see where the
problem is there.
Could you give an example? (Just the contract, and perhaps
a short explanation of why you think that garbage collection
would break it.)
Sorry, maybe I was not clear enough. I am talking from the
service code's point of view. A class doesn't know how it will
be used, so it can't rely any more on its destructor being run.
A class certainly should know something about how it will be
used. Something like complex knows that it won't be used to
manage a TCP connection, for example. I'm sure that that's not
what you meant, but I can't figure any other meaning to assign
to those words.
In an application, a class has a role and very specific
responsibilities. In order to perform correctly, it defines
a contract with the client code. For some classes (very few, in
my experience), that contract will require that the client code
"dispose" of them---in C++, we would say call the constructor;
in Java, the class will have a "dispose" function which must be
called, etc. In C++, we have one very big advantage over Java:
if the moment the "dispose" function must be called corresponds
to the moment the object goes out of scope, the compiler will
call the method (the destructor in C++) automatically. If it
doesn't, we still have to call it explicitly (delete operator).
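A short sketch of the two cases just described (Resource is a
hypothetical class whose destructor is its "dispose" operation):

    #include <iostream>

    // Hypothetical class whose destructor is its "dispose" operation.
    struct Resource
    {
        ~Resource() { std::cout << "disposed\n"; }
    };

    void scoped()
    {
        Resource r;          // the "dispose" moment is the end of this
        // ... use r ...     // scope, so the compiler calls the
    }                        // destructor automatically here

    void dynamic()
    {
        Resource* p = new Resource;
        // ... use *p ...
        delete p;            // the lifetime doesn't end with a scope, so
    }                        // we call the destructor explicitly, via delete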
All of this has nothing to do with garbage collection, and
remains unchanged in the presence of garbage collection. With
one big exception: without garbage collection, a lot of classes
which wouldn't otherwise require "dispose" do require it, because
it's the last chance they have to free memory. Many (most) of my
classes, for example, have the contract that you must call the
destructor *if* (and only if) there is no garbage collection.
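A sketch of what such a contract might look like (the Buffer class
here is hypothetical; the point is only that its destructor does
nothing but free memory):

    #include <cstddef>

    // Hypothetical class whose destructor does nothing but free memory.
    // Contract: the destructor must be called if (and only if) there is
    // no garbage collection; with GC, the memory is reclaimed anyway,
    // so nothing is lost if the destructor never runs.
    class Buffer
    {
    public:
        explicit Buffer(std::size_t n) : data(new char[n]) {}
        ~Buffer() { delete[] data; }

        char* get() const { return data; }

    private:
        char* data;
        Buffer(Buffer const&);            // copying disabled, to avoid
        Buffer& operator=(Buffer const&); // a double delete
    };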
Client/User code that creates an object from a class is
perfectly capable of knowing if the destructor will be run or
not since it is in control of deciding to use GC for this
object or use deterministic lifetime management (new or auto).
However, the service/library/class code cannot know and cannot
rely on the destructor always being run anymore. At that
point, any class that cleans up in its destructor is broken and
should never be used by a GC client.
No.
What does change, I guess, is that the service must specify the
fact that it must be disposed/destructed as part of its
contract. But that's really the case today anyway, since the
requirement isn't normally just to be destructed "sometime", but
to be destructed in a timely manner (something like scoped_lock,
for example). Which means that some techniques of manual memory
management are excluded as well: you can't, for example, put the
object (or a pointer to the object) in a vector, and only clean
up when there are too many objects in the vector.
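A minimal sketch of the scoped_lock idea (the Mutex class here is
hypothetical), showing why the contract is about timely destruction,
not destruction "sometime":

    // Hypothetical mutex; in real code this would wrap e.g. a pthread
    // or boost mutex.
    class Mutex
    {
    public:
        void lock()   { /* acquire the underlying lock */ }
        void unlock() { /* release the underlying lock */ }
    };

    // The lock is released in the destructor, so it is held exactly
    // for the scope of the guard object.
    class ScopedLock
    {
    public:
        explicit ScopedLock(Mutex& m) : mutex(m) { mutex.lock(); }
        ~ScopedLock() { mutex.unlock(); }

    private:
        Mutex& mutex;
        ScopedLock(ScopedLock const&);            // copying disabled
        ScopedLock& operator=(ScopedLock const&);
    };

    void update(Mutex& m, int& counter)
    {
        ScopedLock guard(m);   // lock acquired here
        ++counter;
    }                          // and released here; deferring the release
                               // (say, keeping pointers to dynamically
                               // allocated guards in a vector and cleaning
                               // up "later") would break the contract,
                               // with or without garbage collection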
If you write a class that will be used by other code, you must
either:
- Make sure it will never be used by GC client code and then you can
use RAII.
- Not use RAII at all, in case it gets used by GC client code.
- If you only use memory, you might be OK. Implement the class in a
RAII way, but if the class gets used by a GC client, hopefully the GC
will also clean up the memory you used when the destructor does not get
run.
More specifically, you have to specify a contract, that the
client code has to respect. Nothing new there, and you have to
do that with or without garbage collection.
If you write code that uses other code, you must know how it
was implemented internally so that you know whether you can use
GC or whether you can use RAII. To me, this breaks the concepts
of encapsulation and implementation hiding. Unless you *know*
that a class only uses memory as a resource, you can't use it
with GC, but I do not believe that the user should know how the
internals of the class are implemented.
I think you're getting hung up on "resources". Classes have
responsibilities and behavior. Some classes have a very
definite end of lifetime, with specific behavior. Client code
must ensure that this is respected. Garbage collection changes
nothing in all this.
I am sorry; maybe the domains we work in are different, maybe
it's because for me it is perfectly normal to use resources
other than memory, maybe it's because I have not seen a large
application that uses both RAII and GC at the same time, but
what I see is added complexity due to the lack of certainty
and the need to know too much about the other side of the
interface.
Independently of the domain: if the software is well written,
all you need to know about the other side of the interface is
the contract it adheres to. Beyond that, I'm sure that there are
domains where many, or even most, objects do require timely
disposal/destruction, and their lifetimes regularly end at the
end of scope. Even in my applications, there are some:
typically, a Transaction will be allocated on the stack, for
example, and its destructor will effectuate a roll-back if
commit hasn't been called on it. You don't allocate
Transactions dynamically, and if for some reason you have to
(say because it is to be shared between two asynchronous
threads), then you do use something like shared_ptr.
In typical data servers, and in GUI applications, such objects
are the exception (and having to allocate them dynamically,
rather than on the stack, is even more exceptional). At least
in my experience.
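A sketch of the Transaction idiom described above (the class is
hypothetical; the begin, commit and roll-back bodies are elided):

    // Hypothetical Transaction: normally a local (stack) variable.
    // The destructor rolls back unless commit() has been called.
    class Transaction
    {
    public:
        Transaction() : committed(false)
        {
            // begin the transaction
        }
        ~Transaction()
        {
            if (!committed) {
                // roll back, e.g. because an exception propagated
                // through the owning scope before commit() was reached
            }
        }
        void commit()
        {
            // commit the transaction
            committed = true;
        }

    private:
        bool committed;
        Transaction(Transaction const&);            // not copyable
        Transaction& operator=(Transaction const&);
    };

    void transfer()
    {
        Transaction t;
        // ... do the work; if anything throws, ~Transaction rolls back
        t.commit();
    }

And when the transaction really does have to be shared between two
asynchronous threads, holding it through something like
shared_ptr<Transaction> still guarantees that the destructor runs
when the last reference disappears.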