You write as if the lack of RAII was only a minor inconvenience. I don't
agree with that.
One of the major problems with that lack, which means that resources
have to be managed manually, is that this management burden is "contagious".
By that I mean that if you have a type which needs to be destroyed
manually, that requirement is transferred to any other type that wants
to use it as a member. (In other words, if you have e.g. a struct
that needs to be constructed and destructed by explicitly calling some
functions, and you want to use that struct as a member of another struct,
you need to write equivalent construction/destruction functions for *that*
struct too, and the need to call them manually is "inherited" by that
struct. And so on. It can become quite complicated and burdensome.)
well, this comes down to coding style:
most data types of this sort are not allocated/freed directly via
malloc/free, but are allocated/freed via dedicated function calls.
say:
FOO_Context *ctx;
ctx=FOO_NewContext(...);
....
FOO_FreeContext(ctx);
however, rarely does this seem like a big deal.
also, the matter of multiple entry/exit points and releasing things
has a typically straightforward solution:
if for some function it becomes awkward, it means the function is
probably doing too much and needs to be broken down into smaller ones.
typically, 5-25 lines is a good limit for a function's size, along with
the usual practice that a function only does a single conceptual
operation (as opposed to a function which does "this, that, and this
other thing"...).
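for example (a rough sketch, with the function names invented here),
rather than one function which opens a file, parses it, and computes a
result all inline, each step becomes its own small function:

int FOO_ProcessFile(FOO_Context *ctx, char *name)
{
    if(FOO_LoadFile(ctx, name)<0)
        return(-1);
    return(FOO_ParseData(ctx));
}

each helper stays small, does a single conceptual operation, and
releases whatever it acquired before returning, so there is no big
tangle of exit points to clean up after.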
It also makes generic programming a lot more difficult. If you make,
for example, a generic container, you have no way of knowing whether
the elements need to be properly destroyed before freeing them or not.
If you want your container to support such elements, you have to offer
some kind of construction/destruction paradigm, which can be inconvenient
and, in the case of elements that don't need it, adds needless overhead.
typical answers:
one generally doesn't use generic containers (containers are created on
an as-needed basis);
typically containers are homogeneous;
for complex non-uniform data types, typically a vtable and/or a pointer
to a destructor function can be used (see the sketch below).
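say, a sketch of that last case (names invented here), where the
container holds a pointer to a destructor function for its elements:

#include <stdlib.h>

typedef struct ListNode_s ListNode;
struct ListNode_s {
    ListNode *next;
    void *data;
};

typedef struct {
    ListNode *first;
    void (*destroy_fn)(void *);   /* NULL if elements need no cleanup */
} List;

void List_Free(List *lst)
{
    ListNode *cur, *nxt;
    cur=lst->first;
    while(cur)
    {
        nxt=cur->next;
        if(lst->destroy_fn)
            lst->destroy_fn(cur->data);   /* per-element cleanup, only if needed */
        free(cur);
        cur=nxt;
    }
    lst->first=NULL;
}

elements which don't need cleanup just pass NULL for destroy_fn and pay
nothing for it.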
this is again why, as noted before, one memorizes/internalizes things
like hashing, linked lists, and sorting algorithms, as one often needs
to deal with them on a fairly regular basis (one memorizes basic
algorithms much as one memorizes things like lists of API functions, ...).
not that it has to be done in some rote, school-style way, but
one tends to memorize things after dealing with them a few times.
This is actually a problem in Java because if you have to manage any
resource other than memory, you run into the problem of having to free
the resource manually (at least if the resource should be freed as soon
as possible; the finalizer mechanism in Java doesn't guarantee when
finalizers will be called, or even that they will be called at all).
The 'finally' block only handles a subset of the cases that RAII does,
and it's nevertheless more burdensome because you have to implement it
manually. (At least it's safer than anything in C, which is a plus.)
potentially...
but then again, a typical pattern in C becomes:
int BAR_Sub_DoSomething(...)
{
    FOO_Context *ctx;
    int i;

    ctx=FOO_NewContext(...);
    ...
    i=FOO_GetFinalValue(ctx);
    FOO_FreeContext(ctx);
    return(i);
}
then the form of a function becomes itself a convention.
if success/failure status is involved, typically this is handled either
with "if()" blocks, or by folding the next step into its own function (the
use of "goto" is nasty and so typically not done).
in Java, the usual strategy is to introduce one's own release methods
(rather than trying to rely on finally).
for some data types, it is also common to create one's own mini
allocator/free system:
public class Foo
{
    private static Foo freeList;
    private Foo next;

    public static final Foo newFoo()
    {
        Foo tmp;
        if(freeList!=null)
        {
            //reuse an instance from the free list
            tmp=freeList;
            freeList=tmp.next;
            tmp.next=null;
            return tmp;
        }
        tmp=new Foo();
        return tmp;
    }

    public static final void freeFoo(Foo tmp)
    {
        ...
        //push the instance back onto the free list
        tmp.next=freeList;
        freeList=tmp;
    }

    public void free()
    {
        freeFoo(this);
    }
}
as well as things like:
Foo obj=Foo.newFoo();
try {
    ...
}finally {
    obj.free();
}
because, contrary to what some people seem to claim, the GC is a good
deal more hit-or-miss in practice when it comes to non-trivial usage
patterns (and GC cycles are not always free).
for my own scripting language, I have a delete keyword (itself partly
inherited from ActionScript, which presumably got it from C++).
however, sadly, at the moment there is no good way to prove that no one
tries to access an object after freeing it (a potential safety/security
concern), but it is a tradeoff. as in Flash, my language does not
require that the VM accept a request to delete something, and the VM
may potentially reject it in some cases, although at the moment it will
actually just free whatever is given to it, provided the code has the
needed permissions.
granted, the addition of VM-level permissions checking (using a
POSIX-style model) was itself a subject of debate (others argued for
sandboxing and trying to make sure that sandboxed code could never get
any references to secure objects, worrying that any sort of security
checking would be too slow/complex/... to be usable).
or such...