Back in the 1970s I remember reading a science fiction story about two
sides who were fighting a war, each trying to outsmart the other with
better computers. Victory came when one of the good guys produced a
pencil and paper. "A paper computer?" said the good guy's commander.
"No," said the good guy, and proceeded to write down columns of
figures.
The story is a bit dated. What starts as a crutch for the lazy rapidly
turns into something that exceeds all human capabilities. No human can
perform 2 billion floating-point calculations a second. Equally,
there's an upper limit on the complexity of a memory system that can
be managed by hand.
True, but you can generally split resource management into two
categories: things that should be released within the same function in
which they're allocated, and things that add to the resources "owned"
by the calling function (see the sketch below). The first category is
very easy to clean up, though it seems to be where most leaks occur.
The second just moves the problem up a level: the calling function
then faces the same question of whether to manage the memory
internally or pass ownership on. Even with interrupt-driven code that
allocates resources, there should be another part of the code that
frees them. I spent some time finding and fixing leaks in an
application years ago, and very few were the result of the
application's complexity; in almost every case it was obvious where
the release should have been. In languages with GC, programmers seem
to end up spending a fair amount of time trying to goose the GC into
running. For the most part, I think it's cleaner and easier to just
manage the memory yourself, and be diligent about not leaving pieces
behind when you're through with them.
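
As a rough sketch of the two categories in C (the function names and
data here are made up for illustration, not taken from anything
above): sum_of_squares allocates and frees a scratch buffer inside a
single function, while duplicate_label returns an allocation to its
caller, who inherits the job of freeing it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Category 1: the buffer is allocated and released within the same
       function. Any leak here is a local oversight, not a design problem. */
    static long sum_of_squares(const int *values, size_t count)
    {
        long *squares = malloc(count * sizeof *squares);  /* allocate */
        if (squares == NULL)
            return -1;  /* crude error signal, good enough for a sketch */

        long total = 0;
        for (size_t i = 0; i < count; i++) {
            squares[i] = (long)values[i] * values[i];
            total += squares[i];
        }

        free(squares);  /* release before returning */
        return total;
    }

    /* Category 2: the allocation outlives the function, so ownership
       passes to the caller, who now faces the same when-to-free question. */
    static char *duplicate_label(const char *label)
    {
        size_t len = strlen(label) + 1;
        char *copy = malloc(len);
        if (copy != NULL)
            memcpy(copy, label, len);
        return copy;  /* caller must free() this */
    }

    int main(void)
    {
        int data[] = { 1, 2, 3, 4 };
        printf("sum of squares: %ld\n", sum_of_squares(data, 4));

        char *label = duplicate_label("sensor-7");
        if (label != NULL) {
            printf("label: %s\n", label);
            free(label);  /* the caller, not duplicate_label, releases it */
        }
        return 0;
    }

The second pattern is exactly the "moves the problem up a level"
case: the question of when to free doesn't go away, it just becomes
the caller's to answer.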