Keith Thompson
Howard Hinnant said: I think that's an excellent idea. Perhaps there is some refinement
still to be done here. But the basic idea is fundamentally a win-win.
Thanks. On the other hand, there's always the danger of creeping
featurism. The potential complexity of a memory allocation system is
nearly unbounded. For example, it might sometimes make sense to make
a request like:
I need a chunk of at least 300 bytes of data. I'll probably need
to expand it to 500 bytes later on, so reserve 500 if you can; if
not, I'll settle for 300. I *might* need to expand it to 4000
bytes eventually, so optimize for that case if you can. Tell me
how many bytes you were actually able to allocate. And I'm also
going to need to allocate a 10000 byte chunk; if you can't satisfy
that request immediately, let me know exactly what my options are for
freeing other chunks of allocated memory to make that allocation
possible.
(Substitute larger sizes and/or more allocations to make this scenario
realistic on a modern non-embedded system.)
In general, you want to optimize for your program's actual usage
pattern, without necessarily knowing in advance what that's going to
be. Just defining the data structures to describe what you want and
what the system can give you would be a daunting task, let alone
actually implementing it. There has to be a tradeoff between having a
simple interface and adding more functionality.
One could argue that C's current malloc/realloc/free interface is
the best tradeoff. A malloc/realloc/xrealloc/free interface would
certainly provide some options that don't currently exist, but it's
not clear that it would be worth the added complexity (or that it
wouldn't).