Thanks for your reply!
So, doubling the size causes fewer calls to malloc (or new) than
increasing by a fixed size. I also thought about allocating three
times the current size instead, and I think that wastes more memory
than doubling, am I right? In other words, is there any other reason
why I shouldn't grow by a factor of three, four, or more?
More exactly, keeping the increase exponential keeps the work per
element basically constant, regardless of the number of inputs. For
example, no matter how many items are added to a container that
expands by a factor of two, on average each element gets moved at
most about twice during the "grow" operations.
You can pick any factor that works for you. The smaller the factor,
the more often you'll have to grow the list. It's a tradeoff. 100%
is a common increment (it's certainly easy to implement), as are
various "nice" fractions (50%, 41%, 33%, 25%, 10%), although what's
"best" is going to depend a great deal on the usage - the bigger the
factor, the more memory "wasted," but the less work done growing the
container. The work required for the expansion is also a factor -
growing a hash table is likely a more complex operation than growing a
list or vector (so you'd want to do it less often).
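If you want to see the tradeoff in numbers, a back-of-the-envelope
sketch like this works (the factors and the element count are
arbitrary choices, and real containers round their capacities
differently): smaller factors mean many more grow operations and
copies, larger factors mean fewer grows but more slack capacity left
over at the end.

#include <cstddef>
#include <cstdio>

// For a few growth factors, count how many "grow" operations occur
// and how many element copies they cost while filling a container to
// N elements.
int main() {
    const std::size_t N = 1000000;
    const double factors[] = {1.10, 1.25, 1.50, 2.00};

    for (double f : factors) {
        std::size_t capacity = 1, size = 0;
        std::size_t grows = 0, copies = 0;
        while (size < N) {
            if (size == capacity) {
                std::size_t new_cap = (std::size_t)(capacity * f);
                if (new_cap <= capacity)
                    new_cap = capacity + 1;   // guarantee progress
                copies += size;
                capacity = new_cap;
                ++grows;
            }
            ++size;
        }
        std::printf("factor %.2f: %zu grows, %zu copies, final capacity %zu\n",
                    f, grows, copies, capacity);
    }
}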
Often the programmer just picks something reasonable. For example,
let's say the objects in the container are large, but the container
itself is physically no more than an array of pointers; then having
twice as many pointers allocated as necessary is basically
insignificant. OTOH, if the container physically stores the large
objects themselves, a big over-allocation will consume considerable
excess memory.
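A quick illustration of that difference (LargeObject here is just a
made-up 4 KB stand-in, and the 2x over-allocation for 1000 items is
assumed): the same number of unused slots costs a few kilobytes in
the pointer case and several megabytes in the by-value case.

#include <cstddef>
#include <cstdio>
#include <vector>

// With a container of pointers, unused capacity wastes only pointer
// slots; with a container of values, it wastes whole object slots.
struct LargeObject { char payload[4096]; };

int main() {
    std::vector<LargeObject*> by_pointer;
    std::vector<LargeObject>  by_value;
    by_pointer.reserve(2000);   // pretend we over-allocated 2x for 1000 items
    by_value.reserve(2000);

    std::size_t unused = 1000;  // slots allocated but never used
    std::printf("unused overhead, pointer container: %zu bytes\n",
                unused * sizeof(LargeObject*));
    std::printf("unused overhead, value container:   %zu bytes\n",
                unused * sizeof(LargeObject));
}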
On occasion you see more complex schemes. For example, sometimes the
time of the last grow operation is tracked, and if another grow
happens within some interval, a bigger expansion factor is used. Or
sometimes there are predictive schemes that estimate an
approaching workload (for example, you're about to process an input
file, and you estimate that on average you'll put an object into your
container for every 100 bytes in the file - if you can easily
determine the file's size, you can force the allocation early). As a
general rule, the more complex schemes tend to be pretty specifically
targeted.
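As a sketch of that file-size trick (assuming C++17's std::filesystem
and the one-record-per-100-bytes figure from the example - that ratio
is just an estimate, not something to rely on in general):

#include <filesystem>
#include <fstream>
#include <string>
#include <system_error>
#include <vector>

// If we expect roughly one record per 100 bytes of input, reserving
// up front avoids most of the intermediate grow operations.
std::vector<std::string> load_records(const std::string& path) {
    std::vector<std::string> records;

    std::error_code ec;
    const auto bytes = std::filesystem::file_size(path, ec);
    if (!ec)
        records.reserve(bytes / 100 + 1);   // force the allocation early

    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line))          // say, one record per line
        records.push_back(line);
    return records;
}

int main(int argc, char** argv) {
    if (argc > 1)
        load_records(argv[1]);
}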
A related, and even more complex, question is when to *shrink* a
container's allocation.
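One shrink policy you sometimes see (this is just a sketch of the
idea, not the only way to do it) is to shrink only when usage drops
well below capacity, and not all the way down, so the container
doesn't thrash between growing and shrinking near the boundary:

#include <cstdio>
#include <vector>

// Release memory only when less than a quarter of the capacity is in
// use, and then shrink to half the capacity, keeping some headroom.
// (std::vector::shrink_to_fit is the simpler, less tunable option.)
template <typename T>
void maybe_shrink(std::vector<T>& v) {
    if (v.capacity() >= 8 && v.size() < v.capacity() / 4) {
        std::vector<T> smaller;
        smaller.reserve(v.capacity() / 2);
        smaller.assign(v.begin(), v.end());
        v.swap(smaller);                    // old storage freed here
    }
}

int main() {
    std::vector<int> v(1000, 42);
    v.resize(100);                          // now using ~10% of capacity
    maybe_shrink(v);
    std::printf("size %zu, capacity %zu\n", v.size(), v.capacity());
}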
But bottom line: use the STL, or program a simple scheme (like simple
doubling) yourself. In most cases those choices are close enough to
reasonable, and only rarely are they seriously bad. After that, if
you're seeing excessive CPU or memory utilization, *then* go back and
try to optimize this stuff.