Kai-Uwe Bux said:
That is correct. In return you are guaranteed that reallocation will not
happen unless the vector grows beyond maxAmount in length. This will
usually gain a little bit of speed (profile to be sure it pays off). Also
it allows you to keep iterators and references valid in certain
circumstances.
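As a minimal sketch of that guarantee (reusing the maxAmount name from above): after reserve(), push_back() cannot trigger a reallocation until size() would exceed capacity(), so pointers and iterators obtained earlier remain valid.

#include <vector>
#include <cstddef>

void fill(std::size_t maxAmount)
{
    std::vector<int> v;
    v.reserve(maxAmount);        // one allocation up front

    v.push_back(0);
    int* first = &v.front();     // safe to keep while we stay within
                                 // the reserved capacity
    for (std::size_t i = 1; i < maxAmount; ++i)
        v.push_back(static_cast<int>(i));

    // *first is still valid here: no reallocation has happened.
}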
Another advantage is that it reduces memory fragmentation (especially
if the vector never grows beyond the reserved size). In most C++
implementations on most systems, when the vector is reallocated to a
larger capacity it cannot reuse the memory it has just freed: that
freed block can only be reused by other dynamically allocated objects
or arrays small enough to fit into it, if any are ever created.
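A rough illustration of why those freed blocks are hard to reuse: each reallocation typically requests a block larger than the one just released (growth factors around 1.5 to 2 are common), so the old block can only serve smaller, later allocations. The exact growth pattern below depends on the implementation.

#include <vector>
#include <cstddef>
#include <iostream>

int main()
{
    std::vector<int> v;
    std::size_t last = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last) {
            // At this point the previous block has been freed, and it
            // is smaller than the block that was just allocated.
            std::cout << "grew from " << last
                      << " to " << v.capacity() << '\n';
            last = v.capacity();
        }
    }
}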
Memory fragmentation is one problem of std::vector that many
programmers are not aware of. In many cases it's not a big problem,
but in certain situations it can become a major issue. A typical
problematic case is when the final size of the vector is not known
and values are continuously being pushed back into it. The effect is
worsened if there are several such vectors, because there's a high
probability that none of them can reuse the memory freed by the
others: their new allocations are simply too large to fit into the
freed blocks.
In one application I wrote years ago, which required extensive amounts
of memory (and was precisely this kind of program: the data was
generated during execution, and it was impossible to know the final
sizes of the vectors before all of it had been generated), the memory
used by the program was almost halved when I replaced all the vectors
with deques, which allowed much larger datasets to be calculated. The
execution speed of the program was hardly affected at all.
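A hedged sketch of the kind of change described (the names are illustrative, not from the original program): the only modification is the container type; push_back and iteration work the same, but std::deque grows in fixed-size chunks instead of repeatedly reallocating and copying one ever-larger contiguous block.

#include <deque>
#include <cstddef>

std::deque<double> collectResults(std::size_t n)
{
    std::deque<double> results;   // was std::vector<double>
    for (std::size_t i = 0; i < n; ++i)
        results.push_back(static_cast<double>(i) * 0.5);  // placeholder data
    return results;
}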