persenaama said:
The explanation is probably very simple. std::vector basically
mimics array semantics for memory storage; looks like a duck, sounds
like a duck, walks like a duck... anyway..
Yes, I assumed that, because I have written my own vector and set container
classes before that mirror the basic features of the STL's, but I have
yet to find any good documentation that describes the exact behavior of the
library in detail.
http://www.sgi.com/tech/stl/stl_introduction.html is pretty poor IMO, and
that's all I can seem to find.
When a vector runs out of capacity, it is grown.. and when a vector is
grown, new storage is allocated and the existing objects are *copied*
into it. As far as I can remember, how large the capacity of a vector is
initially is not defined, and I don't remember enjoying reading
Dinkumware's code (whose implementation you are currently using) very
much (sorry DW).
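A minimal sketch (C++11, the Tracer name is just made up for illustration) makes that visible: whenever capacity() changes after a push, the existing elements have just been copied into freshly allocated storage.

#include <cstddef>
#include <cstdio>
#include <vector>

// Element type that counts its own copies, so the reallocation copies
// become visible. (The name is hypothetical, purely for illustration.)
struct Tracer {
    static int copies;
    Tracer() {}
    Tracer(const Tracer&) { ++copies; }
};
int Tracer::copies = 0;

int main() {
    std::vector<Tracer> v;
    for (int i = 0; i < 20; ++i) {
        std::size_t old_cap = v.capacity();
        v.emplace_back();                // construct in place, no copy here
        if (v.capacity() != old_cap)     // capacity changed => storage was reallocated
            std::printf("grew at size %zu: capacity %zu -> %zu, element copies so far: %d\n",
                        v.size(), old_cap, v.capacity(), Tracer::copies);
    }
}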
Well, I'm not so concerned with the inner workings of the vector library,
as long as it is not bloated. The copy procedure was just something I was
not familiar with.. maybe I missed it when I was reading over the basics.
The whole point of me using vector is mainly the automatic memory
allocation and deallocation when the array expands, and possibly future
reasons I am not yet aware of (the automatic memory allocation is not that
big a deal to implement, but all the algorithms that go along with the
class are, and I'd rather not implement them myself).
Small vectors can rarely be both runtime-efficient and small in memory
footprint. If they are runtime-efficient, it means they need some
larger constant as their initial capacity, which wastes memory, and
vice versa. In this context, how would you implement your Standard
Library's vector class? ;-)
I suppose this depends on the application. I am using a nested set of
classes, each wrapping an STL vector, and they will probably contain at
most 15 elements each.
I don't think there is a direct relation between memory and speed (in this
case). In my case there is no reason to allocate any initial extra capacity
because the array is almost static for the majority of run time. That is,
there is basically a setup procedure that initializes the vector, and
during the main loop of the program one will usually not touch the
vector (well, it's more complicated than that since I have several vectors...
most will not be modified most of the time, if that makes sense).
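Roughly, the pattern is something like this sketch (the names are invented for the example): reserve() once during setup, so the main loop never causes a reallocation.

#include <cstddef>
#include <vector>

// Sketch of the usage pattern above (names invented): one allocation up
// front during setup, then the main loop only touches existing elements.
std::vector<double> samples;

void setup(std::size_t expected_count) {
    samples.reserve(expected_count);        // single allocation, no growth later
    for (std::size_t i = 0; i < expected_count; ++i)
        samples.push_back(0.0);             // stays within the reserved capacity
}

void main_loop_iteration() {
    for (std::size_t i = 0; i < samples.size(); ++i)
        samples[i] *= 0.5;                  // size and capacity never change here
}

int main() {
    setup(15);
    main_loop_iteration();
}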
So all in all, this comes down to design tradeoffs made by your Standard
Library vendor.. if you make the vector larger, with more objects, you
might see the efficiency pay off right away.
That depends on what you are using the vector for. Like I said, the majority
of my vectors are basically static entities for the majority of the program
and have very little influence on performance. I am using STL's vector
because it is easier than writing my own, to be consistent with other parts
of the program that use STL, and also for possible future design reasons.
Btw, just out of morbid curiosity: when your vector overflows, which is
better, to double the capacity, to grow it by only 50%, or something else
entirely? I know Dinkumware's (old) answer to this because it was covered,
I think in this same group, some years ago; I recall P. Becker (?)
mentioning some rationale to explain the behaviour. I'm drifting
off-topic for your concerns, but it just popped to mind..
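(If you want to see which policy your own vendor picked, a throwaway snippet like the following shows it by printing the capacity whenever it changes; 2x and 1.5x are the common answers in practice.)

#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t old_cap = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != old_cap) {      // report every reallocation
            std::printf("capacity %zu -> %zu\n", old_cap, v.capacity());
            old_cap = v.capacity();
        }
    }
}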
Well, I would say it depends on context. In my specific case I have about 5
vectors in use; 3 of them will be static most of the time, one will be used
like crazy (inserting, removing, manipulating, etc...), and the other is
about midway. I'm using STL's vector for all of them. I suppose that as
long as it's optimized for the "like crazy" case, I'll be fine. I just tend
to have a problem with wasting memory and performance for absolutely no
reason except laziness.
When I did my own vector class I allowed the user to change how much extra
capacity would be allocated when an overflow occurred... maybe really
advanced implementations would have some way of tuning this parameter to
find the best number (of course I doubt there is a single best one, but...).
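As a rough sketch of that kind of knob (the class and member names here are made up), a thin wrapper can route growth through reserve() with a user-chosen factor instead of whatever the library hard-codes:

#include <cstddef>
#include <vector>

// Hypothetical wrapper: the caller picks the growth factor used on overflow.
template <typename T>
class TunableVector {
public:
    explicit TunableVector(double growth = 2.0) : growth_(growth) {}

    void push_back(const T& value) {
        if (data_.size() == data_.capacity()) {
            // Grow by the user-chosen factor (+1 so an empty vector can grow too).
            std::size_t new_cap =
                static_cast<std::size_t>(data_.capacity() * growth_) + 1;
            data_.reserve(new_cap);
        }
        data_.push_back(value);
    }

    std::size_t size() const { return data_.size(); }
    const T& operator[](std::size_t i) const { return data_[i]; }

private:
    std::vector<T> data_;
    double growth_;   // extra-capacity policy chosen by the user
};

As long as the chosen factor actually enlarges the capacity, the underlying std::vector never gets to apply its own growth policy.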
I hope this was of any help..
Well, not really... but...