> I was only referring to the *allocation* in unoptimized _release_ (not
> debug) code on VC8.

Sorry, I didn't mean that you said the above. I was exaggerating a bit
to illustrate a point. But that's not that far from the reality of some
apologists who try to hide their refusal to learn new (!? 20-year-old)
technology.

> Since we do NOT use any optimization flags here, this is what will end
> up in production.

If you do not let the compiler do any optimisation, then that clearly
means that speed is of no importance at all for your company. Hence you
may as well take the safety and development speed of std::vector and
ignore all execution-speed issues.
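To make the safety point concrete, here is a small sketch of my own
(risky() is a hypothetical function that may throw; it is not from any
of the measurements discussed here):

#include <cstddef>
#include <vector>

void risky(char *buf);    // hypothetical: any operation that might throw

void with_new(std::size_t n)
{
    char *buf = new char[n];
    risky(buf);           // if risky() throws, buf is never freed
    delete[] buf;         // and you must remember delete[], not delete
}

void with_vector(std::size_t n)
{
    std::vector<char> buf(n);
    risky(&buf[0]);       // if risky() throws, buf is still released
}                         // destructor frees the memory on every exit path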
> > > My results are somewhere around a factor of 5 on VC8 (VS2005) in
> > > favor of new[].
> > Try again with optimization on. You will probably find that
> > std::vector<int>(n) is a bit slower than new int[n], but not by much.
> Great to know, thanks, and I do believe you, but for my production
> environment non-optimized code is what matters.
Guess that leaves you with a couple of options:
1- Explain to your employer that they are being stupid.
(I suspect you'll not take that option :-(
2- It is still worth comparing. Using the Stepanov code, with g++ -O0
I get:
1st run:

size      array   vector with pointers   vector with iterators
10        1.62    1.93                   3.79
100       1.04    1.09                   2.36
1000      1.25    1.28                   2.39
10000     1.16    1.19                   2.17
100000    1.05    1.08                   1.93
1000000   1.14    1.17                   2.14

2nd run:

size      array   vector with pointers   vector with iterators
10        1.54    1.82                   3.73
100       1.06    1.05                   2.27
1000      1.24    1.29                   2.39
10000     1.17    1.16                   2.20
100000    1.06    1.07                   1.90
1000000   1.15    1.13                   2.05
i.e. array and vector with pointers show the same performance once
size >= 100, without any optimisation; iterators are slower. Obviously
this test is not quite your particular case, but it certainly shows
that the assumption that vectors are slow is not correct. With -O2 or
-O3 the results are even more interesting.
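To be clear about what the three columns mean (this is my paraphrase of
the style of loop being timed, not the actual Stepanov code):

#include <cstddef>
#include <vector>

// "array": a plain C array traversed through a pointer
int sum_array(const int *a, std::size_t n)
{
    int s = 0;
    for (const int *p = a; p != a + n; ++p)
        s += *p;
    return s;
}

// "vector with pointers": the same loop over the vector's storage
int sum_vec_ptr(const std::vector<int> &v)
{
    const int *p = &v[0];
    int s = 0;
    for (std::size_t i = 0; i != v.size(); ++i)
        s += p[i];
    return s;
}

// "vector with iterators": the idiomatic iterator loop
int sum_vec_iter(const std::vector<int> &v)
{
    int s = 0;
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        s += *it;
    return s;
}

At -O0 the iterator version pays for un-inlined calls to begin(), end(),
operator* and operator++, which is why that column lags; once inlining
is enabled the three loops typically compile to much the same code.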
You are talking mostly about memory allocation. Try testing just
memory allocation. A very basic test could be written in a few lines:
#include <cstddef>
#include <cstring>   // memset
#include <vector>

const size_t loopIter = 100000;
const size_t dataSize = 10000;

// vector constructed with a size: allocates and value-initialises
// (zeroes) its elements
void test1()
{
    for (size_t i = 0; i < loopIter; ++i)
    {
        std::vector<char> v(dataSize);
    }
}

// raw allocation only; the memory is left uninitialised
void test2()
{
    for (size_t i = 0; i < loopIter; ++i)
    {
        char *v = new char[dataSize];
        delete[] v;   // delete[], not delete, for array new
    }
}

// raw allocation plus explicit zeroing; comparable to test1
void test3()
{
    for (size_t i = 0; i < loopIter; ++i)
    {
        char *v = new char[dataSize];
        std::memset(v, 0, dataSize);
        delete[] v;
    }
}

// vector that only reserves: allocates but does not initialise;
// comparable to test2
void test4()
{
    for (size_t i = 0; i < loopIter; ++i)
    {
        std::vector<char> v;
        v.reserve(dataSize);
    }
}
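A minimal driver to time the four functions (my addition, since the
snippet above leaves the harness out) could look like:

#include <cstdio>
#include <ctime>

// assumes test1..test4 above are in the same translation unit
void time_it(const char *name, void (*fn)())
{
    std::clock_t start = std::clock();
    fn();
    std::printf("%s: %.2f s\n", name,
                double(std::clock() - start) / CLOCKS_PER_SEC);
}

int main()
{
    time_it("test1 (vector, value-initialised)", test1);
    time_it("test2 (new[]/delete[])", test2);
    time_it("test3 (new[] + memset)", test3);
    time_it("test4 (vector, reserve only)", test4);
    return 0;
}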
Run it both with and without optimisation. On my system, regardless of
optimisation, test1 is pretty close to test3, which makes sense: the
std::vector<char> v(dataSize) constructor value-initialises (zeroes)
its elements, just as test3's memset does. test4 is a bit slower than
test2, but far less than a 2-to-1 margin, more like 10 to 50%.
But again, it is worth repeating: the above is unlikely to make much
difference in your application. It would be a rare case where the cost
of new[] vs vector makes a significant difference in the overall speed
of the application. Optimizing a trivial little loop like the one above
might only gain you 0.01% speed in the whole application.
Yannick