about new and delete

SG

Sam said:
I'm sorry that I disappointed you, by failing to come up with the
meaningless results that you were so much looking for.

I wouldn't use the word "disappointing". But the I'm-righter-than-you-
are-and-proud-of-it attitude we can see here isn't going to help
anybody. It certainly doesn't improve your popularity.
Sure thing, Einstein. After increasing the number of iterations by a factor
of ten, the resulting run times for std::list are:

[mrsam@lc2440 tmp]$ time ./t

real    0m2.195s
user    0m2.099s
sys     0m0.000s
[mrsam@lc2440 tmp]$ time ./t

real    0m2.177s
user    0m2.080s
sys     0m0.004s
[mrsam@lc2440 tmp]$ time ./t

real    0m2.245s
user    0m2.080s
sys     0m0.008s

And for std::vector:

[mrsam@lc2440 tmp]$ time ./t

real    0m3.134s
user    0m2.995s
sys     0m0.002s
[mrsam@lc2440 tmp]$ time ./t

real    0m3.113s
user    0m2.989s
sys     0m0.004s
[mrsam@lc2440 tmp]$ time ./t

real    0m3.356s
user    0m2.983s
sys     0m0.010s

Gee, that makes std::vector look so much better!

I'm not really sure what you did. After some testing with G++ 4.3.1
WITHOUT fancy custom allocators or any kind of cheating (like using
vector<>::reserve or vector<>::clear) I came to the conclusion that if
you don't need special list operations that are known to be fast
(splicing, insertion, etc.) a vector is both faster and consumes less
memory. No surprise. OK, I didn't test the memory consumption part, but
I think it should be obvious why this is -- even counting the
temporary memory a vector needs while it grows. Each list node
consumes at least one pointer in addition (the XOR trick makes an
iterator slightly larger, though) and there's the per-element heap
allocation overhead. For testing the speed I created and destroyed an
int container 5000 times and each time filled it (push_back) with
10,000 elements and later iterated over the elements to compute the
sum. Here are my results:

----------8<---------- list ----------8<----------
sg@home:~/ccpp/heap$ time ./a.out l

real 0m3.332s
user 0m3.228s
sys 0m0.004s
sg@home:~/ccpp/heap$ time ./a.out l

real 0m2.976s
user 0m2.956s
sys 0m0.000s
sg@home:~/ccpp/heap$ time ./a.out l

real 0m3.985s
user 0m3.860s
sys 0m0.004s
----------8<---------- vector ----------8<----------
sg@home:~/ccpp/heap$ time ./a.out v

real 0m0.261s
user 0m0.260s
sys 0m0.000s
sg@home:~/ccpp/heap$ time ./a.out v

real 0m0.306s
user 0m0.280s
sys 0m0.004s
sg@home:~/ccpp/heap$ time ./a.out v

real 0m0.276s
user 0m0.276s
sys 0m0.000s
----------8<--------------------8<----------

I think it's a fair comparison. The use case is similar to the OP's.
Again: no tricks. For example, I created and destroyed the containers
5000 times so that std::vector could not reuse its previously
allocated memory.
Bzzzt. You just got caught cheating.

As you see, even without clear or reserve (which, by the way, are
appropriate in the OP's use case) the rather naive program using
vectors outperforms the one which uses std::list by a factor of about
10 in execution speed.

I have trouble explaining the discrepancy between your results and
mine -- especially since you seem to be using G++ as well. I'm not
saying you're trolling, but that would at least explain it. Maybe you
have a better explanation. I'm all ears.
Thank you for playing. You can go home, now.

Gee, you really know how to talk to people!

Cheers,
SG
 

SG

Alf said:
Sorry, that's incorrect.

Well, Sam has a point -- although he seems not to be aware of a
vector's complexity analysis w.r.t. copies. Assuming that a vector
doubles its capacity while growing, it always requires about twice as
many copy constructions as a list -- regardless of the size, and that
size independence is where Sam is mistaken.

So, there *are* situations where the performance of a growing
std::vector will be worse than the performance of std::list. This is
the case when creating a copy is more expensive than allocating
memory.

@Sam: Note that this does not depend on the container's size but only
on the complexity of a copy construction. Since the OP wants to store
simple scalars and copying a scalar is much cheaper than allocating
memory std::vector still looks like the best container for the job.

I'm still interested in how you got your results.

Cheers,
SG
 

SG

Sam said:
When I actually decide to enter some kind of a popularity contest, I'll let
you know.


Yes. All the additional array elements that are held in reserve, for future
vector growth, really uses something called "hypothetical memory", instead
of the real one. I didn't know that!

Now you know. Knowing is half the battle.

Seriously, with T=int I still expect it to be more efficient even with
the reserved memory and in the worst case scenario where a vector
needs to grow. I have no reason to believe otherwise. What do you
think is the overhead for allocating a list node?

You're right in that the containers' performance depends on T.

Summary:

In case it's only about appending elements to a container and
iterating through it, both kinds of container can be used. There is a
certain sizeof(T) value above which std::list consumes less memory and
below which std::vector consumes less memory. There's also a certain
copy-construction cost (time) above which std::list will be faster and
below which std::vector will be faster -- all assuming the exact size
is not known beforehand.
Poor OP. He must've spent a lot of time meticulously typing in all those
5000 ints that his test program prompted for.

What does the size have to do with anything? The execution time and
memory usage scale linearly with the number of elements for both
containers!

As for the user not wanting to type in 5000 ints: Why do you even
bring up your memory cost argument?!

Cheers,
SG
 

Juha Nieminen

Sam said:
Yeah -- just like OP's test program, where I suggested that he'd look at
std::list, instead. I must've missed the part where he was trying to
allocate memory for a million ints, and meticulously typing all of them
in from the keyboard.

Do you know what I find hilarious? First you argue how much more
efficient std::list is compared to std::vector, and then you argue that
efficiency is irrelevant because the original poster just wanted to
input a dozen values by hand.

So which is it? Is efficiency a reason to choose std::list in this
case, or isn't it? Make up your mind.

If efficiency is irrelevant, then you have to consider ease of usage
as the second most important feature. Which one is, in your opinion,
easier to use, std::list or std::vector? Suppose, that you want to, for
example, iterate through the elements in the container:

for (std::list<int>::iterator iter = values.begin();
     iter != values.end(); ++iter)
    ...

vs.

for (size_t i = 0; i < values.size(); ++i)
    ...

Take your pick.
 

Juha Nieminen

SG said:
Seriously, with T=int I still expect it to be more efficient even with
the reserved memory and in the worst case scenario where a vector
needs to grow. I have no reason to believe otherwise. What do you
think is the overhead for allocating a list node?

Besides, if reallocations are such a huge worry, then std::deque is in
this case way more efficient than std::list (because it performs
considerably fewer memory allocations and consumes considerably less
memory).

std::list is definitely not the only alternative to std::vector.
 

Alf P. Steinbach

* SG:
What does the size have to do with anything? The excecution time and
memory usage scales linearly with the number of elements on both
containers!

In the small-size regime where Sam tested, the size matters for the
contribution from allocations. That contribution is linear for list
and logarithmic for vector.

However, Sam only tested n = 10, and there were no other allocations
in the program, which might make the allocation operations unnaturally
fast.

If he'd only try n=11, 12, 13...



Cheers,

- Alf
 

Juha Nieminen

Sam said:
Yes. We all should take lessons on efficiencies from someone who fails
to realize that std::list::end() is an invariant, and that evaluating it
on every iteration always produces the same result.

What are you talking about? So what if it's an invariant? What does
that have to do with this? And exactly what does the end() function
"evaluate"? And even if calling end() were a couple of clock cycles
slower than doing something else, exactly why does it matter here?
The whole point was that efficiency is *irrelevant* when there's just
a dozen elements in the list.

Maybe you could answer by showing how you would write that first loop
above.
 

Juha Nieminen

Sam said:
All these self-appointed experts around here are desperately trying to
have it both ways, in order to save face. As with any other issue of
this kind, it can be approached from a practical, or a theoretical
viewpoint. Practically, in the OP's case, std::list is a win.

Then, oh great guru of C++ programming, please explain to us mere
mortals the exact reasons why std::list "is a win", because I can't see it.

std::vector is easier to use than std::list because the former can be
accessed using a simple indexing syntax (namely operator[]) while the
latter cannot (you need to use iterators, which tend to produce more
verbose code than calling operator[]). That reason alone makes
std::vector more suitable for this situation. Less code to write, and
the code is also simpler.

You argue that std::list is more efficient. You also, rather
contradictorily, argue that in this case efficiency is irrelevant. Of
course in situations where efficiency *is* relevant (namely, when the
amount of elements is large), std::vector is clearly more efficient than
std::list for this type of task (it's faster and consumes less memory).
Linked lists are only useful in relatively rare situations (where you
need things like splicing or removing elements from the middle, assuming
you already have an iterator pointing to the element to be removed).

It's, naturally, good to know how to use std::list and when it's the
best solution. However, *in this case* it's not the best solution
(mainly because it's more verbose to use).

And why do you propose std::list as the best solution here and not,
for example, std::deque? Can you explain why std::list is better?
std::deque has both the easier syntax advantage (i.e. operator[]) and
doesn't perform reallocations (which seem to be something you're so
afraid of).
This has been shown.

Where exactly?
But yet, the theoretical analysis fails to include all theoretical
factors, such as cost and expense of additional copy constructors and
destructors, that are inherent in the std::vector implementation.

Please explain why std::deque wouldn't be a good alternative. Why
precisely std::list?
Or
that independent pointers or iterators to members of std::vector cannot
be held independently, since, unlike with std::list, they may be
invalidated by any insert operation. No, we can't include those pesky
inconveniences, in our glowing analysis of std::vector. It's quite easy
to conclude that something is a panacea, when one willfully excludes all
factors to the contrary.

You are the one who seems to be excluding alternatives.
I think that it all stems from nothing more than simple unfamiliarity
and inexperience, with proper usage of std::list. Fear of the unknown,
and nothing more.

You seem to have a pathological fear for hidden reallocation,
especially in situations where it doesn't matter (like this one).
Arrays are an easy concept to wrap one's brain around,
but linked lists are slightly more complicated, and require more care,
for proper usage. These std::lists are too complicated, folks, that's
why they're bad. Bad, bad, bad.

No, the reason is that std::list is not the most convenient solution
in this case. If anything, it's the most complicated one.
 

Jerry Coffin

Jerry Coffin writes:

[ ... ]
Sure, Einstein. Since the overhead of dumping three million ints to
std::cout dwarfs the relative differences between the containers' methods,
that would make the entire comparison meaningless, nullifying everyone's
arguments, including yours.

Reread what I said, and note (in particular) the emphasis on "OR do
anything else with them."

Next time, read what's there before you make yourself look like such
a complete idiot!
Yea, it's so easy to dispense an endless stream of possibly this, and
possibly that, sitting in one's recliner. Coming up with actual data, to
back up one's theoretical musings, is always the difficult part.

Yup -- you certainly seem to find it difficult anyway. So far, all
you've managed to show is that you're an idiot -- well, okay, that's
getting back to speculation -- you might be an asshole or a liar
instead!
Yes. You have no idea. But that doesn't stop you from making affirmative
conclusions, of course.

If it's impossible to tell for certain what was measured in a
benchmark, then yes, that supports an affirmative conclusion that it
doesn't produce meaningful results.
Yes. We all know how expensive allocation and initialization of 8 or 12
bytes on the stack is.

The whole point of the benchmark is to find out how expensive the
operations involved are. It appears that you're starting with an
assumption about what the result should be, codifying that into your
"benchmark" and then expecting everybody to consider it a huge
revelation when those assumptions show up in the results.
Haaaa haaaa haaaa!


You do realize, of course, that startup time is a fixed constant, so by
subtracting the fixed constant startup time from the total run time, the
relative performance differences would be even more pronounced.

Quite the contrary, I know perfectly well that startup time is NOT a
fixed constant. Startup time depends most obviously on code size, but
also (to some degree) on how the code is arranged in the file.

It's rarely as important anymore, but at one time I found it quite
useful to optimize startup times. At times I've improved startup
times by a factor of about three without changing the functionality
of the program at all (though I'll openly admit that's an extreme
case).
Especially
in the -O2 case. I can't wait to read your argument how the std::list
version's startup time is so much faster than the std::vector version's is.

Yet again, you've got things backwards. I don't have to know that
there is a difference for the benchmark to be meaningless. Exactly
the opposite is true: I have to know that there's NOT a difference
for the benchmark to be meaningful. If there's any uncertainty, the
benchmark is meaningless.
Right. Instead of three samples, you require at least three million, before
you finally become convinced.

Your "measurements" would remain meaningless no matter how often they
were repeated.
I'll get back to you.

Yes, when you get a clue, do that.
Sure thing, Einstein. After increasing the number of iterations by a factor
of ten, the resulting run times for std::list are:

As the previous comments made obvious to anybody with an IQ higher
than a worm's, you need to do a LOT more than just increase the
number of iterations. Get back to me when you get a clue and at least
make some attempt at fixing the other problems with your "benchmark."

[ ... ]
Gee, that makes std::vector look so much better!

What it did was make you look even worse -- difficult though it was
to believe that such a thing was even possible!
Right. What I said, above.


... and which makes all arguments of std::vector's superiority moot.

If the OP simply wanted to write this one program, you'd be right.

Then again, it seems pretty obvious that a program that simply prints
out what it just asked you to type in is rarely useful in itself (and
even if you really wanted to do that, you could do it pretty easily
without any container at all).

This gets back to that "to anybody with an IQ higher than a worm's"
situation, where it's pretty clear that this program is not really an
end in itself. Instead, it's pretty obvious that (apparently unlike
you) the OP wants to learn at least a little bit about how to
program.

That being the case, the difference between vector and list is both
meaningful and relevant.
Bzzzt. You just got caught cheating.

Yes and no. It's true that there is undoubtedly _some_ difference in
speed between emptying an existing vector and creating a new one.

On the other hand, consider the alternatives: if we define the
collection inside the loop, we've also restricted its scope to the
loop, so we have to move the accumulate inside the loop as well:

for (size_t j = 0; j < 900000; j++)
{
    std::list<int> n;
    for (size_t i = 0; i < 10; i++)
        n.push_back(i);
    total = std::accumulate(n.begin(), n.end(), 0);
}

The accumulate was added simply to assure that the contents of the
collection were used, so the manipulation of the collection wouldn't
be optimized away completely. In this code, however, it's become a
substantial part of what we're timing instead. Arguably, this still
makes some sense though -- it's pretty pointless to create a bunch of
collections we never use, so using the collection every time is at
least somewhat reasonable. It doesn't make a lot of difference though
-- running this a few times, vector still wins (albeit by a rather
smaller margin).

Another possibility would be something like this:

std::vector<int> *n = NULL;

for (size_t j = 0; j < iterations; j++) {
    delete n;
    n = new std::vector<int>;

    for (size_t i = 0; i < 10; i++)
        n->push_back(i);
}
int total = std::accumulate(n->begin(), n->end(), 0);

In some ways this is great -- we're creating a new collection each
time, and still only accumulating once to prevent the loop from being
optimized away. Unfortunately, with this we'd also be timing a new
and a delete at every iteration, AND all the references to the
collection are via a pointer as well. This adds a lot of overhead to
what we originally cared about.

A quick test shows that with either of these, list is still slower
than vector though.
 

Joshua Maurice

Ok Sam. Let me try to spell this out and see where you disagree. So,
here are my premises and deductions:


2. In the OP's use case, Sam took it to mean a small number of ints,
~10. Sam did correctly show that inserting ~10 ints into a std::list
is faster than into a std::vector for whatever platform, compiler,
etc. he was using.

However, the other people in this thread implicitly assumed, rightly
so, that inserting ~10 ints into a container, many times,
is not a valid real use case. With this premise, it follows that the
number of times you insert ~10 ints into a container is a small number
of times, and thus efficiency is not a concern. However, Sam argued
that std::list is faster in this use case, and that's why you should
use it. Thus the apparent contradiction: Sam is arguing to use
something because it's faster when speed is of no concern.

Moreover, even if such a contrived use case did exist where you insert
~10 ints into a container, many times, then std::vector is still
better for the nontrivial case, as I shall show now.

If a real program did insert 10 ints into a container many times, it
would either do so such that the lifetimes of the containers overlap,
or it would do so serially in a loop.

In the loop case, it would be far more efficient to hoist the
std::vector outside of the loop and reuse it. The only memory
allocations would be on the first iteration, and it would be \much\
faster than std::list.

In the overlapping lifetime case, then you have many lists living in
the program at the same time. If you used std::list over std::vector
in a nontrivial case (aka lots of containers), then you would see
worse performance because of more cache misses due to less locality of
std::list and more memory required of std::list. The extra memory
consumption plus the lack of locality would kill performance.

Your program basically tests creating a container in a tight loop, so
the containers' lifetimes do not overlap. Thus your testing is
worthless. Either the program will create lots of containers of int
with overlapping lifetime, in which case std::vector is a clear
winner, or it'll create a bunch of containers of int serially in a
loop, in which case hoisting the std::vector out of the loop is again
the clear winner.
 

Joshua Maurice

Moreover, even if such a contrived use case did exist where you insert
~10 ints into a container, many times, then std::vector is still
better for the nontrivial case, as I shall show now.

If a real program did insert 10 ints into a container many times, it
would either do so such that the lifetimes of the containers overlap,
or it would do so serially in a loop.

In the loop case, it would be far more efficient to hoist the
std::vector outside of the loop and reuse it. The only memory
allocations would be on the first iteration, and it would be \much\
faster than std::list.

In the overlapping lifetime case, then you have many lists living in
the program at the same time. If you used std::list over std::vector
in a nontrivial case (aka lots of containers), then you would see
worse performance because of more cache misses due to less locality of
std::list and more memory required of std::list. The extra memory
consumption plus the lack of locality would kill performance.

Your program basically tests creating a container in a tight loop, so
the containers' lifetimes do not overlap. Thus your testing is
worthless. Either the program will create lots of containers of int
with overlapping lifetime, in which case std::vector is a clear
winner, or it'll create a bunch of containers of int serially in a
loop, in which case hoisting the std::vector out of the loop is again
the clear winner.

Sorry, meant to add that the moral of this story is that micro-
benchmarks are often misleading, as they rarely replicate the caching
behavior that a real program will face. E.g. it's often better to
compile a whole program with size optimizations than with speed
optimizations, even though micro-benchmarks will show otherwise.
 

Juha Nieminen

Sam said:
Yeah -- what does the fact that when you're trying to measure the
performance of a given piece of code you also end up measuring the same
bit of irrelevant code, getting executed repeatedly, have to do with
getting accurate results?

My argument was: Given that (by your very words) efficiency is
irrelevant here because the program has to only handle a few values
entered by hand by the user, then the second most important issue to
consider is ease of usage. Then I proceeded to demonstrate that using
std::vector results in much simpler code than using std::list.

Your counter-argument is: In my example code there might be a few
clock cycles wasted in calling the end() method of std::list.

Not only did you refuse to answer my question (ie. which one of the
two is simpler and easier to write), but you dismissed the whole
question with a ridiculous and irrelevant counter-argument.
Right. When one wants to evaluate the performance of a specific
algorithm

I did not talk about performance. I talked about ease of usage. Your
argument is irrelevant and ridiculous.
That's an easy one: every iteration of the loop.

The end() function evaluates every iteration of the loop? What does
that even mean? How can the end() function evaluate "every iteration of
the loop"? I don't think you even know what you are talking about.
Feh. Accuracy -- what an outmoded concept.

Right, don't answer my question. I repeat: Given that there is only a
dozen or so values in the list, entered by hand by the user (your very
words), why does it matter whether calling the end() function may
perhaps be a few clock cycles slower than doing something else?
I don't know whose point that is, but it's not mine.

It was your point because you argued earlier in the thread that it
makes no sense to measure std::list vs. std::vector performance with a
million elements because the user only enters a dozen or so elements by
hand. With a dozen elements the execution performance of the code is
completely irrelevant.
for (std::list<int>::iterator b = values.begin(), e = values.end();
     b != e; ++b)

And that, in your opinion, is better than this:

for(size_t i = 0; i < values.size(); ++i)
 

Juha Nieminen

Sam said:
Elsewhere in the thread, you will find concrete data showing faster
results in std::list's case.

With how many elements?

Sure, std::list may be faster when we have 10 elements. However, it
quickly becomes slower when the amount of elements grows even slightly.
And I'm not talking about a million elements. In my system the cutoff
point seems to be about 30 elements: When there are at least that many
elements, std::vector becomes faster than std::list. Of course the
larger the amount of elements, the faster std::vector will be compared
to std::list.

So if the original poster wants to handle, for example, 32 elements,
using std::list will be a complete loss in all possible ways. Not only
will it be slower, but the code will be more complicated.

(Not that it really matters with 32 elements anyways. However, it does
matter if the amount of elements grows much larger, let's say for
example 1000 elements. In that case std::vector beats std::list by a
factor of 10.)

And no, I did the test fair and square. I did not "cheat" by using
reserve() or reusing a std::vector instance on the outer scope: I
instantiated a std::vector at each loop and destroyed it at the end, the
same as with the std::list.

For comparison, I also used std::deque. Here's the program:

//--------------------------------------------------------------------------
#include <list>
#include <vector>
#include <deque>
#include <cstddef>
#include <ctime>
#include <iostream>

int main()
{
    const int iterations = 1000000;
    const int elements = 26;

    int sum = 0;

    std::clock_t iClock = std::clock();
    for(int iteration = 0; iteration < iterations; ++iteration)
    {
        std::list<int> l;
        for(int i = 0; i < elements; ++i) l.push_back(i);
        for(std::list<int>::iterator b = l.begin(), e = l.end();
            b != e; ++b)
            sum += *b;
    }
    std::cout << double(std::clock() - iClock) / CLOCKS_PER_SEC
              << std::endl;

    iClock = std::clock();
    for(int iteration = 0; iteration < iterations; ++iteration)
    {
        std::vector<int> v;
        for(int i = 0; i < elements; ++i) v.push_back(i);
        const size_t amount = v.size();
        for(size_t i = 0; i < amount; ++i)
            sum += v[i];
    }
    std::cout << double(std::clock() - iClock) / CLOCKS_PER_SEC
              << std::endl;

    iClock = std::clock();
    for(int iteration = 0; iteration < iterations; ++iteration)
    {
        std::deque<int> d;
        for(int i = 0; i < elements; ++i) d.push_back(i);
        const size_t amount = d.size();
        for(size_t i = 0; i < amount; ++i)
            sum += d[i];
    }
    std::cout << double(std::clock() - iClock) / CLOCKS_PER_SEC
              << std::endl;

    return sum;
}
//--------------------------------------------------------------------------

(I calculate and return the 'sum' from main to make sure the compiler
doesn't optimize anything away.)

I compiled it with "-O3 -march=native". Here are some results:

10 elements:
std::list: 0.82 seconds.
std::vector: 1.4 seconds.
std::deque: 0.34 seconds.

26 elements:
std::list: 2.2 seconds.
std::vector: 2.12 seconds.
std::deque: 0.42 seconds.

100 elements:
std::list: 8.25 seconds.
std::vector: 3.14 seconds.
std::deque: 0.78 seconds.

1000 elements:
std::list: 79.8 seconds.
std::vector: 7.18 seconds.
std::deque: 8.46 seconds.

Not surprisingly std::deque beats the other two hands down when the
amount of elements is smallish. (When the amount of elements grows
significantly, std::vector catches up with its lower constant factors.)

Most importantly, std::deque beats your proposed solution in this
particular case (ie. about 10 elements) in all possible counts.
std::vector is easier to use than std::list because the former can be
accessed using a simple indexing syntax (namely operator[]) while the

Splendid. However, when random access is not required, this offers no
value added functionality.

However, when splicing is not required, std::list offers no value added.
latter cannot (you need to use iterators, which tend to produce more
verbose code than calling operator[]). That reason alone makes
std::vector more suitable for this situation. Less code to write, and
the code is also simpler.

Damn right! Less code=good, even if less code=worse performance.

Now you'll have to explain why less code = better performance in the
case of std::deque, which you seem to blissfully ignore as an
alternative. (Note how you can use std::deque in the exact same way as
you can use std::vector, so there's no penalty with regard to code
clarity and simplicity.)
You'll
find plenty of compact, tight code on www.ioccc.org. We should all
strive to produce applications that look so compact, and tight.

Why would I use the verbose std::list when std::deque beats it hands
down and is easier to use?
 

SG

And speaking of "extra memory consumption", I guess that all the additional
memory std::vector needs to allocate on account of future growth -- I
suppose that all gets stored in IAM (imaginary access memory), and doesn't
count. You are only concerned about "extra memory consumption" in
std::list's case, but in std::vector's case it's perfectly fine, because it
doesn't count.

I don't think I'm denying any overhead.

1. In the OP's use case he already knows the size in advance,
   which makes memory usage very efficient. The overhead is
   constant: O(1).

2. In case the size is not known in advance, the memory overhead
   for std::vector -- assuming doubling growth -- for a vector of
   size pow(2,k), with k being an integer, is at most
       pow(2,k+1) * sizeof(T) + O(1)
   (that's including the peak memory usage during growth).
   Without this _temporary_ usage peak you'll have an overhead
   of at most
       pow(2,k) * sizeof(T) + O(1)
   And this is just the worst case. On average it should be
   about half of that:
       pow(2,k-1) * sizeof(T) + O(1)

3. The memory overhead for a list of size n is
       n * PNA + O(1)
   where PNA is the per-node overhead, more specifically the
   amount of memory used by a node minus sizeof(T). Any
   guesses on PNA for typical implementations?

Sam, what's your experience w.r.t. "PNA"? Can you suggest a way to
measure/check it? I don't know what the granularity of typical memory
allocators is. Due to alignment constraints I'm guessing that on a
typical system with a good allocator the amount of memory that is
reserved when allocating N bytes is not less than
((N + 7 + 2*sizeof(void*)) / 8) * 8, i.e. 8-byte alignment plus the
size of two pointers (one used by the allocator and one for the XORed
next/prev pointers). On average that is what? 12 to 20 bytes of
overhead? 12 for sizeof(void*) == 4 and 20 for sizeof(void*) == 8? Is
this realistic or too optimistic?

Assuming the above "optimistic" numbers are correct and further
assuming a 32-bit system with sizeof(void*) == sizeof(int) == 4, we
have an average overhead of about 50% for vector<int> and an overhead
of 300% for list<int>.

Obviously there is an implementation-dependent threshold S so that if
sizeof(T) <= S, std::vector consumes no more memory than std::list in
case we don't know the size in advance. If we do know the size in
advance, std::vector has constant memory overhead.
That's ok. I'm sure that on some bright day, in the future, you'll figure it
out.

Please watch your signal-to-noise ratio.

Cheers,
SG
 

Juha Nieminen

Sam said:
The same number in OP's test case.

The original post had no test cases. It only had a program. The
original poster did not specify how many numbers would be entered.
You changing a practical point into a theoretical one is noted.

Exactly how is it theoretical that std::vector quickly becomes faster
than std::list when the number of elements grows even a bit (you don't
even need to reach 50 elements before std::vector already beats
std::list in speed)? That's not theory, that's practice. You can go
and test it yourself.
Now,
include the fact that vector iterators are invalidated after any insert,

Your own arguments defeat you: The original poster didn't use any
iterators.

You talk about what the original poster strictly wanted in his post
when it's more convenient to you, but quickly switch to metafeatures
when it suits you better. Then you accuse me of "theoretical stuff" when
I argue that std::vector is more efficient.
and std::vector's worse performance for inserts and deletes in the
middle of the container, compared to std::list;

The original poster wasn't inserting anything in the middle.
include that in your
theoretical analysis, in order for it to be actually valid, and then we
can have an intelligent theoretical discussion.

I'll do that when you include in your analysis the additional memory
requirements of std::list, the increased amount of allocations, the fact
that elements in a list are not contiguous in memory (which means that
they cannot be passed to something expecting a C-style array) and the
O(n) access of an element (compared to the O(1) access in std::vector).

Why are you so eager to defend std::list on the grounds that inserting
in the middle is O(1), but blissfully ignore the fact that accessing a
random element is O(n)? And don't answer with the "the OP didn't require
random access" crap. The OP didn't require insertion in the middle either.

It's you who is cherry-picking features which are most convenient to
your idiotic argument.
But, you refuse to do this, because it shatters your bubble.

And you refuse to take into account the additional memory usage and
linear-time random access, because "it shatters your bubble".
std::vector is easier to use than std::list because the former can be
accessed using a simple indexing syntax (namely operator[]) while the

Splendid. However, when random access is not required, this offers no
value added functionality.

However, when splicing is not required, std::list offers no value
added.

Indeed. After adding an element to a vector, the feature that all
iterators or pointers to existing members of the vectors remain valid is
not a "value added" in a real world. No sirree.

Why is that a "value added" but O(1) access is not?
Keep cherry-picking your arguments, and ignoring ones that burst your
bubble.

Look who's talking. Please explain why constant-time random access is
not an added value, while inserting in the middle in constant time is.
Of course you won't answer this question with anything coherent.
You are desperately trying to change the topic. You are defending the
merits of a std::vector, not std::deque, in the practical case.

I'm not changing the topic. Your argument was that std::list is the
best choice. I disagree with that argument. std::list is *not* the best
choice in this case, nor in most cases.

std::deque is a practical demonstration of why std::list is not the
best choice.
It looks
like you are now begrudgingly abandoning your original argument and
giving up, but you want to save face by arguing that a std::deque would
be even better than std::list in the original case. That may or may not
be; however, even were that the case, it has no relation to the merits
of std::vector.

It looks like you are trying to evade answering my questions. When I
showed you that std::list is *not* the most efficient solution, you are
now trying to evade admitting your mistake.
Because you went down with std::vector's flaming ship. :)

I don't even understand what that means.
 

SG

SG said:
[...] Due to alignment constraints I'm guessing that on a
typical system with a good allocator the amount of memory that is
reserved when allocating N bytes is not less than
((N+7+2*sizeof(void*))/8)*8

Sorry, that should be ((N+7+sizeof(void*))/8)*8 (for 8 byte alignment
and a pointer as part of the allocator's data structures)

Sam, you keep talking about people ignoring facts. Maybe your
communication skills are part of the "problem". Just a thought I
wanted to share.

Cheers,
SG
 

SG

Indeed. After adding an element to a vector, the feature that all iterators
or pointers to existing members of the vectors remain valid is not a "value
added" in a real world. No sirree.

The context is adding elements to a container and iterating through
it. Nothing more, nothing less. This is what the OP tries to do. This
is where you can use std::vector as well as std::list. You're applying
different standards here. You're pointing out advantages of std::list
that are of no significance in this context, yet whenever someone
does the same for std::vector, you point to the OP's
requirements.

Cheers,
SG
 

Francesco S. Carta

Is this the right way to write the code? Here is my code using new and
delete, where I get numbers from the user and print them in order. Can
I do it better? How?

#include <iostream>
#include <new>

int main()
{
    int i, n;
    int *p;

    std::cout << "How many numbers would you like to type ";
    std::cin >> i;

    // allocate an array of i ints, not a single int
    p = new(std::nothrow) int[i];
    if (p == 0) {
        std::cout << "Memory Could Not Be Allocated ";
    } else {
        for (n = 0; n < i; n++) {
            std::cout << "Enter Number ";
            std::cin >> p[n];
        }
        std::cout << "You have Entered ";
        for (n = 0; n < i; n++) {
            // pick the separator by position, not by comparing values:
            // ", " between elements, "." after the last one
            if (n < i - 1)
                std::cout << p[n] << ", ";
            else
                std::cout << p[n] << ".";
        }
        delete[] p;
    }
    std::cout << std::endl;
    return 0;
}


Hi all,
I'm replying to the original post to recall it.

Take into account not what the above code _does_, but what it is _meant
to do_:
- collect n numbers from cin.
- format the collected numbers by inserting ", " between all of them
and appending "." at the end.

Two reasonable issues to take into account:
- ease of use
- performance

Since we are collecting data from cin, and since it's not reasonable to
blame the computer for the slow typing of its user, I modified the
test case to collect data from a hypothetical super-typist - in other
words, from an already filled-out stream.

Here is my benchmark/self-profiler or whatever you want to call it. It
might be screwed up, feel free to fix it in case and to teach me
something new about correct measuring - I'm not kidding.

Please scroll down till the end (click "read more" just in case...)
because I've something more to add once I'll have shown my program's
output.

-------
#include <iostream>
#include <sstream>
#include <list>
#include <vector>
#include <ctime>

using namespace std;

int oss_wins;
int list_wins;
int vector_wins;
int oss_losses;
int list_losses;
int vector_losses;

void test(const string& s) {

clock_t osstream_total = 0;
clock_t list_total = 0;
clock_t vector_total = 0;

static clock_t osstream_start = 0;
static clock_t list_start = 0;
static clock_t vector_start = 0;

static stringstream in;
static stringstream out;

static const int max_loops = 100000;

for (int loop = 0; loop < max_loops; ++loop) {

/* oss */ {
in.clear();
in.str("");
in << s;
out.str("");
osstream_start = clock();
ostringstream osstream;
int v = 0;
int n = 0;
in >> n;
--n;
for (int i = 0; i < n; ++i) {
in >> v;
osstream << v << ", ";
}
in >> v;
osstream << v << ".";
out << osstream.str() << endl;
osstream_total += clock() - osstream_start;
}

/* list */ {
in.clear();
in.str("");
in << s;
out.str("");
list_start = clock();
list<int> ilist;
int v = 0;
int n = 0;
in >> n;
for (int i = 0; i < n; ++i) {
in >> v;
ilist.push_back(v);
}
list<int>::iterator iter = ilist.begin();
/* the following is _NOT_ cheating, we know
upfront the size, we can use it as a fair help to list
in order to correctly format the last element */
for (int i = 0, e = ilist.size()-1; i < e; ++i, ++iter) {
out << *iter << ", ";
}
out << ilist.back() << "." << endl;
list_total += clock() - list_start;
}

/* vector */ {
in.clear();
in.str("");
in << s;
out.str("");
vector_start = clock();
vector<int> ivector;
int v = 0;
int n = 0;
in >> n;
/* the following is _NOT_ cheating, we know
upfront the required size, we can use it */
ivector.reserve(n);
for (int i = 0; i < n; ++i) {
in >> v;
ivector.push_back(v);
}
for (int i = 0, e = ivector.size()-1; i < e; ++i) {
out << ivector[i] << ", ";
}
out << ivector.back() << "." << endl;
vector_total += clock() - vector_start;
}
}
cout << out.str().substr(0, 10) << " [...]" << endl;
double oss_avg = double(osstream_total) / max_loops /
CLOCKS_PER_SEC * 1000;
double list_avg = double(list_total) / max_loops / CLOCKS_PER_SEC
* 1000;
double vector_avg = double(vector_total) / max_loops /
CLOCKS_PER_SEC * 1000;
if(oss_avg < list_avg && oss_avg < vector_avg) {
cout << "osstream wins" << endl;
++oss_wins;
} else if(list_avg < oss_avg && list_avg < vector_avg) {
cout << "list wins" << endl;
++list_wins;
} else if(vector_avg < list_avg && vector_avg < oss_avg) {
cout << "vector wins" << endl;
++vector_wins;
} else {
cout << "no absolute winner" << endl;
}
if(oss_avg > list_avg && oss_avg > vector_avg) {
cout << "osstream last classified" << endl;
++oss_losses;
} else if(list_avg > oss_avg && list_avg > vector_avg) {
cout << "list last classified" << endl;
++list_losses;
} else if(vector_avg > list_avg && vector_avg > oss_avg) {
cout << "vector last classified" << endl;
++vector_losses;
} else {
cout << "no absolute last classified" << endl;
}
cout << "osstream: " << oss_avg << "ms" << endl;
cout << " list: " << list_avg << "ms" << endl;
cout << " vector: " << vector_avg << "ms" << endl << endl;
}


int main() {
stringstream ss_small, ss_large;

ss_small << 10 << " ";
for(int i = 0; i < 10; ++i) {
ss_small << i << " ";
}

ss_large << 40 << " ";
for(int i = 0; i < 40; ++i) {
ss_large << i << " ";
}

for (int i = 0; i < 5; ++i) {
cout << "Run: " << i << endl;
cout << "10 elements:" << endl;
test(ss_small.str());
cout << "40 elements:" << endl;
test(ss_large.str());
}

cout << "--- totals --- " << endl;
cout << "osstream: " << oss_wins << " win(s), " << oss_losses
     << " last position(s)" << endl;
cout << "    list: " << list_wins << " win(s), " << list_losses
     << " last position(s)" << endl;
cout << "  vector: " << vector_wins << " win(s), " << vector_losses
     << " last position(s)" << endl;
return 0;
}

/*

Run: 0
10 elements:
0, 1, 2, 3 [...]
osstream wins
list last classified
osstream: 0.01391ms
list: 0.01502ms
vector: 0.01402ms

40 elements:
0, 1, 2, 3 [...]
osstream wins
list last classified
osstream: 0.04616ms
list: 0.06029ms
vector: 0.04696ms

Run: 1
10 elements:
0, 1, 2, 3 [...]
vector wins
list last classified
osstream: 0.01323ms
list: 0.01583ms
vector: 0.0123ms

40 elements:
0, 1, 2, 3 [...]
osstream wins
list last classified
osstream: 0.04637ms
list: 0.06021ms
vector: 0.04663ms

Run: 2
10 elements:
0, 1, 2, 3 [...]
vector wins
list last classified
osstream: 0.01392ms
list: 0.01455ms
vector: 0.013ms

40 elements:
0, 1, 2, 3 [...]
osstream wins
list last classified
osstream: 0.04295ms
list: 0.0583ms
vector: 0.04947ms

Run: 3
10 elements:
0, 1, 2, 3 [...]
vector wins
list last classified
osstream: 0.015ms
list: 0.01503ms
vector: 0.01162ms

40 elements:
0, 1, 2, 3 [...]
vector wins
list last classified
osstream: 0.04757ms
list: 0.0588ms
vector: 0.04486ms

Run: 4
10 elements:
0, 1, 2, 3 [...]
osstream wins
list last classified
osstream: 0.01283ms
list: 0.01622ms
vector: 0.01312ms

40 elements:
0, 1, 2, 3 [...]
vector wins
list last classified
osstream: 0.04626ms
list: 0.05759ms
vector: 0.04618ms

--- totals ---
osstream: 5 win(s), 0 last position(s)
list: 0 win(s), 10 last position(s)
vector: 5 win(s), 0 last position(s)

Process returned 0 (0x0) execution time : 104.950 s

*/
-------

Compiled without any kind of optimization with GCC 3.4.5.

If it were for performance issues, I would choose either vector or
ostringstream, but here the performance goes to hell because a human
being is meant to insert the data.

Hence my preference falls decisively onto ostringstream, because I
need to loop just once.

As somebody else already mentioned, for this very case we don't really
need a "so-called" container, we can plainly use a stream. But
honestly, in this case a stream is just a char container - with some
value added ;-)

I think I'll have no need to defend my preference.

Have good time,
Francesco
 

Juha Nieminen

Sam said:
To adopt a saying from another context: a few CPU cycles here, a few CPU
cycles there, a few bytes here, and a few bytes there, pretty soon
you'll be talking a real pig of an app.

std::list uses more memory and more allocations than std::vector does.
Thus using it is counter-productive if you don't require the few
advantages it has. Your suggestion of using std::list is a bad one. It
will result in inefficient memory-hog programs.
You are referring to indexing a vector to access its elements. That only
matters when random access is a requirement. It is not always required,
and not in this sample case.

And then you try to argue how std::list is so much better because you
can insert in the middle in constant time, as if that had been a
requirement in this case.
Furthermore, once again, you're trying to cobble together an argument by
blurring the distinction between practical and theoretical factors.
Practically, in OP's original case, a std::list was the most practical
approach.

I find it hilarious how you keep repeating that, yet show zero proof.

In the original poster's case the most practical approach is the
*simplest* approach, and std::vector is way simpler to use than std::list.

You didn't even disagree that iterating through the elements is
simpler with std::vector than it is with std::list, yet you refuse to
acknowledge that std::vector is the most practical approach exactly
because it's simpler to use.
Unable to form a practical argument, you have to retreat to making a
theoretical one, such that using a vector is "easier".

Unable to form a practical argument, you have to retreat to making a
theoretical one, such as insertion in the middle being faster.

Which feature is more practical in this case: Ease of usage, or an
unused feature? The easier syntax with std::vector is, in fact,
something which would actually be used, unlike your ridiculous argument
that inserting in the middle is faster.
Well, if you adopt a theoretical argument, you'll just have to take into
account all theoretical aspects of a std::vector, instead of
cherry-picking just the ones you like.

Good thing that you don't have to do the same with std::list.
You're going to have to consider
all of std::vector's baggage, such as excessive memory usage, excessive
copying, and iterator invalidation in analogous situations where a
std::list would not invalidate any of the existing iterators.

You are going to have to consider all the std::list's baggage, such as
excessive memory usage, slow allocation of a high amount of elements and
slow random access.

Practical test: Allocate 100 integers with std::vector and std::list,
and measure which one consumes more memory and is slower.

Also consider that just *traversing* the elements will often be
slower with std::list than with std::vector because of cache issues
(traversing a list often causes more frequent cache misses than
traversing a vector, especially if the list elements happen to be at
random locations in memory, something which does not happen with vectors
or deques).
But of course, you're always quick to close your eyes, and shut your
ears, when reality comes calling.

You are the one who is refusing to run his own test using more than 10
elements. Talk about deliberately closing your eyes from reality.
But, in singing the praises of std::vector, its fan club always fails to
consider these things.

And you fail to consider the drawbacks of std::list.
This, of course, is the theoretical world, and this is a valid
theoretical point.

Allocating 100 elements is a purely theoretical thing to do in your
world, it seems. I suppose allocating 1000 elements is completely beyond
the realm of reality then (not to talk about allocating 1 million elements).
What usually happens, at this point, is that the
vector fan club decides that they don't want to talk theory, at this
point, and revert back to some practical argument based on OP's original
scenario. Each time they're backed into a corner, they jump into some
other one.

I find it hilarious how it seems like you are talking about yourself.
You are accusing others of your own flaws.
I already answered it: the iterator one is simpler and easier to write,
for me.

Oh, so you are a liar too, it seems. You don't really think that the
loop using std::list iterators is simpler than the loop using an
integral counter, but of course you can't retract from your argument
now, so if doing so requires lying, that's a small price to pay.
It is the most natural fit with many STL iterator-based classes
and algorithms. Furthermore, when using the C++0x syntax, iterating over
a vector or a list is the same:

And you don't consider having to rely on a future not-yet-published
standard to be resorting to theoretical arguments?

Theory (in this type of conversation) means something which is not
practical, or something which cannot be put into practice. Resorting to
a yet-to-be-published standard is highly theoretical, rather obviously,
because you can't yet use the feature in practice.
This gives one an option of coding for an opaque container, which may be
implemented as either a std::list or a std::vector, without changing the
iterating code.

Good thing you don't have to resort to "theoretical stuff", but you
can keep with practical things relevant to the original problem.
You talk about either performance or ease of usage, depending on which
one is easier to shoehorn into your argument.

Ease of usage is and has always been better for std::vector. I don't
have to choose that argument depending on the situation. That argument
is true regardless.

*In addition* to that, std::vector is more efficient when the amount
of elements grows even slightly larger. Your std::list loses in both
counts in that case.
You know perfectly well what I'm talking about. You're just posturing,
due to a lack of any better argument.

Your argument was that when measuring performance, end() should not be
called in the conditional because it evaluates something. I asked you
what is it that it evaluates. You answered with some weird "every
iteration of the loop" which doesn't make any sense.

My point was that end() is not a heavy function. It doesn't evaluate
anything at all. ("Evaluate" would imply that it performs some algorithm
or some calculations, which is not the case.) All it does is to return
an iterator in constant time, which probably takes a couple of clock
cycles. Its speed is largely irrelevant.
Because it does matter.

That's not an answer to my question.

"Why does it matter?" "Because it does matter." That's not an answer.
It's just a childish "Why? Because!"
A small domain is not a valid excuse for a
sloppy implementation, in my book.

You want to optimize calling end() outside the loop. Well, how about
we optimize the speed testing program and put the std::vector instance
outside all the loops, so that at each iteration the same instance gets
reused over and over? See what the speed measurements tell then.

Or would that be "cheating"? Well, my answer to that would be your own
words: "A small domain is not a valid excuse for a sloppy
implementation, in my book."

If we are talking about micro-optimization, then we should use
micro-optimization with *all* testcases, not just with your pet container.
You should try striving to produce
the best, most efficient code, notwithstanding the situation, every
time. It's a good habit to have, and brings benefits in the long run.

Which is why you should use std::vector rather than std::list, unless
the few advantages of std::list outweigh its disadvantages compared to
std::vector. Also std::deque should be considered as an alternative (and
as I have shown, it *is* the best choice in this particular case,
beating your pet std::list hands down.)
 

SG

Juha said:
Sam said:
  std::list uses more memory and more allocations than std::vector does.

Of course, this general statement is false. Please use proper
qualifications.

As I tried to point out, this is only true _on average_ if at least one
of the following two conditions is met: (a) you know the size in
advance or (b) sizeof(T)/2 is smaller than the per-list-node memory
overhead -- assuming 50% extra capacity on average.

Cheers,
SG
 
