Juha Nieminen
Goran said: Oh, come on! set is not a vector. That's a MAJOR flaw in your example,
and the example is therefore completely off the mark.
The example demonstrates the speed of 'new' and 'delete' compared
to other relatively complex operations (in this case inserting an element
into a balanced binary tree). In this case 'new' is over 3 times slower
than all the rebalancing needed for the element insertion, which is quite
a lot. That's the point of the example.
Why are you nitpicking on the choice of data container when the whole
point was to demonstrate the speed of memory allocation?
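For what it's worth, the measurement I'm talking about is along these
lines. This is only a sketch, not the original test program; the names are
invented, and the exact numbers will of course vary with the compiler and
standard library. Note that std::set::insert itself allocates a node per
element, so the insertion loop measures an allocation plus the tree
rebalancing.

// Sketch: time a plain new/delete cycle against std::set insertion.
#include <chrono>
#include <cstdio>
#include <set>

int* volatile g_sink; // discourages the optimizer from removing new/delete

int main()
{
    const int kCount = 1000000;
    using Clock = std::chrono::steady_clock;
    using Ms = std::chrono::duration<double, std::milli>;

    // Bare allocation/deallocation.
    auto t0 = Clock::now();
    for(int i = 0; i < kCount; ++i)
    {
        int* p = new int(i);
        g_sink = p;
        delete p;
    }
    auto t1 = Clock::now();

    // Insertion into a balanced binary tree (node allocation + rebalancing).
    std::set<int> s;
    auto t2 = Clock::now();
    for(int i = 0; i < kCount; ++i)
        s.insert(i);
    auto t3 = Clock::now();

    std::printf("new/delete loop: %.1f ms\n", Ms(t1 - t0).count());
    std::printf("set insertion:   %.1f ms\n", Ms(t3 - t2).count());
}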
What!? Reserve would cause exactly 0 memory fragmentation, at least
until the code starts reallocating or freeing these vectors, at which
point there would be, fragmentation-wise, little if any difference
between a vector and the discussed struct hack.
True. reserve() in itself causes 0 memory fragmentation in this case.
However, I didn't say that reserve() causes memory fragmentation. I said
that reserve() does not alleviate the memory fragmentation caused by
having a std::vector as a member of the struct. You should really try to
read what I'm writing.
The std::vector is used simply as a substitute for a raw array in this
case. Reallocation is not the issue here.
You're still to prove how memory is fragmented. Until you reach the
reserved size in a vector, there's no fragmentation.
I'm still to prove how memory is fragmented? Hello? Do you even know
what memory fragmentation means, and how it happens during the execution
of a typical program?
Making two memory allocations worsens memory fragmentation with higher
probability than making only one. reserve() has nothing to do with that.
All you have is one allocation more with a vector. That's easily
swamped by the rest of the code, especially if you have millions of
elements in it.
The problem is not one vector having a million elements. The problem
is a million vectors, each having a few elements. That's what the struct
hack is optimizing (well, one of the things).
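To make that concrete, here is a rough sketch of the two layouts being
compared. The names are invented for illustration, and the second version
is the non-portable low-level hack under discussion, not something I'm
recommending. Creating a NodeWithVector with 'new' costs two heap blocks
per instance (one for the struct, one allocated internally by the vector
for its elements), whereas the struct hack packs the header and the
elements into a single block.

#include <cstddef>
#include <cstdlib>
#include <vector>

// Version A: two heap blocks per instance when created with 'new':
// one for the struct itself, one allocated internally by the vector.
struct NodeWithVector
{
    int id;
    std::vector<int> values;
};

// Version B: the struct hack: header and elements in one heap block.
// Non-portable and UB-adjacent by the letter of the standard; shown
// only to illustrate the allocation count.
struct NodeStructHack
{
    int id;
    std::size_t count;
    int values[1]; // 'count' elements actually follow in the same block

    static NodeStructHack* create(std::size_t elementCount) // assumes >= 1
    {
        // One allocation covers the header and all the elements.
        void* mem = std::malloc(sizeof(NodeStructHack) +
                                (elementCount - 1) * sizeof(int));
        NodeStructHack* node = static_cast<NodeStructHack*>(mem);
        node->count = elementCount;
        return node;
    }

    static void destroy(NodeStructHack* node) { std::free(node); }
};

int main()
{
    // Two heap blocks: the struct and the vector's element buffer.
    NodeWithVector* a = new NodeWithVector{42, std::vector<int>(4)};

    // One heap block: header plus 4 elements.
    NodeStructHack* b = NodeStructHack::create(4);
    b->values[0] = a->id;

    NodeStructHack::destroy(b);
    delete a;
}

Instantiate the first version a million times and you get roughly two
million small heap blocks; the second version gives you one million.
That's what I mean by fewer allocations and less fragmentation.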
You need to have _a lot_ of vectors, all with a _small_ number of
elements in them, for your complaint to be relevant.
And that's exactly what's happening if you have an array as the member
of a struct, and then you instantiate that struct millions of times. Which
was my original point.
And for that,
there's no need to use contortions until one can measure, in running
code, that the performance hit is indeed relevant. You are trying to do it
backwards, especially since programmers have been proven time and time
again to be poor judges of performance problems.
I'm trying to do it backwards? I'm not trying to do anything. I'm not
even advocating the use of the struct hack. All I'm saying is that one
of the possible reasons to use the struct hack is that it reduces the
number of memory allocations (making the program potentially faster)
besides lessening memory fragmentation.
There are many reasons to avoid such a low-level hack, and I have never
denied that.