Karl said:
Kai-Uwe Bux said:
Hilarious. Try running your tests in release mode with optimization.
You'll throw the array out the door. The larger the data set, the
farther the array will fall.
Are you saying that a vector could actually end up being faster than an
array with respect to retrieving items through operator[]?
I have seen the results of performance tests that seem to indicate this
possibility.
I am wondering how this is possible though.
My thinking is that since both an array and a vector use a single
contiguous block of memory, and accessing an individual element in
either requires the same pointer arithmetic, the performance should be
comparable in the presence of inlining for the vector's operator[].
Could you illustrate a scenario in which a vector might be faster?
Hi,
just whipped up some test code. Sorry for including the sys/timeb.h
header. Does anyone know a standard way of doing this?
#include <ctime>
clock_t ticks = clock();
Thanks.
just a measurement on my machine (2.4 GHz, g++ 3.4.0 -O3):
457
601
Same here: VC++ 6.0, 2.4 GHz
Debug:
vector 3485
array 581
Release:
vector 511
array 761
Now I think, std::vector can beat raw arrays because of smarter allocators:
#include <vector>
#include <iostream>
#include <ctime>
#include <memory>
template < typename T, typename Alloc = std::allocator<T> >
class stupid {
public:
typedef Alloc allocator;
typedef typename allocator::value_type value_type;
typedef typename allocator::size_type size_type;
typedef typename allocator::difference_type difference_type;
typedef typename allocator::pointer pointer;
typedef typename allocator::const_pointer const_pointer;
typedef typename allocator::reference reference;
typedef typename allocator::const_reference const_reference;
typedef pointer iterator;
typedef const_pointer const_iterator;
typedef typename std::reverse_iterator< iterator >
reverse_iterator;
typedef typename std::reverse_iterator< const_iterator >
const_reverse_iterator;
private:
pointer ptr;
size_type the_size;
public:
stupid ( size_type length ) :
ptr ( allocator().allocate( length ) ), // raw memory, so the
// placement-new below and the deallocate in ~stupid match
the_size ( length )
{
for ( iterator iter = this->ptr;
iter != this->ptr + the_size;
++ iter ) {
::new( static_cast<void*>(iter) ) T();
}
}
~stupid ( void ) {
iterator iter = ptr + the_size;
while ( iter > ptr ) {
-- iter;
iter->~T();
}
{
allocator alloc;
alloc.deallocate( ptr, the_size );
}
the_size = 0;
}
reference operator[] ( size_type index ) {
return( this->ptr[ index ] );
}
const_reference operator[] ( size_type index ) const {
return( this->ptr[ index ] );
}
}; // stupid
int main ( void ) {
const unsigned long l = 50000000;
{
std::vector< int > v ( l );
std::clock_t loop_start = std::clock();
for ( unsigned long i = 0; i < l; ++i ) {
v[i] = 5;
}
std::clock_t loop_end = std::clock();
std::cout << loop_end - loop_start << std::endl;
}
{
int* v = new int [ l ];
std::clock_t loop_start = std::clock();
for ( unsigned long i = 0; i < l; ++i ) {
v[i] = 5;
}
std::clock_t loop_end = std::clock();
std::cout << loop_end - loop_start << std::endl;
}
{
stupid< int > v ( l );
std::clock_t loop_start = std::clock();
for ( unsigned long i = 0; i < l; ++i ) {
v[i] = 5;
}
std::clock_t loop_end = std::clock();
std::cout << loop_end - loop_start << std::endl;
}
}
320000
460000
310000
It appears that std::allocator<int> does something fancy. Maybe it aligns
very large arrays so that they start at a memory page boundary. This way,
paging and pointer arithmetic do not interfere.
Best
Kai-Uwe Bux