Daniele Pini
Please consider the following random access container wrapper:
#include <cstddef>  // size_t
#include <vector>

template<typename T, typename Cont = std::vector<T> >
class indexed_list
{
    class _node
    {
        aligned_POD_for<T> content;
        size_t previousNodeIdx, nextNodeIdx;
        friend class indexed_list;  // the wrapper manages links and content
    public:
        // Appropriate copy/move constructors and assignment,
        // when content actually holds a constructed T instance
        // (* see below)
    };
private:
    Cont theWrappedContainer;
    size_t topOfReleasedNodesStackIdx;
public:
    // Base node acquire/release functions
    size_t acquireNode();
    void releaseNode(size_t nodeIdx);
    // .. The rest just implements a doubly linked list concept,
    // with special mention to ..
    // Push functions return the index of the newly inserted object
    size_t push_back(const T& t);
    size_t push_front(const T& t);
    // By symmetry, a pop_back overload can accept an index, too
    void pop_back(size_t idx);
    // Just implement as swap(indexed_list(begin(), end()), *this);
    // 'disentangles' the list and squeezes released nodes out.
    void shrink_to_fit();
    // A SFINAEd version of reserve() would be good in case Cont
    // implements it
public:
    class iterator
    {
        indexed_list* theList;
        size_t currentIdx;
    public:
        // .. Implement a standard-conforming bidirectional list
        // iterator ..
    };
};
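For what it's worth, this is roughly how I picture that index-based iterator (a sketch only: the usual iterator typedefs are omitted, size_t(-1) plays the role of end(), and the node's links/content are assumed reachable from here, e.g. by also befriending the iterator):

    class iterator
    {
        indexed_list* theList;
        size_t currentIdx;   // size_t(-1) represents end()
    public:
        iterator(indexed_list* list, size_t idx)
            : theList(list), currentIdx(idx) {}

        size_t index() const { return currentIdx; }  // expose the 'key'

        T& operator*() const
        {
            // Only valid on a live node holding a constructed T
            _node& n = theList->theWrappedContainer[currentIdx];
            return *reinterpret_cast<T*>(&n.content);
        }
        iterator& operator++()   // follow the 'next' link
        {
            currentIdx = theList->theWrappedContainer[currentIdx].nextNodeIdx;
            return *this;
        }
        iterator& operator--()   // follow the 'previous' link; decrementing
        {                        // end() would need the list's tail index
            currentIdx = theList->theWrappedContainer[currentIdx].previousNodeIdx;
            return *this;
        }
        bool operator==(const iterator& other) const
        { return theList == other.theList && currentIdx == other.currentIdx; }
        bool operator!=(const iterator& other) const
        { return !(*this == other); }
    };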
Here 'aligned_POD_for<T>' stands for a POD struct with the same
alignment and size as type T - in C++11 terms it can be built on top
of std::aligned_storage<sizeof(T), alignof(T)> (I'm still bound to
C++03 at work, but even then there are workarounds to get the same
thing done).
The acquire/releaseNode() functions obviously apply placement
new and manual destruction of the nodes' content.
The acquireNode() function either pulls a node from the stack of
nodes released through the releaseNode() function or calls
Cont::push_back when said stack is empty.
The stack of released nodes can be easily crammed inside the
wrapped container itself, by making one of the nodes' 'linking
indexes', say _node::nextNodeIdx, reference the node immediately
lower on the stack upon release.
Finally, we can reserve certain index values: size_t(-1) serves as
the equivalent of nullptr in a common list implementation, and
size_t(-2) in _node::previousNodeIdx tells whether a node has been
released.
Reserving special indexes is somewhat better than adding a new node
member in this particular case, since it keeps the node size minimal.
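To make that bookkeeping concrete, this is roughly what I have in mind for acquireNode()/releaseNode() (a sketch under the conventions above; the sentinel names are mine, topOfReleasedNodesStackIdx starts out as nullIdx, and the placement new of the stored T is left to the push_* callers):

    static const size_t nullIdx     = size_t(-1);  // the 'nullptr' of this list
    static const size_t releasedTag = size_t(-2);  // marks a released node

    size_t acquireNode()
    {
        if (topOfReleasedNodesStackIdx != nullIdx)
        {
            // Reuse the most recently released node: pop the free stack,
            // which is threaded through nextNodeIdx.
            size_t idx = topOfReleasedNodesStackIdx;
            topOfReleasedNodesStackIdx = theWrappedContainer[idx].nextNodeIdx;
            return idx;
        }
        // Free stack empty: grow the wrapped container by one node
        theWrappedContainer.push_back(_node());
        return theWrappedContainer.size() - 1;
    }

    void releaseNode(size_t nodeIdx)
    {
        _node& n = theWrappedContainer[nodeIdx];
        // Manual destruction of the stored T (counterpart of the placement new)
        reinterpret_cast<T*>(&n.content)->~T();
        // Thread the node onto the free stack and tag it as released
        n.nextNodeIdx     = topOfReleasedNodesStackIdx;
        n.previousNodeIdx = releasedTag;
        topOfReleasedNodesStackIdx = nodeIdx;
    }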
This is also why std::optional<T> should not be used for
_node::content, since I guess its implementation must rely on a
small flag member to tell whether it currently stores valid
content or not. That would be especially wasteful due to padding.
(*) Note that when a node has not been released, it must apply
the appropriate copy/move/assignment operations on its content.
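For instance, the special members of _node could look like this (a sketch reusing the nullIdx/releasedTag sentinels from the sketch above; <new> is needed for the placement new):

    _node() : previousNodeIdx(releasedTag), nextNodeIdx(nullIdx) {}  // empty slot

    _node(const _node& other)
        : previousNodeIdx(other.previousNodeIdx),
          nextNodeIdx(other.nextNodeIdx)
    {
        // Copy-construct the payload only if the source node is live
        if (other.previousNodeIdx != releasedTag)
            new (&content) T(*reinterpret_cast<const T*>(&other.content));
    }

    ~_node()
    {
        // Destroy the payload only if this node still holds one
        if (previousNodeIdx != releasedTag)
            reinterpret_cast<T*>(&content)->~T();
    }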
That's it. Now consider:
1. It is a 'lazy' container class, in the sense that it does
not release previously allocated nodes but rather silently
marks them as unused.
That's similar to what std::vector does.
This can be beneficial in highly dynamic scenarios, where
stored elements are frequently released and inserted.
2. It's a more cache-friendly implementation than std::list, and
one that does not require a custom allocator to get there.
Of course, the 'cache-friendliness' depends on how scattered
the node connections are.
It has roughly the same memory overhead as std::list -
except for all those released nodes.
3. It achieves iterator stability through indexes rather than
pointers.
In particular, copying an indexed_list does not invalidate
EXTERNAL 'references' (read: indexes) into the list.
indexed_list::shrink_to_fit would be the ONLY
iterator/index-invalidating function.
4. In case Cont = std::deque, indexed_list provides stable
element pointers, too.
5. One may consider the index returned upon element insertion
as the key to access the element; in other words, indexed_list
shows a kind of 'auto-associative' behavior (see the small
usage sketch right after this list).
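A tiny usage sketch of points 3 and 5 (assuming <string> and <cassert>; at(idx) is a hypothetical index-based accessor, not spelled out above):

    indexed_list<std::string> names;
    size_t aliceIdx = names.push_back("Alice");  // the returned index doubles as a key
    size_t bobIdx   = names.push_back("Bob");

    indexed_list<std::string> snapshot = names;  // plain member-wise copy
    // aliceIdx is meaningful in both lists: indexes survive copying,
    // unlike pointers/iterators into a copied std::list.
    assert(snapshot.at(aliceIdx) == "Alice");    // at(idx): hypothetical accessor

    names.pop_back(bobIdx);                      // release by index
    names.shrink_to_fit();                       // the ONLY call that invalidates
                                                 // outstanding indexes/iterators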
A notable use case:
template<typename V, typename E, typename F>
class indexed_graph
{
    // IMPLICIT copy/move/assignment; O(1) element insertion,
    // removal and access.
    indexed_list<V> vertexes;
    indexed_list<E> edges;
    indexed_list<F> faces;
public:
    // ...
    void shrink_to_fit(); // The only slightly trickier part,
                          // due to re-indexing
};
Ever heard of or seen a library container like this?