John Doe
Hi all,
I know the standard doesn't say anything about threads (wrongly, in my
opinion). But in current compiler implementations, is the "list" that
keeps track of the occupied and free heap addresses SHARED among the
various threads or not?
I don't know if I was clear:
Case 1: the list that tracks what is free and what is occupied is
SHARED among all the threads. In this case each
malloc/new/free/delete needs to acquire a mutex.
Case 2: the list is (almost) thread-local: no need for mutexes inside
the malloc/new/free/delete implementations for most calls, so
concurrent access to malloc/new is fast.
In this latter case there would of course have to be some heap address
ranges reserved for use by thread 1, some reserved for thread 2, and so
on, so that the threads don't usually conflict. If a thread fills up
its address range with mallocs, it has to take a mutex and rearrange
the ranges dedicated to the various threads so that it can get some
more heap for its next mallocs...
I need to know the answer to evaluate whether, for fast
allocations/deallocations, it would be wise to use something like an
allocator pool.
One more question: are distinct allocator functions automatically
generated for the various classes? It would seem wise to me to divide
the heap (or the section of the heap dedicated to one thread) into
address ranges, with each range used for one class only. That way the
allocations for objects of the same type would be contiguous and the
memory would never get fragmented. Of course, if one class exhausts its
heap range, the ranges would have to be reassigned among the other
classes.
TIA