Implementing overloaded operator new[]/delete[]


Tristan Wibberley

Hi

I've got implementing overloaded operator new and delete pretty much
down. Just got to meet the alignment requirements of the class on which
the operator is overloaded.

But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

Or is the C++ implementation required to keep a mapping of pointers to
sizes itself - I really hope not :)

--
Tristan Wibberley

Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
 

terminator

Hi

I've got implementing overloaded operator new and delete pretty much
down. Just got to meet the alignment requirements of the class on which
the operator is overloaded.

It's simple: the compiler calculates the exact minimum size that can
contain an object with correct alignment and passes it to operator
new; the returned pointer is then rounded up to the first aligned address.
But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

Or is the C++ implementation required to keep a mapping of pointers to
sizes itself - I really hope not :)

new[] allocates more memory than needed for the array and - on some
platforms - stores the number of elements at the beginning, so that the
destructor is called the correct number of times on delete[]. But if
you upcast the array pointer you will face undefined behaviour, which
can be fixed for the case of delete[] by making the base destructor
virtual, which lets the compiler have runtime information about the
size of the deleted objects.
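
A minimal sketch of the cookie mechanism as seen from the class's side
(illustrative names; the exact layout is implementation-specific):

#include <cstddef>
#include <iostream>
#include <new>

struct Widget
{
    ~Widget() {}   // non-trivial destructor, so the count must be stored somewhere

    void* operator new[]( std::size_t n )
    {
        // n is usually count * sizeof(Widget) plus room for the hidden count.
        std::cout << "operator new[] asked for " << n << " bytes\n";
        return ::operator new( n );
    }
    void operator delete[]( void* p )
    {
        ::operator delete( p );   // the destructors have already run by this point
    }
};

int main()
{
    Widget* w = new Widget[10];   // typically prints more than 10 * sizeof(Widget)
    delete[] w;
}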
Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.

regards,
FM.
 

Laurent D.A.M. MENTEN

But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

I used to keep track of the base address/size pairs in an internal table
accessible by both operators.

Anyway, what I understand is that you want delete[] to walk through the
array to delete objects it contains/references... IMHO this is not a good
idea since you cannot assert that an entry of the array is valid.
 

James Kanze

I've got implementing overloaded operator new and delete pretty much
down. Just got to meet the alignment requirements of the class on which
the operator is overloaded.
But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

I'm not sure I understand. The usual implementation of
new[]/delete[] is just:

void*
operator new[]( size_t n )
{
    return operator new( n ) ;
}

void
operator delete[]( void* p )
{
    operator delete( p ) ;
}

The operator delete function doesn't need to know how many
objects are involved.
 

Tristan Wibberley

It's simple: the compiler calculates the exact minimum size that can
contain an object with correct alignment and passes it to operator
new; the returned pointer is then rounded up to the first aligned address.

Well, it can't round up the pointer I return unless it knows I've
allocated more bytes than it asked for. I have to get the alignment
right, but that is not too hard; I can just use global operator new,
operator new[], a standard allocator, etc.

So plain operator new overloading is easy.

But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

Or is the C++ implementation required to keep a mapping of pointers to
sizes itself - I really hope not :)

new[] allocates more memory than needed for the array and - on some
platforms - stores the number of elements at the beginning, so that the
destructor is called the correct number of times on delete[].

No, in my overloaded new[] *I'll* allocate the memory, from some large
static buffer or something. So how do I find out how many extra bytes
to allocate for the C++ implementation to store its count in, so that
it can destroy the elements before forwarding "delete[] ptr;" to my
operator delete[]? And how do I find out what the alignment
requirements are for that metadata?

--
Tristan Wibberley

Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
 

Tristan Wibberley

But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

I used to keep track of the base address/size pairs in an internal table
accessible by both operators.

Anyway, what I understand is that you want delete[] to walk through the
array to delete objects it contains/references... IMHO this is not a good
idea since you cannot assert that an entry of the array is valid.

I thought delete[] *does* do that, assuming that they are all valid, and
then passes the pointer into my overloaded operator delete[] for actual
freeing of the memory. In which case, how do I ensure that the bit of C++
implementation code that does the destruction knows how many objects
there are (i.e., how big the array is), even though it has only called my
operator new[] overload by that point?

--
Tristan Wibberley

Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
 

Tristan Wibberley

I've got implementing overloaded operator new and delete pretty much
down. Just got to meet the alignment requirements of the class on which
the operator is overloaded.
But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.

I'm not sure I understand. The usual implementation of
new[]/delete[] is just:

void*
operator new[]( size_t n )
{
    return operator new( n ) ;
}

void
operator delete[]( void* p )
{
    operator delete( p ) ;
}

The operator delete function doesn't need to know how many
objects are involved.

Doesn't "delete ptr;" normally destroy all the elements of the array
before calling the overloaded operator delete[] in my class? How will it
know how many objects to destroy - does it keep a mapping of the
pointers I've returned from my operator new[] to the number of
elements/bytes that it asked my operator new[] to allocate?

--
Tristan Wibberley

Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
 

Alf P. Steinbach

* Tristan Wibberley:
Doesn't "delete ptr;" normally destroy all the elements of the array
before calling the overloaded operator delete[] in my class? How will it
know how many objects to destroy - does it keep a mapping of the
pointers I've returned from my operator new[] to the number of
elements/bytes that it asked my operator new[] to allocate?

You don't have to know or care. Your code is just asked to allocate a
certain number of bytes. How the allocated chunk of memory is used is
not the allocator or deallocator function's concern.
 

Tristan Wibberley

* Tristan Wibberley:
Doesn't "delete ptr;" normally destroy all the elements of the array
before calling the overloaded operator delete[] in my class? How will it
know how many objects to destroy - does it keep a mapping of the
pointers I've returned from my operator new[] to the number of
elements/bytes that it asked my operator new[] to allocate?

You don't have to know or care. Your code is just asked to allocate a
certain number of bytes. How the allocated chunk of memory is used is
not the allocator or deallocator function's concern.

So I don't have to make any particular alignment guarantees, or
guarantee that a second call will be successful even if the first
allocation is deleted?

I just have to return a pointer to memory such that a pointer to memory
within the requested number of bytes following it won't be returned by a
subsequent call at least until the next call of my operator delete[].

Are those the only requirements?

If so, then I can see why allocating things with new could be slow - it
will definitely need to use some mapping of the pointers I've returned
to the number of bytes it requested.

In asking this I am in part looking for confirmation that usage of
overloading new[] and delete[] are either:

1) Usable only for wrapping global operator new with statistical records
or forcing failures as for unit tests,
2) Slow down as the number of current allocated blocks increases and
therefore something to avoid for all but allocation from limited pools
that starts throwing bad_alloc before the number of current blocks gets
large, or
3) Unspecified in general, essentially undefined behaviour

--
Tristan Wibberley

Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
 

Alf P. Steinbach

* Tristan Wibberley:
* Tristan Wibberley:
Doesn't "delete ptr;" normally destroy all the elements of the array
before calling the overloaded operator delete[] in my class? How will it
know how many objects to destroy - does it keep a mapping of the
pointers I've returned from my operator new[] to the number of
elements/bytes that it asked my operator new[] to allocate?
You don't have to know or care. Your code is just asked to allocate a
certain number of bytes. How the allocated chunk of memory is used is
not the allocator or deallocator function's concern.

So I don't have to make any particular alignment guarantees,

Oh, you do.

TR1 (and as I understand it, also Boost) contains some functionality
that is specifically meant to help with ensuring proper alignment.
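
For example (a sketch using Boost's alignment_of and aligned_storage;
the TR1 equivalents are similar, and the exact names should be
double-checked against the version you have):

#include <boost/aligned_storage.hpp>
#include <boost/type_traits/alignment_of.hpp>

struct Thing { double d; char c; };

// A fixed buffer guaranteed to be suitably aligned for Thing; an allocator
// can carve Thing-sized chunks out of it without alignment worries.
boost::aligned_storage<
    16 * sizeof( Thing ),
    boost::alignment_of< Thing >::value
> storage;

void* pool = storage.address();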

or
guarantee that a second call will be successful even if the first
allocation is deleted?

? Sorry, that's meaningless to me. If the allocation succeeds, return
a pointer to the allocated memory, and otherwise, call the installed
new-handler or just throw std::bad_alloc.


I just have to return a pointer to memory such that a pointer to memory
within the requested number of bytes following it won't be returned by a
subsequent call at least until the next call of my operator delete[].

That's one requirement, yes.

Are those the only requirements?

Hm, so far (1) alignment, (2) being sure to never allocate from memory
that's in use, and (3) handling allocation failure properly. In
addition you should ideally (4) implement the no-throw placement forms,
both allocator and deallocator functions (the latter will be used only
if the constructor throws). Not sure if there are further requirements.
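
For (3) and (4), a class-scope pair might look roughly like this (a
sketch only; std::malloc stands in for whatever pool you actually use):

#include <cstddef>
#include <cstdlib>
#include <new>

class Gadget
{
public:
    void* operator new[]( std::size_t n )                     // throwing form
    {
        for ( ;; )
        {
            if ( void* p = std::malloc( n ) )
                return p;
            std::new_handler h = std::set_new_handler( 0 );   // peek at the current handler
            std::set_new_handler( h );
            if ( h )
                h();                                          // let it free something, then retry
            else
                throw std::bad_alloc();
        }
    }
    void* operator new[]( std::size_t n, std::nothrow_t const& ) throw()
    {
        try { return Gadget::operator new[]( n ); }
        catch ( std::bad_alloc const& ) { return 0; }
    }
    void operator delete[]( void* p ) throw()
    {
        std::free( p );
    }
    void operator delete[]( void* p, std::nothrow_t const& ) throw()
    {
        Gadget::operator delete[]( p );                       // used only if a constructor throws
    }
};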

If so, then I can see why allocating things with new could be slow - it
will definitely need to use some mapping of the pointers I've returned
to the number of bytes it requested.

Assuming you meant "external mapping", no, it won't. It can request the
number of bytes it needs to e.g. add an element count at the start of
the region. It's not the allocator function's concern.
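
You can even watch it happen (a sketch; whether and by how much the
pointer is offset is implementation-specific):

#include <cstddef>
#include <iostream>
#include <new>

struct Node
{
    ~Node() {}                                   // non-trivial, so a count is usually stored

    void* operator new[]( std::size_t n )
    {
        void* p = ::operator new( n );
        std::cout << "allocator returned  " << p << "\n";
        return p;
    }
    void operator delete[]( void* p ) { ::operator delete( p ); }
};

int main()
{
    Node* a = new Node[4];
    std::cout << "new-expression gave " << static_cast< void* >( a ) << "\n";
    delete[] a;   // your operator delete[] gets back exactly the pointer it returned
}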

In asking this I am in part looking for confirmation that usage of
overloading new[] and delete[] are either:

1) Usable only for wrapping global operator new with statistical records
or forcing failures as for unit tests,

No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer. In principle I guess it could
also be used to disallow dynamic allocation, although I fail to imagine
any practical use for that.

2) Slow down as the number of current allocated blocks increases and
therefore something to avoid for all but allocation from limited pools
that starts throwing bad_alloc before the number of current blocks gets
large, or

? Sorry, that's meaningless to me.

3) Unspecified in general, essentially undefined behaviour

? Sorry, that's meaningless to me.


Cheers, & hth.,

- Alf
 

Tristan Wibberley

Assuming you meant "external mapping", no, it won't. It can request the
number of bytes it needs to e.g. add an element count at the start of
the region. It's not the allocator function's concern.

I address this at the end, feel free to snip the rest of the email if my
final paragraphs convince you that I'm not the idiot you assumed from
the off and that you thought deserved little more than repeated
application of "? Sorry, that's meaningless to me." Because I'm
actually quite smart, and know enough C++ to know that some features
cannot be used safely in ways that seem totally obvious even when they
seem like they were intended - hence this thread. Anyway, see you at the
end of this email.

In asking this I am in part looking for confirmation that usage of
overloading new[] and delete[] are either:

1) Usable only for wrapping global operator new with statistical records
or forcing failures as for unit tests,

No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer. In principle I guess it could
also be used to disallow dynamic allocation, although I fail to imagine
any practical use for that.

Oh! So you don't have to return a built-in pointer type? How does the
caller find out the type so it can store an instance of the smart
pointer instead of a built-in pointer?
? Sorry, that's meaningless to me.

eg, like an external mapping which will get slower as the number of
blocks increases, thus making new[] only sensible, in general, if there
is going to be a limited (and small) number of allocated blocks at any
time. This scenario seems an unlikely one to me, but I've been including
it to show that I'm trying to think of all possibilities in deciding
whether I should make use of new[] overloading lightly.

? Sorry, that's meaningless to me.

Are you an AI? I've never seen anybody avoid indicating why they didn't
understand something. The meaning seems perfectly clear - Does it turn
out to be impossible to use overloaded operator new[]/delete[] in a way
that is fast and allocates only from arbitrary locations in the middle
of the overloaded operator's own pool with only the obvious requirements
of an allocator and the validity of the pointer returned without causing
undefined behaviour?

How can it do this? Ask for so much memory in addition to that needed
for the array that it can calculate a pointer somewhere in the extra one
or more objects that has the correct alignment for its size type and
enough space for an instance? I had kind of excluded that in my head as
a wasteful thing for the implementation to do.

Is the fast option really that wasteful even for the most sensible
implementations? Eg If it uses a four byte integer with four byte
alignment for the count and it wants a one byte integer with one byte
alignment for the array then it has to request seven bytes instead of
five - just in case I return a pointer to a one byte integer that is at
0x??...??3

--
Tristan Wibberley

Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
 

Alf P. Steinbach

* Tristan Wibberley:
Assuming you meant "external mapping", no, it won't. It can request the
number of bytes it needs to e.g. add an element count at the start of
the region. It's not the allocator function's concern.

I address this at the end, feel free to snip the rest of the email if my
final paragraphs convince you that I'm not the idiot you assumed from
the off and that you thought deserved little more than repeated
application of "? Sorry, that's meaningless to me." Because I'm
actually quite smart, and know enough C++ to know that some features
cannot be used safely in ways that seem totally obvious even when they
seem like they were intended - hence this thread. Anyway, see you at the
end of this email.

In asking this I am in part looking for confirmation that usage of
overloading new[] and delete[] are either:

1) Usable only for wrapping global operator new with statistical records
or forcing failures as for unit tests,
No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer. In principle I guess it could
also be used to disallow dynamic allocation, although I fail to imagine
any practical use for that.

Oh! So you don't have to return a built-in pointer type? How does the
caller find out the type so it can store an instance of the smart
pointer instead of a built-in pointer?

You do have to return a raw pointer from the allocation function. The
purpose of obfuscation (an allocation function taking extra arguments,
technically known as a "placement" allocation function) is to force the
client code to use a macro that adds in the necessary whatever, instead
of using new[] or new directly. That macro can then e.g. rely on a
typedef in the class, or e.g. type traits, in order to select the
relevant smart pointer type to wrap the result in.


? Sorry, that's meaningless to me.

eg, like an external mapping which will get slower as the number of
blocks increases, thus making new[] only sensible, in general, if there
is going to be a limited (and small) number of allocated blocks at any
time. This scenario seems an unlikely one to me, but I've been including
it to show that I'm trying to think of all possibilities in deciding
whether I should make use of new[] overloading lightly.

A C++ implementation that behaved that way would simply not be used.


? Sorry, that's meaningless to me.

Are you an AI? I've never seen anybody avoid indicating why they didn't
understand something. The meaning seems perfectly clear - Does it turn
out to be impossible to use overloaded operator new[]/delete[] in a way
that is fast and allocates only from arbitrary locations in the middle
of the overloaded operator's own pool with only the obvious requirements
of an allocator and the validity of the pointer returned without causing
undefined behaviour?

It's possible to do fast allocations, that was the original purpose for
this language feature.

You might want to check out e.g. Loki's small object allocator for an
example.

How can it do this? Ask for so much memory in addition to that needed
for the array that it can calculate a pointer somewhere in the extra one
or more objects that has the correct alignment for its size type and
enough space for an instance? I had kind of excluded that in my head as
a wasteful thing for the implementation to do.

I suggest buying, borrowing or stealing "Modern C++ Design" by Andrei
Alexandrescu, which discusses the Loki small object allocator.

Is the fast option really that wasteful even for the most sensible
implementations? Eg If it uses a four byte integer with four byte
alignment for the count and it wants a one byte integer with one byte
alignment for the array then it has to request seven bytes instead of
five - just in case I return a pointer to a one byte integer that is at
0x??...??3

Different for operator new[] and operator new, in practice that is.

Again, see above.

Cheers, & hth.,

- Alf
 

James Kanze

But how does one implement operator new[]/delete[]? I can't see a way to
indicate, on delete[], how many objects must be destroyed (or how big
the space is). Alternatively, I can't figure out what the alignment
requirements are so that the implementation, after calling my new[], can
fill in its own size record.
I used to keep track of the base address/size pairs in an internal table
accessible by both operators.
Anyway, what I understand is that you want delete[] to walk through the
array to delete objects it contains/references... IMHO this is not a good
idea since you cannot assert that an entry of the array is valid.
I thought delete[] *does* do that, assuming that they are all valid, and
then passes the pointer into my overloaded operator delete[] for actual
freeing of the memory. In which case, how do I ensure that the bit of C++
implementation code that does the destruction knows how many objects
there are (i.e., how big the array is), even though it has only called my
operator new[] overload by that point?

That's the compiler's problem, not yours. (The usual solution
is for the compiler to request slightly more memory, and store
the number of elements in a hidden integer.)
 

James Kanze

* Tristan Wibberley:
Doesn't "delete ptr;" normally destroy all the elements of the array
before calling the overloaded operator delete[] in my class? How will it
know how many objects to destroy - does it keep a mapping of the
pointers I've returned from my operator new[] to the number of
elements/bytes that it asked my operator new[] to allocate?
You don't have to know or care. Your code is just asked to allocate a
certain number of bytes. How the allocated chunk of memory is used is
not the allocator or deallocator function's concern.
So I don't have to make any particular alignment guarantees, or
guarantee that a second call will be successful even if the first
allocation is deleted?

All operator new() functions are required to return memory that
is suitably aligned for all possible data types (or all data
types which would fit in the amount of memory allocated---I'm
not sure).

There is no formal requirement that operator delete() be
anything other than a no-op. The intent, and quality of
implementation considerations, of course, do suggest a little
more.
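
For instance, a per-class arena where the deallocation function really
is a no-op (illustrative sketch; the whole arena is recycled separately):

#include <cstddef>
#include <new>

class Record
{
public:
    void* operator new[]( std::size_t n )
    {
        // Round up so the next allocation stays double-aligned (a conservative choice).
        std::size_t need = ( n + sizeof( double ) - 1 ) & ~( sizeof( double ) - 1 );
        if ( used + need > sizeof arena )
            throw std::bad_alloc();
        void* p = reinterpret_cast< char* >( arena ) + used;
        used += need;
        return p;
    }
    void operator delete[]( void* ) throw()
    {
        // Deliberately a no-op: everything is reclaimed at once by reset().
    }
    static void reset() { used = 0; }          // only valid once no Record arrays are live

private:
    static double      arena[ 1024 ];          // double-aligned backing store
    static std::size_t used;
};

double      Record::arena[ 1024 ];
std::size_t Record::used = 0;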
I just have to return a pointer to memory such that a pointer
to memory within the requested number of bytes following it
won't be returned by a subsequent call at least until the next
call of my operator delete[].
Are those the only requirements?

There's also alignment.
If so, then I can see why allocating things with new could be
slow - it will definitely need to use some mapping of the
pointers I've returned to the number of bytes it requested.

Most don't. There are all sorts of ways of recovering the
length.

This is independent of which operator new() you're implementing;
operator delete() is not passed the length, either, but just a
pointer.
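
If your own scheme does need the size at deallocation time, the classic
trick is to prepend your own header (a sketch; "Blob" and the union are
just illustrative, the union being a cheap way to keep the user block
reasonably aligned):

#include <cstddef>
#include <cstdlib>
#include <new>

union Header                        // our own size record, independent of any compiler cookie
{
    std::size_t size;
    double      aligner;            // keeps the bytes after the header reasonably aligned
};

class Blob
{
public:
    void* operator new[]( std::size_t n )
    {
        Header* h = static_cast< Header* >( std::malloc( sizeof( Header ) + n ) );
        if ( h == 0 )
            throw std::bad_alloc();
        h->size = n;                // remember how big this block is
        return h + 1;               // hand out the bytes just past the header
    }
    void operator delete[]( void* p ) throw()
    {
        if ( p == 0 )
            return;
        Header* h = static_cast< Header* >( p ) - 1;   // step back to the header
        // h->size is available here, e.g. for statistics or returning the block to a pool.
        std::free( h );
    }
};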
In asking this I am in part looking for confirmation that
usage of overloading new[] and delete[] are either:
1) Usable only for wrapping global operator new with statistical records
or forcing failures as for unit tests,
2) Slow down as the number of current allocated blocks increases and
therefore something to avoid for all but allocation from limited pools
that starts throwing bad_alloc before the number of current blocks gets
large, or
3) Unspecified in general, essentially undefined behaviour

None of the above.
 

James Kanze

[...]
No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer.

That doesn't work.
In principle I guess it could
also be used to disallow dynamic allocation, although I fail to imagine
any practical use for that.

The most frequent use is debugging. There's a debugging
new/delete at my site, for example, which saves the stack
walkback at the allocation site, sets guard zones at either end
(to detect overwrites), checks for double deletes and memory
leaks, etc. It's doubtlessly significantly slower than the
standard version, but I use it in my unit tests, and have found
a few bugs thanks to it.
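
A stripped-down sketch of just the guard-zone part (the stack walkback
and leak bookkeeping are left out; all names are illustrative, and the
overloaded operator new/delete would simply forward to these):

#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <new>

namespace {
    std::size_t const   guardSize = 16;
    unsigned char const guardByte = 0xAA;

    bool guardIntact( unsigned char const* p )
    {
        for ( std::size_t i = 0; i != guardSize; ++ i )
            if ( p[ i ] != guardByte )
                return false;
        return true;
    }
}

void* debugAllocate( std::size_t n )
{
    // Layout: [ size ][ front guard ][ user data ][ back guard ]
    unsigned char* raw = static_cast< unsigned char* >(
        std::malloc( sizeof( std::size_t ) + 2 * guardSize + n ) );
    if ( raw == 0 )
        throw std::bad_alloc();
    std::memcpy( raw, &n, sizeof n );
    std::memset( raw + sizeof n, guardByte, guardSize );
    std::memset( raw + sizeof n + guardSize + n, guardByte, guardSize );
    return raw + sizeof n + guardSize;
}

void debugFree( void* p )
{
    if ( p == 0 )
        return;
    unsigned char* user = static_cast< unsigned char* >( p );
    unsigned char* raw  = user - guardSize - sizeof( std::size_t );
    std::size_t n;
    std::memcpy( &n, raw, sizeof n );
    assert( guardIntact( user - guardSize ) );   // scribbled before the block?
    assert( guardIntact( user + n ) );           // wrote past the end of the block?
    std::free( raw );
}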
 

Alf P. Steinbach

* James Kanze:
[...]
No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer.

That doesn't work.

It works.

For details you can check out my old pointers introduction.

The most frequent use is debugging. There's a debugging
new/delete at my site, for example, which saves the stack
walkback at the allocation site, sets guard zones at either end
(to detect overwrites), checks for double deletes and memory
leaks, etc. It's doubtlessly significantly slower than the
standard version, but I use it in my unit tests, and have found
a few bugs thanks to it.

That's what debuggers are used for. :) On Windows platforms.

Cheers, & hth.,

- Alf
 

James Kanze

* James Kanze:
[...]
No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer.
That doesn't work.
It works.

Maybe we're talking about different things. I thought I
understood that you defined a macro with the name "new", e.g.:

#define new new( __FILE__, __LINE__ )

That doesn't work; formally, it's undefined behavior, and
practically, it's going to cause no end of havoc as soon as you
include something like <vector> (unless, of course, your
compiler supports "export", and the library was written with
this in mind).
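
For reference, the pair of functions such a macro is meant to redirect
to looks roughly like this (a sketch); the trouble is that the macro
rewrites the 'new' token everywhere, including in headers that declare
their own member operator new:

#include <cstddef>
#include <new>

void* operator new( std::size_t n, char const* file, int line )
{
    (void)file; (void)line;           // a real version records these per allocated block
    return ::operator new( n );
}
void operator delete( void* p, char const*, int )   // matching form, used if a constructor throws
{
    ::operator delete( p );
}

// #define new new( __FILE__, __LINE__ )   // the problematic macro: it redefines a keyword
//                                         // and breaks every other use of the 'new' token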
For details you can check out my old pointers introduction.
That's what debuggers are used for. :) On Windows platforms.

How does a debugger tell you what happened in the past? Where a
block was allocated, if it turns out to have leaked, for
example. For that matter, how does it tell you you've
overwritten beyond the end of an allocated block?
 

Alf P. Steinbach

* James Kanze:
* James Kanze:
[...]
No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer.
That doesn't work.
It works.

Maybe we're talking about different things. I thought I
understood that you defined a macro with the name "new", e.g.:

#define new new( __FILE__, __LINE__ )

That doesn't work; formally, it's undefined behavior, and
practically, it's going to cause no end of havoc as soon as you
include something like <vector> (unless, of course, you're
compiler supports "export", and the library was written with
this in mind).

The macro will look something like

#define NEW( Type, Args ) ungrokkablegoobledygookexpression

You will never type ungrokkablegoobledygookexpression by accident as a new
or new[] expression.

And I suspect that most C++ programmers, by definition mediocre, will
not even be able to figure out how to write
ungrokkablegoobledygookexpression, except by inspecting the macro.
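
As a rough sketch of the shape it can take (every name here is invented
for the example; std::auto_ptr stands in for whatever smart pointer
you'd really use):

#include <cstddef>
#include <memory>
#include <new>

struct FromNewMacro {};                           // tag that only the macro supplies

class Session
{
public:
    typedef std::auto_ptr< Session > smart_ptr;   // the macro picks the wrapper from this

    void* operator new( std::size_t n, FromNewMacro )  { return ::operator new( n ); }
    void  operator delete( void* p, FromNewMacro )     { ::operator delete( p ); }  // if ctor throws
    void  operator delete( void* p )                   { ::operator delete( p ); }  // ordinary delete

private:
    void* operator new( std::size_t );            // not defined: plain 'new Session' is rejected
};

#define NEW( Type, Args ) \
    Type::smart_ptr( new ( FromNewMacro() ) Type Args )

// Usage:  Session::smart_ptr s = NEW( Session, () );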

For details you can check out my old pointers introduction.

[snip]
How does a debugger tell you what happened in the past? Where a
block was allocated, if it turns out to have leaked, for
example. For that matter, how does it tell you you've
overwritten beyond the end of an allocated block?

The compiler emits debugging meta-information, ordinary allocations are
replaced with operations that keep some meta-information, memory is
initialized to special bit-patterns, and so on, to help the debugger.
In short, the compiler + runtime library does the job that can be done
manually, plus things that can't be done manually, in support of
debugging. An example of debug output information (showing the source
code line that allocated some block that wasn't subsequently
deallocated) is shown at <url:
http://msdn2.microsoft.com/en-us/library/e5ewb1h3(VS.80).aspx>.

However, detecting write-beyond-end-of-block is in general not feasible,
because the granularity of protection offered by an OS is typically 4K
or larger, one page. It can be done (e.g., in Windows, using
pageheap.exe, which I searched up now, never used it!), but at the cost
of allocating one extra protection page for each allocation. That
particular problem is easier dealt with at higher levels, e.g. using
std::vector::at instead of operator[], or simply sound design... ;-)

Cheers, & hth.,

- Alf
 

James Kanze

* James Kanze:
[...]
No. Actually, apart from implementing your own per-class allocation
scheme, the main use of implementing your own allocator function that
I'm aware of is to obfuscate the new'ing of an instance of the class in
question, so that you're in practice ensured that only your own special
macro for that is used, which then guarantees that the pointer produced
is wrapped in a suitable smart pointer.
That doesn't work.
It works.
Maybe we're talking about different things. I thought I
understood that you defined a macro with the name "new", e.g.:
#define new new( __FILE__, __LINE__ )
That doesn't work; formally, it's undefined behavior, and
practically, it's going to cause no end of havoc as soon as you
include something like <vector> (unless, of course, your
compiler supports "export", and the library was written with
this in mind).
The macro will look something like
#define NEW( Type, Args ) ungrokkablegoobledygookexpression

OK. But it will only verify memory which uses NEW, and not new.
That is, you can't easily apply it retroactively.
[snip]
How does a debugger tell you what happened in the past? Where a
block was allocated, if it turns out to have leaked, for
example. For that matter, how does it tell you you've
overwritten beyond the end of an allocated block?
The compiler emits debugging meta-information, ordinary
allocations are replaced with operations that keep some
meta-information, memory is initialized to special
bit-patterns, and so on, to help the debugger.

In sum, it's the special new/delete which does the actual work;
you only use the debugger to see the results. It's not that
different from what I do under Unix, except that the program
isn't running under the debugger; the debugger is only used for
the post-mortem (or addr2line in the case of leaks, which don't
produce a core dump).
In short, the compiler + runtime library does the job that can be done
manually, plus things that can't be done manually, in support of
debugging. An example of debug output information (showing the source
code line that allocated some block that wasn't subsequently
deallocated) is shown at <url:http://msdn2.microsoft.com/en-us/library/e5ewb1h3(VS.80).aspx>.
However, detecting write-beyond-end-of-block is in general not
feasible, because the granularity of protection offered by an
OS is typically 4K or larger, one page. It can be done (e.g.,
in Windows, using pageheap.exe, which I searched up now, never
used it!), but at the cost of allocating one extra protection
page for each allocation. That particular problem is easier
dealt with at higher levels, e.g. using std::vector::at
instead of operator[], or simply sound design... ;-)

There are two alternatives for detecting
write-beyond-end-of-block. Purify does it by instrumenting the
code; every hardware store instruction is transformed into
something which verifies the boundaries. (I think they may even
have a patent on the technique.) My own debugging allocators
provide a guard zone before and after each allocation,
initialized with a special pattern, and declare an error if that
pattern isn't present when the memory is freed. That doesn't
indicate where the overwrite occurred, but once you've got a
stack walkback of where the memory was allocated, and of where
it was freed, it's usually not too difficult to find the error.
(In practice, I've found very little problem with buffer
overruns in C++. It was a common problem with malloc,
however.)
 
