Regression testing for pointers


Don Y

Hi Ian,

It hasn't stopped me in the past and I have quite a few (embedded)
allocators and (hosted) test suites.

And you apparently have never considered someone else deploying
it in an UNCONSTRAINED manner (i.e., observing ONLY the
constraints that you have PUBLISHED for it).

Or operating it in a vanilla, "UMA-esque" environment.
No. I don't make assumptions about values.

And you quite obviously don't test in all the dark corners!
(since you can't *force* the function to operate in those
places without exercising control over the input conditions!)

*When* your heap ends up at 0xFFFFFFFF, *will* the code
work? Will it *know* that the address space "ends" at
that single byte? Will your pointer arithmetic just
silently wrap around and start treating "low memory"
(i.e., at 0x00000000) as the *real* part of the heap?

How will you know how your code behaves in that case?
How will you PROVE it to others?

Or, will you just wait until "fate" causes your heap
to move into some "risky place" that you hadn't
anticipated or accurately protected against in your
implementation?

Will your defense, in that situation, be "Well, you're
not supposed to put the heap *there*!"? Vaguely
reminiscent of folks complaining when users do things
they "aren't supposed to do"...?

I've watched all sorts of "portable" code fail because
of assumptions that the developers never made explicit.
Move the heap to 0x80000000 and suddenly nothing works!
("Ah, 0x80000000 looks like a 'negative' long and your
pointer math thinks of everything as unsigned...")
Pointers wrapping around the end (or *start*) of
memory, etc.
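
To make that failure concrete, the 0x80000000 case boils down to
something like this (an illustrative fragment, not code from any of the
allocators under discussion; it assumes the usual two's-complement
result for the out-of-range casts):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t base = 0x7FFFFF00u;   /* heap straddles the 0x80000000 boundary */
    uint32_t end  = 0x80000100u;   /* one past the last heap byte            */
    uint32_t p    = 0x80000000u;   /* an address inside the heap             */

    /* Correct: addresses compared as unsigned values. */
    printf("unsigned: in heap? %d\n", p >= base && p < end);           /* prints 1 */

    /* Broken: the same test squeezed through signed 32-bit values;
     * p now looks "negative", so it appears to lie *below* the heap.  */
    printf("signed:   in heap? %d\n",
           (int32_t)p >= (int32_t)base && (int32_t)p < (int32_t)end);  /* prints 0 */

    return 0;
}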

If you want to make assumptions, fine. But then you have
to put them in the contract so everyone else knows what
the rules are: "Don't try to use this allocator on a
64 bit machine because it can't handle chunks larger
than 4G (2G?)" "Don't use this allocator with heaps
that abut the end of memory because the pointer arithmetic
doesn't check for overflow/wraparound".

I prefer handling all cases -- and being able to *validate*
that compliance.
 

Ian Collins

Hi Ian,


Is the exact value of sqrt()'s result of no consequence?

Now I think you are being deliberately obtuse. You were talking about
malloc, not sqrt.
No! I am *not* looking at test internals. I am looking at
the SPECIFIED, CONTRACTUAL BEHAVIOR of the function.

Then I guess the root cause of this debate is that your function is
over-specified.
Are
you "looking at internals" when you test the return value
of sqrt(5)? The sqrt function returns a positive value
for all non-negative arguments such that the value, when
multiplied by itself, yields the original argument. THAT
is what you test for!

I'm not aware of a malloc specification that states the value returned
for a given request. It sounds like yours does, which is unusual.
A *good* test picks values that are likely to cause problems
in an implementation and/or *this* implementation.

A memory allocator is no different. Pretend the "addresses"
returned were integers and the specification was written in
terms of *integers*. You wouldn't throw an arbitrary input
value at it and not *carefully* examine the result to see that
it complies with EXACTLY what you would expect, given its
contract.

But it isn't usually specified that way.
There *is* nothing else. The allocator and the deallocator
are the *only* things that massage the internal state of the
heap!

Then you should extract some of the internal functionality and test it.
 

BartC

When you're moving a few tons of material and "fail to stop" ....
OTOH, The fact that your dialog box didn't get redrawn on an ....
E.g., if I am driving a motorized mechanism "to the left", ....
But, it takes a perverse sort of mind to think about these
sorts of things. And, teaching others to think the same
way is difficult. I resort to having folks sit down and
list the ASSUMPTIONS they are making in any aspect of
design. Then, ask yourself which of those are *truly*
safe!

If these sorts of things are typical of what you're testing, then perhaps
you're not using the right language. C is not the safest language in the
world. Perhaps you should be building a domain-specific language on top.
E.g., if I have a "string" located at 0xFF...F8 that has
8 non-NUL characters in it, what should strlen() return?

If you pass strlen() a pointer to any random set of bytes, or any random
pointer, or even just zero, then anything could happen. You don't even need
to think up any tricky corner cases!

And passing 0 to string functions tends to crash them. You might try
writing a strlen() that is more robust, but the problem will still be having
such low-level functions scattered through your code (plus now having a set
of string functions that are super-slow because of all the extra checks).
 

Malcolm McLean

On Sunday, March 11, 2012 at 18:23:39 UTC, Don Y wrote:
Why would you consider this different than a function that
manipulates a doubly linked list? (a pure function in your
lexicon)
Because we construct the linked list it manipulates outside the function, then we get it back. So we can pass an empty list, a single node, a double node, a triple node, and a 4 node list. We can then, usually, easily verify whether the linked list has been correctly manipulated. We know the addresses of the nodes, what data they contain, etc. We might also want to try with a very long list to test out performance constraints. If the linked list routine passes all of those cases, it's very likely that it passes all cases, unless there's an error in memory allocation (memory not being freed, or less usually nodes not properly allocated). That's why malloc() is a special case.
 

Don Y

Hi Bart,

If these sorts of things are typical of what you're testing, then perhaps
you're not using the right language. C is not the safest language in the
world. Perhaps you should be building a domain-specific language on top.

No, C is NOT the safest language in the world! But, it
can do "almost anything" and is "widely known". Imagine
having to craft a DSL for medical instrumentation. Then,
a year later, another for gaming devices, or process control
equipment, or...

And, consider the folks who follow in your footsteps who
look at this "crazy language" that you've saddled them with.
(not to mention the cost of developing/maintaining tools, etc.)

DSL's tend to work best in domains where there *is* a "standard"
(i.e., you can find other souls that will know the language)
*and* that map closely to the constructs of the application
domain.

E.g., SQL works well for RDBMS's because the constructs that
it is designed to manipulate are directly visible in the
platform on which it executes (i.e., the DBMS!). Writing
C to manipulate DBMS "tables" is clumsy -- because none
of the machinery that you need is readily available in the
language (though you can obviously *implement* it!)
If you pass strlen() a pointer to any random set of bytes, or any random
pointer, or even just zero, then anything could happen. You don't even need
to think up any tricky corner cases!

I trap on a NULL to strlen(). Chances are, this represents an
error in the caller's code. Catching it *here* (where I know
it doesn't make any sense to be used) helps isolate errors
early, before they can propagate.

The case I illustrated, though, will choke most strlen's.
They will gladly keep incrementing the pointer and examining
sequential memory addresses *AS* the pointer wraps around
the end of memory. Clearly (?) this is not what the developer
intends so it should trap instead of wandering around until it
stumbles on something that looks like a NUL.
And passing 0 to string functions tends to crash them. You might try
writing a strlen() that is more robust, but the problem will still be
having such low-level functions scattered through your code (plus now
having a set of string functions that are super-slow because of all the
extra checks).

If you implement the tests *smartly*, there is just a constant
penalty (instead of a multiplier). E.g.,

ASSERT(ptr != NULL);        /* catch a bad caller before wandering off */

start = ptr;
while (*ptr != '\0')
    ptr++;

ASSERT(ptr >= start);       /* detect a pointer that wrapped past the end of memory */
return ptr - start;

I.e., you don't have to check ptr each time you increment it.

You can also remove these sanity checks in production code.
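
One common way to do that (an assumption about how ASSERT might be
defined -- its definition isn't shown here) is the usual
conditional-compilation idiom:

#include <assert.h>

/* Checks are active in development/test builds and compiled away when a
 * production build defines NDEBUG.  (Sketch only; the real ASSERT above
 * might instead trap into a diagnostic handler.)                        */
#ifdef NDEBUG
#define ASSERT(expr)  ((void)0)
#else
#define ASSERT(expr)  assert(expr)
#endif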
 

Ian Collins

Hi,

There's rarely a problem generating regression tests for
functions/procedures that use concrete *values*:

sqrt.test

3 --> 1.732
4 --> 2
5 --> 2.236

(apologies if I have misremembered any of these values :> )

And, it is easy to erect the necessary scaffolding for those
sorts of tests without introducing any noticeable uncertainty
in the veracity of the results obtained.

But, I can't see a way to handle this sort of testing on
functions that manipulate *pointers* to objects -- without
adding a fair bit of code to the scaffolding that, itself,
represents a testing issue! (i.e., does the scaffolding
have any bugs that result in false positives or negatives??)

In particular, I am trying to develop a test suite for my
dynamic memory management functions (e.g., malloc and free
types of routines).

I.e., I can't come up with "constant" results against
which to compare test-time results. Sticking with the
traditional malloc/free for example (my routines are
more heavily parameterized), I can create a malloc.test
that pushes size_t's to malloc. But, aside from verifying
the result is not NULL (or, *is* NULL, in some cases!),
there's nothing I can do to check that the result is
actually correct!

[Recall, the location and size of the free store might change
from environment to environment; alignment constraints might
vary, etc.]

With the hindsight gained from a few days of posts, one simple and
obvious solution is to consider your allocator as a normal function that
returns predetermined values. Not pointers to usable memory, just values.

You can then write your tests to test for your expected values. Just
don't try and dereference the values!
 

Don Y

Hi Malcolm,

On Sunday, March 11, 2012 at 18:23:39 UTC, Don Y wrote:
Because we construct the linked list it manipulates outside the
function, then we get it back.

Oh, I think I see the distinction you are making. To you,
a "pure function" can have no side effects. It takes
inputs using call by value semantics and returns a result
maintaining no internal state in the process.

(?)

So, in your case, strlen() is a pure function but memset()
and any math.h functions that modify errno would NOT be?

With this distinction, pure functions are easy to test
because you just look at the return value...
So we can pass an empty list, a single node, a double node, a
triple node, and a 4 node list. We can then, usually, easily
verify whether the linked list has been correctly manipulated.

I am intentionally avoiding that. I believe that I can
*deduce* what the hidden state (i.e., free list) must be
just by throwing carefully selected examples at the
functions and watching their results (assuming I know
or can control the initial conditions/state)

For example, if the heap occupies [7000.,8000.], is currently
"full" (i.e., nothing allocated yet -- the free list holds the whole
heap as a single fragment) and supports byte alignment (one less
issue to address in this example), then a request for:
allocate(100., FIRSTFIT, PICKBACK, TRIMTOFIT)
*will* return a pointer to 7900. (decimal, for example) and a
chunk that is 100 bytes long.

[find the FIRST fragment in the free list that is big enough
to handle the request; use the "end" (highest address) of that
fragment to satisfy the request; trim the fragment to the
requested size, returning the excess to the free list]

At this point, we *know* the free list (should) contain a single
fragment located at 7000. having a size of 900.

A subsequent request for:
allocate(50., FIRSTFIT, PICKFRONT, TRIMTOFIT)
*will* return a pointer to 7000. and a chunk that is 50 bytes
long.

[same explanation as above EXCEPT use the "start" of the selected
fragment to supply the memory to the caller]

The free list should have a single fragment at 7050. of size 850.

We can then choose to release the first chunk with:
release(ptr, ATHEAD, NOCOALESCE)

At this point, we *know* there are two fragments in the
free list -- one at 7900. of size 100. and the *second*
at 7050. of size 850.

A request for <= 100 bytes "ATHEAD" should yield some
portion of that first fragment (at 7900 -- depending on
whether you PICKFRONT or PICKBACK)

A request for > 100 bytes (but <= 850) "ATHEAD" should
yield some portion of that *second* fragment.

A request for <= 850 bytes "ATTAIL" would yield some
portion of the second fragment.

A request for 99 bytes "EXACTFIT" would skip over that
first fragment (because it can't be trimmed to the
exact size specified -- the single remaining byte would
be too small for MINFRAGSIZE) and select the second
fragment, instead.

If the release had been
release(ptr, INORDER, COALESCE)
then the chunk would have been inserted into the free list
based on its starting address wrt the starting addresses
of the other fragments in the free list. I.e., this
would have caused it to be placed *after* the 850 byte
fragment at 7050 (through 7899). The system would
then merge (COALESCE) the two adjacent free fragments into
a single 950 byte fragment at 7050.

I.e., unlike malloc that gives you what *it* wants (and
about all you can count on is that it is at least as big
as requested!), this allocator lets you *predict* how it
will satisfy your request.
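
With behavior pinned down like that, the test harness can compare the
returned "values" directly. A sketch of one such test step -- the
prototypes, type names and flag spellings below are reconstructed from
the walkthrough and are assumptions, not the actual API:

#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Assumed declarations, reconstructed from the example above. */
typedef struct heap heap_t;
enum { FIRSTFIT, EXACTFIT };            /* fragment selection strategy   */
enum { PICKFRONT, PICKBACK };           /* which end of the fragment     */
enum { TRIMTOFIT };                     /* trimming policy               */
enum { ATHEAD, INORDER };               /* where a freed chunk is listed */
enum { COALESCE, NOCOALESCE };          /* merge with adjacent fragments */
void *allocate(heap_t *heap, size_t size, int select, int pick, int trim);
void  release(heap_t *heap, void *ptr, int place, int merge);

/* Heap assumed to span [7000, 8000), entirely free, byte aligned. */
static void test_pick_policies(heap_t *heap)
{
    void *a = allocate(heap, 100, FIRSTFIT, PICKBACK, TRIMTOFIT);
    assert((uintptr_t)a == 7900);   /* carved from the *end* of the only fragment */

    void *b = allocate(heap, 50, FIRSTFIT, PICKFRONT, TRIMTOFIT);
    assert((uintptr_t)b == 7000);   /* carved from the *front* of what remains    */

    release(heap, a, ATHEAD, NOCOALESCE);
    /* The free list should now be {7900,100} then {7050,850}.  The next
     * result exposes whether the allocator really searched in that order
     * -- without ever peeking at the list itself.                        */
    void *c = allocate(heap, 80, FIRSTFIT, PICKFRONT, TRIMTOFIT);
    assert((uintptr_t)c == 7900);
}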

Since you (the developer) know how you are going to *use*
that particular heap, you can select policies that give
you the best overall performance. E.g., if you are
allocating memory for a layered menu system (each new
layer displayed atop the previous), then you would
want to treat the heap as a LIFO buffer (no "fragmentation"
possible! You get to use ALL of the heap without
worrying that some combination of allocates and releases
has turned the heap into swiss cheese!)
We know the addresses of the nodes, what data they contain,
etc. We might also want to try with a very long list to test
out performance constraints. If the linked list routine passes
all of those cases, it's very likely that it passes all cases,
unless there's an error in memory allocation (memory not being
freed, or less usually nodes not properly allocated).

I believe I am doing just that -- though indirectly
and more economically. By looking at the returned
pointers and knowing how (unlike malloc!) the memory
subsystem operates, I can assure myself that the
algorithms hidden within are acting as expected.
*Just* by checking one pointer for each test invocation!
 

Don Y

Hi Ian,

On 03/10/12 12:43 PM, Don Y wrote:
[Recall, the location and size of the free store might change
from environment to environment; alignment constraints might
vary, etc.]

With the hindsight gained from a few days of posts, one simple and
obvious solution is to consider your allocator as a normal function that
returns predetermined values. Not pointers to usable memory, just values.

Exactly! The allocator just manipulates "funny integers".
The relationships between those integers are determined
by the initial conditions and the parameters passed to
each (successive) function invocation.

I don't *care* what's *at* that "address" -- that's
*inside* a newly allocated chunk of memory (and not
specified by the allocator).
You can then write your tests to test for your expected values. Just
don't try and dereference the values!

But, I am still stuck with dealing with a portable way of
testing this.

I can *treat* the values returned as just "values".
But, the *allocator* gives them additional meaning
when it is looking at them. E.g., it *does* dereference
pointers to walk the free list, etc. So, 0x27 could
be a "number" to a test harness -- but 0x27 is an
*address* to the allocator (and must be accessible)

<frown> I am beginning to think that a portable test is just
not possible. The same sorts of issues come up (in spades!) in
the validation of the RTOS, drivers, etc. It might be more
prudent to just release the documentation for the API's and
tell folks to write their own <whatever> to meet the stated
goals. And just concentrate on keeping the "application"
layer portable.

[After all, if you don't understand the hardware well enough
to write these mechanisms, you shouldn't be *porting* it!]
 

Don Y

Hi Ian,

Now I think you are being deliberately obtuse. You were talking about
malloc, not sqrt.

My point was, "What makes malloc's result 'of little or no
consequence' while sqrt's *is* consequential?"
Then I guess the root cause of this debate is that your function is
over-specified.

No. It specifies exactly as much as is needed to allow the
developer to direct and predict its operation. I can
wrap it in a macro and turn it into any number of different
"malloc(3)" implementations -- by *removing* that flexibility
from the developer.
I'm not aware of a malloc specification that states the value returned
for a given request. It sounds like yours does, which is unusual.

Yes. It's not "malloc" but, rather, a "memory allocator".
Unless you can control how the function does its work, you
can't control the resources (time and space) that it uses.
But it isn't usually specified that way.


Then you should extract some of the internal functionality and test it.

I can essentially do this by examining the results of
successive invocations designed to expose aspects of that
state. (see other post to Malcolm, this date)
 

Ian Collins

Hi Ian,

On 03/10/12 12:43 PM, Don Y wrote:
[Recall, the location and size of the free store might change
from environment to environment; alignment constraints might
vary, etc.]

With the hindsight gained from a few days of posts, one simple and
obvious solution is to consider your allocator as a normal function that
returns predetermined values. Not pointers to usable memory, just values.

Exactly! The allocator just manipulates "funny integers".
The relationships between those integers are determined
by the initial conditions and the parameters passed to
each (successive) function invocation.

I don't *care* what's *at* that "address" -- that's
*inside* a newly allocated chunk of memory (and not
specified by the allocator).
You can then write your tests to test for your expected values. Just
don't try and dereference the values!

But, I am still stuck with dealing with a portable way of
testing this.

I can *treat* the values returned as just "values".
But, the *allocator* gives them additional meaning
when it is looking at them. E.g., it *does* dereference
pointers to walk the free list, etc. So, 0x27 could
be a "number" to a test harness -- but 0x27 is an
*address* to the allocator (and must be accessible)

Do the housekeeping in a different address space than the "heap". For
example your free list entries could be static, or in memory allocated
and used internally by the allocator.
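
In other words, something along these lines (the structure and sizes are
illustrative assumptions, not taken from any allocator in this thread):

#include <stdint.h>
#include <stddef.h>

#define MAX_FRAGMENTS 256               /* arbitrary cap for the illustration */

struct free_entry {
    uintptr_t addr;                     /* start of a free fragment in the managed heap */
    size_t    size;                     /* its length in bytes                           */
};

/* The bookkeeping lives in ordinary, always-accessible data -- never
 * inside the "heap" being managed -- so the tests can inspect it (and
 * the heap itself can sit at 0xFFFF0000 or anywhere else).             */
static struct free_entry free_list[MAX_FRAGMENTS];
static size_t            free_count;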
<frown> I am beginning to think that a portable test is just
not possible. The same sorts of issues come up (in spades!) in
the validation of the RTOS, drivers, etc. It might be more
prudent to just release the documentation for the API's and
tell folks to write their own <whatever> to meet the stated
goals. And just concentrate on keeping the "application"
layer portable.

[After all, if you don't understand the hardware well enough
to write these mechanisms, you shouldn't be *porting* it!]

All it takes is a bit of imagination!

Most of what I do is embedded and all the code I write has extensive
(hosted) tests because I write test first. If you code test first, your
code must be written in a test friendly style. The problems come when
you try and bolt the tests on after the fact.
 

Ian Collins

Hi Ian,



My point was, "What makes malloc's result 'of little or no
consequence' while sqrt's *is* consequential?"

Because of the way the function is specified. You pass 4 to sqrt, it
must return 2. You ask malloc for 4 bytes, it can return a pointer
to any correctly aligned bit of memory.
No. It specifies exactly as much as is needed to allow the
developer to direct and predict its operation. I can
wrap it in a macro and turn it into any number of different
"malloc(3)" implementations -- by *removing* that flexibility
from the developer.

Isn't the point of the specification of a function like malloc to give
the implementer the freedom to design the internals to meet a specific
need? Any of the malloc/free implementations I have written over the
years could be swapped out for another (with the exception of one that used
386 page registers). They would work to specification, but not give the
best performance in each other's applications.
I can essentially do this by examining the results of
successive invocations designed to expose aspects of that
state. (see other post to Malcolm, this date)

But you can't directly do it. You could easily extract and test the
functions that manipulate the free list, for example. I guess the
difference in approach is I would have written and tested those lower
level functions before combining them.
 

Don Y

Hi Ian,

Because of the way the function is specified. You pass 4 to sqrt, it
must return 2. You ask malloc for 4 bytes, it can return a pointer
to any correctly aligned bit of memory.

That's malloc(3)'s contract; not allocateMemory()'s! :>
Isn't the point of the specification of a function like malloc to give
the implementer the freedom to design the internals to meet a specific

The specification just ensures that implementor and "user"
agree on what functionality will be provided and the semantics
for invocation, etc.

Within those constraints, the implementor is free to do as
he chooses. And, to *change* that "on a whim" (to meet
some arbitrary criteria that <someone> deems appropriate).

You can't get deterministic (predictable -- esp temporally)
behavior from something that is free to produce results
however it chooses!

E.g., (some implementation of) malloc(3) could satisfy all requests
with (increments of) 4KB blocks of memory and still conform to the
interface specification. If your heap is only 8K, you might
not be real happy with that! :>

The application knows more about how it will be *using*
memory than the memory management system does. And, if
it knows it can influence *how* memory is allocated, then
it can *really* think about how it wants to use memory
as it realizes that this knowledge can improve performance.

E.g., if I need 100 bytes of memory and then want 200 more
(for some other reason), it would be smarter for that second
allocation request to be satisfied "close to" (in terms of
actual memory addresses) that first one -- if I am operating
in a VM environment! Otherwise, the first allocation
requires a physical memory page to be faulted in to "back"
those 100 bytes (even though the rest of the page is "unused")
while another, *different* page might be faulted in to
satisfy that second allocation request.

If the application can influence *how* the free space is
chosen to satisfy a request, it can direct the allocator
in that way. On bigger systems, you would embed this
knowledge of the underlying VM in the allocator and not
the application -- for the exact opposite reason (i.e.,
to take advantage of knowledge of the underlying VM that
the application might not have or want to be concerned
with). But, on bigger systems you often have more
resources to play with and are less concerned about how
"well" the application fits with those resources.
need? Any of the malloc/free implementations I have written over the
years could be swapped out for another (with the exception of one that used
386 page registers). They would work to specification, but not give the
best performance in each other's applications.

--^^^^^^^^^^^^^^^^^ Exactly!

When you are trimming the hardware down to the barest minimum
necessary for a task, you want the software to be "frugal"
in how it handles *its* responsibilities.

[You aren't designing a general purpose solution that will
handle an UNCONSTRAINED range of applications but, rather,
a solution for a *particular* application.]
But you can't directly do it.

The contents of the free list aren't specified by the function's
specification. The only thing that is specified is *how* it
goes about providing a pointer to a block of memory per the
parameters passed. Testing anything other than that (peeking
under the hood) ties the test implementation to the function
implementation.

Do you peek inside sqrt() while testing it to verify that
the individual CORDIC iterations are proceeding in the
right order? Or, do you wait to examine the result?

And, all you really care about *is* the result. If you
craft your test cases to put the function in a situation
where you effectively EXPOSE an internal decision, then the
result itself tells you whether that decision was made correctly.

E.g., I know there are two fragments in the free list
at this point -- based on the results I have verified
from my previous allocation requests. And, I know *where*
those fragments must be -- again, because I know what
memory has already been granted to me and where *it*
is located in the address space. So, I can now issue a
request that could be handled by *either* of those
fragments (i.e., both are big enough) but specify a
selection criteria that SHOULD cause the allocator to
pick fragment A instead of fragment B. Let's see if
it does...

I can stack these test cases back to back forcing
the allocator to pick one, then the other, based
solely on the parameters that I pass to it
(since the function is just a state machine and
must respond to its inputs -- which include its
current "state" -- in a predictable manner)
You could easily extract and test the
functions that manipulate the free list, for example. I guess the
difference in approach is I would have written and tested those lower
level functions before combining them.

Each of the "criteria" is essentially a (*ftn)() -- with
a suitable parameter list. This makes it easy for me
to add other options/criteria without having to redesign
the entire algorithm. I.e., it imposes a framework *inside*
the function.

So, for example, a (*selector)() takes a pointer to a heap_t
(which contains information about the free list for that
heap as well as constraints on that heap) and a size_t
(for the desired chunk) as well as caller-specified
alignment (in addition to the alignment constraints imposed
by the platform). The (*selector)() is responsible for
examining the free list for the heap and picking a suitable
fragment for the request based on the strategy that it
embodies.

A (*trimmer)() takes a fragment_t and a size_t and returns
a fragment_t that has been "trimmed" to the correct size
(noting alignment constraints in the process) along with
a fragment_t that is "scrap" (returned to the free list).

Etc.
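
In rough outline, those plug-in points look something like this (type
and parameter names are guesses from the description above, not the real
declarations):

#include <stddef.h>

typedef struct fragment fragment_t;
typedef struct heap     heap_t;

/* Picks a suitable fragment from the heap's free list for a request of
 * 'size' bytes at the given alignment, per the strategy it embodies.   */
typedef fragment_t *(*selector_t)(heap_t *heap, size_t size, size_t align);

/* Trims the chosen fragment to 'size' (honoring alignment), returning
 * the trimmed chunk and handing back the left-over "scrap" fragment.   */
typedef fragment_t *(*trimmer_t)(fragment_t *frag, size_t size,
                                 fragment_t **scrap);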

Below this lies another level of functions that actually
"do the work". This allows code to be shared between
"selectors", etc. (e.g., a selector might examine the
free list differently based on knowledge of how the
free list is *currently* ordered)

The problem is that most of the implementation relies on
storing data *in* the free *fragments* to eliminate the
need for supplemental data structures. E.g., the fragments
are maintained in a doubly linked list with that "overhead"
residing *in* the fragments themselves.

So, the data for the fragment at 0x12345678 is stored *in*
the fragment at 0x12345678. The allocator must dereference
0x12345678 in order to find the size of that fragment,
the next fragment preceding/following it in the free list,
etc.
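
Roughly (the layout here is mine, for illustration only):

#include <stddef.h>

/* The links and size live in the first bytes of the free fragment itself,
 * so merely *reading* them requires dereferencing the fragment's address. */
struct fragment {
    struct fragment *prev;   /* previous free fragment in the doubly linked list */
    struct fragment *next;   /* next free fragment in the list                   */
    size_t           size;   /* total size of this fragment, in bytes            */
    /* ...the remainder of the fragment is the free memory itself...             */
};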

As such, testing that the code properly handles the
case of a 17 byte fragment existing AT 0xFFFFFFF0
(which clearly can't be the case since the 17th byte
would be past the end of memory!) requires *accessing*
the data at 0xFFFFFFF0 (i.e., I don't even *know* that
the "size" stored in that fragment is INCORRECT until
I can do that!)

By contrast, a 17 byte fragment at 0xFFFFFF00 wouldn't be
an error (at least not for *that* reason!)
 

Don Y

Hi Ian,

Do the housekeeping in a different address space than the "heap". For
example your free list entries could be static, or in memory allocated
and used internally by the allocator.

Then I end up having "free" memory sitting idle while also
having to set aside *other* memory to track where the
free memory is located! You can conceivably have
HEAPSIZE/MINFRAGSIZE (note that HEAPSIZE might not be
a manifest constant) fragments in your free list.
This could require OVERHEADPERFRAG*(HEAPSIZE/MINFRAGSIZE)
bytes to track/manage.
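(To put hypothetical numbers on it: a 64 KB heap with a 16-byte
MINFRAGSIZE and 12 bytes of overhead per fragment could, in the worst
case, splinter into 4096 fragments needing 48 KB of tracking -- three
quarters of the heap's size again, set aside outside it.)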

If, instead, you ensure OVERHEADPERFRAG <= MINFRAGSIZE,
then each fragment can contain its own "overhead" and
not require additional resources beyond those already
set aside *as* "The Heap". (only free fragments need
to track their overhead)
<frown> I am beginning to think that a portable test is just
not possible. The same sorts of issues come up (in spades!) in
the validation of the RTOS, drivers, etc. It might be more
prudent to just release the documentation for the API's and
tell folks to write their own <whatever> to meet the stated
goals. And just concentrate on keeping the "application"
layer portable.

[After all, if you don't understand the hardware well enough
to write these mechanisms, you shouldn't be *porting* it!]

All it takes is a bit of imagination!

Most of what I do is embedded and all the code I write has extensive
(hosted) tests because I write test first. If you code test first, your
code must be written in a test friendly style. The problems come when
you try and bolt the tests on after the fact.

I know what the test cases need to be before I write the
code (having the specification in hand). I further refine
those test cases as I develop the implementation -- because
I see specific "conditionals" that I need to make sure
the test will exercise for full coverage. The commentary in
the code will literally say something like: "guard against
fragment claiming a size that would have it extend beyond the
end of the address space" (i.e., a >=17 byte fragment at
0xFFFFFFF0)
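
Such a guard can be written so that the check itself cannot wrap (a
sketch -- the helper and its name are mine, and it assumes size_t is no
wider than uintptr_t):

#include <stdint.h>
#include <stddef.h>

/* Can a fragment starting at 'base' really be 'size' bytes long without
 * running past the end of the address space?  Computing base + size and
 * comparing would itself wrap at the top of memory, so compare against
 * the room that remains instead.                                        */
static int fragment_extent_ok(uintptr_t base, size_t size)
{
    return size == 0 || (size - 1) <= (UINTPTR_MAX - base);
}

/* On a 32-bit target: fragment_extent_ok(0xFFFFFFF0u, 17) is false,
 * while fragment_extent_ok(0xFFFFFF00u, 17) is true.                   */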

I write the bulk of the code's framework on a PC as that lets
me cut and paste from the specification into the code body
(the tools I use for the specification don't run anywhere
other than Windows). This lets me transfer the spirit of the
individual tests required into the code base (i.e., verify that
a fragment's implied ending address doesn't extend beyond
the end of memory -- regardless of where that might be on
this particular platform)

Then, I make an initial pass at debugging portions of the
code using a PC-based IDE (lately, Borland's). Here, I'm
just trying to get the "cruft" out of the code... typos,
consistent declarations, etc.

Then, I move to one of the BSD boxes where I am more comfortable
"working". By the time I finish, there, the code is, IMO,
ready for testing. (Note that both the compiler and the targeted
hardware here differ from the Borland/PC effort.)

Then, back to the PC and deployment under an ICE or emulator.
I.e., are there any portability issues in the *code* that
the target compiler tickles? (the BSD environments are
big *and* little endian, different alignment requirements,
etc. so I can see how code *really* adapts to those differences)
And, ultimately, does it work in the environment for which it
is *targeted*? Can I put the heap at 0xFFFF0000 and get it
to behave properly? *Will* the code check itself for insane
operating conditions as it is supposed to? etc.
 

Malcolm McLean

On Wednesday, March 14, 2012 at 08:45:26 UTC, Don Y wrote:
Oh, I think I see the distinction you are making. To you,
a "pure function" can have no side effects. It takes
inputs using call by value semantics and returns a result
maintaining no internal state in the process.
A function is a mapping of input to output. A procedure is a sequence of actions. Unfortunately these terms are in use to refer to void functions and those that return values. This distinction is important for the grammar of the language (if(foo()) is illegal if foo is a void function, legal if it returns a value), but not from a software engineering perspective (rewrite int foo(void) as void foo(int *answer) and you're not really changing anything).

A pure function changes the data passed to it, but nothing else. That means that, strictly, it can't call malloc(). Even if it just allocates a temporary buffer and frees it, that may change the state of the heap and cause the next call to malloc() to return null when it otherwise wouldn't have done, so it's not a pure function.
But that requirement means that a large number of functions either have to be declared non-pure, or they have to use fixed buffers and return buffer-exceeded errors. That's not really useful. So it's better to divide your functions into those that do IO and those that don't. That's the best place to do the division. But it does make malloc() a special function.
 

Kaz Kylheku

A pure function changes the data passed to it, but nothing else. That means
that, strictly, it can't call malloc().

Yeah, sure, and strictly, it can't move the instruction pointer, either,
because that is imperative.

What crap! Pure functions in pure functional languages allocate memory. E.g.
mapping a list to a different list. The different list has to come from
somewhere.

Implementations of anything pure depend on the acceptance of certain impurities
which are managed and hidden away so that they don't interact with the
implementation of the pure formalism.
 

Don Y

Hi Malcolm,

On Wednesday, March 14, 2012 at 08:45:26 UTC, Don Y wrote:

A function is a mapping of input to output. A procedure is a
sequence of actions. Unfortunately these terms are in use to refer
to void functions and those that return values. This distinction
is important for the grammar of the language (if(foo()) is illegal
if foo is a void function, legal if it returns a value), but not
from a software engineering perspective (rewrite int foo(void) as
void foo(int *answer) and you're not really changing anything).

Do you consider the alteration of *answer to have changed the
nature of foo?
A pure function changes the data passed to it, but nothing else.

In that case, my allocateMemory is a pure function. It only
changes the data passed to it: a heap_t (and some constants).
That means that, strictly, it can't call malloc(). Even if it just
allocates a temporary buffer and frees it, that may change the state
of the heap and cause the next call to malloc() to return null when
it otherwise wouldn't have done, so not a pure function.
Understood.

But that requirement means that a large number of functions either
have to be declared non-pure, or they have to use fixed buffers and
return buffer exceeded errors. That's not really useful. So it's
better to divide your functions into those that do IO and those
that don't. That's the best place to do the division.

Then, again, does allocateMemory(heap_t theHeap, ...) represent
a procedure doing I/O? When the only thing that changes is theHeap?
But it does make malloc() a special function.

Because malloc implicitly relies on a "global" object (the heap)?
 

Guest


<snip>

[testing memory allocators]
I can essentially do this by examining the results of
successive invocations designed to expose aspects of that
state. (see other post to Malcolm, this date)

Seems like doing things the hard way. *And* it's error prone. You need white-box knowledge to test your function thoroughly -- so why not make it explicit in your test harness?

Only-testing-the-public-interface can be taken to ridiculous extremes. Do they deduce the internal behaviour of air-traffic control systems by flying planes at them at different angles?

To use the over-stretched Civil Engineering analogy. People who build bridges build them out of known good components which they assemble into known good sub-systems that they in turn assemble into complete bridges or whatever.

Something complex like a memory allocator needs to be broken into chunks and/or needs to expose much of its internals to facilitate testing.
 

Guest

On 03/14/12 10:00 PM, Don Y wrote:


But you can't directly do it. You could easily extract and test the
functions that manipulate the free list, for example. I guess the
difference in approach is I would have written and tested those lower
level functions before combining them.

Me too. High-level testing has failed me too often. And debugging is /much/ harder.
 

Don Y

Hi Nick,

Seems like doing things the hard way. *And* it's error prone. You
need white-box knowledge to test your function thoroughly -- so why
not make it explicit in your test harness?

Doing so means you have now determined the *implementation*
as well as the functionality.

If you want to split out "sub-functions", do so. Then,
formally specify how they must work. And, formally
specify a test suite for them. Thereafter, the implementation
of the allocator must preserve all of these interfaces.

The whole point of regression testing is to formalize the
test cases and ensure changes to a function preserve its
interface semantics and functionality.

I write a CORDIC sqrt. I peek inside and notice that iteration
X for an input of FOO yields -- and *should* yield -- an
approximation having the value BAZ.

You rewrite the sqrt using Newton-Raphson. Suddenly your test
fails because it converges on the result differently.

Or, maybe my peek-inside test looks at some parameter of
the CORDIC implementation that simply doesn't EXIST in
the Newton-Raphson version.

You can *test* your code by single stepping through its
execution or "whatever". But *formalizing* that portion
of the test process is folly.
Only-testing-the-public-interface can be taken to ridiculous
extremes. Do they deduce the internal behaviour of air-traffic
control systems by flying planes at them at different angles?

An air traffic control system will have *numerous* layers
of FORMAL SPECIFICATION below "system test/specification".
All of those lower "interfaces" are tested. But, they
are also points of inflexibility. Any change to any of them
can result in changes to the entire *system* above and below
it in the abstraction hierarchy.

"Let's make malloc(3) return a pointer to a struct that contains
a pointer to the allocated memory and the size of that memory".

Suddenly, everything that uses malloc changes. (let's make
sqrt rely on successive iterations of a CORDIC algorithm;
you are free to change the implementation of the CORDIC
algorithm -- but, you have to ensure that the results it
produces agree with those that are being tested for)

If you FORMALIZE test suites then you formalize interfaces.
If that's what you want or need, then do so FORMALLY. Make
sure there is a formal document that contractually specifies
how the interface is expected to function so that:
- implementors know how to implement the function described
- consumers know how to *use* that function
- validation suites know how to exercise that function

If, OTOH, you don't want to go to all this trouble and tie
your future hands, implement whatever ad hoc testing YOU
want and keep it on a scrap of paper in your desk drawer
since it is not a formal part of the specification for the
function/module you are testing!
To use the over-stretched Civil Engineering analogy. People who
build bridges build them out of known good components which they
assemble into known good sub-systems that they in turn assemble
into complete bridges or whatever.

And each of those *components* is formally specified, tested
and "vouched for" (by the manufacturer). So a *lawyer* and
set of "expert witnesses" can testify that the reason the
bridge failed was because ACME Asphalt sold defective road
coating to the builder which didn't meet the specifications
that the supplier FORMALLY AND LEGALLY AGREED TO in his
CONTRACT with the builder.

You can't go and complain that ACME used *pink* asphalt (while
you EXPECTED black) -- unless it fails to meet some specified
criteria in that contract. Or, that ACME heated the raw material
using a wood fire instead of a gas oven in preparing the road
covering (unless the specification says "shall be heated in
a natural gas fired oven")

ACME has liberties as to how it approaches each specification
to which it has *contractually* agreed to be bound.
Something complex like a memory allocator needs to be broken
into chunks and/or needs to expose much of its internals to facilitate testing.

As I said, *you* can look at individual BITS in the CPU, memory,
etc. while *you* are testing it. But, the function formally
only has ONE interface that it must contractually satisfy.
 

Malcolm McLean

On Wednesday, March 14, 2012 at 22:57:11 UTC, Kaz Kylheku wrote:
What crap! Pure functions in pure functional languages allocate memory. E.g.
mapping a list to a different list. The different list has to come from
somewhere.
You need to learn to express yourself in a more cultured manner. Not everyone who disagrees with you is talking crap, and even if they are, often it's better not to say it.

We implement median(const double *x, int N) in the obvious way, by copying to a temporary and sorting. So if we run out of memory, what do we do? Returning NaN would seem a reasonable idea.
So median will sometimes return the midpoint of x, sometimes NaN, for identical values of x. So it's not, strictly, a pure function of x. It's a pure function of x and the state of the memory allocator. For identical x and identical states of the free list, it will always return the same result.
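
Spelled out, the "obvious" implementation looks something like this (a
sketch; the NaN fallback is just the error convention described above):

#include <stdlib.h>
#include <string.h>
#include <math.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Copy, sort, take the middle; fall back to NaN when the temporary
 * buffer can't be allocated -- which is exactly how the allocator's
 * state leaks into the result.                                      */
double median(const double *x, int N)
{
    double *tmp = malloc((size_t)N * sizeof *tmp);
    double m;

    if (tmp == NULL)
        return NAN;

    memcpy(tmp, x, (size_t)N * sizeof *tmp);
    qsort(tmp, (size_t)N, sizeof *tmp, cmp_double);
    m = (N % 2) ? tmp[N / 2] : (tmp[N / 2 - 1] + tmp[N / 2]) / 2.0;
    free(tmp);
    return m;
}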

However it's not easy to peep into the memory allocator, nor is it usually useful to do so. It does mean that we're treating the memory allocation system as something special. But that's reasonable. It's the infinite tape in our Turing machine.
 
