Hi Ian,
It may produce a result, but more often than not, the exact value is of
little or no consequence.
Is the exact value of sqrt()'s result of no consequence?
If you start to test for specific values based
on knowledge of the internals, you end up with a very fragile test.
No! I am *not* looking at the internals. I am looking at
the SPECIFIED, CONTRACTUAL BEHAVIOR of the function. Are
you "looking at internals" when you test the return value
of sqrt(5)? The sqrt function returns a non-negative value
for all non-negative arguments such that the value, when
multiplied by itself, yields the original argument. THAT
is what you test for!
A *good* test picks values that are likely to cause problems
in an implementation and/or *this* implementation.
A memory allocator is no different. Pretend the "addresses"
returned were integers and the specification was written in
terms of *integers*. You wouldn't throw an arbitrary input
value at it and not *carefully* examine the result to see that
it complies with EXACTLY what you would expect, given its
contract.
And, if your goal was to ensure that the function behaved well
in the wide variety of cases that it might be subjected to
once deployed, you would think very carefully about *which*
conditions you put it in and how you exercised them.
If you developed a general purpose (4 function) calculator,
would you use 1+1, 8-5, 3*4, etc. as your test cases? Or,
would you think: "Hmmm... I wonder if it handles negative
arguments properly? And negative *results*? And, what happens
if I give it something too big -- 99999999+1? Or, something
'undefined' -- 345/0?"
"Settling" for what the linker gives you as the location for
the heap (as well as its *size*!) means you will *never* test
the code with any degree of confidence GIVEN THAT IT WILL
BE DEPLOYED IN ENVIRONMENTS OTHER THAN YOURS!
How do you test sqrt? Don't *you* pick the values that are
of interest to you in proving correct operation of the
function? Don't *you* verify that the results are "as
expected"?
Granted, for an invertible function you could potentially do
a Monte Carlo simulation, having the test suite check the
results (i.e., if result*result =~ argument, then pass).
How would you apply that "random" approach to testing a
non-invertible function? Or, one whose inverse function
is of a complexity greater than or equal to the original
function (e.g., a secure hash)?
harness. I think this may have been what you were saying elsewhere. If
your tests become bonded to a single implementation, you are stuck with
it. If your tests don't assume specific values, you are free to change
If your tests don't assume specific values, then you can't guarantee
where in the function's *domain* you are testing.
the internals of the function as long as you don't break its published
behaviour. Obviously if your internal implementation is specified you
are stuck.
The internal implementation need not be "fixed". You are still
free to implement the function however you want -- as long as
the external interface and the guarantees that it affords the
developer are ALWAYS met.
Memory allocators are definitely one of those cases where one size does
not fit all, just look at the range of malloc libraries available for a
typical non-embedded environment. They would all pass the same test
suite assuming the tests didn't make assumptions about the implementation.
No. It would be easier for *your* "implementation agnostic" approach
to test *all* of those allocators with a given test suite. Because
you don't *know/care* how they are making their choices! "Did I
get a result? OK. Does it overlap any other results I have had?
No. Does the chunk returned fit entirely in the *stated* (by
the linker!) area of the heap? Yes. Lather, rinse, repeat."
You break 0x32440 down into platform-independent variables.
And how is that any different from platform *dependent*
variables? You still need to "tune" the values to fit
the *particular* (test) deployment. And, you have no
guarantee that you are poking around all the right
"dark corners" for the routine -- since you are not in
control of the conditions that your function is being
subjected to!
Until someone introduces a critical optimisation that changes the layout...
Until someone changes what "sqrt" means!
I *only* -- though THOROUGHLY -- test *published*, contractual
guarantees that the function *must* honor. What goes on under the
hood is never examined, directly.
You can implement sqrt with Newton-Raphson, large lookup tables,
CORDIC, etc. Your test suite DOESN'T CARE! As long as sqrt(foo)
truly produces the square root of foo!
I can change the internals of the memory allocator any way I choose
AS LONG AS the results returned for a given set of operating
conditions and input variables REMAIN UNCHANGED.
E.g., the allocator makes no claims about the *content* of the
blocks of memory allocated. I could add some code to zero them
out, fill them with pseudo-random numbers -- or just leave
their previous contents "as is". The test suite can't *test*
any of those conditions because none are guaranteed in the
contract. If I find it more efficient to leave assorted
crap *in* the chunks, the test suite can't complain!
(If K&R's malloc wants to do a "first fit" selection strategy,
there's nothing preventing it! Nor anything preventing it
from changing that, in a later edition, to a "last fit".
The test suite for *their* malloc would have to accept both
sets of results as equally valid.)
I still think you are testing at too high a level, test the functions
that manipulate your internal state independently.
There *is* nothing else. The allocator and the deallocator
are the *only* things that massage the internal state of the
heap!