How do you avoid proving full coverage if it is in your contract to do so?
REH
No, I was suggesting a way to avoid that need!
James said: Which still doesn't begin to guarantee anything like full
coverage. Given a function which can be called with two
independent conditions, you still can easily end up with tests
that don't test all four combinations of the conditions.
Not to mention tests that never test the "exceptional" paths, e.g. when
new throws bad_alloc, etc.
There is no silver bullet, and writing the tests before writing
the code really doesn't buy you anything in general.
Just because the code was written to pass some set of tests
doesn't mean that there aren't branches that this set of tests
doesn't test. The most blatant example was in the old days,
when operator new returned a null pointer if there was no
memory. A lot of code back then forgot to test for null, and a
lot of test suites didn't test that they handled the case
correctly either. (Note that today, it wouldn't surprise me if
a lot of code would fail if a new expression raises bad_alloc.)
Of course, a lot of program specifications don't say what the
code should do if you run out of memory either.
With regards to coverage tools:
If you have a function along the lines:
void
f( bool c1, bool c2 )
{
    if ( c1 ) {
        x ;
    } else {
        y ;
    }
    if ( c2 ) {
        z ;
    } else {
        u ;
    }
}
The coverage tools I analysed back then would report 100%
coverage if you called the function twice, once with
f(true, true) and once with f(false, false), for example.
Which, of course, is completely wrong. Other cases which were
missed were things like never testing the case where a loop
executed 0 times. Not to speak of exceptions. My conclusion at
the time was that they were worthless; we needed good code
review, including review of the tests.
Hopefully, the situation has improved since then.
On Feb 5, 5:55 am, James Kanze <[email protected]> wrote:
It depends on the type of coverage you are interested in. For full
path coverage, your tool would be wrong (and sometimes it's an
impossible thing to ask for).
If you are only looking for statement coverage (or even MC/DC),
your tool is correct.
It may sound like a vacuous answer, but we (my teams and I)
have found that writing tests first results in a different
style of code where such cases tend not to occur.
I enjoy reminding people about those! These conditions are
problematic with any form of unit test; another pair of eyes
(code reviews or pair programming with collective code
ownership) is a must to make sure they don't fall through the
cracks.
It may not be a silver bullet, but I have found writing the
tests as part of the design process does improve quality by
making a more pleasurable experience for the developers.
Others' experiences may vary, but all the teams I know who
have adopted this approach have stuck with it.
100% full path coverage probably isn't always possible. But if
a tool reports path coverage, it should report it as a percentage
of the paths covered.
And what use is statement coverage? What does knowing that your
tests have exercised 95% of the statements in the code buy you?
Where I work, any code that has never been executed is considered a
bomb, and unreachable code is not allowed. What use is it? I'm not
arguing its merits. My customer requires it in safety-critical code,
so I do it.