Code coverage tool


ev

Hello,
We are looking for any testing tool that is capable of checking code
coverage for C, C++, and Java code, or at least for C and C++. We want
to know how much (percentage) of our code written in C/C++ is covered
in terms of function calls and line calls. We tried Rational
PureCoverage. It's excellent but has some limitations in our case.
Any idea would be greatly appreciated.
 

Hans

ev said:
Hello,
We are looking for any testing tool that is capable of checking code
coverage for C, C++, and Java code, or at least for C and C++. We want
to know how much (percentage) of our code written in C/C++ is covered
in terms of function calls and line calls. We tried Rational
PureCoverage. It's excellent but has some limitations in our case.
Any idea would be greatly appreciated.

I have tried the following:

Cantata++ (www.ipl.com/products)
DevPartner (www.compuware.com/products/devpartner/enterprise.htm)

Both worked for me. If you specify, in some more detail, what the
limitations of your case are, I could perhaps provide some more detail
on how well they would work for you.
 

Ian Collins

ev said:
Hello,
We are looking for any testing tool that is capable of checking code
coverage for C, C++, and Java code, or at least for C and C++. We want
to know how much (percentage) of our code written in C/C++ is covered
in terms of function calls and line calls. We tried Rational
PureCoverage. It's excellent but has some limitations in our case.
Any idea would be greatly appreciated.

Write the tests first, that way nothing gets written that isn't tested.
 

Gerhard Fiedler

Ian Collins said:
Write the tests first, that way nothing gets written that isn't tested.

How do you know whether every branch/condition in a function gets executed
when you run the tests that you wrote (independently of whether you wrote
them before or after you wrote the function)?

Gerhard
 

Erik Wikström

Gerhard said:
How do you know whether every branch/condition in a function gets executed
when you run the tests that you wrote (independently of whether you wrote
them before or after you wrote the function)?

Because you write the tests so that all branches will be taken. If you
cannot do that, it means you are not testing at a low enough level. Of
course, just because all unit tests pass does not mean that the units
work when integrated, which is why you need higher-level tests as well.
 

Pete Becker

Erik Wikström said:
Because you write the tests so that all branches will be taken. If you
cannot do that, it means you are not testing at a low enough level.

Or you changed some code and didn't update the tests, or the tests
missed some subtle condition that the code handles. That's why you do
coverage analysis.
 

Ian Collins

Gerhard said:
How do you know whether every branch/condition in a function gets executed
when you run the tests that you wrote (independently of whether you wrote
them before or after you wrote the function)?
My contention is that *when* you write the tests is important: you can only
get full coverage without tools when they are written first. The only
production code that gets written is written to pass tests.

This way, there will not be a branch unless it was required to pass a test.
 

Ian Collins

Pete said:
Or you changed some code and didn't update the tests, or the tests
missed some subtle condition that the code handles. That's why you do
coverage analysis.
If you changed some code and didn't update the tests, the tests would
fail. If the tests are written first and the code written to pass them,
there will not be any conditions that the code handles but not the tests.
 

Pete Becker

Ian Collins said:
If you changed some code and didn't update the tests, the tests would
fail.

Maybe, maybe not.
If the tests are written first and the code written to pass them,
there will not be any conditions that the code handles but not the tests.

Nonsense.
 

Pete Becker

Maybe, maybe not.


Nonsense.

Okay, that was a bit harsh. Nevertheless: without making many
assumptions about methodology, that statement is far too sweeping. It
simply isn't true in general.
 

Gerhard Fiedler

Ian Collins said:
My contention is that *when* you write the tests is important: you can only
get full coverage without tools when they are written first. The only
production code that gets written is written to pass tests.

This way, there will not be a branch unless it was required to pass a
test.

I think I understand what you mean, but I still think that's (partially)
wrong.

If you write complete tests (that is, you test everything that is required
to work), you don't need coverage analysis, because the code simply does
what it needs to do if it passes the tests -- and the tests reflect the
requirements 100% (and the requirements are complete :)

If this is given, it doesn't seem to matter whether you write the tests
before or after the code.

And even if it is given, there is no guarantee (independently of whether you
write the tests before or after the code) that there is not some code that
doesn't get executed by the tests. It just doesn't matter -- if the tests
reflect the requirements 100%.

There doesn't seem to be any mechanism that guarantees that the code
written is the minimum that is required to pass the tests.

Gerhard
 

REH

Ian Collins said:
My contention is that *when* you write the tests is important: you can only
get full coverage without tools when they are written first. The only
production code that gets written is written to pass tests.

This way, there will not be a branch unless it was required to pass a test.

I think you misunderstand his needs. He doesn't need a tool to
generate his test cases. He needs one to prove that his tests execute
all statements in his code. We have to do this all the time for
DO-178 and DEFSTAN projects. Doing the analysis by hand for a million
lines of code (heck, even a few thousand) is too time consuming and
error prone.

REH
 

Ian Collins

Then you have simply refactored the code. In that case, your tests will
still cover the new code. If they don't, you have added superfluous code.
Pete said:
Okay, that was a bit harsh. Nevertheless: without making many
assumptions about methodology, that statement is far too sweeping. It
simply isn't true in general.
It was a bit! It may not be true in general, but with practice and
care, it can be.
 

Ian Collins

Gerhard said:
On 2008-02-04 16:22:14, Ian Collins wrote:


I think I understand what you mean, but I still think that's (partially)
wrong.

If you write complete tests (that is, you test everything that is required
to work), you don't need coverage analysis, because the code simply does
what it needs to do if it passes the tests -- and the tests reflect 100%
the requirements (and the requirements are complete :)

If this is given, it doesn't seem to matter whether you write the tests
before or after the code.
Oh but it does; writing tests after the event is notoriously tedious and
error prone. That is one of the main reasons for writing code test-first:
writing the tests becomes part of the creative process, not a chore.
And if it is given, there is no guarantee (independently of whether you
write the tests before or after the code) that there is not some code that
doesn't get executed by the tests. It just doesn't matter -- if the tests
reflect the requirements 100%.
Correct, it's simply dead code.
There doesn't seem to be any mechanism that guarantees that the code
written is the minimum that is required to pass the tests.
Human nature: people don't go out of their way to write more code than
they require. I've seen plenty of projects without unit tests that have
unused and untested features "because we might need it later". Writing
tests first discourages this behaviour.
 

James Kanze

Ian Collins said:
If you changed some code and didn't update the tests, the
tests would fail. If the tests are written first and the code
written to pass them, there will not be any conditions that
the code handles but not the tests.

Just because the code was written to pass some set of tests
doesn't mean that there aren't branches that this set of tests
doesn't test. The most blatant example was in the old days,
when operator new returned a null pointer if there was no
memory. A lot of code back then forgot to test for null, and a
lot of test suites didn't test that they handled the case
correctly either. (Note that today, it wouldn't surprise me if
a lot of code would fail if a new expression raises bad_alloc.)

Of course, a lot of program specifications don't say what the
code should do if you run out of memory either.

With regards to coverage tools:
If you have a function along the lines of:

void
f( bool c1, bool c2 )
{
    if ( c1 ) {
        x ;
    } else {
        y ;
    }
    if ( c2 ) {
        z ;
    } else {
        u ;
    }
}

The coverage tools I analysed back then would report 100%
coverage if you called the function twice, once with
f(true, true) and once with f(false, false), for example.
Which, of course, is completely wrong. Other cases which were
missed were things like never testing the case where a loop
executed 0 times. Not to speak of exceptions. My conclusion at
the time was that they were worthless; we needed good code
review, including review of the tests.

Hopefully, the situation has improved since then.
 

James Kanze

Ian Collins said:
My contention is *when* you write the tests is important, you
can only get full coverage without tools when they are written
first. The only production code that gets written is written
to pass tests.
This way, there will not be a branch unless it was required to
pass a test.

Which still doesn't begin to guarantee anything like full
coverage. Given a function which can be called with two
independent conditions, you still can easily end up with tests
that don't test all four combinations of the conditions. Not to
mention tests that never test the "exceptional" paths, e.g. when
new throws bad_alloc, etc.

There is no silver bullet, and writing the tests before writing
the code really doesn't buy you anything in general.
 

Pete Becker

Ian Collins said:
Then you have simply refactored the code. In that case, your tests will
still cover the new code. If they don't, you have added superfluous code.

It may or may not be superfluous, but it shows exactly what you claim
doesn't happen: that there is code that isn't covered by the tests.
 

James Kanze

On 2008-02-04 22:33:44 -0500, Ian Collins <[email protected]> said:

[...]
It may or may not be superfluous, but it shows exactly what
you claim doesn't happen: that there is code that isn't
covered by the tests.

It doesn't even have to be superfluous. You have a function
which works:

void
f()
{
    doSomething() ;
}

You have an exhaustive test for it (probably wishful thinking
already, but let's say that you do). First modification: add
special handling up front for condition a:

void
f()
{
    if ( a ) {
        pre() ;
    }
    doSomething() ;
}

You extend the tests to handle the case where a is true. (You
now have complete tests with a and with !a.) Second
modification: add special handling at the end for condition b:

void
f()
{
    if ( a ) {
        pre() ;
    }
    doSomething() ;
    if ( b ) {
        post() ;
    }
}

You extend the tests to handle the case where b is true. You
now have tests for !a && !b (the initial case), a && !b, and !a
&& b. There's still one test you've left out. (If pre() or
post() can throw, there are others as well.)

This claim that writing the tests first will somehow
miraculously ensure that the code works perfectly, and is well
written and maintainable, just doesn't hold up in practice.
(Let's not forget that back in the good old days, when you
"compiled" by submitting your deck of cards to the computer
operator, and got the results back the next morning, it was
standard practice to write your tests first. And to consider
any code which passed the tests "correct". Didn't work then,
and it doesn't work now.)
 

James Kanze

Ian Collins said:
Oh but it does; writing tests after the event is notoriously
tedious and error prone. That is one of the main reasons for
writing code test-first: writing the tests becomes part of the
creative process, not a chore.

That's a different argument. Different people will find
different organizations tedious or not. You may prefer writing
tests before, but that doesn't mean everyone does, and it
certainly doesn't imply more or less quality. The quality of
the tests comes from the fact that they are code reviewed with
the code, against the specifications.
Ian Collins said:
Correct, it's simply dead code.

The problem is that tests can't cover 100% of the cases.
Floating point and threading are the two classical examples, but
I'm sure there are others. More to the point, how do you ensure
that the tests are complete for what you can test?
Ian Collins said:
Human nature, people don't go out of their way to write more
code than they require.

Regretfully, that especially includes tests. And since, when you
add a feature, you only need to test that feature, a lot of
tests concerning its interaction with other features get left
out.
 
