Top-level code quality metrics to track


als

If management wants a measure of code quality - will these metrics say
it all? What are the acceptable values for these metrics?

Metric 1 – efferent coupling (acceptable value?)
Metric 2/3 – large methods (lines of code / instruction level)
Metric 4 – cyclomatic complexity (methods with CC higher than 15 are
hard to understand and maintain)
Metric 5 – variables per method (acceptable value?)
Metric 6 – test coverage (>70% coverage should be aimed for)
Metric 7 – dependencies (acceptable value?)
Metric 8 – LCOM (types where LCOMHS > 1.0 and NbFields > 10 and
NbMethods > 10 should be avoided)
Metric 9 – instability (acceptable range?)
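
For reference, here is a minimal C++ sketch of how I understand metrics 8
and 9 are usually defined (Robert Martin's instability and the
Henderson-Sellers variant of LCOM); the helper names are mine, purely for
illustration:

    // Instability (Robert Martin): I = Ce / (Ca + Ce), assuming Ca + Ce > 0.
    // Ca = afferent coupling (who depends on this component),
    // Ce = efferent coupling (what this component depends on).
    // 0 = maximally stable, 1 = maximally unstable.
    double instability(int ca, int ce)
    {
        return static_cast<double>(ce) / (ca + ce);
    }

    // LCOM HS (Henderson-Sellers), as I understand tools compute it:
    // m     = number of methods in the type (assumed > 1),
    // f     = number of fields in the type,
    // sumMF = total number of (method, field) accesses, summed over all fields.
    double lcom_hs(int m, int f, int sumMF)
    {
        return (m - static_cast<double>(sumMF) / f) / (m - 1);
    }

With that definition a perfectly cohesive type (every method touching every
field) scores 0, and values above 1 mean that, on average, each field is used
by less than one method, which is presumably where the LCOMHS > 1.0 threshold
comes from.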
 

Ian Collins

als said:
If management wants a measure of code quality - will these metrics say
it all? What are the acceptable values for these metrics?

The only metric I've ever worried about is how much of my team's time is
spent fixing defects. That is the true cost of poor quality.
 

Puppet_Sock

Ian Collins said:
The only metric I've ever worried about is how much of my team's time is
spent fixing defects. That is the true cost of poor quality.

I understand your position and have a lot of empathy for it.

The problem is, managers want something objective and
measurable. They want to be able to do something like
push a code through a program that counts something
and spits out a number. Then they want to be able to
look up on a table how good or bad that is.

The reason that's a problem is that very little is always true.
Take as an example, the OP's metric 5.
Metric 5 – variables per method (acceptable value?)

Sure this seems to be aiming at keeping function complexity
down. A monster function with hundreds of variables is
likely to be difficult to understand, difficult to test,
difficult to be confident it is correct, even difficult
to be confident there is a bug when things go wrong.
It will simply be difficult to comprehend.

Yet, now and then here and there, such a monster is the
best design choice available. It's often a symptom of
other poor choices upstream that have pushed you into
a corner. But in the corner you may be. And when the
choices are major rewrites of large chunks of code, and
making one rotund function, it's probably better to make
one rotund function.

Generally I have tried to do the following in parallel.

1) Educate managers that metrics should only be treated
as guides. They are useful but should not remove the
need for judgement and careful consideration of context.
When a manager treats metrics as "Pearl Harbor"
material, I get away from him as far and fast as I can.
2) Keep trying to educate all the design and coding
members of a project about good software design.
This usually means trying to get them to read up on
good design advice books, especially the ones that get
the best reviews at www.accu.org and places like that.
3) Keep trying to educate myself about same. I've never
stopped finding new things to improve my code.
Socks
 

als

Thanks for the detailed reply. Say, in a green-field project, if I am able
to track these metrics right from the beginning, I as a developer will
get a good indication of potential problem areas. Now, can we reduce
this metric set? Should we rather increase it? At the end of the day,
I want an easy health-check mechanism.

When it is not green field - say, you moved into an existing code base -
I think metrics like these help prioritize and spot danger! I would like
to shortlist a few such metrics.

We moved into a project and unsuspectingly committed to a feature list
for an agile sprint. Then there were dependency surprises, an incomplete
test harness, existing bugs (that we did not know about, but became
responsible for), etc. Does it not make sense to do a temperature
check before starting to work on the code base? Does it not make sense
to establish a paradigm for quality that is easy to gauge and
communicate?

Now, coming to the main point - what is the minimum set of such metrics?
What are the thresholds?
 

Andrea Crotti

als said:
Thanks for the detailed reply. Say, in a green-field project, if I am able
to track these metrics right from the beginning, I as a developer will
get a good indication of potential problem areas. Now, can we reduce
this metric set? Should we rather increase it? At the end of the day,
I want an easy health-check mechanism.

When it is not green field - say, you moved into an existing code base -
I think metrics like these help prioritize and spot danger! I would like
to shortlist a few such metrics.

We moved into a project and unsuspectingly committed to a feature list
for an agile sprint. Then there were dependency surprises, an incomplete
test harness, existing bugs (that we did not know about, but became
responsible for), etc. Does it not make sense to do a temperature
check before starting to work on the code base? Does it not make sense
to establish a paradigm for quality that is easy to gauge and
communicate?

Now, coming to the main point - what is the minimum set of such metrics?
What are the thresholds?

Well, often just looking at some code is enough to tell whether
it's bad or not.

Absurd names, strange or overly clever constructs, and crazy design are
not easily caught by automated tools.
 

Ian Collins

Puppet_Sock said:
I understand your position and have a lot of empathy for it.

The problem is, managers want something objective and
measurable. They want to be able to do something like
push a code through a program that counts something
and spits out a number. Then they want to be able to
look up on a table how good or bad that is.

I guess in my last couple of roles I was the management! But seriously,
the only measurable metric of code quality is the cost of supporting it.

When I took over a team of 50 where at least 10 developers were working
on field issues, I knew there was a problem, and I set myself a
measurable goal of reducing that to 5 (which is still too high, but
realistic with a large legacy code base).

Puppet_Sock said:
The reason that's a problem is that very little is always true.
Take as an example, the OP's metric 5.


Sure this seems to be aiming at keeping function complexity
down. A monster function with hundreds of variables is
likely to be difficult to understand, difficult to test,
difficult to be confident it is correct, even difficult
to be confident there is a bug when things go wrong.
It will simply be difficult to comprehend.

Yet, now and then here and there, such a monster is the
best design choice available. It's often a symptom of
other poor choices upstream that have pushed you into
a corner. But in the corner you may be. And when the
choices are major rewrites of large chunks of code, and
making one rotund function, it's probably better to make
one rotund function.

Which is one reason such metrics are a total waste of time.

Puppet_Sock said:
Generally I have tried to do the following in parallel.

1) Educate managers that metrics should only be treated
as guides. They are useful but should not remove the
need for judgement and careful consideration of context.
When a manager treats metrics as "Pearl Harbor"
material, I get away from him as far and fast as I can.
2) Keep trying to educate all the design and coding
members of a project about good software design.
This usually means trying to get them to read up on
good design advice books, especially the ones that get
the best reviews at www.accu.org and places like that.
3) Keep trying to educate myself about same. I've never
stopped finding new things to improve my code.

Good points.
 

als

Good point: then I would add that the #1 metric is "bug fix" effort as a
percentage of the development effort.
 

James Kanze

On 10/28/10 11:05 PM, als wrote:
[...]

Ian Collins said:
The only metric I've ever worried about is how much of my
team's time is spent fixing defects. That is the true cost of
poor quality.

You might also want some measure of customer satisfaction:).

You might also want to know why you're spending too much time on
defects, and what you should change to reduce it. Objective
measurements aren't without value, especially on large projects.
And provided that they are taken for what they are worth, and
not more.
 

James Kanze

Puppet_Sock said:
I understand your position and have a lot of empathy for it.
The problem is, managers want something objective and
measurable. They want to be able to do something like
push a code through a program that counts something
and spits out a number. Then they want to be able to
look up on a table how good or bad that is.

The amount of time (or the cost) spent fixing defects is, or
should be, very measurable. The problem is that while it tells
you you have a problem, and will tell you if a particular change
in your process has improved the situation (or made it worse),
it doesn't give you any hint as to what changes might be best to
try, and it's only applicable once the code has been deployed,
or at least reached integration (for large projects).

Some measurements have been proven effective, at least when used
reasonably and rationally. If, for example, you have functions
with cyclomatic complexities of 10 or more, and they don't fit
into the standard exceptions (large switches, etc.), then you
can reasonably expect improvements by reducing the complexity.
But it's not a panacea; while systematically large complexity
measures are almost certainly a sign of bad code, the reverse is
far from true; it's possible to write very bad code while still
keeping the measurement low. (Also note that in a largely OO
design, the average measurement will often be considerably lower
than the cases you're really interested in, as there will be
many cases where a virtual function will be almost trivial.)
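
To make the "standard exception" concrete, here is a toy C++ example of my
own (not taken from any real code base): by the usual counting, a plain
dispatch switch picks up one point of complexity per case, yet it stays
trivially easy to read and to test.

    enum class Op { Add, Sub, Mul, Div };

    // Cyclomatic complexity is 5 here (four cases plus the fall-through
    // path), and it grows with every operation added, but the function
    // remains obvious.
    double apply(Op op, double a, double b)
    {
        switch (op) {
        case Op::Add: return a + b;
        case Op::Sub: return a - b;
        case Op::Mul: return a * b;
        case Op::Div: return a / b;
        }
        return 0.0;  // unreachable with a valid Op
    }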

[...]
1) Educate managers that metrics should only be treated
as guides. They are useful but should not remove the
need for judgement and careful consideration of context.
When a manager treats metrics as "Pearl Harbor"
material, I get away from him as far and fast as I can.

Perhaps the most important point. The metrics shouldn't really
be used by managers as much as they should be used by the
programmers themselves, for example in code reviews. And of
course, they don't, and can't cover everything: to consider the
example I cut, a function using 10 variables (local variables,
arguments, members, etc.) named a1 through a10 will be a lot
harder to understand than one with 10 variables having
semantically significant names.
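
A contrived illustration of that last point (my own example, not anything
from the thread): both functions below score identically on any "variables
per method" count, but only one of them tells the reader what it does.

    // Same number of parameters and locals in both versions.
    double f(double a1, double a2, double a3)
    {
        double a4 = a1 * a2;
        return a4 + a3;
    }

    double grossPrice(double unitPrice, double quantity, double shipping)
    {
        double netAmount = unitPrice * quantity;
        return netAmount + shipping;
    }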
 

James Kanze

On 10/29/10 02:30 AM, Puppet_Sock wrote:
[...]

Ian Collins said:
But seriously, the only measurable metric of code quality is
the cost of supporting it.

That is simply false. There are a number of metrics which
measure specific aspects of code quality. They aren't perfect,
but they are useful. And I'm willing to bet that you use some
yourself: you certainly track how many tests fail, and how many
errors slip through your tests (independently of how much it
costs to fix each one). The final, and important metric might
be cost, but other metrics can give important information as to
why cost is too high. If more than about one or two percent of
developer time is spent fixing bugs (found in the field, or in
integration), then you know you're doing something wrong. And
once you've gotten there, it may be hard to establish what the
cost should be, e.g. for adding a new feature.
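
To put rough numbers on that, using the team of fifty mentioned earlier in
the thread: one or two percent of developer time is only about half a
developer to one full developer spent on bug fixing, whereas ten developers
working on field issues amounts to 20% of the team's capacity.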
 

Ian Collins

On 10/29/10 02:30 AM, Puppet_Sock wrote:
[...]

Ian Collins said:
But seriously, the only measurable metric of code quality is
the cost of supporting it.

James Kanze said:
That is simply false.

Let me qualify that further: the only measurable metric of code quality
senior management is interested in is cost of supporting it.

James Kanze said:
There are a number of metrics which
measure specific aspects of code quality. They aren't perfect,
but they are useful. And I'm willing to bet that you use some
yourself: you certainly track how many tests fail, and how many
errors slip through your tests (independently of how much it
costs to fix each one).

Yes, I'm sure we all do, but the OP was asking for "top-level code
quality metrics", so I answered with my manager hat on rather than my
developer one.
 

Jorgen Grahn

Andrea Crotti said:
Well, often just looking at some code is enough to tell whether
it's bad or not.

Absurd names, strange or overly clever constructs, and crazy design are
not easily caught by automated tools.

That's better than being handed a paper which says:

The cyclomatic complexity of this
code is 31.41592654.
Do you still want to continue?
(y/n)

But I find it difficult to tell the difference between messed up code
which you /can/ eventually learn to handle safely, and code which is
just hopeless. It can take many months before you can tell.

/Jorgen
 

Alf P. Steinbach /Usenet

* Ian Collins, on 01.11.2010 21:13:
On 10/29/10 02:30 AM, Puppet_Sock wrote:
[...]

Ian Collins said:
But seriously, the only measurable metric of code quality is
the cost of supporting it.

James Kanze said:
That is simply false.

Ian Collins said:
Let me qualify that further: the only measurable metric of code quality
senior management is interested in is cost of supporting it.

Much of this seems to me to not be directly measurable, more like guesswork, but
can't it be more important than direct, immediate costs?

Like, if the customer's immediate goal is "get the car fixed", Cheap Charlie's
shop will fix the car at very low cost and fast too, but it'll probably break
down soon again. Honest Joe's will also fix the car, but will utilize more
expensive original manufacturer's replacement parts, and will not do any shoddy
work even if it takes more time to do it right. These two shops gain different
reputations and different kinds of customer bases.

I can imagine that the time frame considered influences the notion of cost.

E.g., implementing a new window by copying the already repetitive (unfactored)
code for an existing one, modifying a bit and adding a bit of code, is cheap
in the short run, but due to copying of bugs and copying of high complexity and
sheer size (which then tends to just increase) can be costly in a longer
timeframe. But then in such a longer timeframe it can conceivably be Someone
Else's Problem? Then the Someone Else applies the same thinking...

Also I can imagine that the contracts influence the cost of e.g. doing work to
increase reusability. With one kind of contract and client relationship one may
leverage developed code and acquired competency and
whatever-the-term-is-for-a-team-that's-been-developed in other projects, while
with another kind of contract the code belongs to the client and acquired
competency is scattered when project is finished. And with e.g. a focus on each
single project again reusability (and work expended towards that) may be a net
cost, while with a focus on project ensembles and longer client relationship
code may at least be reused within projects in a project ensemble for the same
client.

And I can imagine indirect paths that influence cost, such as client
satisfaction influencing ability to gain new lucrative contracts.

Ian Collins said:
Yes, I'm sure we all do, but the OP was asking for "top-level code quality
metrics", so I answered with my manager hat on rather than my developer one.

How to increase the context that a manager takes into consideration?


Cheers,

- Alf (speculative)
 

Ian Collins

* Ian Collins, on 01.11.2010 21:13:
On 10/29/10 02:30 AM, Puppet_Sock wrote:
[...]

Ian Collins said:
But seriously, the only measurable metric of code quality is
the cost of supporting it.

James Kanze said:
That is simply false.

Ian Collins said:
Let me qualify that further: the only measurable metric of code
quality senior management is interested in is cost of supporting it.

Alf P. Steinbach said:
Much of this seems to me to not be directly measurable, more like
guesswork, but can't it be more important than direct, immediate costs?

The costs of supporting poor code aren't immediate, they are a perpetual
drain.

Alf P. Steinbach said:
Like, if the customer's immediate goal is "get the car fixed", Cheap
Charlie's shop will fix the car at very low cost and fast too, but it'll
probably break down soon again. Honest Joe's will also fix the car, but
will utilize more expensive original manufacturer's replacement parts,
and will not do any shoddy work even if it takes more time to do it
right. These two shops gain different reputations and different kinds of
customer bases.

I can imagine that the time frame considered influences the notion of cost.

The market sector does as well. If you are providing embedded
solutions, the cost of field replacement can be staggering.

Alf P. Steinbach said:
How to increase the context that a manager takes into consideration?

The best approach is to minimise it! I've had one MD who used to be an
engineer and thought he still was one who wanted too much information
and one who came from marketing and thought he understood software
development...
 

James Kanze

On 10/29/10 02:30 AM, Puppet_Sock wrote:
[...]

Ian Collins said:
But seriously, the only measurable metric of code quality is
the cost of supporting it.

James Kanze said:
That is simply false.

Ian Collins said:
Let me qualify that further: the only measurable metric of
code quality senior management is interested in is cost of
supporting it.

It's ultimately what the highest level of management is
interested in, yes. Although at that level, I'm not sure that
they distinguish between support costs and other costs: what
they're interested in is total cost (so if it were cheaper to
write bad code, then spend more on support, they'd favor that).

Mainly, at least. Some senior management may also be concerned
with image (no, or very few bugs leaving house, for example).

Ian Collins said:
Yes, I'm sure we all do, but the OP was asking for "top-level
code quality metrics", so I answered with my manager hat on
rather than my developer one.

OK. I didn't understand it that way, but I'm not sure what
"top-level" should mean in this case. None of the measurements
he mentioned seem "top-level", in the sense that they all are
concerned with low level details of the code.
 
