James said:
Quite. Take a look at the SEI site, for example.
----8<-----------------------------------------
TDD is one practice of the agile development community. Its goal is "clean
code that works" [Beck 2002a]. In this practice, developers define
requirements for the piece they are assigned to construct by maintaining
close communication with the developers who are constructing related units
and writing test cases that serve as the specification for the unit. The
developer then writes and revises code until the unit passes all the tests.
The rhythm of TDD is very short cycles of these steps:
  1. Define a new test.
  2. Execute all tests.
  3. Write code to fix tests that fail.
  4. Execute all tests.
TDD has the advantage that the test code is always in sync with the product
code, because the test code defines the product code. The disadvantage of
TDD is that there is not a good method for determining whether the set of
test cases is complete, since the completeness of a test set is usually
determined by comparing it to the specification.
TDD is applicable to product line organizations provided it is applied to
units that are first defined in the context of the product line
architecture. TDD does not provide tools and techniques for balancing the
diverse quality attributes usually present in a product line. TDD can be
successful if applied to units that a small group of developers, often a
two-person or pair programming team, can produce in a timely manner. The
range of variability for the unit should also be sufficiently narrow to
allow for timely completion. The success of TDD depends on the availability
of tools, such as JUnit, to assist with development and automate testing.
----8<-----------------------------------------
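The four-step rhythm quoted above can be sketched in a few lines. This is a
minimal illustration, not anything from the SEI text: `price()` is a
hypothetical unit, and plain asserts stand in for an xUnit tool such as the
JUnit mentioned there.

```python
# One TDD cycle, sketched with a plain assert (price() is a hypothetical
# unit; in Java the same test would live in a JUnit test case).

# Step 1: define a new test that acts as the specification for the unit.
def test_price_applies_discount():
    assert price(200, discount_percent=10) == 180

# Step 2: executing all tests now fails, because price() does not exist yet.
# Step 3: write just enough code to make the failing test pass.
def price(amount, discount_percent=0):
    return amount * (100 - discount_percent) // 100

# Step 4: execute all tests again; once they pass, begin the next cycle.
test_price_applies_discount()
```

The point of the rhythm is that the test is written first and fails first;
the production code exists only to satisfy it.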
Exactly. Very, very limited in its range of applicability.
ftp://ftp.sei.cmu.edu/pub/documents/articles/pdf/xp-from-a-cmm-perspe...
XP satisfaction of key process areas, given the appropriate environment
Level  Satisfaction  Key process area
  2        ++        Requirements management
  2        ++        Software project planning
  2        ++        Software project tracking and oversight
  2        -         Software subcontract management
  2        +         Software quality assurance
  2        +         Software configuration management
  3        +         Organization process focus
  3        +         Organization process definition
  3        -         Training program
  3        -         Integrated software management
  3        ++        Software product engineering
  3        ++        Intergroup coordination
  3        ++        Peer reviews
  4        -         Quantitative process management
  4        -         Software quality management
  5        +         Defect prevention
  5        -         Technology change management
  5        -         Process change management

  ++  Largely addressed in XP (perhaps by inference)
  +   Partially addressed in XP
  -   Not addressed in XP
You'll note some very important issues that it doesn't address,
like overall quality. I'd also disagree about its addressing
software project planning or software project tracking and
oversight. Planning definitely involves specifications (how much
time and money to do what), and tracking and oversight presuppose
a plan: how can you know whether you're on schedule if you don't
have a schedule?
----8<-----------------------------------------
Note that this survey only compares XP's documentation and
verbiage to CMMi's verbiage. It is not a study of real
projects in action. So under "Training program", the -
indicates that the author, Dr. Mark Paulk, declines to
speculate that pair programming could serve as an ideal
training program.
Pair programming is a very effective *training* tool; it has
been more or less one of the standard training tools for some
20 or 30 years now (although it didn't have such a catchy name
in the past).
Next, all Agile projects, in practice, automate their entire
build chain. Maybe the CMMi has higher goals for its
"Integrated software management" KPA.
All projects I've worked on for the past twenty or thirty years
have automated the build chain. That's only a small part of the
problem. The issues of tracking progress and measuring quality
are less often addressed.
And note that "Defect prevention" gets only one +. The actual
response from folks who switched to XP (and did all its
practices, not just the convenient ones) is their code grows
very robust and difficult to break over time.
Do you have anything but anecdotal evidence? What tools do you
use to measure?
On a well managed project, I would estimate that one error per
100,000 lines of code would be about the minimum acceptable
level of quality. Up until that point, at least, reducing the
number of errors reduces cost. (I've heard of projects with
less than one error per million lines of code. These are the
ones, of course, where the developers don't even have access to
a compiler. It can be done. It is done for very critical
systems, where human life is at stake. But I don't think it's
really cost effective otherwise.)
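For concreteness, the arithmetic behind those density figures can be written
out (the numbers are the ones quoted in the discussion, not measured data):

```python
# Expected residual errors at a given defect density.
def expected_errors(lines_of_code, lines_per_error):
    return lines_of_code // lines_per_error

# "One error per 100,000 lines" on a million-line system:
ordinary = expected_errors(1_000_000, 100_000)    # 10 residual errors
# The safety-critical figure of one error per million lines:
critical = expected_errors(1_000_000, 1_000_000)  # 1 residual error
print(ordinary, critical)
```

The order-of-magnitude gap between the two figures is what the extra cost of
compiler-free, inspection-heavy development buys.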
Agile development provides aspects of design and teamwork
which the SEI is not yet capable of interpreting.
So-called "agile development" in fact takes us back to what was
current practice 20 or 30 years ago. SEI is roughly a
generation beyond that.
So, in conclusion, I don't think it's the Agile community who
is being immature here.
Just a serious step backwards.