(Joel, please preserve attribution lines on your quoted material so we
can see who wrote it.)
Joel Hedlund said:
My presumption has been that in order to do proper test-driven
development I would have to make enormous test suites covering all
bases for my small hacks before I could get down and dirty with
coding
This presumption is entirely opposite to the truth. Test-driven
development has the following cycle:
- Red: write one test, watch it fail
- Green: write minimal code to satisfy all existing tests
- Refactor: clean up implementation without changing behaviour
(The colours refer to the graphical display for some unit test
runners: red for a failing test, green for a test pass.)
This cycle is very short; it's the exact opposite to your presumption
of "write huge unit test suites before doing any coding". For me, the
cycles are on the order of a few minutes long; maybe longer if I have
to think about the interface for the next piece of code.
In more detail, the sequence (with rationale) is as follows (a short
worked example in Python appears after the list):
- Write *one* test only, demonstrating one simple assertion about
the code you plan to write.
This requires you to think about exactly what your planned code
change (whether adding new features or fixing bugs) will do, at a
low level, in terms of the externally-visible behaviour, *before*
making changes to the code. The fact that it's a single yes-or-no
assertion keeps the code change small and easily testable.
- Run your automated unit test suite and watch your new test fail.
This ensures that your test actually exercises the code change
you're about to make, and that it will fail when that code isn't
present. Thus your new test becomes a regression test as well.
- Write the simplest thing that could possibly make the new test
pass.
This ensures that you write only the code absolutely necessary to
the new test, and conversely, that *all* your code changes exist
only to satisfy test cases. If, while making the code change, you
think the code should also do something extra, that's not allowed
at this point: you need a new test for that extra feature.
Your current code change must be focussed only on satisfying the
current test, and should do it in the way that lets you write that
code quickly, knowing that you'll be refactoring soon.
- Run your automated unit test suite and watch *all* tests pass.
This ensures that your code change both meets the new test and
doesn't regress any old ones. During this step, while any tests
are failing, you are only allowed to fix them — not add new
features — in the same vein as making the new test pass, by doing
the simplest thing that could possibly work.
Fixing a failing test might mean changing the code, or it might
mean changing the test case, if it has become obsolete due to a
change in requirements.
- Refactor the code unit to remove redundancy or other bad design.
This ensures that, while the code unit is fresh in your mind, you
clean it up as you proceed. Refactoring means that you change the
implementation of the code without changing its interface; all
existing tests, including the new one, must continue to pass. If
you cause any test to fail while refactoring, fix the code and
refactor again, until all tests pass.
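To make the cycle concrete, here is a minimal sketch of a single
Red/Green/Refactor pass in Python, using the standard unittest
module. The word_count function and its behaviour are invented
purely for illustration; the point is the order of the steps, not
the code itself.

    import unittest

    # Red: one new test, making one simple assertion about code that
    # does not exist yet. Running the suite at this point fails (with
    # a NameError), proving the test exercises the change we're about
    # to make.
    class WordCountTest(unittest.TestCase):
        def test_counts_words_in_simple_sentence(self):
            self.assertEqual(word_count("the quick brown fox"), 4)

    # Green: the simplest thing that could possibly make the new test
    # pass.
    def word_count(text):
        return len(text.split())

    # Refactor: clean up the implementation without changing its
    # interface (rename, remove duplication, and so on), re-running
    # the whole suite after each change so that every test still
    # passes before the next cycle begins.

    if __name__ == "__main__":
        unittest.main()

In real use the tests and the code unit would normally live in
separate modules, but the sequence of steps is exactly the same.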
That is: the discipline requires you to write *one* new test and
watch it fail ("Red"), then write minimal code to make *all* tests
pass ("Green"), and only then refactor the design while keeping all
tests passing ("Refactor"). Once the cycle is complete you move on to
the next code change, starting a new cycle by writing a new test.
This ensures that development ratchets forward inexorably, with the
code always in a well-designed, well-factored, tested state that
meets all current low-level requirements. It also encourages frequent
commits to version control, because at the end of any cycle you have
no loose ends.
The above effects — short coding cycles, constant tangible forward
motion, freedom to refactor code as you work on it, freedom to commit
working code to the VCS at the end of any cycle, an ever-increasing
test suite, finding regressed tests the moment you break them instead
of spending ages looking at follow-on symptoms — also have
significantly positive effects on the mood of the programmer.
When I started this discipline, I found to my surprise that I was just
as happy to see a failing test as I was to see all tests passing —
because it was proof that the test worked, and gave contextual
feedback that told me exactly what part of the code unit needed to be
changed, instead of requiring an unexpected, indefinite debugging
trek.
Yes. That chapter is a good demonstration of how unit tests work, but
the way it presents them unfortunately demonstrates the *wrong* way
to go about unit testing.
The unit test shown in that chapter was *not* written all at once,
as the chapter implies; rather, it was built up over time, alongside
the development of the code unit. I don't know whether
Pilgrim used the above cycle, but I'm positive he wrote the unit test
in small pieces while developing the code unit in correspondingly
small increments.
Joel Hedlund also said:
But if I understand you correctly, if I were to formalize what little
testing I do, so that I can add to a growing test suite for each
program as bugs are discovered and needs arise, would you consider
that proper test-driven development? (Or rather, is that how you do
it?)
Yes, "test-driven development" is pretty much synonymous with the
above tight cycle of development. If you're writing large amounts of
test code before writing the corresponding code unit, that's not
test-driven development — it's Big Design Up Front in disguise, and is
to be avoided.
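To connect this with the bug-driven workflow you describe: when a
bug is reported, the first step in the cycle is still a test. Write
a test that reproduces the bug, watch it fail, then fix the code
until the whole suite passes again; the new test then stays in the
suite as a regression guard. Here is a minimal sketch, with an
invented median function standing in for whatever your program
actually does:

    import unittest

    def median(values):
        # Existing code with a bug: it assumes its input is already
        # sorted.
        return values[len(values) // 2]

    class MedianRegressionTest(unittest.TestCase):
        # Red: written straight from the bug report; it fails against
        # the code above, proving it really reproduces the bug.
        def test_median_of_unsorted_input(self):
            self.assertEqual(median([3, 1, 2]), 2)

    # Green: the fix is then the simplest change that makes this test
    # (and every other existing test) pass, for example:
    #     return sorted(values)[len(values) // 2]

    if __name__ == "__main__":
        unittest.main()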
For more on test-driven development, the easiest site to start with is
<URL:http://www.testdriven.com/> — which, though much of its
community
is focussed on Java, still has much to say that is relevant to any
programmer trying to adopt the practice. Its "web links" section is
also a useful starting point.