Phlip
Bart said: Am I to understand you have two different sets of testcases for
TDD testing and QA testing at the unit level?
_I_ only use TDD tests, and don't have a QA department. At my last gig, the QA
guy wrote Watir tests, in a different test batch, so the unfortunate answer
there is Yes.
If I were in charge of my last gig (synonymous with "If I were still AT my
last gig"), the Grand Wazoo Test Batch would have run all of them.
The correct way to do things is for the QA department to add soak tests and
black-box tests to the same test batch the developers use. That means those
tests also help continuous integration.
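Just as a sketch of what "the same test batch" means in practice - assuming
GoogleTest, which nobody in this thread has specified, and with hypothetical
file and test names - both groups' tests link into one test binary, so one
command runs everything:

    // dev_unit_tests.cpp -- developer-written TDD tests.
    #include <gtest/gtest.h>

    TEST(TrackOrder, SortsByFilename) {
        // ... ordinary fast unit assertions ...
    }

    // qa_soak_tests.cpp -- QA-written soak and black-box tests, same batch.
    #include <gtest/gtest.h>

    TEST(Soak, PlaysForAnHourWithoutLeaking) {
        // ... slower end-to-end checks; still reported in the same run ...
    }

    // test_main.cpp -- one main() for the whole batch; CI builds and runs
    // this single binary, so a QA failure breaks the build like any other.
    #include <gtest/gtest.h>

    int main(int argc, char** argv) {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }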
Could you enlighten me as to what exactly you mean by TDD testing and QA
testing? It seems we might be using slightly different definitions
(which is not all that uncommon when discussing agile methodologies).
Some TDD verbiage says "add a test case that can only pass if the correct code
is then written." That's wrong; no test case can defeat a deliberate attempt to
write incorrect production code which fools the test.
All I ask is that the test fail for the right reason, and that the correct
line of code be one of many that could pass the test. It's all good.
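To make that concrete - a minimal sketch, again assuming GoogleTest and a
hypothetical add() function - the first test below fails for the right
reason before the code exists, and both the correct "return a + b;" and a
deliberately bogus "return 7;" would pass it. That is fine, because the
second test, with different inputs, kills the bogus version:

    #include <gtest/gtest.h>

    int add(int a, int b);   // not written yet: the first run fails to link,
                             // then fails the assertion -- the right reasons

    TEST(Add, TwoPlusFive) {
        EXPECT_EQ(7, add(2, 5));   // "return 7;" fools this one...
    }

    TEST(Add, OnePlusOne) {
        EXPECT_EQ(2, add(1, 1));   // ...but not this one
    }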
The role of QA testing is to add the kind of test cases James Kanze likes.
TDD can approach branch coverage (on a greenfield project). The goal should
be path coverage: running every combination of every branch.
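To illustrate the difference with a hypothetical two-branch function: two
tests can cover every branch arm, but it takes four to cover every path.

    // Hypothetical: two independent branches -> four paths through the code.
    int ticket_price(bool member, bool weekend) {
        int price = 10;
        if (member)  price -= 5;   // branch A
        if (weekend) price += 2;   // branch B
        return price;
    }

    // Branch coverage: two tests, (true, true) and (false, false), execute
    // both arms of both branches.
    // Path coverage: four tests, one per combination of A and B.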
One test or multiple tests?
Where is this 'mu
And no. I don't always know the test to write. For example, if the next
line only consists of a brace.
You should have already typed it (otherwise your previous test can't run). Stop
playing the "literal interpretation" game - it's not the lines of code, from top
to bottom. It's the edits.
You should have typed that } when you typed its {, when you were passing the
test which required contents in that block.
Furthermore, it is not possible to test C or C++ code on the level of
individual lines.
In the quoted statement, the "next line of code" is shorthand for "the next
small set of edits". You know that; don't just play with the verbiage.
When fixing a bug, I would first extend the existing testcases with one
that fails for the same reasons (and under the same conditions) as the
actual problem. Then I change as many lines as needed to make the entire
suite of testcases pass again.
Capture bugs with tests. However, if I were fixing that bug, I would do anything
to return to all-tests-passing as soon as possible, even if it meant adding an
"if" statement just after the bug.
Once you have all-tests-passing, you then have many more options than just
heroically debugging away. You could, for example, write another test case which
breaks the "if" statement you just wrote, and which forces you to improve the fix.
The test itself is usually straightforward. The hard part, in my experience,
is establishing the correct environmental conditions.
Right - expensive setup is a design smell, especially in "legacy" applications
that were not designed for test.
Some anecdotal evidence:
- In a recent project I worked on a system for playing music files from a
USB stick. After the acceptance tests, we got a failure report that for a
very specific configuration of the filesystem on the stick, we would select
the files for playback in the wrong order. Creating the testcase itself was
trivial (a copy-paste of the existing testcase for that feature), but
setting up the correct (simulated) filesystem in the stubs was a lot more
work. Definitely more than half an hour.
Thou shalt mock hardware. The test was not easy to write because you had
"deferred maintenance" in the test rig. You had not _already_ built a mockup
of the filesystem for your first couple of tests.
Some say that mocking the filesystem is a best practice. In my rubric it's not
strictly "hardware"; I would also have deferred the maintenance. I just would
not have blamed TDD when my decision came back to bite me!
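A sketch of what that pre-built mockup might look like - the interface names
are hypothetical; the thread gives none: the player reads the stick through
a narrow FileSystem seam, and the tests supply an in-memory fake, so "a very
specific configuration of the filesystem" becomes just another fixture.

    #include <string>
    #include <vector>

    // Narrow seam between the player and the USB stick.
    struct FileSystem {
        virtual ~FileSystem() = default;
        virtual std::vector<std::string> list(const std::string& dir) const = 0;
    };

    // In-memory fake used by the tests; no hardware, no real mounts.
    struct FakeFileSystem : FileSystem {
        std::vector<std::string> entries;
        std::vector<std::string> list(const std::string&) const override {
            return entries;   // whatever exotic layout the test needs
        }
    };

With that seam in place, reproducing the failure report is a matter of
loading the fake with the reported directory layout and asserting on the
playback order.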
- Longer ago, I worked as a test engineer writing black-box conformance
testcases. 90% of the code in each test script was dedicated to getting the
DUT into a state where we could verify our requirement. (And no, common code
did not help, as the conditions were just different enough each time.)
However, if I were writing a TDD test on a method deep inside that DUT, that
method should be decoupled from all the other methods that require all that
environment. End-to-end tests have more setup than low-level tests on clean code.
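In code, that decoupling is usually just parameter passing. A hypothetical
example (none of these names come from the thread), again using GoogleTest:
the deep method takes the little it needs as plain values, so a test can
call it directly instead of dragging the whole DUT into the required state.

    #include <gtest/gtest.h>

    // Hard to test: int Device::current_channel_priority() has to reach out
    // into the whole device state to compute its answer.

    // Easy to test: the same logic takes what it needs as plain values.
    int channel_priority(int signal_strength, bool encrypted) {
        return (encrypted ? 100 : 0) + signal_strength;
    }

    TEST(ChannelPriority, EncryptedChannelsOutrankOpenOnes) {
        EXPECT_GT(channel_priority(10, true), channel_priority(90, false));
    }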
And I also want a very visible PASS/FAIL indication, which a failure to
build does not give. At best, I consider a failure to build equivalent
to an ERROR verdict (testcase has detected an internal problem).
Right: Editors should treat test failures both as syntax-error diagnostics
AND as debugger breakpoints.
Because, with my way of writing code, the code would not compile before
I finished writing the function.
I write blocks in the order
- opening brace
- block contents
- closing brace.
Instead, type the opening brace, then the closing brace, and then back-arrow
up into the block. That's just a best keyboarding practice, anyway.
But no, I don't mean run the test after each vertical line of code - stop
pretending you think I meant that.
I guess, with your insistence on frequent testing, you hit that button
after each character you typed in the editor.
There are those who have experimented with "zero button testing", where the
editor constantly checks the code state as you type. Amber means its syntax
is broken, green means its relevant tests pass, and red means they don't. A
red bar should then give you the option to navigate to the failing tests, to
the failing assertions, or to the stack trace in your code - or not.
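A minimal sketch of the "rerun on every change" half of that idea - nothing
here reflects any specific editor; the directory and command are placeholders,
and the amber/green/red display would be the editor's job:

    #include <algorithm>
    #include <chrono>
    #include <cstdlib>
    #include <filesystem>
    #include <thread>

    namespace fs = std::filesystem;

    // Newest write time anywhere under the source tree.
    fs::file_time_type newest(const fs::path& root) {
        fs::file_time_type latest{};
        for (const auto& entry : fs::recursive_directory_iterator(root))
            if (entry.is_regular_file())
                latest = std::max(latest, entry.last_write_time());
        return latest;
    }

    int main() {
        auto stamp = newest("src");                   // placeholder directory
        for (;;) {
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
            const auto now = newest("src");
            if (now != stamp) {
                stamp = now;
                std::system("make test");             // placeholder command
            }
        }
    }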
"Fake it till you make it" does not tell you which conditions you forgot
to fake and did not test.
Absolutely. Yet it forces you to at least think about how to write tests with
different attitudes and assumptions. Cloning test cases can only take you so far!