TDD/unit testing question


Peter Szinek

Hello,

Slightly OT, but since I think Unit testing/TDD is a commonly accepted
methodology here, I'll give it a shot:

I am writing a web extraction framework in Ruby which is now relatively
big. I am doing a kind of semi-TDD (i.e. some things are done pure TDD,
and for the rest I write the tests later) - so in the end everything
should be covered by tests.

I also have tons of black-box tests (i.e. for a specified input and
output, yell if the output of the current run differs) and, as the code
grows, a great load of unit tests as well.
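
Just to make it concrete, here is a stripped-down sketch of what I mean by
such a black-box test (the Extractor call and the fixture files are
placeholders; the real tests compare against recorded runs):

    require 'test/unit'
    require 'yaml'

    class RegressionTest < Test::Unit::TestCase
      # Black-box check: run the extractor on a saved input page and compare
      # the result against the output recorded from a known-good run.
      def test_output_matches_recorded_run
        input    = File.read('fixtures/products.html')
        expected = File.read('fixtures/products_expected.yml')
        assert_equal expected, Extractor.run(input).to_yaml
      end
    end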

My problem is that since the code is changing a lot (sometimes quite
brutal refactoring: dropping some classes, joining others, replacing an
inheritance hierarchy with composition or vice versa), 20-30% (maybe
more) of my development time goes into updating all the tests (which is
kind of annoying) and only the rest into actually writing the code.

Anyway, I have caught so many errors when running all the tests between
refactorings that overall development is still much faster than if I had
to bug-hunt for all the obscure weirdness that the tests catch so easily.
I guess this will be even more true as the code grows.

So my question is not whether I should follow these practices, but rather:

1) Am I doing something wrong? (i.e. is it normal that I spend so much
time rewriting/modifying/updating the tests?)
2) Are there tools/methods/ideas to speed things up? Or is this just how
it is?
3) Or am I just too new to these techniques, and a skilled TDDist
marches much faster?

I have seen people (I am concretely talking about Java coders here, but
the language does not matter) who gave up TDD/writing lots of unit tests
because of this (and of course because of a boss who wanted a solution
quickly and did not care about long-term problems).

What do you think?

Peter

__
http://www.rubyrailways.com
 

James Mead

Peter said:
1) Am I doing something wrong? (i.e. is it normal that I spend so much
time rewriting/modifying/updating the tests?)
2) Are there tools/methods/ideas to speed things up? Or is this just how
it is?
3) Or am I just too new to these techniques, and a skilled TDDist
marches much faster?

It's difficult to give specific comments without actually seeing the
codebase and tests, but here are some ideas that might help...

- Don't make a big change to the code and then go through fixing loads
of broken tests. This is always disheartening and makes you feel like
you are spending a lot of time fixing tests. Instead, break the big
change down into smaller steps: make a small change to the tests and
then fix the code. You should be alternating back and forth between
test and code frequently. Don't get too far away from a green bar, i.e.
try to have as few broken tests as possible at any one time.

- If you find that making a change to a single class breaks loads of
seemingly unrelated tests, this is probably an indication that too much
code is being covered by each test. It may also indicate a high level of
coupling in your design. Try to keep your unit tests as fine-grained as
possible and ensure they really are black-box tests, i.e. not coupled to
the implementation. Using mock objects in your unit tests is a very
useful technique for keeping the tests as focused as possible. If you
need to write coarser-grained tests, try to write them against a
relatively stable interface, e.g. the public API you will expose.
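
For instance, with a mocking library such as Mocha (just a rough sketch;
Scraper, title_of and the fetcher collaborator are invented names, and the
real test would depend on your design):

    require 'test/unit'
    require 'rubygems'
    require 'mocha'

    class ScraperTest < Test::Unit::TestCase
      def test_extracts_title_from_fetched_page
        # The HTTP layer is replaced by a mock, so this test exercises only
        # the extraction logic and won't break when the fetching internals
        # change.
        fetcher = mock('fetcher')
        fetcher.expects(:fetch).with('http://example.com/').
                returns('<html><title>Hello</title></html>')

        scraper = Scraper.new(fetcher)
        assert_equal 'Hello', scraper.title_of('http://example.com/')
      end
    end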
 

Eric Hodel

James said:
- If you find that making a change to a single class breaks loads of
seemingly unrelated tests, this is probably an indication that too much
code is being covered by each test. It may also indicate a high level of
coupling in your design. Try to keep your unit tests as fine-grained as
possible and ensure they really are black-box tests, i.e. not coupled to
the implementation. Using mock objects in your unit tests is a very
useful technique for keeping the tests as focused as possible. If you
need to write coarser-grained tests, try to write them against a
relatively stable interface, e.g. the public API you will expose.

When I was new to TDD and unit testing this was my biggest problem,
especially the coupling side. I've evolved my coding style to
produce shorter, simpler methods that are easier to test (usually
less than ten lines, very rarely over 25 lines). Also, I heavily
refactor tests to move re-usable test code into utility methods.
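
For example (a simplified sketch; PriceExtractor is invented just to show
the shape of it):

    require 'test/unit'
    require 'rubygems'
    require 'hpricot'

    class PriceExtractorTest < Test::Unit::TestCase
      def test_finds_prices
        doc = parse_fixture('<span class="price">9.99</span>')
        assert_equal ['9.99'], PriceExtractor.new(doc).prices
      end

      def test_ignores_empty_price_tags
        doc = parse_fixture('<span class="price"></span>')
        assert_equal [], PriceExtractor.new(doc).prices
      end

      private

      # Shared test code lives in one utility method, so a change to the
      # parsing setup touches one place instead of every test.
      def parse_fixture(html)
        Hpricot(html)
      end
    end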
 

Trans

Hi --

Peter said:
My problem is that since the code is changing a lot (sometimes quite
brutal refactoring: dropping some classes, joining others, replacing an
inheritance hierarchy with composition or vice versa), 20-30% (maybe
more) of my development time goes into updating all the tests (which is
kind of annoying) and only the rest into actually writing the code.

Anyway, I have caught so many errors when running all the tests between
refactorings that overall development is still much faster than if I had
to bug-hunt for all the obscure weirdness that the tests catch so easily.
I guess this will be even more true as the code grows.

So my question is not whether I should follow these practices, but rather:

1) Am I doing something wrong? (i.e. is it normal that I spend so much
time rewriting/modifying/updating the tests?)
2) Are there tools/methods/ideas to speed things up? Or is this just how
it is?
3) Or am I just too new to these techniques, and a skilled TDDist
marches much faster?

What do you think?

I sympathize. I think TDD makes more sense for applications in which the
general structure of the program is a given. If you're still working out
"what's the best API?", then you are probably better off putting TDD on
the back burner until you have it mostly worked out. Then you can go
back and put in tests. I know the TDD concept promotes putting the test
before the code, and really one should strive to do so (I don't do it
enough myself), but that doesn't mean you always have to. Really, the
most important thing is that you end up with tests.

T.
 

Rob Sanheim

Trans said:
I sympathize. I think TDD makes more sense for applications in which the
general structure of the program is a given. If you're still working out
"what's the best API?", then you are probably better off putting TDD on
the back burner until you have it mostly worked out. Then you can go
back and put in tests. I know the TDD concept promotes putting the test
before the code, and really one should strive to do so (I don't do it
enough myself), but that doesn't mean you always have to. Really, the
most important thing is that you end up with tests.

The problem with this approach is that when you go back to try and add
the tests, you realize your code is not test-friendly and testing is
much harder than it should be. I'd imagine this is more of a problem in
less flexible languages than Ruby, but TDD/BDD definitely drive a
certain simple, testable design that you don't arrive at otherwise.
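
For example, writing the test first tends to push you towards passing
collaborators in rather than hard-wiring them (names invented, just to
illustrate the kind of design TDD nudges you into):

    # Test-driven shape: the fetcher is injected, so a test can hand in a
    # canned fake and never touch the network.
    class TitleExtractor
      def initialize(fetcher)
        @fetcher = fetcher
      end

      def title_of(url)
        @fetcher.fetch(url)[%r{<title>(.*?)</title>}m, 1]
      end
    end

The code-it-first version usually ends up with the HTTP call buried inside
the method, which still works but is much more awkward to test after the
fact.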

I think a good API can be developed using TDD, and in fact looking at
the tests used to create an API is one of the best ways to learn it.

- Rob
 

Bil Kleb

Trans said:
Hi --
Hi.

I sympathize. I think TDD makes more sense for applications in which the
general structure of the program is a given.

I respectfully disagree. I find TDD beneficial even when
just trotting down to the corner store.
If you're still working out
"what's the best API?", then you are probably better off putting TDD on
the back burner until you have it mostly worked out.

This is known as Technical Debt, and IMO it is pure evil,
as in the movie /Time Bandits/.
Then you can go back and put in tests. I know the TDD concept promotes
putting the test before the code, and really one should strive to do so
(I don't do it enough myself), but that doesn't mean you always have to.
Really, the most important thing is that you end up with tests.

But, it'll probably be more costly to do so after the fact -- see
Beck's /When Should We Test?/ article in the Files section of
the extremeprogramming Yahoo group.

Regards,
 

Bil Kleb

Eric said:
When I was new to TDD and unit testing this was my biggest problem,
especially the coupling side. I've evolved my coding style to produce
shorter, simpler methods that are easier to test (usually less than ten
lines, very rarely over 25 lines). Also, I heavily refactor tests to
move re-usable test code into utility methods.

I had a similar experience. TDD reveals design smells in your code.

Regards,
 

Giles Bowkett

Rob said:
I think a good API can be developed using TDD, and in fact looking at
the tests used to create an API is one of the best ways to learn it.

Yeah, I think this is true. I really only started using TDD very
recently, but one of the big things that swayed me is that TDD seems
to be at its best when building APIs, or, more accurately, all the
APIs I like the most seem to have been written using TDD.

Anyway, from the original post:
I am writing a web extraction framework in Ruby which is now relatively
big. I am doing a kind of semi-TDD (i.e. some things are done pure TDD,
and for the rest I write the tests later) - so in the end everything
should be covered by tests.

I also have tons of black-box tests (i.e. for a specified input and
output, yell if the output of the current run differs) and, as the code
grows, a great load of unit tests as well.

OK, I just want to say there's a definite logical flaw in making
generalizations about TDD from a project which is "kind of a semi-TDD."
The broader question of whether or not TDD rocks (I think it does) is
certainly an interesting question, but it is also certainly a different
question.

Anyway, in terms of the actual practical issue, I think the easiest
interpretation of this is that you're writing too many tests, and that
some of them are probably testing the wrong things. Any resource on
testing will reference the idea that TDD consists of writing a test
and then writing the simplest code that could possibly satisfy that
test. I think the hidden implication there is that you should also
write the simplest test that could possibly articulate your
requirements.

In practical terms, what I'd do in your situation is this: if I made a
change and it blew up some tests, I'd examine the tests. Any tests which
depended on implementation details and didn't really have much to do
with the goals of the application itself, I'd simply throw away. They're
just a waste of time. Get rid of them. Any tests which were about the
API internals, rather than the way a client programmer should use the
API, I'd either throw away too, or possibly separate out.

This is especially true as you're changing the internals, refactoring,
throwing away objects, etc. What you're really looking at might be the
boundary between the elements of the API you expose and the elements
that can change without the client programmer ever being aware of it.
Even if the client programmer is you, this distinction is still valid;
it's basically the core question of OOP. Anyway, I can't see your code,
I'm just guessing here, but I'd say make that separation more explicit,
maybe even to the point of having separate client tests and internal API
tests, the goal being to have fewer tests overall, and for the tests you
retain to be cleaner and a little more categorized.
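
Something like this, roughly (the class names are made up; it's only the
split that matters):

    require 'test/unit'

    # Client-level test: exercises only the public API, so it should survive
    # any amount of internal refactoring as long as the behaviour stays put.
    class ClientApiTest < Test::Unit::TestCase
      def test_scrape_returns_expected_records
        records = WebExtractor.scrape(File.read('fixtures/products.html'))
        assert_equal 3, records.size
        assert_equal 'Widget', records.first[:name]
      end
    end

    # Internal test: tied to one collaborator deep inside the framework. It
    # lives separately, so when the internals change, only this kind of test
    # is expected to break.
    class LinkFilterTest < Test::Unit::TestCase
      def test_rejects_links_to_other_hosts
        filter = LinkFilter.new('example.com')
        assert !filter.accept?('http://elsewhere.com/page')
      end
    end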

Client programmer tests and functional ("black-box") tests are
important. They make sure that the results you get are the results you
want. API internal tests, if they're blowing up on details of the
implementation, simply throw them away. If they're testing for objects
that don't exist any more, they're just useless distractions. You should
retain some tests for the internals of the API, but ideally those are
the only tests that should change when you're refactoring. Refactoring
means changing the design without changing the functionality. If you
refactor and a test of the functionality blows up, that's really bad
news. If you refactor and a test of the API internals blows up, that's
nothing.

What it actually sounds like is spaghetti tests. You should really only
test for things that you need to know. Tests that poke at specific bits
and pieces of the implementation should live only in a part of the test
code reserved for testing the implementation. Higher-level tests should
only test higher-level behavior.

Ideally, if you change only one thing in your application, you should
only see one test fail. In reality that won't necessarily happen,
certainly not right away in this particular case, but it becomes a lot
more likely if you refactor both the tests and the application at the
same time. In that case, the finer-grained your tests get, the
finer-grained your application will get along with it. What you want
to do is have very simple tests running against very simple code, so
that as soon as a single test breaks, you know exactly what went
wrong.

I literally JUST got started using TDD, but I had been putting it off
for years, and now, I totally 100% advocate it. Being able to
recognize instantly exactly what went wrong is a REALLY nice thing.
What makes legacy spaghetti so horrible to work with is the exact
opposite: it takes you forever to get even a vague idea of where the
mysterious thing that went wrong might be. Get fine-grained tests and
fine-grained code and you'll be very happy.
 
