Requesting critique of a C unit test environment

Ark Khasin

Phlip said:
Next, if you are talking about research to generate algorithms for some
situation, then you aren't talking about production code. Disposable code
doesn't need TDD. Once you have a good algorithm, it will have details that
lead to simple test cases.
That's the whole point. I end up with some working prototype code for
which I need to create tests post factum.
 
Ian Collins

Phlip said:
Ark said:
[If we agree that a test is a contraption to check if the code works as
expected:]

The weakest possible such contraption - yes.
If we don't know what to expect ("research"), we cannot write a test. [Or
again I am missing something]

If you can think of the next line of code to write, you must perforce be
able to think of a test case that will fail because the line is not there.

Next, if you are talking about research to generate algorithms for some
situation, then you aren't talking about production code. Disposable code
doesn't need TDD. Once you have a good algorithm, it will have details that
lead to simple test cases.
I have found TDD to be a good tool for pointing me at a new algorithm.
It might just be the way I think, but given something I'd forgotten or
was too lazy to look up, such as polynomial fitting, I start with a
simple flat line test, then a slope with two points, and so on until I
have a working general solution. I've found a dozen or so tests are
required to pop out a working solution. Given the working tests, the
algorithm can then be optimised.
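
[As an illustration of that progression, a minimal sketch in C with
plain assert() as the harness; fit_line() and its least-squares body
are hypothetical stand-ins, not code from this thread:]

#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Hypothetical function under test: least-squares fit of y = m*x + c.
 * In a TDD session this body starts out as "*m = 0; *c = y[0];" to
 * pass the flat-line test, and is generalised as later tests fail. */
static void fit_line(const double *x, const double *y, size_t n,
                     double *m, double *c)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    *m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *c = (sy - *m * sx) / n;
}

static int near(double a, double b) { return fabs(a - b) < 1e-9; }

int main(void)
{
    double m, c;

    /* Test 1: a flat line -- the slope must come out as zero. */
    double x1[] = {0, 1, 2}, y1[] = {5, 5, 5};
    fit_line(x1, y1, 3, &m, &c);
    assert(near(m, 0) && near(c, 5));

    /* Test 2: a slope defined by two points. */
    double x2[] = {0, 1}, y2[] = {1, 3};
    fit_line(x2, y2, 2, &m, &c);
    assert(near(m, 2) && near(c, 1));

    /* ...further tests add more points, degenerate inputs and so on,
     * until the general solution has emerged. */
    return 0;
}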
 
Ian Collins

Everett said:
Have you ever worked in a product R&D environment? A lot of concepts
are taken for a test drive without ever seeing the light of day outside
the lab.

We call them spikes, or a proof of concept. Once the concept has been
proven, the code is put to one side and re-written using TDD.

Even these spikes can often be produced faster with TDD; the time saved
not debugging justifies the more formal approach.

It's unfortunate that we C and C++ programmers are spoiled rotten with
decent debuggers. Try developing something complex in an environment
without one and the benefits of TDD become clear. I do a lot of PHP and
I have never bothered looking for a PHP debugger.
 
Flash Gordon

Ian Collins wrote, On 01/09/07 08:21:
It may not appear that way, but it is the reality on any project I
manage. In all (C++) cases, the tests take less time to run than the
code takes to build (somewhere between 50 and 100 tests per second,
unoptimised).

This, however, is not always the case. I've written a function of about
20 lines that IIRC required something like 100-200 tests. The tests took
a similar amount of time to run as the code took to compile. It was
doing maths and there were a *lot* of cases to consider.

For an audit of the entire piece of SW (rather than just that one
function) the customer insisted that we print out all of the module test
specs. The stack of A4 paper produced was a couple of feet tall! Running
that set of tests would take rather more than overnight.

On another project, doing a build of our piece of the SW took 8 hours.
Doing a build of all of the SW for the processor took 48 hours. Add
testing to that for each build...

Some projects are a lot harder than yours.
 
Phlip

Ark said:
That's the whole point. I end up with some working prototype code for
which I need to create tests post factum.

"Unit" tests post-factum. Not "developer tests" that support generating
production code.

Before you create this prototype code, do you _never_ debug it?

When researching, I frequently write disposable code test-free. When I
convert it to production code, I write the tests first. The result is much
cleaner for two reasons: It's a rewrite - that's always cleaner - and it's
super-easy to refactor. Without debugging.
 
Phlip

Ian said:
I have found TDD to be a good tool for pointing me at a new algorithm.
It might just be the way I think, but given something I'd forgotten or
was too lazy to look up, such as polynomial fitting, I start with a
simple flat line test, then a slope with two points, and so on until I
have a working general solution. I've found a dozen or so tests are
required to pop out a working solution. Given the working tests, the
algorithm can then be optimised.

If you follow the exact refactoring rules, you'll remove all duplication
before adding the next line of code. I once tried that while generating an
algorithm to draw Roman Numerals, and I discovered that the outcome was
sensitive to one of my early refactors. The design I got sucked; it was
harder to code over time, not easier. I had to roll the entire process back
to that refactor, try it the other way, and _this_ time the correct
algorithm popped out.

TDD is a very good way to force a clean design to emerge, following simple
and known algorithms. But it's not a general-purpose algorithm generator.
Whoever discovers _that_ gets to go to the top of the food chain.
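
[For the curious, the table-driven greedy converter that this exercise
usually converges on; a sketch in C, not Phlip's actual code:]

#include <assert.h>
#include <string.h>

/* Greedy table-driven converter -- the design the Roman-numeral kata
 * tends to converge on once 4 -> "IV" and 9 -> "IX" force the paired
 * subtractive entries into the table. */
static void to_roman(int n, char *out)
{
    static const struct { int value; const char *digits; } table[] = {
        {1000, "M"}, {900, "CM"}, {500, "D"}, {400, "CD"},
        {100,  "C"}, {90,  "XC"}, {50,  "L"}, {40,  "XL"},
        {10,   "X"}, {9,   "IX"}, {5,   "V"}, {4,   "IV"}, {1, "I"},
    };
    out[0] = '\0';
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        while (n >= table[i].value) {
            strcat(out, table[i].digits);
            n -= table[i].value;
        }
}

int main(void)
{
    char buf[32];
    to_roman(1, buf);    assert(strcmp(buf, "I") == 0);
    to_roman(4, buf);    assert(strcmp(buf, "IV") == 0);
    to_roman(9, buf);    assert(strcmp(buf, "IX") == 0);
    to_roman(1987, buf); assert(strcmp(buf, "MCMLXXXVII") == 0);
    return 0;
}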
 
Ian Collins

Phlip said:
If you follow the exact refactoring rules, you'll remove all duplication
before adding the next line of code. I once tried that while generating an
algorithm to draw Roman Numerals, and I discovered that the outcome was
sensitive to one of my early refactors. The design I got sucked; it was
harder to code over time, not easier. I had to roll the entire process back
to that refactor, try it the other way, and _this_ time the correct
algorithm popped out.
There you go, the solution is to hold off all but the most trivial
refactoring until the end!
 
Ian Collins

Flash said:
Ian Collins wrote, On 01/09/07 08:21:

This, however, is not always the case. I've written a function of about
20 lines that IIRC required something like 100-200 tests. The tests took
a similar amount of time to run as the code took to compile. It was
doing maths and there were a *lot* of cases to consider.
Um, I've never seen one like that before, probably because TDD doesn't
yield that type of code.
For an audit of the entire piece of SW (rather than just that one
function) the customer insisted that we print out all of the module test
specs. The stack of A4 paper produced was a couple of feet tall!

Sounds like the US DOD; I'm sure they just weigh or measure
documentation rather than read it!
On another project, doing a build of our piece of the SW took 8 hours.
Doing a build of all of the SW for the processor took 48 hours. Add
testing to that for each build...
Those were the days. Thank goodness for fast CPUs and distributed building.
 
Phlip

Ian said:
There you go, the solution is to hold off all but the most trivial
refactoring until the end!

That's a joke, guys.

When creating production code, not when researching, after passing a test,
try to simplify, and go in order from easy to hard refactors. Never try a
hard refactor first if there's an easy one available in the neighborhood.

The only exception is renaming things. Name them after their roles
stabilize!
 
Flash Gordon

Phlip wrote, On 02/09/07 01:36:
That's a joke, guys.

When creating production code, not when researching, after passing a test,
try to simplify, and go in order from easy to hard refactors. Never try a
hard refactor first if there's an easy one available in the neighborhood.

The only exception is renaming things. Name them after their roles
stabilize!

Sometimes when code has been "hacked together" over time the only way to
get a clean design is to start from scratch and design it based on what
you have learned.

There is no one set of rules that is always correct.
 
Flash Gordon

Ian Collins wrote, On 02/09/07 01:35:
Um, I've never seen one like that before, probably because TDD doesn't
yield that type of code.

That is only true if it produces more complex code. The 20-odd lines of
code resulted from a requirement to implement two simple-looking
equations and one simple statement in English. The reason there were so
many test cases was that it was dealing with one angle in the range
+/-270 degrees, one in the range +/-170 degrees and two in the range
+/-6 degrees. The testing had to verify behaviour with every angle in
each quadrant, every angle at 0, 90, etc., every angle just either side,
etc. It was the *maths*, together with the chances of selecting the
wrong solution from the trig, that meant a lot of test cases, not the
complexity of the code.
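
[A sketch of the boundary-value style involved; the ranges come from
the post above, but the generator and the counts are only illustrative:]

#include <stdio.h>

/* For one input constrained to +/-limit degrees, emit each quadrant
 * boundary plus a point just either side of it.  (Simplified: real
 * ranges such as +/-170 don't land on multiples of 90, so the real
 * case table was built by hand.)  Expected outputs are omitted; this
 * only shows why the case count grows so fast. */
static int boundary_angles(double limit, double eps, double *out, int max)
{
    int n = 0;
    for (double b = -limit; b <= limit; b += 90.0) {
        if (n + 3 > max)
            break;
        out[n++] = b - eps; /* just below the boundary */
        out[n++] = b;       /* on the boundary */
        out[n++] = b + eps; /* just above it */
    }
    return n;
}

int main(void)
{
    double a[64];
    int n = boundary_angles(270.0, 0.01, a, 64);
    printf("%d cases for a single +/-270 degree input\n", n); /* 21 */
    /* Four such inputs tested in combination multiply the counts,
     * which is how a 20-line function ends up needing 100+ tests. */
    return 0;
}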
Sounds like the US DOD, I'm sure they just weigh or measure
documentation rather than read it!

You guessed right, but the important thing is the quantity of tests and
therefore the time it would take to run them all.
Those were the days. Thank goodness for fast CPUs and distributed building.

Also the larger projects, which tie up the processors for just as long
because they are so much more complex.

I know there is still SW that takes hours to build because within the
last few years I have done builds that have taken hours.
 
Ian Collins

Flash said:
I know there is still SW that takes hours to build because within the
last few years I have done builds that have taken hours.

Must be huge; the biggest thing I build regularly is the OpenSolaris
code base, which takes about 40 minutes on my box.

If a build takes too long, throw more cores at it. If the tools don't
support distributed building, change the tools.
 
