Richard G. Riley
This is very nice for limited amounts of possible input.
But in this case, even simple unit test schemes in combination
with logging suffice.
Possibly. My own preference, and that of the teams I have worked with,
is to give the code a run through with the debugger before submitting
it to unit tests. If you can get away without that, then great.
[This is different, as far as I can tell, from Chris Hill's description
of automated testing using ICEs etc.]
I *always* use a debugger to run through any meaningful critical
code. It enables me to cast an eye over memory, stacks, locals etc. It
is an added safety barrier beyond my own smug ability to write
error-free code.
Well, I don't know what you're doing; I've just never been in a situation
where this would be helpful. I'm under no illusions about my ability to
write error-free code, it's just that using a debugger doesn't give me
value for money.
We must come from different schools of thought. I and every programmer
I have ever worked with routinely step through code, alone or with a
colleague, to check boundary conditions, memory initialisations etc. It
is a bedrock of any development I have done. Using break expressions
means I can put in weird and wonderful parameters and have the
debugger break when a function is suddenly passed something it doesn't
know how to deal with. We are, after all, fallible.
Every serious project with >= 0.5 MLoC I was ever involved in
had its own resource management to track down certain kinds of
errors.
I don't doubt this, and have used automated systems too where appropriate.
In addition, every single complex feature can be switched off
and internal states and debug indices can be made visible in
the output. Without, debugging would be sheer madness.
A good logging system guarded by switches is always invaluable. Again,
no disagreement here.
Only exception: Paranoia checks with debug mode "asserts".
Unit tests should suffice to feed all kind of "weird and
wonderful" parameters to a module. Regression tests make the
whole thing "round" -- customers and other colleagues tend to
find things one could not imagine when stepping through in the
debugger.
Again, fine: but using a debugger to examine module(s) while developing
can do no harm and, for me, frequently raises issues with regard to
sensible program flow and highlights unnecessary loop depths and other
such quirks which can be optimised out.
The trace and logging outputs are just builtin "printf()
debugging". When working with huge amounts of data, this may
be the only way to get a first idea what is happening. Without
this idea, you typically don't know what is going wrong, let
alone where.
I would never personally use printfs but a system specific log
function which may, or may not, end up using printf or some other form
of information provision.
What you describe sounds perfectly sensible - but I wouldn't describe
it as "using a debugger"; I think this is the disconnect.
[I don't know if I'd call the tools you mention "debuggers", either, but
it's too late to know for sure whether I wouldn't have /before/ this
discussion.]
Debugger. Eclipse "debugger". gdb. All debuggers.
debuggers (and why is the Eclipse debugger awarded scare-quotes?) ...
All code development tools.
... not (necessarily) debuggers: there's a difference here.
Debugging is part of development in my world. Maybe we are talking about
nomenclature differences here?
Mmmh. I don't know:
- Feature Specification and System Design
<-> Product Test based on the Spec, developed at the same time
by someone who is not the author of the Spec.
- One to several levels of component specifications and design
documents
<-> Automated Developer Tests, external and internal.
- Regular Automated Regression Tests incorporating Product and
Developer Tests as well as a large base of simple and complex
input
- Tracking system to track all requirements and limitations
through the different levels.
- Source control and configuration management.
During Specification and Design Phases: Several reviewers.
Occasional Code Reviews.
Sources of Bugs:
1) "Holes" in the design or specification documents. Most of
the time caught by the test specification process.
2) Implementation errors. Usually caught by the developer
tests.
2) is my "debugging" phase, generally.
Debugging: Mostly necessary in legacy code written under time
pressure or circumventing such a process.
All code has bugs :-)
For smaller projects: An adapted version of the above.
Design better test drivers and frameworks. It pays.
If everything were so perfect, it would. Even with designs, code reads,
and automated testing, I just find it better and more profitable to step
through my code at the earliest stage to be sure things are going the
right way and that nothing silly is going to waste time and money by
forcing the code to be thrown back at me or someone else at a later stage.
Cheers
Michael
thanks.