Richard said:
Connecting to external processes is more rare, I grant you.
Let's not get stuck on connecting to remote processes: let's keep it to
using the debugger to test and check new code.
This is very nice for limited amounts of possible input.
But in this case, even simple unit test schemes in combination
with logging suffice.
[This is different, as far as I can tell, from Chris Hill's description
of automated testing using ICEs etc.]
I *always* use a debugger to run through any meaningful critical
code. It enables me to cast an eye over memory, stacks, locals, etc. It
is an added safety barrier beyond my own smug ability to write
error-free code.
Well, I don't know what you're doing; I've just never been in a situation
where this would be helpful. I'm under no illusions about my ability to
write error-free code, it's just that using a debugger doesn't give me
value for money.
We must come from different schools of thought. I and every programmer
I have ever worked with routinely step through code, alone or with a
colleague, to check boundary conditions, memory initialisations, etc. It
is a bedrock of any development I have done. Using break expressions
means I can put in weird and wonderful parameters and have the
debugger break when a function is suddenly passed something it doesn't
know how to deal with. We are, after all, fallible.
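With gdb, for instance, that is just a conditional breakpoint; the
function name and condition below are invented for illustration:

    # in a gdb command file or at the (gdb) prompt:
    break parse_record if buf == 0 || len > 4096
    run
    # gdb stops only when parse_record() is handed something it
    # doesn't know how to deal with; then I can look at locals,
    # the stack and memory at exactly that point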
Every serious project with >= 0.5 MLoC I was ever involved in
had its own resource management to track down certain kinds of
errors.
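As a rough sketch of what I mean (names and details invented, not
taken from any real project):

    /* project-specific resource management: wrap malloc()/free()
       so leaks and imbalances show up in every test run */
    #include <stdio.h>
    #include <stdlib.h>

    static long outstanding_allocations = 0;

    void *dbg_malloc(size_t n)
    {
        void *p = malloc(n);
        if (p != NULL)
            outstanding_allocations++;
        return p;
    }

    void dbg_free(void *p)
    {
        if (p != NULL)
            outstanding_allocations--;
        free(p);
    }

    void dbg_report(void)
    {
        fprintf(stderr, "outstanding allocations: %ld\n",
                outstanding_allocations);
    }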
In addition, every single complex feature can be switched off,
and internal states and debug indices can be made visible in
the output. Without this, debugging would be sheer madness.
Only exception: Paranoia checks with debug mode "asserts".
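Roughly along these lines (the feature switch and the function are
invented for the example):

    #include <assert.h>
    #include <stdio.h>

    /* build with -DFEATURE_NEW_CACHE=0 to switch the feature off,
       and with -DNDEBUG to strip the paranoia checks from release builds */
    #ifndef FEATURE_NEW_CACHE
    #define FEATURE_NEW_CACHE 1
    #endif

    int lookup(int key)
    {
        assert(key >= 0);        /* paranoia check, debug builds only */
    #if FEATURE_NEW_CACHE
        /* complex fast path ... */
    #else
        /* simple fallback, easier to reason about ... */
    #endif
    #ifndef NDEBUG
        fprintf(stderr, "lookup: state ok, debug index %d\n", key);
    #endif
        return 0;
    }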
Unit tests should suffice to feed all kinds of "weird and
wonderful" parameters to a module. Regression tests make the
whole thing "round" -- customers and other colleagues tend to
find things one could not imagine when stepping through in the
debugger.
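Even something as plain as this, run on every build, does the job
(the module and the values are invented for the example):

    #include <assert.h>
    #include <limits.h>

    /* stand-in for the module under test */
    static int clamp_to_byte(int value)
    {
        if (value < 0)   return 0;
        if (value > 255) return 255;
        return value;
    }

    int main(void)
    {
        /* boundary and "weird and wonderful" inputs */
        assert(clamp_to_byte(0)       == 0);
        assert(clamp_to_byte(255)     == 255);
        assert(clamp_to_byte(256)     == 255);
        assert(clamp_to_byte(-1)      == 0);
        assert(clamp_to_byte(INT_MAX) == 255);
        assert(clamp_to_byte(INT_MIN) == 0);
        return 0;
    }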
The trace and logging outputs are just builtin "printf()
debugging". When working with huge amounts of data, this may
be the only way to get a first idea of what is happening. Without
this idea, you typically don't know what is going wrong, let
alone where.
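I.e. nothing more sophisticated than a compiled-in trace macro
(a sketch; the macro and the flag are invented):

    #include <stdio.h>

    /* builtin "printf() debugging": always compiled in,
       switched on at run time (e.g. by a command line option) */
    static int trace_enabled = 0;

    #define TRACE(fmt, ...) \
        do { \
            if (trace_enabled) \
                fprintf(stderr, "%s:%d: " fmt "\n", \
                        __FILE__, __LINE__, __VA_ARGS__); \
        } while (0)

    /* usage: TRACE("processed %ld of %ld records", done, total); */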
What you describe sounds perfectly sensible - but I wouldn't describe
it as "using a debugger"; I think this is the disconnect.
[I don't know if I'd call the tools you mention "debuggers", either, but
it's too late to know for sure whether I wouldn't have /before/ this
discussion.]
Debugger. Eclipse "debugger". gdb. All debuggers.
debuggers (and why is the Eclipse debugger awarded scare-quotes?) ...
All code development tools.
... not (necessarily) debuggers: there's a difference here.
Debugging is part of development in my world. Maybe we are talking
about nomenclature differences here?
Mmmh. I don't know:
- Feature Specification and System Design
  <-> Product Test based on the Spec, developed at the same time
      by someone who is not the author of the Spec.
- One to several levels of component specifications and design
  documents
  <-> Automated Developer Tests, external and internal.
- Regular Automated Regression Tests incorporating Product and
  Developer Tests, as well as a large base of simple and complex
  input.
- Tracking system to track all requirements and limitations
  through the different levels.
- Source control and configuration management.
During Specification and Design Phases: Several reviewers.
Occasional Code Reviews.
Sources of Bugs:
1) "Holes" in the design or specification documents. Most of
the time caught by the test specification process.
2) Implementation errors. Usually caught by the developer
tests.
Debugging: Mostly necessary in legacy code written under time
pressure or circumventing such a process.
For smaller projects: An adapted version of the above.
Such as? An initial use of a debugger to monitor a program's progress
can show up lots of issues, as well as facilitating routine boundary
tests. It just makes plain sense.
Design better test drivers and frameworks. It pays.
Cheers
Michael