Flash Gordon
Dik said: Again, I have not received the original; that is why I respond to this.
What I am missing here is that debugging code with breakpoints can be
*more* time consuming than using printf's to dump the state. I once
had to debug a program I had written (80k+ lines of code). On some
machine it did not work. It turned out that on the umpteenth occurrence
of a call to some routine something went wrong. It is practically
impossible to detect such a problem using breakpoints or watchpoints.
Using well-placed printf's and scrutinising the output will get you the
answer to why it did not work much faster.
Personally I find debuggers extremely useful under *some* conditions. In
other situations, I find printf (or a more sophisticated logging system)
far more useful.
One true, but extreme, example where a debugger and ICE combination was
invaluable was when trying to find what was causing all units to crash
when taken out of storage and powered up. Every unit crashed in about
the same place in its power up tests, all wrote garbage over the
display, and generally gave all the symptoms of the processor having run
off into the wild blue yonder for some reason. I had examined the code
(which I did not write) on a few occasions, trying to find any possible
reason for the crashes. I could find none. After many attempts at
playing with various break conditions, I eventually caught the problem. I
could see quite clearly in the trace that just before it all went to pot
the processor had read a *different* instruction than the ROM actually
contained. Before anyone convinces me that debuggers are of no use (most
have said limited use) they will have to explain to me how I could have
found that and proved it to anyone else *without* the use of the debugger.
Before anyone says "ah, but that is a once-in-a-lifetime situation", I've
also managed to catch other "impossible" crashes in debuggers and
demonstrate to people that it was actually the hardware doing something
screwy.
I've also used logic analysers in the same way I might use a debugger to
see what a program is doing where I had no way of capturing realistic
input data. The code was actually implementing a control loop, so the
input for one loop depended on the output of the previous loop *and* the
outside world. By capturing selective data with some very clever triggers
(as complex as those you can use with many debuggers), I could then use
the information to work out how the algorithm was failing. I actually used
this method on at least three different algorithms on the same system,
and also used it to prove to the HW engineers, yet again, that the
hardware was faulty.
A bigger use of debuggers for me is when we have built a beta-test or
production version of the software (a lot of which is not written by me)
and someone doing testing can easily crash it but I can't (or a
customer has crashed it when coming in to do testing for us). I then
attach the debugger to examine what state they have got the program
into. Sometimes the call stack is sufficient to point in the right
direction, sometimes examining the states of variables provides a big
insight, and often I just pass the information on to another developer,
who then examines the code and finds the problem.
Sometimes I use a debugger to break the code at specific points and see
what the state is because I am too lazy to add in the printf statements
and rebuild.
However, I am gradually extending the logging throughout the code in a
way that can easily be enabled at runtime (by setting an environment
variable), and as I extend it to cover more of the functionality of the
program I am finding it more and more useful.
So my position is that both tools have their uses, and which you use more
will depend on a lot of things outside your control, such as the quality
of the HW, the quality of code written by others, the variability of
external inputs, how reproducible problems are, etc.
I almost forgot: another time a debugger was invaluable was with a
highly complex processor where I had thought a particular combination of
options on an assembler instruction was valid, and the assembler accepted
it. Stepping through in the debugger, because I could not see how the
code was failing, I saw that the disassembly showed a roll (rotate)
where I had specified a shift. Not C, but a use of a debugger that
worked where other tools had failed, and neither I nor another software
developer could see anything wrong with the code. On this code we really
were after every clock cycle we could get, and it was sometimes worth
the half hour it took to work out whether a particularly complex
instruction was allowed.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc