How do you debug?


desktop

When I write code I use a lot of:

std::cout << "TEST1\n";

....
....
<some code>
....
....

std::cout << "TEST2\n";

etc.

But is there some better way to do this kind of debugging (maybe with some
preprocessor commands)?

I know that I could use the gdb debugger but often I prefer the above
procedure.
 

Zeppe

desktop said:
When I write code I use a lot of:

std::cout << "TEST1\n";

...
...
<some code>
...
...

std::cout << "TEST2\n";

etc.

But is there some better way to do this kind of debugging (maybe with some
preprocessor commands)?

I know that I could use the gdb debugger but often I prefer the above
procedure.

In general, temporary instrumentation of the code in order to perform
debugging is not a good choice. It is good, anyway, to put in some
assertions from time to time and some precondition checks (boundary
checks, etc.) that are compiled in debug only. You can do that with the
standard assert (relying on the fact that in Release you can disable the
assert by specifying _NDEBUG and thus improve the code's efficiency) or
with something similar that you write yourself.
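
For instance, something along these lines (just a rough sketch; MY_ASSERT
is an invented name, and the condition and message layout are up to you):

#include <cstdlib>
#include <iostream>

// Compiled away entirely when NDEBUG is defined, like the standard assert.
#ifdef NDEBUG
#define MY_ASSERT(cond) ((void)0)
#else
#define MY_ASSERT(cond) \
    do { \
        if (!(cond)) { \
            std::cerr << __FILE__ << ':' << __LINE__ \
                      << ": assertion failed: " #cond << '\n'; \
            std::abort(); \
        } \
    } while (false)
#endif

int divide(int a, int b)
{
    MY_ASSERT(b != 0);   // precondition check, active in debug builds only
    return a / b;
}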

A good debugger is often essential anyway - for Linux, a friendly
frontend for gdb is kdbg.

Regards,

Zeppe
 

Michael DOUBEZ

Zeppe wrote:
In general, temporary instrumentation of the code in order to perform
debugging is not a good choice. It is good, anyway, to put in some
assertions from time to time and some precondition checks (boundary
checks, etc.) that are compiled in debug only. You can do that with the
standard assert (relying on the fact that in Release you can disable the
assert by specifying _NDEBUG and thus improve the code's efficiency) or
with something similar that you write yourself.

A good debugger is often essential anyway - for Linux, a friendly
frontend for gdb is kdbg.

Actually, keeping asserts in the release code (i.e. not using NDEBUG) is
a good idea; just like wearing a safety belt after you have learned how
to drive.

In my code, I use macros to log debug information that is not included
in the released version and is useful only for tracing execution or
remarkable values. Other logs are kept in the release version. Both use
the same mechanism.

Note that "TEST1", "TEST2", "OK!!!", "HERE WE GO" and "H4CkEr washere"
are not very informative and you should stick with meaningful debug log
(including file, line numer and is usually add function). It is not very
hard to do that automatically with macros but there are also library
availables for that around (log4cpp, Boost.Log, pantheios to name a few).
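
A bare-bones sketch of the macro approach (DBG_LOG is a made-up name, and
__func__ may be spelled differently on pre-C++11 compilers):

#include <iostream>

#ifdef NDEBUG
#define DBG_LOG(msg) ((void)0)
#else
#define DBG_LOG(msg) \
    (std::clog << __FILE__ << ':' << __LINE__ \
               << " [" << __func__ << "] " << msg << '\n')
#endif

void connect(int port)
{
    DBG_LOG("trying to connect on port " << port);
    // ...
}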

I rarely use the debugger except as post mortem analysis; my unit tests
are usually sufficient.

Michael
 

James Kanze

In general, temporary instrumentation of the code in order to perform
debugging is not a good choice. It is good, anyway, to put in some
assertions from time to time and some precondition checks (boundary
checks, etc.) that are compiled in debug only. You can do that with the
standard assert (relying on the fact that in Release you can disable the
assert by specifying _NDEBUG and thus improve the code's efficiency) or
with something similar that you write yourself.

The symbol to suppress assert is NDEBUG, not _NDEBUG. But of
course, you almost never want to use it. On larger projects,
it's also usual to have some sort of logging facilities, with
different log levels. If some subsystem seems to be causing
problems, you turn up the log levels in that subsystem.
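
Just to give the flavour of a minimal mechanism (a sketch only, not any
particular library; all the names are invented):

#include <iostream>
#include <map>
#include <string>

enum LogLevel { LOG_ERROR = 0, LOG_WARNING, LOG_INFO, LOG_DEBUG };

// one adjustable level per subsystem
std::map<std::string, LogLevel> logLevels;

#define LOG(subsystem, level, msg) \
    do { \
        if (logLevels[subsystem] >= (level)) \
            std::clog << #level << " [" << subsystem << "] " << msg << '\n'; \
    } while (false)

void processOrder()
{
    LOG("orders", LOG_DEBUG, "entering processOrder");
    // ...
}

int main()
{
    logLevels["orders"] = LOG_DEBUG;   // turn up logging for the suspect subsystem
    processOrder();
}
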
A good debugger is often essential anyway - for Linux, a friendly
frontend for gdb is kdbg.

I'll admit that I've never found much use for a debugger
professionally. If you write the code so that it's easy to
understand, and have it correctly code reviewed, there generally
aren't any bugs in it anyway. (I forget who said it, maybe
Hoare: "Code is either so simple that it obviously has no
errors, or so complicated that it has no obvious errors."
Obviously, you should strive for the first.)
 

Zeppe

Michael said:
Actually, keeping asserts in the release code (i.e. not using NDEBUG) is
a good idea; just like wearing a safety belt after you have learned how
to drive.

It depends on the efficiency level that you need. For example, in many
libraries, leaving _NDEBUG undefined retains boundary checks on all the
containers, and that's not always desirable. That's why I suggested that
he create his own asserts for release checks. I agree that, if you can,
keeping asserts may be a good thing (even though a failed assert makes
the program crash anyway).
I rarely use the debugger except as post mortem analysis; my unit tests
are usually sufficient.

And what about debugging unit tests?

Regards,

Zeppe
 

Ian Collins

Zeppe said:
It depends on the efficiency level that you need. For example, in many
libraries, leaving _NDEBUG undefined retains boundary checks on all the
containers, and that's not always desirable. That's why I suggested that
he create his own asserts for release checks. I agree that, if you can,
keeping asserts may be a good thing (even though a failed assert makes
the program crash anyway).


And what about debugging unit tests?
The best way to debug unit tests is to undo the last edit!
 

Zeppe

James said:
The symbol to suppress assert is NDEBUG, not _NDEBUG. But of
course, you almost never want to use it. On larger projects,
it's also usual to have some sort of logging facilities, with
different log levels. If some subsystem seems to be causing
problems, you turn up the log levels in that subsystem.

True. It depends, as always, on the performance level, robustness level,
and complexity level that you want to achieve. In critical systems, you
may want to implement some mechanism that tries to partially recover a
bad situation, implementing most of the checks as exceptions that are
then handled properly.
I'll admit that I've never found much use for a debugger
professionally. If you write the code so that it's easy to
understand, and have it correctly code reviewed, there generally
aren't any bugs in it anyway. (I forget who said it, maybe
Hoare: "Code is either so simple that it obviously has no
errors, or so complicated that it has no obvious errors."
Obviously, you should strive for the first.)

Obviously, I would say, the debugger is useful when the first is not
achievable (and there are situations in which it isn't) ;)

Regards,

Zeppe
 

Michael DOUBEZ

Zeppe wrote:
It depends on the efficiency level that you need. For example, in many
libraries, leaving _NDEBUG undefined retains boundary checks on all the
containers, and that's not always desirable. That's why I suggested that
he create his own asserts for release checks. I agree that, if you can,
keeping asserts may be a good thing (even though a failed assert makes
the program crash anyway).

Better crashing than having incoherent results.

Imagine what would happen in a trading application:
"You just bought 4294967295 actions."

You'd better have a good disclaimer :)
And what about debugging unit tests?

I usually spend more time inspecting my code than the unit tests
themselves. ;)
The execution path is not that complicated in a unit test, and I use CHECK
macros that don't terminate the test to verify assertions. I find that
sufficient.
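
The CHECK macro itself can be as simple as something like this (a sketch;
a real test framework would provide a richer version):

#include <iostream>

// Report a failed expectation but let the test keep running,
// unlike assert(), which aborts at the first failure.
#define CHECK(cond) \
    do { \
        if (!(cond)) \
            std::cerr << __FILE__ << ':' << __LINE__ \
                      << ": CHECK failed: " #cond << '\n'; \
    } while (false)

void testAddition()
{
    CHECK(1 + 1 == 2);
    CHECK(2 + 2 == 5);   // reported, but the test carries on
}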

In truth, I also use the debugger in some cases, for bugs that are difficult
to locate in multithreaded applications, to get a feeling for what is
happening, but that is usually after integration. It can also be useful to
attach gdb to an existing process that isn't behaving correctly (not
responding ...).

Michael
 

Jim Langston

desktop said:
When I write code I use a lot of:

std::cout << "TEST1\n";

...
...
<some code>
...
...

std::cout << "TEST2\n";

etc.

But is there some better way to do this kind of debugging (maybe with some
preprocessor commands)?

I know that I could use the gdb debugger but often I prefer the above
procedure.

I've done this before when there is some bug in a program, it is not
known where it is, and for whatever reason running it in the debugger is not
feasible. Usually it is an intermittent bug that only happens
sometimes and I'm trying to figure out both why and where. Rather than send
the output to cout, though, I'll usually send it to a file.

For most debugging, however, I'll use an interactive debugger.
 

Puppet_Sock

Actually, keeping asserts in the release code (i.e. not using NDEBUG) is
a good idea; just like wearing a safety belt after you have learned how
to drive.

You will find a lot of people who disagree with that.
Certainly there are many situations where it is just
unacceptable.

Such things as asserts should not be used as error
trapping during normal operation. That is, they
shouldn't be catching such things as bad input.
An assert should be catching only developer error.
That is, if an assert fires while a user is on
the system, it should only ever indicate a bug in
the code.

C++ exceptions are yet another refinement here. These
should be things outside the contract of the interface
of the code, but still inside the design. These are
the "known unknowns."
In my code, I use macros to log debug information that is not included
in the released version and is useful only for tracing execution or
remarkable values. Other logs are kept in the release version. Both use
the same mechanism.

See, that's troublesome. "Remarkable values" shouldn't
be traced through a debug mechanism. They should be
designed into the error checking of the code. That is,
they should be "in the contract."

Or, to put it another way: If the interface is supposed
to handle the user typing "blue" when asked for a speed,
then there shouldn't be an assert fired on it. That
should fire a pre-defined user-oriented error handling
routine, not an assert.
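
In code, the split might look roughly like this (a sketch; the functions
and names are invented for illustration):

#include <cassert>
#include <cstdlib>
#include <stdexcept>
#include <string>

// Bad user input is expected, so it goes through the designed error
// path (here an exception), never through an assert.
double parseSpeed(const std::string& text)
{
    char* end = 0;
    double speed = std::strtod(text.c_str(), &end);
    if (end == text.c_str() || *end != '\0' || speed < 0.0)
        throw std::invalid_argument("\"" + text + "\" is not a valid speed");
    return speed;
}

void setSpeed(double speed)
{
    // Programmer error only: callers are required to pass a validated value,
    // so a firing assert here always means a bug in the code.
    assert(speed >= 0.0);
    // ...
}
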
I rarely use the debugger except as post mortem analysis; my unit tests
are usually sufficient.

Again, you will get many people who disagree with that.
Indeed, my advice is to *always* step through every line
of code in the debugger, at least once. There should
*also* be unit tests.
Socks
 

Phlip

desktop said:
std::cout << "TEST2\n";

etc.

But is there some better way to do this kind of debugging (maybe with some
preprocessor commands)?

Sure. Write "unit" tests for everything. (Actually "developer" tests - unit
tests are a QA thing.)

You can write the code completely decoupled from itself, so all functions
respond to tests as well as they respond to each other. The diagnostics you
need will go into the assertion diagnostics.

When the time comes to actually debug, tests make a perfect platform for all
kinds of traces and experiments. Further, if you run the tests after every
few edits, and add to the tests whenever you add new abilities, you can
typically revert nearly any change that unexpectedly breaks the tests. This
implies you can take bigger risks with your changes, and at the same time
reduce your exposure to debugging. People using this system report their
time spent debugging goes way down.
 

James Kanze

[snip]
Actually, keeping asserts in the release code (i.e. not
using NDEBUG) is a good idea; just like wearing a safety
belt after you have learned how to drive.
You will find a lot of people who disagree with that.

You'll also find a lot of people who think that the world is
flat. I've never seen a professional who disagreed.
Certainly there are many situations where it is just
unacceptable.

Such as?
Such things as asserts should not be used as error
trapping during normal operation. That is, they
shouldn't be catching such things as bad input.
An assert should be catching only developer error.
That is, if an assert fires while a user is on
the system, it should only ever indicate a bug in
the code.

Who has ever claimed the contrary? And what does that have to
do with leaving them in in released code?
C++ exceptions are yet another refinement here. These
should be things outside the contract of the interface
of the code, but still inside the design. These are
the "known unknowns."

I like that formulation: "outside the contract, but inside the
design". Except that throwing for specific input is often part
of the contract.
See, that's troublesome. "Remarkable values" shouldn't
be traced through a debug mechanism. They should be
designed into the error checking of the code. That is,
they should be "in the contract."
Or, to put it another way: If the interface is supposed
to handle the user typing "blue" when asked for a speed,
then there shouldn't be an assert fired on it. That
should fire a pre-defined user-oriented error handling
routine, not an assert.
Again, you will get many people who disagree with that.

Again, no professional. Most places I've worked, I've not even
had access to a debugger, and in places where a debugger has
been available, it's easy to see who uses it: they have the
worst code.
Indeed, my advice is to *always* step through every line
of code in the debugger, at least once.

And what does that achieve, except waste time? If you have to
"step through" the code to understand it, the code isn't well
written, and should be rewritten.
There should *also* be unit tests.

And code review, which is doubtlessly the most important and
effective means of reducing errors.
 

Michael DOUBEZ

Puppet_Sock wrote:
You will find a lot of people who disagree with that.
Certainly there are many situations where it is just
unacceptable.

I work on an aeronautic embedded system with real-time and fault-tolerance
constraints. It is pretty critical, and I can tell you the asserts are
still in the code when it takes off.
Such things as asserts should not be used as error
trapping during normal operation. That is, they
shouldn't be catching such things as bad input.
An assert should be catching only developer error.
That is, if an assert fires while a user is on
the system, it should only ever indicate a bug in
the code.

And of course, all released code is bug free, and in case there is a bug,
the logs are always enough to locate it. Supposing the bug is detected
by the user (when there is a user).
C++ exceptions are yet another refinement here. These
should be things outside the contract of the interface
of the code, but still inside the design. These are
the "known unknowns."

I don't know what you call "known unknowns", but exceptions are not part
of the debugging process but rather of the design. If you want to design
exception-aware code, you'd better know who is likely to throw, and if
you want to recover from it, you'd better know what is thrown.
See, that's troublesome. "Remarkable values" shouldn't
be traced through a debug mechanism. They should be
designed into the error checking of the code. That is,
they should be "in the contract."

I don't see how you can put a contract on, let's say, the number of
connections a server is holding, or on outputting the state of a system.
Those are just informational logs that are of no interest in the release,
or are not critical, but they help in building a mental picture of the
state of the program.

And it gives something to read when you run the tests :)
Or, to put it another way: If the interface is supposed
to handle the user typing "blue" when asked for a speed,
then there shouldn't be an assert fired on it. That
should fire a pre-defined user-oriented error handling
routine, not an assert.


Again, you will get many people who disagree with that.
Indeed, my advice is to *always* step through every line
of code in the debugger, at least once.

What for?


Michael
 

Kai-Uwe Bux

James said:
[snip]
Actually, keeping asserts in the release code (i.e. not
using NDEBUG) is a good idea; just like wearing a safety
belt after you have learned how to drive.
You will find a lot of people who disagree with that.

You'll also find a lot of people who think that the world is
flat.

Really? I never found one. :)
I've never seen a professional who disagreed.


Such as?

One fundamental question is whether there is a notion of "correctness" for
the behavior of the program. Whenever correctness of the results matter, I
would hope that asserts are left in so that in case of a bug that causes
the state of the program to be unforeseen (and therefore potentially bogus)
there is a chance that it will crash. Few things are worse than output that
is believed to be correct but isn't; that can be very costly, too.

Compilers are an example: I prefer a compiler crash to the generation of
potentially faulty object code any day.

On the other hand, there are programs where correctness is not that
meaningful a concept. Think of a game. The physics engine may sometimes not
detect that two objects are at the same place at the same time or the
rendering may be a little off due to rounding errors or arithmetic
overflows. Now, those effects might be gone after two or three frames and
as long as the user experience is good, there is a reason to prefer the
game going on to the game crashing. In that case, it could be a correct
business decision to eliminate the asserts in release code.


[snip]


Best

Kai-Uwe Bux
 

Duane Hebert

[snip]
Again, you will get many people who disagree with that.
Again, no professional. Most places I've worked, I've not even
had access to a debugger, and in places where a debugger has
been available, it's easy to see who uses it: they have the
worst code.

I guess that depends on how you define a professional.
I've been a software engineer using C++ for more than 10 years and I
use a debugger. I understand what you mean when you
say that with the proper design it's not needed, but I've
never worked anywhere where the design methodology
was perfect. The tools aren't always perfect either.
 

Michael DOUBEZ

Kai-Uwe Bux wrote:
James said:
[snip]
Actually, keeping asserts in the release code (i.e. not
using NDEBUG) is a good idea; just like wearing a safety
belt after you have learned how to drive.
You will find a lot of people who disagree with that.
You'll also find a lot of people who think that the world is
flat.

Really? I never found one. :)

Giordano Bruno did, but that was under the Roman Inquisition.
One fundamental question is whether there is a notion of "correctness" for
the behavior of the program. Whenever correctness of the results matter, I
would hope that asserts are left in so that in case of a bug that causes
the state of the program to be unforeseen (and therefore potentially bogus)
there is a chance that it will crash. Few things are worse than output that
is believed to be correct but isn't; that can be very costly, too.

If there is a part of a program where incorrect behavior is manageable,
it is treated specifically and locally. I can't think of a relevant
example where that would be the case.
Compilers are an example: I prefer a compiler crash to the generation of
potentially faulty object code any day.

On the other hand, there are programs where correctness is not that
meaningful a concept. Think of a game. The physics engine may sometimes not
detect that two objects are at the same place at the same time or the
rendering may be a little off due to rounding errors or arithmetic
overflows. Now, those effects might be gone after two or three frames and
as long as the user experience is good, there is a reason to prefer the
game going on to the game crashing. In that case, it could be a correct
business decision to eliminate the asserts in release code.

From a practical point of view, there are things that are not
assertable, or not asserted, but are part of testing. In the example you
gave, the game engine typically won't assert those kinds of conditions.

I have worked on error-correcting codes where the postcondition (the
checksum) was not respected but the data was used anyway. It was by design:
a little noise or some cracks in the voice could be bearable.

My understanding is that this is not the topic here; we are talking about
asserts on pre- and postconditions, invariants, and check points.


Michael
 

Phlip

Duane said:
I've been a software engineer using C++ for more than 10 years and I
use a debugger.

Learn to use every tool in your kit. And part of "learn to use" is learning
how to avoid abuse. Programming in the debugger, or programming to the
debugger, is abuse.
 

Puppet_Sock

Michael DOUBEZ said:

I work on an aeronautic embedded system with real-time and fault-tolerance
constraints. It is pretty critical, and I can tell you the asserts are
still in the code when it takes off.

I'm a nuke. If one of the control computers did an
assert, the operator would come and find me. This
would not end well.

I don't think you mean the same thing by "assert"
that I do. When a program hits an assert and fails,
the program stops. All over, bang, goodbye, only
way to get it back is to restart the program.

This is something you put in your aircraft?

An assert should not be thought of as a safety belt.
This is the "known unknown" divide I talked about.
A safety belt is intended for expectable conditions:
A car in a crash. An assert is intended for implementation
errors: The steering wheel is not connected to the
steering mechanism because it's in the trunk.

You put the "safety belt" stuff into the interface
design, putting the details into the spec (the contract).
I don't know what you call "known unknowns", but exceptions are not part
of the debugging process but rather of the design. If you want to design
exception-aware code, you'd better know who is likely to throw, and if
you want to recover from it, you'd better know what is thrown.

Um. That's pretty much the point I'm making. The exceptions
are the "safety belt" stuff. An assert is a tool for detecting
implementation errors. An exception is part of the design.
I don't see how you can put a contract on, let's say, the number of
connections a server is holding, or on outputting the state of a system.

Um. You don't? You put it in the spec. Then you make
the code check how many it has before it adds one, and
if it would be over the limit it does whatever the right
thing is about that. Maybe send a denial message to the
request, or pass it to another server, or some other
specified thing.
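
For instance, something like this (a sketch; the type, the helper and the
limit are all invented for illustration):

#include <cstddef>
#include <iostream>
#include <vector>

struct Connection { int socket; };          // stand-in for the real type

void sendDenial(const Connection&) { std::cout << "connection denied\n"; }

const std::size_t kMaxConnections = 1000;   // the limit comes from the spec

bool acceptConnection(std::vector<Connection>& connections, const Connection& c)
{
    if (connections.size() >= kMaxConnections) {
        sendDenial(c);                       // designed error path, not an assert
        return false;
    }
    connections.push_back(c);
    return true;
}
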
What for?

I'm just guessing, but what I'm guessing is that you are
a bigtime Unix coder.

You step through your code to catch stupid mistakes. I've
seen this lots of times where guys put code back into the
codebase without ever stepping through it. And when I come
along to see why the smoke is getting out (Electronics all
runs on smoke, which is shown by it always stopping when
the smoke gets out.) I find that the code branches the wrong
way or does not assign the calculated value, or has some
stupid thing like the classical for(yada yada); thing.
Which would have been instantly obvious as soon as you
stepped through the code and watched the values and the
execution. You can also catch off-by-one errors, and often
see non-initialized variables, wild pointers, etc. etc.

I've also had the case where the coder refused to believe
there was anything wrong with his code till I fired it
up in the debugger and showed him the bad behaviour.

Note that I'm not saying you step through the code as
a replacement for any other activity. I'm saying you do
this in addition to all other good practices. It's very
easy in a good debugger. And it's quite quick.

Note also that I'm not saying step through every line
of code every time you touch the app. Just the changed
or new lines.
Socks
 

Ian Collins

I don't think you mean the same thing by "assert"
that I do. When a program hits an assert and fails,
the program stops. All over, bang, goodbye, only
way to get it back is to restart the program.

This is something you put in your aircraft?
Software lives in an imperfect world. Do you code for all possible
breaches of contract by all the libraries your code calls? If you code
the system from the ground up, you may be able to ensure protection; in
that case the asserts will never fire, so why take them out? In some
instances the only safe thing to do when contractually impossible data
is presented is to restart.

You step through your code to catch stupid mistakes.

You write unit tests to catch stupid mistakes. If a code change breaks
a test, undo it and try again. If you have the time and are curious,
then you might consider stepping through to see why you broke the test,
but doing so will slow you down.

Try working with a language that lacks a decent debugger for a while; it
will sharpen up your testing or TDD skills.
I've
seen this lots of times where guys put code back into the
codebase without ever stepping through it.

Set your SCM system up to not accept commits that break the tests.
 
