On 06-Aug-11 11:42, Ian Collins wrote:
On 06-Aug-11 5:25, Ian Collins wrote:
On 08/ 6/11 01:30 PM, BGB wrote:
On 8/5/2011 3:20 PM, Ian Collins wrote:
On 08/ 6/11 10:10 AM, BGB wrote:
On 8/5/2011 2:53 PM, Ian Collins wrote:
On 08/ 5/11 11:58 PM, Victor Bazarov wrote:
On 8/5/2011 5:41 AM, Miles Bader wrote:
[..]
My general issue with the VS debugger was that very often there would be
cases where you could _see_ interesting data, but not be able to
manipulate it, or be able to manipulate it, but only rather awkwardly
(o-n-e-s-t-e-p-a-t-a-t-i-m-e... argh!). For complex debugging tasks,
this quickly became absolutely miserable.
With gdb, on the other hand, which is largely based on expression
evaluation, while it was harder to visualize data, manipulation of it
was vastly faster and easier (and more easily repeatable; often one
wants to do the weird manipulation several times).
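(A rough illustration, with made-up variable and function names, of the
kind of expression-based manipulation meant here; print, set var, and
call are all standard gdb commands:

    (gdb) print node->count
    (gdb) set var node->count = 0
    (gdb) call rebuild_cache(node)

print evaluates an expression in the stopped program, set var assigns
through one, and call invokes a function in it; the whole sequence can
be replayed, or wrapped in a user-defined command via 'define'.)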
In other words, with VC++'s debugger you're actually deBUGging: finding
out what's wrong, one step at a time, stopping to think, exiting to
change the code, compiling, running again, etc. With gdb you're actually
developing: tweaking the data, tweaking the program, altering execution,
etc. Yes, no, maybe? It's a style debate, not a quality-of-tools debate.
If you need a debugger, your unit tests aren't good enough!
Unit tests never are. Even when the code coverage is 100% there is still
no guarantee that you have covered all possible execution flows.
There should be if you wrote the tests first.
If you believe that, then those tests give you a false sense of security.
If you write the test first (as I do) you still have no guarantee it
covers every possible scenario (even when the code coverage is 100%).
Usually code fails on scenarios that weren't anticipated, rather than
the scenarios that were anticipated (and tested). As long as tests (and
specifications, for that matter) are created by humans, chances are that
they are flawed as well.
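(A tiny, made-up illustration of the 100%-coverage point, not from the
original post: the single test below executes every line of ratio(), so
a coverage tool reports 100%, yet the b == 0 scenario, which is
undefined behavior, is never exercised.

    #include <cassert>

    // every line of ratio() is executed by the test in main(),
    // but the b == 0 case (division by zero) is never hit
    int ratio(int a, int b) { return a / b; }

    int main() {
        assert(ratio(6, 3) == 2);   // passes; coverage report says 100%
        return 0;
    }
)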
That all depends on whether the platform (and language) you are using
has a decent debugger. If it doesn't, I consider that a handicap, no
matter how good the unit tests are. Sometimes things you depend on just
don't work as advertised, and a decent debugger can help a lot in
analyzing what is really going on.
Debuggers don't make unit tests obsolete, nor vice versa.
yep.
unit tests are good at testing that the thing does what it is defined as
doing: one can write out a basic spec, and tests to make sure the
behavior is what it is supposed to be (though some things are harder to
run automated tests on than others).
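for instance, a spec-style test might look something like this; trim()
is a made-up example function, and each assert pins down one sentence of
its spec:

    #include <cassert>
    #include <string>

    // spec: trim() removes leading and trailing spaces, nothing else
    std::string trim(const std::string &s) {
        const auto b = s.find_first_not_of(' ');
        if (b == std::string::npos) return "";
        const auto e = s.find_last_not_of(' ');
        return s.substr(b, e - b + 1);
    }

    int main() {
        assert(trim("  abc  ") == "abc");   // strips both ends
        assert(trim("abc") == "abc");       // leaves clean input alone
        assert(trim("   ") == "");          // all-space input maps to empty
        return 0;
    }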
debuggers are better at finding random errors and edge cases which may
not have been adequately tested for, or which fall outside the scope of
the code's defined behavior, for example:
what happens when the API is called prior to initialization?
what happens if the API's Init functions are called during operation, or
multiple times?
what happens if NULL is passed in unexpected places?
....
these things are not normally tested in unit tests, which typically
exercise defined behavior and usage rather than anomalous behavior and
use patterns (there are often far more anomalous patterns to test for
than defined ones, but programming errors are much better at stepping on
exactly these edge cases); a rough sketch of the first case follows below.
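(A made-up sketch of the 'called prior to initialization' case: a
hypothetical API guarded by an internal init flag. Unit tests for its
defined behavior would all call api_init() first and so never touch the
uninitialized path; a stray early call from elsewhere in the program
would, and stepping through it in a debugger shows immediately that
g_initialized is still 0.

    #include <stdio.h>

    static int g_initialized = 0;
    static int g_table[16];

    void api_init(void) {
        for (int i = 0; i < 16; i++) g_table[i] = i;
        g_initialized = 1;
    }

    int api_lookup(int idx) {
        // defined behavior assumes api_init() was called first;
        // before that, every lookup silently returns 0
        if (!g_initialized) return 0;
        if (idx < 0 || idx >= 16) return -1;
        return g_table[idx];
    }

    int main(void) {
        printf("%d\n", api_lookup(5));   // 0: called prior to init
        api_init();
        printf("%d\n", api_lookup(5));   // 5: defined behavior
        return 0;
    }
)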
or such...