Unit testing of expected failures -- what do you use?


Alf P. Steinbach

OK, this displays my ignorance of what's out there (it's been a long time since
I developed for a living), and also my laziness not googling. :)

However.

I want to unit-test some library code I'm sort of extracting from some old code
I have.

The things that should work without error are easy to test, and it's currently
not so much code that I've considered a testing framework, although the code
size increases. I'm thinking that perhaps the popular frameworks don't support
my needs: there are cases where the code /should/ assert at run time. And worse,
there are cases where the code should assert at compile time...

How do you deal with this kind of testing, testing that things fail as they
should (at compile time and at run time)?

I'm particularly interested in anything that works well with Visual Studio 7.x
or thereabouts, but I'm thinking that I most likely will have to create makefiles?


Cheers,

- Alf
 

Ian Collins

Alf said:
OK, this displays my ignorance of what's out there (it's been a long
time since I developed for a living), and also my laziness not googling.
:)

However.

I want to unit-test some library code I'm sort of extracting from some
old code I have.

The things that should work without error are easy to test, and it's
currently not so much code that I've considered a testing framework,
although the code size increases. I'm thinking that perhaps the popular
frameworks don't support my needs: there are cases where the code
/should/ assert at run time. And worse, there are cases where the code
should assert at compile time...

I used to use cppUnit (and still do for older projects) but now I use
gtest (http://code.google.com/p/googletest/). Compile time asserts are
beyond the scope of a unit testing framework; if it won't compile, there
isn't a unit to test!
How do you deal with this kind of testing, testing that things fail as
they should (at compile time and at run time)?

For run time asserts, I interpose whatever function the system's assert
macro calls (__assert on Solaris) and have __assert throw an exception
with the file, line and expression passed by assert.

A boiled down (no framework) example:

#include <iostream>
#include <stdexcept>
#include <assert.h>

struct AssertionException : std::runtime_error
{
    const char* expression;
    const char* file;
    int line;

    AssertionException(const char* expression, const char* file, int line)
      : std::runtime_error( expression ),
        expression(expression), file(file), line(line) {}
};

void __assert(const char* expression, const char* file, int line)
{
    throw AssertionException( expression, file, line );
}

void fut( const char* p )
{
    assert( NULL != p );
}

int main()
{
    try
    {
        fut( NULL );
        std::cerr << "Oops" << std::endl;
    }
    catch( const AssertionException& e )
    {
        std::cerr << "Caught" << ": " << e.what() << std::endl;
    }
}
 

Jorgen Grahn

I used to use cppUnit (and still do for older projects) but now I use
gtest (http://code.google.com/p/googletest/). Compile time asserts are
beyond the scope of a unit testing framework; if it won't compile, there
isn't a unit to test!

I don't even know what a "unit" is in this context. But I disagree --
with C++ it's sometimes as important to assure that some things don't
compile or fail noisily at runtime, as it is that the "normal tests"
pass.

I seem to recall that *some* of them support static checks. But I
haven't used any, and it wouldn't surprise me if they were clumsy.
It *does* mean stepping into the build system's territory after all.

I've never done it, but compile-time checks I think I would implement
myself, in my Makefile. The "make check" target would of course drive
both the normal unit tests and the compile-time things.

But I'm not on Windows, and I wouldn't hesitate to use Gnu
Make-specific features.

/Jorgen
 

Robert Fendt

I don't even know what a "unit" is in this context. But I disagree --
with C++ it's sometimes as important to assure that some things don't
compile or fail noisily at runtime, as it is that the "normal tests"
pass.

I seem to recall that *some* of them support static checks. But I
haven't used any, and it wouldn't surprise me if they were clumsy.
It *does* mean stepping into the build system's territory after all.

There's some stuff on compile-time assertions in the
Alexandrescu book ('Modern C++ Design'). Have a look at chapter
2, 'techniques', right at the beginning.

Regards,
Robert
 

Vladimir Jovic

Alf said:
OK, this displays my ignorance of what's out there (it's been a long
time since I developed for a living), and also my laziness not googling.
:)

However.

I want to unit-test some library code I'm sort of extracting from some
old code I have.

For unit testing, see this:
http://cxxtest.sourceforge.net/guide.html
The things that should work without error are easy to test, and it's
currently not so much code that I've considered a testing framework,
although the code size increases. I'm thinking that perhaps the popular
frameworks don't support my needs: there are cases where the code
/should/ assert at run time. And worse, there are cases where the code
should assert at compile time...

How do you deal with this kind of testing, testing that things fail as
they should (at compile time and at run time)?

For compile time testing, see this:
http://www.boost.org/doc/libs/1_42_0/doc/html/boost_staticassert.html

For run time testing, I am using a macro, which throws an exception if
the condition fails. The exception class prints backtrace and the failed
condition.
 

Alf P. Steinbach

* Vladimir Jovic:
For unit testing, see this:
http://cxxtest.sourceforge.net/guide.html


For compile time testing, see this:
http://www.boost.org/doc/libs/1_42_0/doc/html/boost_staticassert.html

For run time testing, I am using a macro, which throws an exception if
the condition fails. The exception class prints backtrace and the failed
condition.

Thanks, but I think you misunderstood the question.

E.g., the problem isn't to produce compile time asserts. The problem is testing
them, systematically. Preferably in an automated way.


Cheers, & thanks,

- Alf
 

Alf P. Steinbach

* Alf P. Steinbach:
OK, this displays my ignorance of what's out there (it's been a long
time since I developed for a living), and also my laziness not googling.
:)

However.

I want to unit-test some library code I'm sort of extracting from some
old code I have.

The things that should work without error are easy to test, and it's
currently not so much code that I've considered a testing framework,
although the code size increases. I'm thinking that perhaps the popular
frameworks don't support my needs: there are cases where the code
/should/ assert at run time. And worse, there are cases where the code
should assert at compile time...

OK, I made a unit test driver GUI in Python 3.x, <url:
http://pastebin.com/bB1jeP5Z>.

It reports success or failure depending on whether build is expected to succeed
or fail, and depending on whether running the executable is expected to succeed
or fail.

It's very unfinished but this is a first usable version.

Tests and subtests are identified by C++ macro symbols defined in an [.ini]
file. Running a test, the GUI (1) places #define's of the selected symbols in
file [_config.h], (2) invokes a 'build' script which in Windows can simply be a
batch file (building a /test/ is ordinarily very fast), and (3) if the build
succeeds and is expected to succeed, it invokes a 'run' script.

My 'build' script looks like this:

<code file="build.bat">
@echo off
devenv /nologo my_msvc_solution.sln /project my_lib_project.vcproj /build Debug
</code>

And the 'run' script is simply:

<code file="run.bat">
@Debug\the_name_of_the_executable
</code>

In the Python code the script (batch file) directory is hardcoded as
    build_dir = os.path.normpath( "../build/msvc_7_1" )
but it should be easy to change.

For a main test the build is always expected to succeed.

For a subtest the build is expected to fail if the subtest id starts with "CERR_".


Cheers,

- Alf
 

tonydee

OK, I made a unit test driver GUI in Python 3.x, <url:http://pastebin.com/bB1jeP5Z>.

I've done something similar (conceptually, didn't check out your
Python code). Just thought I'd mention another alternative (at least
on UNIX) ala "configure" scripts - they often check for success or
failure in compiling little test programs.

Cheers,
Tony
 

Vladimir Jovic

Alf said:
* Alf P. Steinbach:
OK, this displays my ignorance of what's out there (it's been a long
time since I developed for a living), and also my laziness not
googling. :)

However.

I want to unit-test some library code I'm sort of extracting from some
old code I have.

The things that should work without error are easy to test, and it's
currently not so much code that I've considered a testing framework,
although the code size increases. I'm thinking that perhaps the
popular frameworks don't support my needs: there are cases where the
code /should/ assert at run time. And worse, there are cases where the
code should assert at compile time...

OK, I made a unit test driver GUI in Python 3.x, <url:
http://pastebin.com/bB1jeP5Z>.

It reports success or failure depending on whether build is expected to
succeed or fail, and depending on whether running the executable is
expected to succeed or fail.

It's very unfinished but this is a first usable version.

Tests and subtests are identified by C++ macro symbols defined in an
[.ini] file. Running a test, the GUI (1) places #define's of the selected
symbols in file [_config.h], (2) invokes a 'build' script which in
Windows can simply be a batch file (building a /test/ is ordinarily very
fast), and (3) if the build succeeds and is expected to succeed, it
invokes a 'run' script.

My 'build' script looks like this:

<code file="build.bat">
@echo off
devenv /nologo my_msvc_solution.sln /project my_lib_project.vcproj /build Debug
</code>

And the 'run' script is simply:

<code file="run.bat">
@Debug\the_name_of_the_executable
</code>

In the Python code the script (batch file) directory is hardcoded as
build_dir = os.path.normpath( "../build/msvc_7_1" )
but it should be easy to change.

For a main test the build is always expected to succeed.

For a subtest the build is expected to fail if the subtest id starts
with "CERR_".

Ok, now I get it. For this kind of testing, you can use this:
http://expect.nist.gov/
 

DeMarcus

Ian said:
I used to use cppUnit (and still do for older projects) but now I use
gtest (http://code.google.com/p/googletest/). Compile time asserts are
beyond the scope of a unit testing framework; if it won't compile, there
isn't a unit to test!


For run time asserts, I interpose whatever function the system's assert
macro calls (__assert on Solaris) and have __assert throw an exception
with the file, line and expression passed by assert.

A boiled down (no framework) example:

#include <iostream>
#include <stdexcept>
#include <assert.h>

struct AssertionException : std::runtime_error
{
    const char* expression;
    const char* file;
    int line;

    AssertionException(const char* expression, const char* file, int line)
      : std::runtime_error( expression ),
        expression(expression), file(file), line(line) {}
};

void __assert(const char* expression, const char* file, int line)
{
    throw AssertionException( expression, file, line );
}

void fut( const char* p )
{
    assert( NULL != p );
}

int main()
{
    try
    {
        fut( NULL );
        std::cerr << "Oops" << std::endl;
    }
    catch( const AssertionException& e )
    {
        std::cerr << "Caught" << ": " << e.what() << std::endl;
    }
}

I'm not a complete expert, nor am I a sulky person, but before writing
assert exceptions one might consider the following, which I found in the
book C++ Coding Standards by Sutter & Alexandrescu, Item 68 - Assert
liberally to document internal assumptions and invariants.

Quote:
"It is not recommended to throw an exception instead of asserting, even
though the standard std::logic_error exception class was originally
designed for this purpose. The primary disadvantage of using an
exception to report a programming error is that you don't really want
stack unwinding to occur - you want the debugger to launch on the exact
line where the violation was detected, with the line's state intact."


/Daniel
 

Carlo Milanesi

Alf said:
E.g., the problem isn't to produce compile time asserts. The problem is
testing them, systematically. Preferably in an automated way.

I published the following article that addresses specifically this
problem (after a tutorial on testing C++ programs):
http://www.drdobbs.com/cpp/205801074

I published also the open source utility described in that article.
It is in the files "staticpp.txt" and "staticpp.zip" in the following
volume:
ftp://66.77.27.238/sourcecode/ddj/2008/0802.zip
 

Alf P. Steinbach

* Carlo Milanesi:
I published the following article that addresses specifically this
problem (after a tutorial on testing C++ programs):
http://www.drdobbs.com/cpp/205801074

I published also the open source utility described in that article.
It is in the files "staticpp.txt" and "staticpp.zip" in the following
volume:
ftp://66.77.27.238/sourcecode/ddj/2008/0802.zip

Hm, it's very similar to my approach. But instead of your @LEGAL and @ILLEGAL I
just use ordinary C++ preprocessor directives. This allows testing to be
performed manually, without any preprocessing or having a script at hand.

It also allows having just a single Visual C++ project and project settings for
the test suite for a library, and almost completely decouples the scripting side
from the C++ source code side.

The script -> C++ connection is that at the scripting side one defines the
relevant preprocessor macros for each test.

Before building a test the script places the macro definitions for the selected
test in a file '_config.h', which is the only C++ file that it knows about.

The Python test driver script (not exactly finished, but working!) is available
at <url: http://pastebin.com/NK8yVcyv>.


---


The main program for my test suite for the lib I'm testing starts like this:


<code>
#include "_config.h" // Generated.

#if \
    defined( USE_STATIC_ASSERT ) || \
    defined( USE_STATIC_ASSERT__OVERRIDE ) || \
    defined( USE_GENERAL_WARNINGS_SUPPRESSION ) || \
    defined( USE_DEBUGGER_API ) || \
    defined( USE_DEBUGGING ) || \
    defined( USE_FIXED_SIZE_TYPES ) || \
    defined( USE_TYPECHECKING ) || \
    defined( USE_TYPECHECKING__DOWNCAST_OF_POINTER ) || \
    defined( USE_TYPECHECKING__DOWNCAST_OF_REFERENCE ) || \
    defined( USE_PRIMITIVE_TYPES__CASTBYTEPTRFROM ) || \
    0
    // OK
#else
#   error Unknown test symbol or no test symbol defined.
#endif
</code>


I haven't gotten around to replace the "USE_" prefix with "TEST_" yet... :)

A typical file to be tested (I just chose a short one):


<code>
// Copyright (c) Alf P. Steinbach, 2010.
#include <progrock/cppx/devsupport/static_assert.h>

#if \
    defined( CPPXTEST_NO_SUBTESTS ) || \
    defined( CERR_NONEXISTING ) || \
    0
    // OK
#else
#   error Unknown or no subtest symbol defined (define CPPXTEST_NO_SUBTESTS?).
#endif


class Base
{
protected:
    virtual void foo( int, char const* ) const = 0;
};

class Derived:
    public Base
{
protected:
    virtual void foo( int a, char const* b ) const
    {
        CPPX_IS_OVERRIDE_OF( Base::foo,(a, b) );    // Should compile fine.
    }

    virtual void bar() const
    {
#if defined( CERR_NONEXISTING )
        CPPX_IS_OVERRIDE_OF( Base::bar,() );    // Should yield compilation error.
#endif
    }
};
</code>


Cheers, & thanks for that article ref!,

- Alf
 

Ian Collins

DeMarcus said:
I'm not a complete expert, nor am I a sulky person, but before writing
assert exceptions one might consider following that I found in the book
C++ Coding Standards by Sutter & Alexandrescu, Item 68 - Assert
liberally to document internal assumptions and invariants.

Quote:
"It is not recommended to throw an exception instead of asserting, even
though the standard std::logic_error exception class was originally
designed for this purpose. The primary disadvantage of using an
exception to report a programming error is that you don't really want
stack unwinding to occur - you want the debugger to launch on the exact
line where the violation was detected, with the line's state intact."

My example above was for a unit test harness used to confirm that
assertions trigger when expected. It was not a suggestion for
production code!

In a test harness, you could get the mock __assert function to set a
flag to indicate that it has been called, but that won't cause the called
function to terminate. It will blunder on with invalid inputs and do
nasty things. In this instance, mapping assertions to exceptions is the
only practical solution.
 

DeMarcus

Ian said:
My example above was for a unit test harness used to confirm that
assertions trigger when expected. It was not a suggestion for
production code!

In a test harness, you could get the mock __assert function to set a
flag to indicate that it has been called, but that won't cause the called
function to terminate. It will blunder on with invalid inputs and do
nasty things. In this instance, mapping assertions to exceptions is the
only practical solution.

Oh, I see. That's useful.
 

Alf P. Steinbach

* Ian Collins:
My example above was for a unit test harness used to confirm that
assertions trigger when expected. It was not a suggestion for
production code!

In a test harness, you could get the mock __assert function to set a
flag to indicate that it has been called, but that won't cause the called
function to terminate. It will blunder on with invalid inputs and do
nasty things. In this instance, mapping assertions to exceptions is the
only practical solution.

I must disagree. I think it *can* be practical when you have good knowledge of
the code being tested. But for arbitrary code there might just be a catch(...)
somewhere.

So I think that in general it's better to just have the test harness (script,
likely) detect the failed execution.

And sometimes that failed execution is what's expected for a successful test.


Cheers,

- Alf
 

Ian Collins

Alf said:
* Ian Collins:

I must disagree. I think it *can* be practical when you have good
knowledge of the code being tested. But for arbitrary code there might
just be a catch(...) somewhere.

So I think that in general it's better to just have the test harness
(script, likely) detect the failed execution.

No, we were (at least I was) talking about unit tests here. There might
be hundreds or thousands of tests. You want to see the test assertions
pass or fail, not the execution of the tests.

In my original example:

try
{
    fut( NULL );
    std::cerr << "Oops" << std::endl;
}
catch( const AssertionException& e )
{
    std::cerr << "Caught" << ": " << e.what() << std::endl;
}

The code is asserting that fut() asserts when passed NULL. Within a test
framework (cppUnit in this case) the code would look something like:

try
{
    fut( NULL );
    CPPUNIT_ASSERT( !"fut failed to assert on NULL" );
}
catch( const AssertionException& e ) {}
 
