\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename check.info
@include version.texi
@settitle Check @value{VERSION}
@syncodeindex fn cp
@syncodeindex tp cp
@syncodeindex vr cp
@c %**end of header

@copying
This manual is for Check
(version @value{VERSION}, @value{UPDATED}),
a unit testing framework for C.

Copyright @copyright{} 2001--2014 Arien Malec, Branden Archer, Chris Pickett,
Fredrik Hugosson, and Robert Lemmen.

@quotation
Permission is granted to copy, distribute and/or modify this document
under the terms of the @acronym{GNU} Free Documentation License,
Version 1.2 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no
Back-Cover Texts.  A copy of the license is included in the section
entitled ``@acronym{GNU} Free Documentation License.''
@end quotation
@end copying

@dircategory Software development
@direntry
* Check: (check)Introduction.
@end direntry

@titlepage
@title Check
@subtitle A Unit Testing Framework for C
@subtitle for version @value{VERSION}, @value{UPDATED}
@author Arien Malec
@author Branden Archer
@author Chris Pickett
@author Fredrik Hugosson
@author Robert Lemmen
@author Robert Collins

@c The following two commands start the copyright page.
@page
@vskip 0pt plus 1filll
@insertcopying
@end titlepage

@c Output the table of contents at the beginning.
@contents

@ifnottex
@node Top, Introduction, (dir), (dir)
@top Check

@insertcopying

Please send corrections to this manual to
@email{check-devel AT lists.sourceforge.net}.  We'd prefer it if you can
send a unified diff (@command{diff -u}) against the
@file{doc/check.texi} file that ships with Check, but if that is not
possible, something is better than nothing.
@end ifnottex

@menu
* Introduction::
* Unit Testing in C::
* Tutorial::
* Advanced Features::
* Supported Build Systems::
* Conclusion and References::
* Environment Variable Reference::
* Copying This Manual::
* Index::

@detailmenu
 --- The Detailed Node Listing ---

Unit Testing in C

* Other Frameworks for C::

Tutorial: Basic Unit Testing

* How to Write a Test::
* Setting Up the Money Build Using Autotools::
* Setting Up the Money Build Using CMake::
* Test a Little::
* Creating a Suite::
* SRunner Output::

Advanced Features

* Convenience Test Functions::
* Running Multiple Cases::
* No Fork Mode::
* Test Fixtures::
* Multiple Suites in one SRunner::
* Selective Running of Tests::
* Testing Signal Handling and Exit Values::
* Looping Tests::
* Test Timeouts::
* Determining Test Coverage::
* Finding Memory Leaks::
* Test Logging::
* Subunit Support::

Test Fixtures

* Test Fixture Examples::
* Checked vs Unchecked Fixtures::

Test Logging

* XML Logging::
* TAP Logging::

Environment Variable Reference

Copying This Manual

* GNU Free Documentation License::  License for copying this manual.

@end detailmenu
@end menu

@node Introduction, Unit Testing in C, Top, Top
@chapter Introduction
@cindex introduction

Check is a unit testing framework for C.  It was inspired by similar
frameworks that currently exist for most programming languages; the
most famous example being @uref{http://www.junit.org, JUnit} for Java.
There is a list of unit test frameworks for multiple languages at
@uref{http://www.xprogramming.com/software.htm}.  Unit testing has a
long history as part of formal quality assurance methodologies, but
has recently been associated with the lightweight methodology called
Extreme Programming.  In that methodology, the characteristic practice
involves interspersing unit test writing with coding (``test a
little, code a little'').  While the incremental unit test/code
approach is indispensable to Extreme Programming, it is also
applicable, and perhaps indispensable, outside of that methodology.

The incremental test/code approach provides three main benefits to the
developer:

@enumerate
@item
Because the unit tests use the interface to the unit being tested,
they allow the developer to think about how the interface should be
designed for usage early in the coding process.

@item
They help the developer think early about aberrant cases, and code
accordingly.

@item
By providing a documented level of correctness, they allow the
developer to refactor (see @uref{http://www.refactoring.com})
aggressively.
@end enumerate

That third reason is the one that turns people into unit testing
addicts.  There is nothing so satisfying as doing a wholesale
replacement of an implementation, and having the unit tests reassure
you at each step of that change that all is well.  It is like the
difference between exploring the wilderness with and without a good
map and compass: without the proper gear, you are more likely to
proceed cautiously and stick to the marked trails; with it, you can
take the most direct path to where you want to go.

Look at the Check homepage for the latest information on Check:
@uref{http://check.sourceforge.net}.

The Check project page is at:
@uref{http://sourceforge.net/projects/check/}.

@node Unit Testing in C, Tutorial, Introduction, Top
@chapter Unit Testing in C
@cindex C unit testing

The approach to unit testing frameworks used for Check originated with
Smalltalk, which is a late binding object-oriented language supporting
reflection.  Writing a framework for C requires solving some special
problems that frameworks for Smalltalk, Java or Python don't have to
face.  In all of those languages, the worst that a unit test can do is
fail miserably, throwing an exception of some sort.  In C, a unit test
is just as likely to trash its address space as it is to fail to meet
its test requirements, and if the test framework sits in the same
address space, goodbye test framework.

To solve this problem, Check uses the @code{fork()} system call to
create a new address space in which to run each unit test, and then
uses message queues to send information on the testing process back to
the test framework.  That way, your unit test can do all sorts of
nasty things with pointers, and throw a segmentation fault, and the
test framework will happily note a unit test error, and chug along.

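The mechanism can be pictured with a minimal sketch.  This is an
illustration of the idea only, not Check's actual implementation (which
also reports results back over a message queue): the parent forks, the
child runs the test, and the parent inspects the child's exit status to
tell a clean run from a crash.
@example
@verbatim
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* A hypothetical unit test that crashes via a NULL dereference. */
static void buggy_test (void)
{
  int *p = NULL;
  *p = 42;                      /* SIGSEGV, but only in the child */
}

int main (void)
{
  pid_t pid = fork ();
  if (pid == 0)
    {
      /* Child: run the test in its own address space. */
      buggy_test ();
      exit (EXIT_SUCCESS);
    }
  /* Parent: the framework survives the child's crash. */
  int status;
  waitpid (pid, &status, 0);
  if (WIFSIGNALED (status))
    printf ("Test died with signal %d\n", WTERMSIG (status));
  else
    printf ("Test exited with status %d\n", WEXITSTATUS (status));
  return 0;
}
@end verbatim
@end example
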
The Check framework is also designed to play happily with common
development environments for C programming.  The author designed Check
around Autoconf/Automake (thus the name Check: @command{make check} is
the idiom used for testing with Autoconf/Automake).  Note however that
Autoconf/Automake are NOT necessary to use Check; any build system
is sufficient.  The test failure messages printed by Check use the
common @samp{filename:linenumber:message} idiom used by @command{gcc}
and family to report problems in source code.  With (X)Emacs, the output
of Check allows one to quickly navigate to the location of the unit test
that failed; presumably that also works in VI and IDEs.

@menu
* Other Frameworks for C::
@end menu

@node Other Frameworks for C, , Unit Testing in C, Unit Testing in C
@section Other Frameworks for C
@cindex other frameworks
@cindex frameworks

The authors know of the following additional unit testing frameworks
for C:

@table @asis

@item AceUnit
AceUnit (Advanced C and Embedded Unit) bills itself as a comfortable C
code unit test framework.  It tries to mimic JUnit 4.x and includes
reflection-like capabilities.  AceUnit can be used in
resource-constrained environments, e.g. embedded software development,
and importantly it runs fine in environments where you cannot include a
single standard header file and cannot invoke a single standard C
function from the ANSI / ISO C libraries.  It also has a Windows port.
It does not use forks to trap signals, although the authors have
expressed interest in adding such a feature.  See the
@uref{http://aceunit.sourceforge.net/, AceUnit homepage}.

@item GNU Autounit
Much along the same lines as Check, including forking to run unit tests
in a separate address space (in fact, the original author of Check
borrowed the idea from @acronym{GNU} Autounit).  @acronym{GNU} Autounit
uses GLib extensively, which means that linking and such need special
options, but this may not be a big problem to you, especially if you are
already using GTK or GLib.  See the @uref{http://autounit.tigris.org/,
GNU Autounit homepage}.

@item cUnit
Also uses GLib, but does not fork to protect the address space of unit
tests.  See the
@uref{http://web.archive.org/web/*/http://people.codefactory.se/~spotty/cunit/,
archived cUnit homepage}.

@item CUnit
Standard C, with plans for a Win32 GUI implementation.  Does not
currently fork or otherwise protect the address space of unit tests.
In early development.  See the @uref{http://cunit.sourceforge.net,
CUnit homepage}.

@item CuTest
A simple framework with just one .c and one .h file that you drop into
your source tree.  See the @uref{http://cutest.sourceforge.net, CuTest
homepage}.

@item CppUnit
The premier unit testing framework for C++; you can also use it to test C
code.  It is stable, actively developed, and has a GUI interface.  The
primary reasons not to use CppUnit for C are first that it is quite
big, and second you have to write your tests in C++, which means you
need a C++ compiler.  If these don't sound like concerns, it is
definitely worth considering, along with other C++ unit testing
frameworks.  See the
@uref{http://cppunit.sourceforge.net/cppunit-wiki, CppUnit homepage}.

@item embUnit
embUnit (Embedded Unit) is another unit test framework for embedded
systems.  This one appears to be superseded by AceUnit.  See the
@uref{https://sourceforge.net/projects/embunit/, Embedded Unit
homepage}.

@item MinUnit
A minimal set of macros and that's it!  The point is to
show how easy it is to unit test your code.  See the
@uref{http://www.jera.com/techinfo/jtns/jtn002.html, MinUnit
homepage}.

@item CUnit for Mr. Ando
A CUnit implementation that is fairly new, and apparently still in
early development.  See the
@uref{http://park.ruru.ne.jp/ando/work/CUnitForAndo/html/, CUnit for
Mr. Ando homepage}.
@end table

This list was last updated in March 2008.  If you know of other C unit
test frameworks, please send an email plus description to
@email{check-devel AT lists.sourceforge.net} and we will add the entry
to this list.

It is the authors' considered opinion that forking or otherwise
trapping and reporting signals is indispensable for unit testing (but
it probably wouldn't be hard to add that to frameworks without that
feature).  Try 'em all out: adapt this tutorial to use all of the
frameworks above, and use whichever you like.  Contribute, spread the
word, and make one a standard.  Languages such as Java and Python are
fortunate to have standard unit testing frameworks; it would be
desirable that C have one as well.

@node Tutorial, Advanced Features, Unit Testing in C, Top
@chapter Tutorial: Basic Unit Testing

This tutorial will use the JUnit
@uref{http://junit.sourceforge.net/doc/testinfected/testing.htm, Test
Infected} article as a starting point.  We will be creating a library
to represent money, @code{libmoney}, that allows conversions between
different currency types.  The development style will be ``test a
little, code a little'', with unit test writing preceding coding.
This constantly gives us insights into module usage, and also makes
sure we are constantly thinking about how to test our code.

@menu
* How to Write a Test::
* Setting Up the Money Build Using Autotools::
* Setting Up the Money Build Using CMake::
* Test a Little::
* Creating a Suite::
* SRunner Output::
@end menu

@node How to Write a Test, Setting Up the Money Build Using Autotools, Tutorial, Tutorial
@section How to Write a Test

Test writing using Check is very simple.  The file in which the checks
are defined must include @file{check.h} like so:
@example
@verbatim
#include <check.h>
@end verbatim
@end example

The basic unit test looks as follows:
@example
@verbatim
START_TEST (test_name)
{
  /* unit test code */
}
END_TEST
@end verbatim
@end example

The @code{START_TEST}/@code{END_TEST} pair are macros that set up the
basic structures needed for testing.  It is a mistake to leave off the
@code{END_TEST} marker; doing so produces all sorts of strange errors
when the test is compiled.

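As a concrete (if artificial) sketch, a complete unit test using one of
Check's assertion macros (@code{ck_assert_int_eq}, covered later in this
manual) might read:
@example
@verbatim
START_TEST (test_sanity)
{
  int sum = 2 + 2;

  /* Fails the test, with a diagnostic message, unless the two
     integers are equal. */
  ck_assert_int_eq (sum, 4);
}
END_TEST
@end verbatim
@end example
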
@node Setting Up the Money Build Using Autotools, Setting Up the Money Build Using CMake, How to Write a Test, Tutorial
@section Setting Up the Money Build Using Autotools

Since we are creating a library to handle money, we will first create
an interface in @file{money.h}, an implementation in @file{money.c},
and a place to store our unit tests, @file{check_money.c}.  We want to
integrate these core files into our build system, and will need some
additional structure.  To manage everything we'll use Autoconf,
Automake, and friends (collectively known as Autotools) for this
example.  Note that one could do something similar with ordinary
Makefiles, or any other build system.  In the authors' opinion it is
generally easier to use Autotools than bare Makefiles, and they
provide built-in support for running tests.

Note that this is not the place to explain how Autotools works.  If
you need help understanding what's going on beyond the explanations
here, the best place to start is probably Alexandre Duret-Lutz's
excellent
@uref{http://www.lrde.epita.fr/~adl/autotools.html,
Autotools tutorial}.

The examples in this section are part of the Check distribution; you
don't need to spend time cutting and pasting or (worse) retyping them.
Locate the Check documentation on your system and look in the
@samp{example} directory.  The standard directory for GNU/Linux
distributions should be @samp{/usr/share/doc/check/example}.  This
directory contains the final version reached at the end of the
tutorial.  If you want to follow along, create backups of @file{money.h},
@file{money.c}, and @file{check_money.c}, and then delete the originals.

We set up a directory structure as follows:
@example
@verbatim
.
|-- Makefile.am
|-- README
|-- configure.ac
|-- src
|   |-- Makefile.am
|   |-- main.c
|   |-- money.c
|   `-- money.h
`-- tests
    |-- Makefile.am
    `-- check_money.c
@end verbatim
@end example

Note that this is the output of @command{tree}, a great directory
visualization tool.  The top-level @file{Makefile.am} is simple; it
merely tells Automake how to process sub-directories:
@example
@verbatim
SUBDIRS = src . tests
@end verbatim
@end example

Note that @code{tests} comes last, because the code should be testing
an already compiled library.  @file{configure.ac} is standard Autoconf
boilerplate, as specified by the Autotools tutorial and as suggested
by @command{autoscan}.

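As a rough sketch (the project name, version, and file list here are
placeholders; the distributed example is authoritative), a minimal
@file{configure.ac} for this layout might contain:
@example
@verbatim
AC_INIT([money], [0.1])
AM_INIT_AUTOMAKE
AC_PROG_CC
LT_INIT
AM_PATH_CHECK
AC_CONFIG_FILES([Makefile src/Makefile tests/Makefile])
AC_OUTPUT
@end verbatim
@end example
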
@file{src/Makefile.am} builds @samp{libmoney} as a Libtool archive,
and links it to an application simply called @command{main}.  The
application's behavior is not important to this tutorial; what's
important is that none of the functions we want to unit test appear in
@file{main.c}; this probably means that the only function in
@file{main.c} should be @code{main()} itself.  Unit testing is not
appropriate for testing the whole application; for that you should use a
system testing tool like Autotest.  If you really want to test
@code{main()} using Check, rename it to something like
@code{_myproject_main()} and write a wrapper around it.

The primary build instructions for our unit tests are in
@file{tests/Makefile.am}:

@cartouche
@example
@verbatiminclude example/tests/Makefile.am
@end example
@end cartouche

@code{TESTS} tells Automake which test programs to run for
@command{make check}.  Similarly, the @code{check_} prefix in
@code{check_PROGRAMS} actually comes from Automake; it says to build
these programs only when @command{make check} is run.  (Recall that
Automake's @code{check} target is the origin of Check's name.)  The
@command{check_money} test is a program that we will build from
@file{tests/check_money.c}, linking it against both
@file{src/libmoney.la} and the installed @file{libcheck.la} on our
system.  The appropriate compiler and linker flags for using Check are
found in @code{@@CHECK_CFLAGS@@} and @code{@@CHECK_LIBS@@}, values
defined by the @code{AM_PATH_CHECK} macro.

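Concretely, a @file{tests/Makefile.am} along these lines would be
typical (a sketch; the file shipped in the example directory is
authoritative):
@example
@verbatim
TESTS = check_money
check_PROGRAMS = check_money
check_money_SOURCES = check_money.c
check_money_CFLAGS = @CHECK_CFLAGS@
check_money_LDADD = $(top_builddir)/src/libmoney.la @CHECK_LIBS@
@end verbatim
@end example
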
Now that all this infrastructure is out of the way, we can get on with
development.  @file{src/money.h} should only contain standard C header
boilerplate:

@cartouche
@example
@verbatiminclude example/src/money.1.h
@end example
@end cartouche

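That boilerplate is just an include guard; a sketch of what
@file{money.h} amounts to at this stage:
@example
@verbatim
#ifndef MONEY_H
#define MONEY_H

#endif /* MONEY_H */
@end verbatim
@end example
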
@file{src/money.c} should be empty, and @file{tests/check_money.c}
should only contain an empty @code{main()} function:

@cartouche
@example
@verbatiminclude example/tests/check_money.1.c
@end example
@end cartouche

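Likewise, the initial @file{check_money.c} is only a placeholder,
roughly:
@example
@verbatim
int
main (void)
{
  return 0;
}
@end verbatim
@end example
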
Create the GNU Build System for the project and then build @file{main}
and @file{libmoney.la} as follows:
@example
@verbatim
$ autoreconf --install
$ ./configure
$ make
@end verbatim
@end example

(@command{autoreconf} determines which commands are needed in order
for @command{configure} to be created or brought up to date.
Previously one would use a script called @command{autogen.sh} or
@command{bootstrap}, but that practice is unnecessary now.)

Now build and run the @command{check_money} test with @command{make
check}.  If all goes well, @command{make} should report that our tests
passed.  No surprise, because there aren't any tests to fail.  If you
have problems, make sure to see @ref{Supported Build Systems}.

This was tested on the isadora distribution of Linux Mint
GNU/Linux in November 2012, using Autoconf 2.65, Automake 1.11.1,
and Libtool 2.2.6b.  Please report any problems to
@email{check-devel AT lists.sourceforge.net}.

@node Setting Up the Money Build Using CMake, Test a Little, Setting Up the Money Build Using Autotools, Tutorial
@section Setting Up the Money Build Using CMake

Since we are creating a library to handle money, we will first create
an interface in @file{money.h}, an implementation in @file{money.c},
and a place to store our unit tests, @file{check_money.c}.  We want to
integrate these core files into our build system, and will need some
additional structure.  To manage everything we'll use CMake for this
example.  Note that one could do something similar with ordinary
Makefiles, or any other build system.  In the authors' opinion it is
generally easier to use CMake than bare Makefiles, and it provides
built-in support for running tests.

Note that this is not the place to explain how CMake works.  If
you need help understanding what's going on beyond the explanations
here, the best place to start is probably the @uref{http://www.cmake.org,
CMake project's homepage}.

The examples in this section are part of the Check distribution; you
don't need to spend time cutting and pasting or (worse) retyping them.
Locate the Check documentation on your system and look in the
@samp{example} directory, or look in the Check source.  On a GNU/Linux
system the standard directory should be @samp{/usr/share/doc/check/example}.
This directory contains the final version reached at the end of the
tutorial.  If you want to follow along, create backups of @file{money.h},
@file{money.c}, and @file{check_money.c}, and then delete the originals.

We set up a directory structure as follows:
@example
@verbatim
.
|-- Makefile.am
|-- README
|-- CMakeLists.txt
|-- cmake
|   |-- config.h.in
|   `-- FindCheck.cmake
|-- src
|   |-- CMakeLists.txt
|   |-- main.c
|   |-- money.c
|   `-- money.h
`-- tests
    |-- CMakeLists.txt
    `-- check_money.c
@end verbatim
@end example

The top-level @file{CMakeLists.txt} contains the configuration checks
for available libraries and types, and also defines the sub-directories
to process.  The @file{cmake/FindCheck.cmake} file contains instructions
for locating Check on the system and setting up the build to use it.
If the system does not have pkg-config installed, @file{cmake/FindCheck.cmake}
may not be able to locate Check successfully.  In this case, the install
directory of Check must be located manually, and the following line
added to @file{tests/CMakeLists.txt} (assuming Check was installed under
@code{C:\Program Files\check}):

@example
@verbatim
set(CHECK_INSTALL_DIR "C:/Program Files/check")
@end verbatim
@end example

Note that @code{tests} comes last, because the code should be testing
an already compiled library.

@file{src/CMakeLists.txt} builds @samp{libmoney} as an archive,
and links it to an application simply called @command{main}.  The
application's behavior is not important to this tutorial; what's
important is that none of the functions we want to unit test appear in
@file{main.c}; this probably means that the only function in
@file{main.c} should be @code{main()} itself.  Unit testing is not
appropriate for testing the whole application; for that you should use a
system testing tool like Autotest.  If you really want to test
@code{main()} using Check, rename it to something like
@code{_myproject_main()} and write a wrapper around it.

Now that all this infrastructure is out of the way, we can get on with
development.  @file{src/money.h} should only contain standard C header
boilerplate:

@cartouche
@example
@verbatiminclude example/src/money.1.h
@end example
@end cartouche

@file{src/money.c} should be empty, and @file{tests/check_money.c}
should only contain an empty @code{main()} function:

@cartouche
@example
@verbatiminclude example/tests/check_money.1.c
@end example
@end cartouche

Create the CMake Build System for the project and then build @file{main}
and @file{libmoney} as follows on Unix-compatible systems:
@example
@verbatim
$ cmake .
$ make
@end verbatim
@end example

and for MSVC on Windows:
@example
@verbatim
$ cmake -G "NMake Makefiles" .
$ nmake
@end verbatim
@end example

Now build and run the @command{check_money} test, with either @command{make
test} on a Unix-compatible system or @command{nmake test} on Windows using
MSVC.  If all goes well, the command should report that our tests
passed.  No surprise, because there aren't any tests to fail.

This was tested on Windows 7 using CMake 2.8.12.1 and
MSVC 16.00.30319.01 / Visual Studio 10 in February 2014.  Please report
any problems to @email{check-devel AT lists.sourceforge.net}.

@node Test a Little, Creating a Suite, Setting Up the Money Build Using CMake, Tutorial
@section Test a Little, Code a Little

The @uref{http://junit.sourceforge.net/doc/testinfected/testing.htm,
Test Infected} article starts out with a @code{Money} class, and so
will we.  Of course, we can't do classes with C, but we don't really
need to.  The Test Infected approach to writing code says that we
should write the unit test @emph{before} we write the code, and in
this case, we will be even more dogmatic and doctrinaire than the
authors of Test Infected (who clearly don't really get this stuff,
only being some of the originators of the Patterns approach to
software development and OO design).

Here are the changes to @file{check_money.c} for our first unit test:

@cartouche
@example
@verbatiminclude check_money.1-2.c.diff
@end example
@end cartouche

@findex ck_assert_int_eq
@findex ck_assert_str_eq
A unit test should just chug along and complete.  If it exits early,
or is signaled, it will fail with a generic error message.  (Note: it
is conceivable that you expect an early exit or a signal, and there is
functionality in Check to specifically assert that we should expect a
signal or an early exit.)  If we want to get some information
about what failed, we need to use some calls that will point out a failure.
Two such calls are @code{ck_assert_int_eq} (used to determine if two integers
are equal) and @code{ck_assert_str_eq} (used to determine if two
null-terminated strings are equal).  Both of these functions (actually
macros) will signal an error if their arguments are not equal.

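In context, the assertions in our first unit test then look something
like this sketch, using the @code{money_create()}, @code{money_amount()},
@code{money_currency()}, and @code{money_free()} functions that the test
expects from the money interface:
@example
@verbatim
Money *m = money_create (5, "USD");

ck_assert_int_eq (money_amount (m), 5);
ck_assert_str_eq (money_currency (m), "USD");

money_free (m);
@end verbatim
@end example
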
@findex ck_assert
An alternative to using @code{ck_assert_int_eq} and @code{ck_assert_str_eq}
is to write the expression under test directly using @code{ck_assert}.
This takes one Boolean argument, which must be true for the check to pass.
The second test could be rewritten as follows:
@example
@verbatim
ck_assert(strcmp (money_currency (m), "USD") == 0);
@end verbatim
@end example

@findex ck_assert_msg
@code{ck_assert} will find and report failures, but will not print any
user-supplied message in the unit test result.  To print a user-defined
message along with any failures found, use @code{ck_assert_msg}.  The
first argument is a Boolean expression.  The remaining arguments support
@code{varargs} and accept @code{printf}-style format strings and
arguments.  This is especially useful while debugging.  For example, the
second test could be rewritten as:
@example
@verbatim
ck_assert_msg(strcmp (money_currency (m), "USD") == 0,
              "Was expecting a currency of USD, but found %s",
              money_currency (m));
@end verbatim
@end example

@findex ck_abort
@findex ck_abort_msg
If the Boolean argument is too complicated to elegantly express within
@code{ck_assert()}, there are the alternate functions @code{ck_abort()}
and @code{ck_abort_msg()} that unconditionally fail.  The second test inside
@code{test_money_create} above could be rewritten as follows:
@example
@verbatim
if (strcmp (money_currency (m), "USD") != 0)
  {
    ck_abort_msg ("Currency not set correctly on creation");
  }
@end verbatim
@end example

For your convenience, @code{ck_assert}, which does not accept a
user-supplied message, substitutes a suitable message for you.  (This is
equivalent to passing a NULL message to @code{ck_assert_msg}.)  So you
could also write a test as follows:
@example
@verbatim
ck_assert (money_amount (m) == 5);
@end verbatim
@end example

This is equivalent to:
@example
@verbatim
ck_assert_msg (money_amount (m) == 5, NULL);
@end verbatim
@end example

which will print the file, line number, and the message
@code{"Assertion 'money_amount (m) == 5' failed"} if
@code{money_amount (m) != 5}.

When we try to compile and run the test suite now using @command{make
check}, we get a whole host of compilation errors.  It may seem a bit
strange to deliberately write code that won't compile, but notice what
we are doing: in creating the unit test, we are also defining
requirements for the money interface.  Compilation errors are, in a
way, unit test failures of their own, telling us that the
implementation does not match the specification.  If all we do is edit
the sources so that the unit test compiles, we are actually making
progress, guided by the unit tests, so that's what we will now do.

We will patch our header @file{money.h} as follows:

@cartouche
@example
@verbatiminclude money.1-2.h.diff
@end example
@end cartouche

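After the patch, the interface amounts to declarations along these
lines (a sketch; the exact prototypes are in the distributed
@file{money.h}):
@example
@verbatim
typedef struct Money Money;

Money *money_create (int amount, char *currency);
int money_amount (Money *m);
char *money_currency (Money *m);
void money_free (Money *m);
@end verbatim
@end example
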
Our code compiles now, and again passes all of the tests.  However,
once we try to @emph{use} the functions in @code{libmoney} in the
@code{main()} of @code{check_money}, we'll run into more problems, as
they haven't actually been implemented yet.

@node Creating a Suite, SRunner Output, Test a Little, Tutorial
@section Creating a Suite

To run unit tests with Check, we must create some test cases,
aggregate them into a suite, and run them with a suite runner.  That's
a bit of overhead, but it is mostly one-off.  Here's a diff for the
new version of @file{check_money.c}.  Note that we include
@file{stdlib.h} to get the definitions of @code{EXIT_SUCCESS} and
@code{EXIT_FAILURE}.

@cartouche
@example
@verbatiminclude check_money.2-3.c.diff
@end example
@end cartouche

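The resulting shape of @file{check_money.c} is roughly the following
sketch (the distributed diff is authoritative):
@example
@verbatim
#include <stdlib.h>
#include <check.h>
#include "../src/money.h"

START_TEST (test_money_create)
{
  Money *m = money_create (5, "USD");

  ck_assert_int_eq (money_amount (m), 5);
  ck_assert_str_eq (money_currency (m), "USD");
  money_free (m);
}
END_TEST

Suite *
money_suite (void)
{
  Suite *s = suite_create ("Money");
  TCase *tc_core = tcase_create ("Core");

  tcase_add_test (tc_core, test_money_create);
  suite_add_tcase (s, tc_core);
  return s;
}

int
main (void)
{
  int number_failed;
  SRunner *sr = srunner_create (money_suite ());

  srunner_run_all (sr, CK_NORMAL);
  number_failed = srunner_ntests_failed (sr);
  srunner_free (sr);
  return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}
@end verbatim
@end example
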
Most of the @code{money_suite()} code should be self-explanatory.  We are
creating a suite, creating a test case, adding the test case to the
suite, and adding the unit test we created above to the test case.
Why separate this off into a separate function, rather than inline it
in @code{main()}?  Because any new tests will get added in
@code{money_suite()}, but nothing will need to change in @code{main()}
for the rest of this example, so @code{main()} will stay relatively
clean and simple.

Unit tests are internally defined as static functions.  This means
that the code to add unit tests to test cases must be in the same
compilation unit as the unit tests themselves.  This provides another
reason to put the creation of the test suite in a separate function:
you may later want to keep one source file per suite; defining a
uniquely named suite creation function allows you later to define a
header file giving prototypes for all the suite creation functions,
and encapsulate the details of where and how unit tests are defined
behind those functions.  See the test program defined for Check itself
for an example of this strategy.

The code in @code{main()} bears some explanation.  We are creating a
suite runner object of type @code{SRunner} from the @code{Suite} we
created in @code{money_suite()}.  We then run the suite, using the
@code{CK_NORMAL} flag to specify that we should print a summary of the
run, and list any failures that may have occurred.  We capture the
number of failures that occurred during the run, and use that to
decide how to return.  The @code{check} target created by Automake
uses the return value to decide whether the tests passed or failed.

Now that the tests are actually being run by @command{check_money}, we
encounter linker errors again when we try out @code{make check}.  Try it
for yourself and see.  The reason is that the @file{money.c}
implementation of the @file{money.h} interface hasn't been created
yet.  Let's go with the fastest solution possible and implement stubs
for each of the functions in @file{money.c}.  Here is the diff:

@cartouche
@example
@verbatiminclude money.1-3.c.diff
@end example
@end cartouche

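Stubbed out, @file{money.c} looks something like this sketch:
@example
@verbatim
#include <stdlib.h>
#include "money.h"

Money *
money_create (int amount, char *currency)
{
  return NULL;                  /* stub */
}

int
money_amount (Money *m)
{
  return 0;                     /* stub */
}

char *
money_currency (Money *m)
{
  return NULL;                  /* stub */
}

void
money_free (Money *m)
{
  /* stub: nothing to free yet */
}
@end verbatim
@end example
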
Note that we @code{#include <stdlib.h>} to get the definition of
@code{NULL}.  Now, the code compiles and links when we run @command{make
check}, but our unit test fails.  Still, this is progress, and we can
focus on making the test pass.

@node SRunner Output, , Creating a Suite, Tutorial
@section SRunner Output

@findex srunner_run_all
@findex srunner_run
The functions to run tests in an @code{SRunner} are defined as follows:
@example
@verbatim
void srunner_run_all (SRunner * sr, enum print_output print_mode);

void srunner_run (SRunner *sr, const char *sname, const char *tcname,
                  enum print_output print_mode);
@end verbatim
@end example

Those functions do two things:

@enumerate
@item
They run all of the unit tests for the selected test cases defined for
the selected suites in the SRunner, and collect the results in the
SRunner.  The determination of the selected test cases and suites
depends on the specific function used.

@code{srunner_run_all} will run all the defined test cases of all
defined suites, except if the environment variables @code{CK_RUN_CASE}
or @code{CK_RUN_SUITE} are defined.  If defined, those variables shall
contain the name of a test suite or a test case, defining in that way
the selected suite/test case.

@code{srunner_run} will run the suite/case selected by the
@code{sname} and @code{tcname} parameters.  A value of @code{NULL}
in either of those parameters means ``any suite/case''.

@item
They print the results according to the @code{print_mode} specified.
@end enumerate

For SRunners that have already been run, there is also a separate
printing function, defined as follows:
@example
@verbatim
void srunner_print (SRunner *sr, enum print_output print_mode);
@end verbatim
@end example

The enumeration values of @code{print_output} defined in Check, which
the parameter @code{print_mode} can assume, are as follows:

@table @code
@vindex CK_SILENT
@item CK_SILENT
Specifies that no output is to be generated.  If you use this flag, you
either need to programmatically examine the SRunner object, print
separately, or use test logging (@pxref{Test Logging}).

@vindex CK_MINIMAL
@item CK_MINIMAL
Only a summary of the test run will be printed (number run, passed,
failed, errors).

@vindex CK_NORMAL
@item CK_NORMAL
Prints the summary of the run, and prints one message per failed
test.

@vindex CK_VERBOSE
@item CK_VERBOSE
Prints the summary, and one message per test (passed or failed).

@vindex CK_ENV
@vindex CK_VERBOSITY
@item CK_ENV
Gets the print mode from the environment variable @code{CK_VERBOSITY},
which can have the values ``silent'', ``minimal'', ``normal'', or
``verbose''.  If the variable is not found or the value is not
recognized, the print mode is set to @code{CK_NORMAL}.

@vindex CK_SUBUNIT
@item CK_SUBUNIT
Prints running progress through the @uref{https://launchpad.net/subunit/,
subunit} test runner protocol.  See @ref{Subunit Support} for more
information.
@end table

With the @code{CK_NORMAL} flag specified in our @code{main()}, let's
rerun @code{make check} now.  The output from the unit test is as follows:
@example
@verbatim
Running suite(s): Money
0%: Checks: 1, Failures: 1, Errors: 0
check_money.c:9:F:Core:test_money_create:0: Assertion 'money_amount (m)==5' failed:
money_amount (m)==0, 5==5
FAIL: check_money
=====================================================
1 of 1 test failed
Please report to check-devel AT lists.sourceforge.net
=====================================================
@end verbatim
@end example

Note that prior to Automake 1.13, the output of @code{make check} is
simply the output of the unit test program.  Starting with 1.13,
Automake runs all unit test programs concurrently and stores the output
in log files, so the output listed above should be found in a log file.

The first number in the summary line tells us that 0% of our tests
passed, and the rest of the line tells us that there was one check in
total, and of those checks, one failure and zero errors.  The next
line tells us exactly where that failure occurred, and what kind of
failure it was (P for pass, F for failure, E for error).

After that we have some higher-level output generated by Automake: the
@code{check_money} program failed, and the bug-report address given in
@file{configure.ac} is printed.

Let's implement the @code{money_amount} function, so that it will pass
its tests.  We first have to create a Money structure to hold the
amount, and then implement the function to return the correct amount:

@cartouche
@example
@verbatiminclude money.3-4.c.diff
@end example
@end cartouche

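In outline, the change amounts to something like this sketch.  Note
that @code{money_create()} is still a stub returning @code{NULL} at
this point, which is exactly what makes the next run interesting:
@example
@verbatim
struct Money
{
  int amount;
  char *currency;
};

int
money_amount (Money *m)
{
  return m->amount;
}
@end verbatim
@end example
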
We will now rerun @command{make check} and@dots{} what's this?  The
output is now as follows:
@example
@verbatim
Running suite(s): Money
0%: Checks: 1, Failures: 0, Errors: 1
check_money.c:5:E:Core:test_money_create:0: (after this point)
Received signal 11 (Segmentation fault)
@end verbatim
@end example

|
@findex mark_point
|
|
What does this mean? Note that we now have an error, rather than a
|
|
failure. This means that our unit test either exited early, or was
|
|
signaled. Next note that the failure message says ``after this
|
|
point''; This means that somewhere after the point noted
|
|
(@file{check_money.c}, line 5) there was a problem: signal 11 (a.k.a.
|
|
segmentation fault). The last point reached is set on entry to the
|
|
unit test, and after every call to the @code{ck_assert()},
|
|
@code{ck_abort()}, @code{ck_assert_int_*()}, @code{ck_assert_str_*()},
|
|
or the special function @code{mark_point()}. For example, if we wrote some test
|
|
code as follows:
|
|
@example
|
|
@verbatim
|
|
stuff_that_works ();
|
|
mark_point ();
|
|
stuff_that_dies ();
|
|
@end verbatim
|
|
@end example
|
|
|
|
then the point returned will be that marked by @code{mark_point()}.

The reason our test failed so horribly is that we haven't implemented
@code{money_create()} to create any @code{Money}.  We'll go ahead and
implement that, the symmetric @code{money_free()}, and
@code{money_currency()} too, in order to make our unit test pass again;
here is the diff:

@cartouche
@example
@verbatiminclude money.4-5.c.diff
@end example
@end cartouche

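The implementations amount to something like this sketch (not yet
guarding against the limit cases discussed later):
@example
@verbatim
Money *
money_create (int amount, char *currency)
{
  Money *m = malloc (sizeof (Money));

  if (m == NULL)
    return NULL;
  m->amount = amount;
  m->currency = currency;
  return m;
}

char *
money_currency (Money *m)
{
  return m->currency;
}

void
money_free (Money *m)
{
  free (m);
}
@end verbatim
@end example
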
@node Advanced Features, Supported Build Systems, Tutorial, Top
@chapter Advanced Features

What you've seen so far is all you need for basic unit testing.  The
features described in this section are additions to Check that make it
easier for the developer to write, run, and analyze tests.

@menu
* Convenience Test Functions::
* Running Multiple Cases::
* No Fork Mode::
* Test Fixtures::
* Multiple Suites in one SRunner::
* Selective Running of Tests::
* Testing Signal Handling and Exit Values::
* Looping Tests::
* Test Timeouts::
* Determining Test Coverage::
* Finding Memory Leaks::
* Test Logging::
* Subunit Support::
@end menu

@node Convenience Test Functions, Running Multiple Cases, Advanced Features, Advanced Features
@section Convenience Test Functions

Using the @code{ck_assert} function for all tests can lead to a lot of
repetitive code that is hard to read.  For your convenience, Check
provides a set of functions (actually macros) for testing often-used
conditions.

@ftable @code
@item ck_abort
Unconditionally fails the test with a default message.

@item ck_abort_msg
Unconditionally fails the test with a user-supplied message.

@item ck_assert
Fails the test if the supplied condition evaluates to false.

@item ck_assert_msg
Fails the test if the supplied condition evaluates to false, and
displays a user-provided message.

@item ck_assert_int_eq
@itemx ck_assert_int_ne
@itemx ck_assert_int_lt
@itemx ck_assert_int_le
@itemx ck_assert_int_gt
@itemx ck_assert_int_ge
Compares two signed integer values (@code{intmax_t}) and, on failure,
displays a predefined message with the condition and the values of both
input parameters.  The operator used for comparison is different for
each function and is indicated by the last two letters of the function
name.  The abbreviations @code{eq}, @code{ne}, @code{lt}, @code{le},
@code{gt}, and @code{ge} correspond to @code{==}, @code{!=}, @code{<},
@code{<=}, @code{>}, and @code{>=}, respectively.

@item ck_assert_uint_eq
@itemx ck_assert_uint_ne
@itemx ck_assert_uint_lt
@itemx ck_assert_uint_le
@itemx ck_assert_uint_gt
@itemx ck_assert_uint_ge
Similar to @code{ck_assert_int_*}, but compares two unsigned integer
values (@code{uintmax_t}) instead.

@item ck_assert_str_eq
@itemx ck_assert_str_ne
@itemx ck_assert_str_lt
@itemx ck_assert_str_le
@itemx ck_assert_str_gt
@itemx ck_assert_str_ge
Compares two null-terminated @code{char *} string values, using the
@code{strcmp()} function internally, and, on failure, displays a
predefined message with the condition and the input parameter values.
The comparison operator is again indicated by the last two letters of
the function name.  @code{ck_assert_str_lt(a, b)} will pass if the
unsigned numerical value of the character string @code{a} is less than
that of @code{b}.

@item ck_assert_ptr_eq
@itemx ck_assert_ptr_ne
Compares two pointers and, on failure, displays a predefined message
with the condition and the values of both input parameters.  The
abbreviations @code{eq} and @code{ne} correspond to @code{==} and
@code{!=}, respectively.

@item fail
(Deprecated) Unconditionally fails the test with a user-supplied message.

@item fail_if
(Deprecated) Fails the test if the supplied condition evaluates to true,
and displays a user-provided message.

@item fail_unless
(Deprecated) Fails the test if the supplied condition evaluates to false,
and displays a user-provided message.
@end ftable

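A few of these in action, as a sketch (@code{num_transactions} is a
hypothetical variable):
@example
@verbatim
ck_assert_int_lt (money_amount (m), 100);      /* money_amount (m) < 100 */
ck_assert_uint_ne (num_transactions, 0);       /* num_transactions != 0  */
ck_assert_str_eq (money_currency (m), "USD");  /* strcmp () returned 0   */
ck_assert_ptr_ne (m, NULL);                    /* m != NULL              */
@end verbatim
@end example
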
@node Running Multiple Cases, No Fork Mode, Convenience Test Functions, Advanced Features
@section Running Multiple Cases

What happens if we pass @code{-1} as the @code{amount} in
@code{money_create()}?  What should happen?  Let's write a unit test.
Since we are now testing limits, we should also test what happens when
we create @code{Money} where @code{amount == 0}.  Let's put these in a
separate test case called ``Limits'', so that @code{money_suite()} is
changed like so:

@cartouche
@example
@verbatiminclude check_money.3-6.c.diff
@end example
@end cartouche

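In outline, @code{money_suite()} gains a second test case along these
lines (the test names here are illustrative; the diff above is
authoritative):
@example
@verbatim
TCase *tc_limits = tcase_create ("Limits");

tcase_add_test (tc_limits, test_money_create_neg);
tcase_add_test (tc_limits, test_money_create_zero);
suite_add_tcase (s, tc_limits);
@end verbatim
@end example
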
Now we can rerun our suite, and fix the problem(s).  Note that errors
in the ``Core'' test case will be reported as ``Core'', and errors in
the ``Limits'' test case will be reported as ``Limits'', giving you
additional information about where things broke.

@cartouche
@example
@verbatiminclude money.5-6.c.diff
@end example
@end cartouche

@node No Fork Mode, Test Fixtures, Running Multiple Cases, Advanced Features
@section No Fork Mode

Check normally forks to create a separate address space.  This allows
a signal or early exit to be caught and reported, rather than taking
down the entire test program, and is normally very useful.  However,
when you are trying to debug why a segmentation fault or other
program error occurred, forking makes it difficult to use debugging
tools.  To define the fork mode for an @code{SRunner} object, you can
do one of the following:

@vindex CK_FORK
@findex srunner_set_fork_status
@enumerate
@item
Define the @code{CK_FORK} environment variable to equal ``no''.

@item
Explicitly define the fork status through the use of the following
function:
@example
@verbatim
void srunner_set_fork_status (SRunner * sr, enum fork_status fstat);
@end verbatim
@end example
@end enumerate

The enum @code{fork_status} allows the @code{fstat} parameter to
assume the following values: @code{CK_FORK} and @code{CK_NOFORK}.  An
explicit call to @code{srunner_set_fork_status()} overrides the
@code{CK_FORK} environment variable.

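For example, to make a test binary easy to step through in a debugger,
one might write (a sketch):
@example
@verbatim
SRunner *sr = srunner_create (money_suite ());

srunner_set_fork_status (sr, CK_NOFORK);  /* run tests in this process */
srunner_run_all (sr, CK_NORMAL);
@end verbatim
@end example

Setting @code{CK_FORK=no} in the environment achieves the same without
recompiling.
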
@node Test Fixtures, Multiple Suites in one SRunner, No Fork Mode, Advanced Features
@section Test Fixtures

We may want multiple tests that all use the same Money.  In such
cases, rather than setting up and tearing down objects for each unit
test, it may be convenient to add some setup that is constant across
all the tests in a test case.  Each such setup/teardown pair is called
a @dfn{test fixture} in test-driven development jargon.

A fixture is created by defining a setup and/or a teardown function,
and associating it with a test case.  There are two kinds of test
fixtures in Check: checked and unchecked fixtures.  These are defined
as follows:

@table @asis
@item Checked fixtures
are run inside the address space created by the fork to create the
unit test.  Before each unit test in a test case, the @code{setup()}
function is run, if defined.  After each unit test, the
@code{teardown()} function is run, if defined.  Since they run inside
the forked address space, if checked fixtures signal or otherwise
fail, they will be caught and reported by the @code{SRunner}.  A
checked @code{teardown()} fixture will not run if the unit test
fails.

@item Unchecked fixtures
are run in the same address space as the test program.  Therefore they
may not signal or exit, but may use the fail functions.  The unchecked
@code{setup()}, if defined, is run before the test case is
started.  The unchecked @code{teardown()}, if defined, is run after the
test case is done.  An unchecked @code{teardown()} fixture will run even
if a unit test fails.
@end table

An important difference is that the checked fixtures are run once per
unit test and the unchecked fixtures are run once per test case.
So for a test case that contains @code{check_one()} and
@code{check_two()} unit tests,
@code{checked_setup()}/@code{checked_teardown()} checked fixtures, and
@code{unchecked_setup()}/@code{unchecked_teardown()} unchecked
fixtures, the control flow would be:
@example
@verbatim
unchecked_setup();
fork();
checked_setup();
check_one();
checked_teardown();
wait();
fork();
checked_setup();
check_two();
checked_teardown();
wait();
unchecked_teardown();
@end verbatim
@end example

@menu
* Test Fixture Examples::
* Checked vs Unchecked Fixtures::
@end menu

@node Test Fixture Examples, Checked vs Unchecked Fixtures, Test Fixtures, Test Fixtures
@subsection Test Fixture Examples

We create a test fixture in Check as follows:

@enumerate
@item
Define global variables, and functions to set up and tear down the
globals.  The functions both take @code{void} and return @code{void}.
In our example, we'll make @code{five_dollars} be a global created and
freed by @code{setup()} and @code{teardown()} respectively.

@item
@findex tcase_add_checked_fixture
Add the @code{setup()} and @code{teardown()} functions to the test
case with @code{tcase_add_checked_fixture()}.  In our example, this
belongs in the suite setup function @code{money_suite()}.

@item
Rewrite tests to use the globals.  We'll rewrite our first test to use
@code{five_dollars}.
@end enumerate

Note that the functions used for setup and teardown do not need to be
named @code{setup()} and @code{teardown()}, but they must take
@code{void} and return @code{void}.  We'll update @file{check_money.c}
with the following patch:

@cartouche
@example
@verbatiminclude check_money.6-7.c.diff
@end example
@end cartouche

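In outline, the fixture looks like this sketch (the distributed diff is
authoritative):
@example
@verbatim
Money *five_dollars;

void
setup (void)
{
  five_dollars = money_create (5, "USD");
}

void
teardown (void)
{
  money_free (five_dollars);
}

/* ...and in money_suite(): */
tcase_add_checked_fixture (tc_core, setup, teardown);
@end verbatim
@end example
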
@node Checked vs Unchecked Fixtures, , Test Fixture Examples, Test Fixtures
@subsection Checked vs Unchecked Fixtures

Checked fixtures run once for each unit test in a test case, and so
they should not be used for expensive setup.  However, if a checked
fixture fails and @code{CK_FORK} mode is being used, it will not bring
down the entire framework.

On the other hand, unchecked fixtures run once for an entire test
case, as opposed to once per unit test, and so can be used for
expensive setup.  However, since they may take down the entire test
program, they should only be used if they are known to be safe.

Additionally, the isolation of objects created by unchecked fixtures
is not guaranteed by @code{CK_NOFORK} mode.  Normally, in
@code{CK_FORK} mode, unit tests may abuse the objects created in an
unchecked fixture with impunity, without affecting other unit tests in
the same test case, because the fork creates a separate address space.
However, in @code{CK_NOFORK} mode, all tests live in the same address
space, and side effects in one test will affect the unchecked fixture
for the other tests.

A checked fixture will generally not be affected by unit test side
effects, since the @code{setup()} is run before each unit test.  There
is an exception for side effects to the total environment in which the
test program lives: for example, if the @code{setup()} function
initializes a file that a unit test then changes, the combination of
the @code{teardown()} function and @code{setup()} function must be able
to restore the environment for the next unit test.

If the @code{setup()} function in a fixture fails, in either checked
or unchecked fixtures, the unit tests for the test case, and the
@code{teardown()} function for the fixture, will not be run.  A fixture
error will be created and reported to the @code{SRunner}.

@node Multiple Suites in one SRunner, Selective Running of Tests, Test Fixtures, Advanced Features
@section Multiple Suites in one SRunner

In a large program, it will be convenient to create multiple suites,
each testing a module of the program.  While one can create several
test programs, each running one @code{Suite}, it may be convenient to
create one main test program, and use it to run multiple suites.  The
Check test suite provides an example of how to do this.  The main
testing program is called @code{check_check}, and has a header file
that declares suite creation functions for all the module tests:
@example
@verbatim
Suite *make_sub_suite (void);
Suite *make_sub2_suite (void);
Suite *make_master_suite (void);
Suite *make_list_suite (void);
Suite *make_msg_suite (void);
Suite *make_log_suite (void);
Suite *make_limit_suite (void);
Suite *make_fork_suite (void);
Suite *make_fixture_suite (void);
Suite *make_pack_suite (void);
@end verbatim
@end example

@findex srunner_add_suite
The function @code{srunner_add_suite()} is used to add additional
suites to an @code{SRunner}.  Here is the code that sets up and runs
the @code{SRunner} in the @code{main()} function in
@file{check_check_main.c}:
@example
@verbatim
SRunner *sr;
sr = srunner_create (make_master_suite ());
srunner_add_suite (sr, make_list_suite ());
srunner_add_suite (sr, make_msg_suite ());
srunner_add_suite (sr, make_log_suite ());
srunner_add_suite (sr, make_limit_suite ());
srunner_add_suite (sr, make_fork_suite ());
srunner_add_suite (sr, make_fixture_suite ());
srunner_add_suite (sr, make_pack_suite ());
@end verbatim
@end example

@node Selective Running of Tests, Testing Signal Handling and Exit Values, Multiple Suites in one SRunner, Advanced Features
@section Selective Running of Tests

@vindex CK_RUN_SUITE
@vindex CK_RUN_CASE
After adding a couple of suites and some test cases in each, it is
sometimes practical to be able to run only one suite, or one
specific test case, without recompiling the test code.  Two
environment variables offer this ability: @code{CK_RUN_SUITE} and
@code{CK_RUN_CASE}.  Just set the value to the name of the suite
and/or test case you want to run.  These environment variables can
also be a good integration tool for running specific tests from
within another tool, e.g. an IDE.

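For example, to run only the ``Limits'' case of the tutorial's
``Money'' suite:
@example
@verbatim
$ CK_RUN_SUITE=Money CK_RUN_CASE=Limits ./check_money
@end verbatim
@end example
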
@node Testing Signal Handling and Exit Values, Looping Tests, Selective Running of Tests, Advanced Features
@section Testing Signal Handling and Exit Values

@findex tcase_add_test_raise_signal

To enable testing of signal handling, there is a function
@code{tcase_add_test_raise_signal()} which is used instead of
@code{tcase_add_test()}.  This function takes an additional signal
argument, specifying a signal that the test expects to receive.  If no
signal is received, this is logged as a failure.  If a different signal
is received, this is logged as an error.

The signal handling functionality only works in @code{CK_FORK} mode.

@findex tcase_add_exit_test

To enable testing of expected exits, there is a function
@code{tcase_add_exit_test()} which is used instead of @code{tcase_add_test()}.
This function takes an additional expected exit value argument,
specifying a value that the test is expected to exit with.  If the test
exits with any other value, this is logged as a failure.  If the test
exits early, this is logged as an error.

The exit handling functionality only works in @code{CK_FORK} mode.

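As a sketch, a test expected to die with @code{SIGSEGV} (from
@file{signal.h}) and a test expected to call @code{exit(1)} would be
added like so (the test names are hypothetical):
@example
@verbatim
tcase_add_test_raise_signal (tc_limits, test_null_deref, SIGSEGV);
tcase_add_exit_test (tc_limits, test_bad_input_exit, 1);
@end verbatim
@end example
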
@node Looping Tests, Test Timeouts, Testing Signal Handling and Exit Values, Advanced Features
@section Looping Tests

Looping tests are tests that are called with a new context for each
loop iteration.  This makes them ideal for table-based tests.  If
loops are used inside ordinary tests to test multiple values, only the
first error will be shown before the test exits.  However, looping
tests allow for all errors to be shown at once, which can help out
with debugging.

@findex tcase_add_loop_test
Adding a normal test with @code{tcase_add_loop_test()} instead of
@code{tcase_add_test()} will make the test function the body of a
@code{for} loop, with the addition of a fork before each call.  The
loop variable @code{_i} is available for use inside the test function;
for example, it could serve as an index into a table.  For failures,
the iteration which caused the failure is available in error messages
and logs.

Start and end values for the loop are supplied when adding the test.
The values are used as in a normal @code{for} loop.  Below is some
pseudo-code to show the concept:
@example
@verbatim
for (_i = tfun->loop_start; _i < tfun->loop_end; _i++)
  {
    fork ();       /* New context */
    tfun->f (_i);  /* Call test function */
    wait ();       /* Wait for child to terminate */
  }
@end verbatim
@end example

|
|
|
An example of looping test usage follows:
|
|
@example
|
|
@verbatim
|
|
static const int primes[5] = {2,3,5,7,11};
|
|
|
|
START_TEST (check_is_prime)
|
|
{
|
|
ck_assert (is_prime (primes[_i]));
|
|
}
|
|
END_TEST
|
|
|
|
...
|
|
|
|
tcase_add_loop_test (tcase, check_is_prime, 0, 5);
|
|
@end verbatim
|
|
@end example
|
|
|
|
Looping tests work in @code{CK_NOFORK} mode as well, but without the
|
|
forking. This means that only the first error will be shown.
|
|
|
|
@node Test Timeouts, Determining Test Coverage, Looping Tests, Advanced Features
|
|
@section Test Timeouts
|
|
|
|
@findex tcase_set_timeout
|
|
@vindex CK_DEFAULT_TIMEOUT
|
|
@vindex CK_TIMEOUT_MULTIPLIER
|
|
To be certain that a test won't hang indefinitely, all tests are run
|
|
with a timeout, the default being 4 seconds. If the test is not
|
|
finished within that time, it is killed and logged as an error.
|
|
|
|
The timeout for a specific test case, which may contain multiple unit
|
|
tests, can be changed with the @code{tcase_set_timeout()} function.
|
|
The default timeout used for all test cases can be changed with the
|
|
environment variable @code{CK_DEFAULT_TIMEOUT}, but this will not
|
|
override an explicitly set timeout. Another way to change the timeout
|
|
length is to use the @code{CK_TIMEOUT_MULTIPLIER} environment variable,
|
|
which multiplies all timeouts, including those set with
|
|
@code{tcase_set_timeout()}, with the supplied integer value. All timeout
|
|
arguments are in seconds and a timeout of 0 seconds turns off the timeout
|
|
functionality. On systems that support it, the timeout can be specified
|
|
using a nanosecond precision. Otherwise, second precision is used.
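
For example, to raise the timeout for a test case holding slow tests
(a sketch; the test case name is illustrative):

@example
@verbatim
TCase *tc_slow = tcase_create ("Slow");
tcase_set_timeout (tc_slow, 30); /* allow up to 30 seconds per test */
@end verbatim
@end example

Alternatively, all timeouts can be scaled at run time, here assuming a
test program named @file{./check_foo}:

@example
@verbatim
$ CK_TIMEOUT_MULTIPLIER=3 ./check_foo
@end verbatim
@end example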

Test timeouts are only available in @code{CK_FORK} mode.

@node Determining Test Coverage, Finding Memory Leaks, Test Timeouts, Advanced Features
@section Determining Test Coverage

The term @dfn{code coverage} refers to the extent that the statements
of a program are executed during a run. Thus, @dfn{test coverage}
refers to code coverage when executing unit tests. This information
can help you to do two things:

@itemize
@item
Write better tests that more fully exercise your code, thereby
improving confidence in it.

@item
Detect dead code that could be factored away.
@end itemize

Check itself does not provide any means to determine this test
coverage; rather, this is the job of the compiler and its related
tools. In the case of @command{gcc} this information is easy to
obtain, and other compilers should provide similar facilities.

Using @command{gcc}, first enable test coverage profiling when
building your source by specifying the @option{-fprofile-arcs} and
@option{-ftest-coverage} switches:
@example
@verbatim
$ gcc -g -Wall -fprofile-arcs -ftest-coverage -o foo foo.c foo_check.c
@end verbatim
@end example

You will see that an additional @file{.gcno} file is created for each
@file{.c} input file. After running your tests the normal way, a
@file{.gcda} file is created for each @file{.gcno} file. These
contain the coverage data in a raw format. To combine this
information and a source file into a more readable format, you can use
the @command{gcov} utility:
@example
@verbatim
$ gcov foo.c
@end verbatim
@end example

This will produce the file @file{foo.c.gcov}, which looks like this:
@example
@verbatim
        -:   41:         * object */
       18:   42:        if (ht->table[p] != NULL) {
        -:   43:                /* replaces the current entry */
    #####:   44:                ht->count--;
    #####:   45:                ht->size -= ht->table[p]->size +
    #####:   46:                        sizeof(struct hashtable_entry);
@end verbatim
@end example

As you can see, this is an annotated source file with three columns:
usage information, line numbers, and the original source. The usage
information in the first column can either be '-', which means that
this line does not contain code that could be executed; '#####', which
means this line was never executed although it does contain
code---these are the lines that are probably most interesting for you;
or a number, which indicates how often that line was executed.

This is of course only a very brief overview, but it should illustrate
how determining test coverage generally works, and how it can help
you. For more information or help with other compilers, please refer
to the relevant manuals.

@node Finding Memory Leaks, Test Logging, Determining Test Coverage, Advanced Features
@section Finding Memory Leaks

It is possible to determine if any code under test leaks memory during
a test. Check itself does not have an API for memory leak detection;
however, Valgrind can be used against a unit testing program to search
for potential leaks.

Before discussing memory leak detection, a "memory leak" should first
be better defined. There are two primary definitions of a memory leak:

@enumerate
@item
Memory that is allocated but not freed before a program terminates,
although it was possible for the program to free the memory if it had
wanted to. Valgrind refers to these as "still reachable" leaks.
@item
Memory that is allocated, where every reference to the memory is lost.
The program could not have freed the memory. Valgrind refers to these
as "definitely lost" leaks.
@end enumerate

By default, Valgrind uses the second definition of a memory leak.
These are the leaks most likely to cause a program issues due
to heap depletion.
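
The distinction can be illustrated with a short standalone sketch (the
variable names are arbitrary):

@example
@verbatim
#include <stdlib.h>

static char *keep;

int main (void)
{
  char *lose = malloc (10);

  keep = malloc (10); /* still referenced at exit: "still reachable" */
  lose = NULL;        /* last reference dropped: "definitely lost" */
  return 0;
}
@end verbatim
@end example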

If one wants to run Valgrind against a unit testing program to determine
whether leaks are present, the following invocation will work:

@example
@verbatim
valgrind --leak-check=full ${UNIT_TEST_PROGRAM}
...
==3979== LEAK SUMMARY:
==3979==    definitely lost: 0 bytes in 0 blocks
==3979==    indirectly lost: 0 bytes in 0 blocks
==3979==      possibly lost: 0 bytes in 0 blocks
==3979==    still reachable: 548 bytes in 24 blocks
==3979==         suppressed: 0 bytes in 0 blocks
@end verbatim
@end example

In that example, there were no "definitely lost" memory leaks found.
However, why would there be such a large number of "still reachable"
memory leaks? It turns out this is a consequence of using @code{fork()}
to run a unit test in its own process memory space, which Check does by
default on platforms with @code{fork()} available.

Consider an example where a unit test program creates one suite with
one test. The flow of the program will look like the following:

@example
@b{Main process:}          @b{Unit test process:}
create suite
srunner_run_all()
fork unit test             unit test process created
wait for test              start test
...                        end test
...                        exit(0)
test complete
report result
free suite
exit(0)
@end example

The unit testing process has a copy of all memory that the main process
allocated. In this example, that would include the suite allocated in
main. When the unit testing process calls @code{exit(0)}, the suite
allocated in @code{main()} is reachable but not freed. As the unit test
has no reason to do anything besides die when its test is finished, and
it has no reasonable way to free everything before it dies, Valgrind
reports that some memory is still reachable but not freed.

If the "still reachable" memory leaks are a concern, and the unit test
program must report that there are no memory leaks regardless of their
type, then the unit test program needs to run without fork. To
accomplish this, either define the @code{CK_FORK=no} environment variable,
or use the @code{srunner_set_fork_status()} function to set the fork mode
to @code{CK_NOFORK} for all suite runners.
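
A minimal sketch of the latter approach:

@example
@verbatim
SRunner *sr = srunner_create (make_s1_suite ());
srunner_set_fork_status (sr, CK_NOFORK); /* run tests in-process */
srunner_run_all (sr, CK_NORMAL);
@end verbatim
@end example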

Running the same unit test program with @code{fork()} disabled results
in the following:

@example
@verbatim
CK_FORK=no valgrind --leak-check=full ${UNIT_TEST_PROGRAM}
...
==4924== HEAP SUMMARY:
==4924==     in use at exit: 0 bytes in 0 blocks
==4924==   total heap usage: 482 allocs, 482 frees, 122,351 bytes allocated
==4924==
==4924== All heap blocks were freed -- no leaks are possible
@end verbatim
@end example

@node Test Logging, Subunit Support, Finding Memory Leaks, Advanced Features
@section Test Logging

@findex srunner_set_log
Check supports an operation to log the results of a test run. To use
test logging, call the @code{srunner_set_log()} function with the name
of the log file you wish to create:
@example
@verbatim
SRunner *sr;
sr = srunner_create (make_s1_suite ());
srunner_add_suite (sr, make_s2_suite ());
srunner_set_log (sr, "test.log");
srunner_run_all (sr, CK_NORMAL);
@end verbatim
@end example

In this example, Check will write the results of the run to
@file{test.log}. The @code{print_mode} argument to
@code{srunner_run_all()} is ignored during test logging; the log will
contain a result entry, organized by suite, for every test run. Here
is an example of test log output:
@example
@verbatim
Running suite S1
ex_log_output.c:8:P:Core:test_pass: Test passed
ex_log_output.c:14:F:Core:test_fail: Failure
ex_log_output.c:18:E:Core:test_exit: (after this point) Early exit
with return value 1
Running suite S2
ex_log_output.c:26:P:Core:test_pass2: Test passed
Results for all suites run:
50%: Checks: 4, Failures: 1, Errors: 1
@end verbatim
@end example

Another way to enable test logging is to use the @code{CK_LOG_FILE_NAME}
environment variable. When set, tests will be logged to the specified
file. If a log file is specified with both @code{CK_LOG_FILE_NAME} and
@code{srunner_set_log()}, the name provided to @code{srunner_set_log()}
will be used.

If the log name is set to "-" either via @code{srunner_set_log()} or
@code{CK_LOG_FILE_NAME}, the log data will be printed to stdout instead
of to a file.
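
For example, assuming the unit test program is named @file{./check_foo}
(an illustrative name), logging can be enabled from the shell without
touching the source:

@example
@verbatim
$ CK_LOG_FILE_NAME=test.log ./check_foo
@end verbatim
@end example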


@menu
* XML Logging::
* TAP Logging::
@end menu

@node XML Logging, TAP Logging, Test Logging, Test Logging
@subsection XML Logging

@findex srunner_set_xml
@findex srunner_has_xml
@findex srunner_xml_fname
The log can also be written in XML. The following functions define
the interface for XML logs:
@example
@verbatim
void srunner_set_xml (SRunner *sr, const char *fname);
int srunner_has_xml (SRunner *sr);
const char *srunner_xml_fname (SRunner *sr);
@end verbatim
@end example

XML output is enabled by a call to @code{srunner_set_xml()} before the
tests are run.
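A minimal sketch, reusing the @code{make_s1_suite()} helper from the
logging example above:

@example
@verbatim
SRunner *sr = srunner_create (make_s1_suite ());
srunner_set_xml (sr, "test.xml");
srunner_run_all (sr, CK_NORMAL);
@end verbatim
@end example

Here is an example of an XML log: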
@example
@verbatim
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="http://check.sourceforge.net/xml/check_unittest.xslt"?>
<testsuites xmlns="http://check.sourceforge.net/ns">
  <datetime>2012-10-19 09:56:06</datetime>
  <suite>
    <title>S1</title>
    <test result="success">
      <path>.</path>
      <fn>ex_xml_output.c:10</fn>
      <id>test_pass</id>
      <iteration>0</iteration>
      <duration>0.000013</duration>
      <description>Core</description>
      <message>Passed</message>
    </test>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:16</fn>
      <id>test_fail</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Failure</message>
    </test>
    <test result="error">
      <path>.</path>
      <fn>ex_xml_output.c:20</fn>
      <id>test_exit</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Early exit with return value 1</message>
    </test>
  </suite>
  <suite>
    <title>S2</title>
    <test result="success">
      <path>.</path>
      <fn>ex_xml_output.c:28</fn>
      <id>test_pass2</id>
      <iteration>0</iteration>
      <duration>0.000011</duration>
      <description>Core</description>
      <message>Passed</message>
    </test>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:34</fn>
      <id>test_loop</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Iteration 0 failed</message>
    </test>
    <test result="success">
      <path>.</path>
      <fn>ex_xml_output.c:34</fn>
      <id>test_loop</id>
      <iteration>1</iteration>
      <duration>0.000010</duration>
      <description>Core</description>
      <message>Passed</message>
    </test>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:34</fn>
      <id>test_loop</id>
      <iteration>2</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Iteration 2 failed</message>
    </test>
  </suite>
  <suite>
    <title>XML escape " ' < > & tests</title>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:40</fn>
      <id>test_xml_esc_fail_msg</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>description " ' < > &</description>
      <message>fail " ' < > & message</message>
    </test>
  </suite>
  <duration>0.001610</duration>
</testsuites>
@end verbatim
@end example

XML logging can be enabled by an environment variable as well. If the
@code{CK_XML_LOG_FILE_NAME} environment variable is set, the XML test
log will be written to the specified file name. If an XML log file is
specified with both @code{CK_XML_LOG_FILE_NAME} and
@code{srunner_set_xml()}, the name provided to @code{srunner_set_xml()}
will be used.

If the log name is set to "-" either via @code{srunner_set_xml()} or
@code{CK_XML_LOG_FILE_NAME}, the log data will be printed to stdout instead
of to a file.

If both plain text and XML log files are specified, by any of the above
methods, then Check will log to both files. In other words, logging in
plain text and XML format simultaneously is supported.

@node TAP Logging, , XML Logging, Test Logging
@subsection TAP Logging

@findex srunner_set_tap
@findex srunner_has_tap
@findex srunner_tap_fname
The log can also be written in Test Anything Protocol (TAP) format.
Refer to the @uref{http://podwiki.hexten.net/TAP/TAP.html,TAP Specification}
for information on valid TAP output and parsers of TAP. The following
functions define the interface for TAP logs:
@example
@verbatim
void srunner_set_tap (SRunner *sr, const char *fname);
int srunner_has_tap (SRunner *sr);
const char *srunner_tap_fname (SRunner *sr);
@end verbatim
@end example

TAP output is enabled by a call to @code{srunner_set_tap()} before the
tests are run.
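A minimal sketch, again reusing the @code{make_s1_suite()} helper from
the logging example above:

@example
@verbatim
SRunner *sr = srunner_create (make_s1_suite ());
srunner_set_tap (sr, "test.tap");
srunner_run_all (sr, CK_NORMAL);
@end verbatim
@end example

Here is an example of a TAP log: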
@example
@verbatim
ok 1 - mytests.c:test_suite_name:my_test_1: Passed
ok 2 - mytests.c:test_suite_name:my_test_2: Passed
not ok 3 - mytests.c:test_suite_name:my_test_3: Foo happened
ok 4 - mytests.c:test_suite_name:my_test_1: Passed
1..4
@end verbatim
@end example

TAP logging can be enabled by an environment variable as well. If the
@code{CK_TAP_LOG_FILE_NAME} environment variable is set, the TAP test
log will be written to the specified file name. If a TAP log file is
specified with both @code{CK_TAP_LOG_FILE_NAME} and
@code{srunner_set_tap()}, the name provided to @code{srunner_set_tap()}
will be used.

If the log name is set to "-" either via @code{srunner_set_tap()} or
@code{CK_TAP_LOG_FILE_NAME}, the log data will be printed to stdout instead
of to a file.

If both plain text and TAP log files are specified, by any of the above
methods, then Check will log to both files. In other words, logging in
plain text and TAP format simultaneously is supported.


@node Subunit Support, , Test Logging, Advanced Features
@section Subunit Support

Check supports running test suites with subunit output. This can be useful to
combine test results from multiple languages, or to perform programmatic
analysis on the results of multiple Check test suites or otherwise handle
test results in a programmatic manner. Using subunit with Check is very
straightforward. There are two steps:

@enumerate
@item
In your Check test suite driver, pass @code{CK_SUBUNIT} as the output
mode for your srunner.
@example
@verbatim
SRunner *sr;
sr = srunner_create (make_s1_suite ());
srunner_add_suite (sr, make_s2_suite ());
srunner_run_all (sr, CK_SUBUNIT);
@end verbatim
@end example

@item
Set up your main language test runner to run your Check-based test
executable. For instance, using Python:
@example
@verbatim
import subunit

class ShellTests(subunit.ExecTestCase):
    """Run some tests from the C codebase."""

    def test_group_one(self):
        """./foo/check_driver"""

    def test_group_two(self):
        """./foo/other_driver"""
@end verbatim
@end example
@end enumerate

In this example, running the test suite ShellTests in Python (using any
test runner: unittest.py, tribunal, trial, nose, or others) will run
./foo/check_driver and ./foo/other_driver and report on their results.

Subunit is hosted on Launchpad; the @uref{https://launchpad.net/subunit/,
subunit} project there contains the bug tracker, future plans, and source
code control details.

@node Supported Build Systems, Conclusion and References, Advanced Features, Top
@chapter Supported Build Systems
@findex Supported Build Systems

Check officially supports two build systems: Autotools and CMake.
Using Autotools is recommended where possible, as CMake is
officially supported only for Windows. Information on using Check in
either build system follows.

@menu
* Autotools::
* CMake::
@end menu

@node Autotools, CMake, Supported Build Systems, Supported Build Systems
@section Autotools

It is recommended to use pkg-config where possible to locate and use
Check in an Autotools project. This can be accomplished by including
the following in the project's @file{configure.ac} file:

@verbatim
PKG_CHECK_MODULES([CHECK], [check >= MINIMUM-VERSION])
@end verbatim

where MINIMUM-VERSION is the lowest version which is sufficient for
the project. For example, to guarantee that at least version 0.9.6 is
available, use the following:

@verbatim
PKG_CHECK_MODULES([CHECK], [check >= 0.9.6])
@end verbatim

An example of a @file{configure.ac} script for a project is
included in the @file{doc/example} directory in Check's source.
This macro should provide everything necessary to integrate Check
into an Autotools project.
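
Among other things, @code{PKG_CHECK_MODULES} sets the @code{CHECK_CFLAGS}
and @code{CHECK_LIBS} variables for use in makefiles. A minimal
@file{Makefile.am} sketch (the program and file names are illustrative):

@example
@verbatim
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@
@end verbatim
@end example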

If one does not wish to use pkg-config, Check also provides its own
macro, @code{AM_PATH_CHECK()}, which may be used. This macro is
deprecated, but is still included with Check for backwards compatibility.

The @code{AM_PATH_CHECK()} macro is defined in the file
@file{check.m4} which is installed by Check. It has some optional
parameters that you might find useful in your @file{configure.ac}:
@verbatim
AM_PATH_CHECK([MINIMUM-VERSION,
  [ACTION-IF-FOUND[,ACTION-IF-NOT-FOUND]]])
@end verbatim

@code{AM_PATH_CHECK} does several things:

@enumerate
@item
It ensures @file{check.h} is available.

@item
It ensures a compatible version of Check is installed.

@item
It sets @env{CHECK_CFLAGS} and @env{CHECK_LIBS} for use by Automake.
@end enumerate

If you include @code{AM_PATH_CHECK()} in @file{configure.ac} and
subsequently see warnings when attempting to create
@command{configure}, it probably means one of the following things:

@enumerate
@item
You forgot to call @command{aclocal}. @command{autoreconf} will do
this for you.

@item
@command{aclocal} can't find @file{check.m4}. Here are some possible
solutions:

@enumerate a
@item
Call @command{aclocal} with @option{-I} set to the location of
@file{check.m4}. This means you have to call both @command{aclocal} and
@command{autoreconf}.

@item
Add the location of @file{check.m4} to the @samp{dirlist} used by
@command{aclocal} and then call @command{autoreconf}. This means you
need permission to modify the @samp{dirlist}.

@item
Set @code{ACLOCAL_AMFLAGS} in your top-level @file{Makefile.am} to
include @option{-I DIR} with @code{DIR} being the location of
@file{check.m4}. Then call @command{autoreconf}.
@end enumerate
@end enumerate


@node CMake, , Autotools, Supported Build Systems
@section CMake

Those unable to use Autotools in their project may use CMake instead.
Officially, CMake is supported only for Windows.

Documentation for using CMake is forthcoming. In the meantime, look
at the example CMake project in Check's @file{doc/examples} directory.


@node Conclusion and References, Environment Variable Reference, Supported Build Systems, Top
@chapter Conclusion and References
The tutorial and description of advanced features have provided an
introduction to all of the functionality available in Check.
Hopefully, this is enough to get you started writing unit tests with
Check. All the rest is simply application of what has been learned so
far, with repeated use of the ``test a little, code a little''
strategy.

For further reference, see Kent Beck, ``Test-Driven Development: By
Example'', 1st ed., Addison-Wesley, 2003. ISBN 0-321-14653-0.

If you know of other authoritative references to unit testing and
test-driven development, please send us a patch to this manual.

@node Environment Variable Reference, Copying This Manual, Conclusion and References, Top
@appendix Environment Variable Reference

This is a reference to the environment variables that Check recognizes
and their use.

CK_RUN_CASE: Name of a test case; runs only that test. See section @ref{Selective Running of Tests}.

CK_RUN_SUITE: Name of a test suite; runs only that suite. See section @ref{Selective Running of Tests}.

CK_VERBOSITY: How much output to emit; accepts ``silent'', ``minimal'', ``normal'', ``subunit'', or ``verbose''. See section @ref{SRunner Output}.

CK_FORK: Set to ``no'' to disable using fork() to run unit tests in their own processes. This is useful for debugging segmentation faults. See section @ref{No Fork Mode}.

CK_DEFAULT_TIMEOUT: Overrides Check's default unit test timeout; a floating-point value in seconds. ``0'' means no timeout. See section @ref{Test Timeouts}.

CK_TIMEOUT_MULTIPLIER: A multiplier applied to all unit test timeouts. An integer, defaults to ``1''. See section @ref{Test Timeouts}.

CK_LOG_FILE_NAME: Filename to write logs to. See section @ref{Test Logging}.

CK_XML_LOG_FILE_NAME: Filename to write the XML log to. See section @ref{XML Logging}.

CK_TAP_LOG_FILE_NAME: Filename to write TAP (Test Anything Protocol) output to. See section @ref{TAP Logging}.


@node Copying This Manual, Index, Environment Variable Reference, Top
@appendix Copying This Manual

@menu
* GNU Free Documentation License:: License for copying this manual.
@end menu

@include fdl.texi

@node Index, , Copying This Manual, Top
@unnumbered Index

@printindex cp

@bye