How To Write Unit Tests For OpenSSL
This is an outline of the basic process and principles to follow when writing unit tests. This document will evolve quite a bit as the community gains experience.
Mechanics
Forking the Repo
For now, people contributing new unit tests for uncovered code (as opposed to submitting tests with new code changes) should fork Mike Bland's GitHub repository and issue GitHub pull requests. Remember to do all development on a topic branch (not master). Tests committed to this repo will occasionally get merged into the master OpenSSL repo.
Rationale per Matt Caswell: We should think about process here. If you are going off to recruit an army of people writing tests then I wonder if it is worth setting up a separate github repository whilst you are building up the tests. We can then merge in from that on a periodic basis. I wouldn't want availability of openssl team committers to be a bottle neck.
Use the Test Template Generator
TODO(mbland): Get the template generator checked in. Maybe have a template generator for each library, e.g. ssl/new-test.sh, that has additional setup boilerplate specific to the ssl library.
Use the test/new-test.sh script to generate a skeleton test file.
Add Makefile Targets
The following instructions use the Makefile targets for ssl/heartbeat_test.c as an example.
In the Makefile for the library containing the test, add the test source file to the TEST variable:

    # ssl/Makefile
    TEST=ssltest.c heartbeat_test.c
In test/Makefile:
- add a variable for the test target near the top of the file, right after the existing test variables
- use the variable to add an executable target to the EXE variable
- use the variable to add an object file target to the OBJ variable
- use the variable to add a source file target to the SRC variable
- add the test target to the alltests target
- add the target to execute the test
- add the target to build the test executable

    # test/Makefile
    HEARTBEATTEST= heartbeat_test

    EXE= ... $(HEARTBEATTEST)$(EXE_EXT)

    OBJ= ... $(HEARTBEATTEST).o

    SRC= ... $(HEARTBEATTEST).c

    alltests: \
            ... \
            test_heartbeat

    test_heartbeat: $(HEARTBEATTEST)$(EXE_EXT)
            ../util/shlib_wrap.sh ./$(HEARTBEATTEST)

    $(HEARTBEATTEST)$(EXE_EXT): $(HEARTBEATTEST).o $(DLIBCRYPTO)
            @target=$(HEARTBEATTEST); $(BUILD_CMD)
Finally, run make depend to automatically generate the header file dependencies.
Building and Running the Test
If you're initially developing on Mac OS X or (for now) FreeBSD 10, just use the stock method of building and testing:
    $ config && make && make test

    # To execute just one specific test
    $ TESTS=test_heartbeat make test
Ultimately the test will have to compile and pass with developer flags enabled:
    $ ./GitConfigure debug-ben-debug-64-clang
    $ ./GitMake -j 2
    $ ./GitMake test
The above currently doesn't work on Mac OS X or FreeBSD 10; for now, you can install the latest 9.x release of FreeBSD via a virtualization platform such as VirtualBox.
Style
Follow Pseudo-xUnit Style
The Pseudo-xUnit Pattern is that established by ssl/heartbeat_test.c. This pattern organizes code in a fashion reminiscent of the xUnit family of unit testing frameworks, without actually using a testing framework. This should lower the barrier to entry for people wanting to write unit tests, but enable a relatively easy migration to an xUnit-based framework if we decide to do so one day.
Some of the basic principles to follow are:
Define a fixture structure
The fixture structure should contain all of the inputs to the code under test and all of the expected result values. It should also contain a const char* for the name of the test case function that created it, to aid in error message formatting. Even though the fixture may contain dynamically-allocated members, the fixture itself should be copied by value to reduce the necessary degree of memory management in a small unit test program.
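As a minimal sketch of this idea, here is a hypothetical fixture for a record-parsing function; the type and field names are illustrative, not taken from the OpenSSL source (the real fixture in ssl/heartbeat_test.c is shaped around SSL records):

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical fixture for testing a record-parsing function.
 * Inputs and expected results live side by side in one struct.
 */
typedef struct {
    const char *test_case_name;   /* set from __func__ by set_up() */
    unsigned char input[64];      /* input to the code under test */
    size_t input_len;
    int expected_return_value;    /* expected result */
} PARSE_TEST_FIXTURE;
```

Passing this struct by value keeps each test case's state independent, at the cost of a small copy per call.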
Define set_up() and tear_down() functions for the fixture
set_up() should return a newly-initialized test fixture structure. It should take the name of the test case as an argument (i.e. __func__) and assign it to the fixture. All of the fixture members should be initialized to defaults, which each test case function can then override as needed.
tear_down() should take the fixture as an argument and release any resources allocated by set_up(). It can also call any library-wide error printing routines (e.g. ERR_print_errors_fp(stderr)).
Each test case function should call set_up() as its first statement, and should call tear_down() just before returning.
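Putting these pieces together, a self-contained sketch of the pattern might look like the following; the fixture and test names are hypothetical, and the "code under test" is a trivial stand-in:

```c
#include <string.h>

/* Minimal hypothetical fixture; see ssl/heartbeat_test.c for a real one. */
typedef struct {
    const char *test_case_name;
    int input;
    int expected_result;
} TEST_FIXTURE;

static TEST_FIXTURE set_up(const char *const test_case_name)
{
    TEST_FIXTURE fixture;
    memset(&fixture, 0, sizeof(fixture));
    fixture.test_case_name = test_case_name;
    /* Allocate and initialize any dynamic members here. */
    return fixture;
}

static void tear_down(TEST_FIXTURE fixture)
{
    /* Free any dynamic members here; this trivial fixture has none.
     * A real OpenSSL test might also call ERR_print_errors_fp(stderr). */
    (void)fixture;
}

static int test_doubling_a_positive_number(void)
{
    TEST_FIXTURE fixture = set_up(__func__);
    int result = 0;

    fixture.input = 2;
    fixture.expected_result = 4;
    /* Stand-in for the code under test: */
    result = (fixture.input * 2 != fixture.expected_result);

    tear_down(fixture);
    return result;
}
```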
Use test case functions, not a table of fixtures
Individual test case functions that call a common execution function are much more readable and maintainable than a loop over a table of fixture structures. Explicit fixture variable assignments aid comprehension when reading a specific test case, which saves time and energy when trying to understand a test or diagnose a failure. When a new member is added to an existing fixture, set_up() can set a default for all test cases, and only the test cases that rely on that new member need to be updated.
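The shape this produces is two or more small, named test case functions sharing one execution function; a hypothetical sketch (the fixture and function names are illustrative, and strlen() stands in for the code under test):

```c
#include <string.h>

typedef struct {
    const char *test_case_name;
    const char *input;
    int expected_length;
} LENGTH_TEST_FIXTURE;

/* Common execution function: runs the code under test and checks
 * the actual result against the expected result in the fixture.
 * Returns 0 on success, 1 on failure. */
static int execute_length_test(LENGTH_TEST_FIXTURE fixture)
{
    int actual = (int)strlen(fixture.input);
    return actual != fixture.expected_length;
}

static int test_length_of_empty_string(void)
{
    LENGTH_TEST_FIXTURE fixture = { __func__, "", 0 };
    return execute_length_test(fixture);
}

static int test_length_of_short_string(void)
{
    LENGTH_TEST_FIXTURE fixture = { __func__, "abc", 3 };
    return execute_length_test(fixture);
}
```

Each test case reads as a complete, explicit statement of inputs and expectations, rather than one row in a table far from the loop that consumes it.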
Use very descriptive test case names
Give tests long, descriptive names that provide ample context for the details of the test case. Good test names also help produce good error messages.
Group test cases into "suites" by naming convention
Give logically-related tests the same prefix, e.g. test_dtls1_ and test_tls1_ from ssl/heartbeat_test.c. If need be, you can define suite-specific set_up() functions that call the common set_up() and elaborate on it. (This generally shouldn't be necessary for tear_down().)
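A suite-specific set_up() can be a thin wrapper over the common one; a hypothetical sketch (the fixture, constant, and function names are illustrative):

```c
#include <string.h>

typedef struct {
    const char *test_case_name;
    int record_type;
} SUITE_TEST_FIXTURE;

#define DTLS_RECORD 1  /* hypothetical constant, for illustration only */

static SUITE_TEST_FIXTURE set_up(const char *const test_case_name)
{
    SUITE_TEST_FIXTURE fixture;
    memset(&fixture, 0, sizeof(fixture));
    fixture.test_case_name = test_case_name;
    return fixture;
}

/* Suite-specific setup calls the common set_up() and elaborates on it. */
static SUITE_TEST_FIXTURE set_up_dtls(const char *const test_case_name)
{
    SUITE_TEST_FIXTURE fixture = set_up(test_case_name);
    fixture.record_type = DTLS_RECORD;
    return fixture;
}
```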
Keep individual test case functions focused on one thing
If the test name contains the word "and", consider breaking it into two or more separate test case functions.
Write very descriptive error messages
Include the test case function name in each error message, and explain in detail the context for the assertion that failed. Include the expected result (contained in the fixture structure) and the actual result returned from the code under test. Write helper functions to print complex values as needed (e.g. print_payload() in ssl/heartbeat_test.c).
Return zero on success and one on failure
The return value will be used to tally the number of test cases that failed. Even if multiple assertions fail for a single test case, the result should be exactly one.
Add each test case function to the test runner function
Whatever function is used to execute the batch of test case functions, be that main() or a separate function called by main(), don't forget to add your test case functions to that function.
TODO(mbland): create some kind of automated script or editor macros?
Report the number of test cases that failed
Add up the total number of failed test cases in main() and report that number as the last error message of the test. main() should return EXIT_FAILURE if any test cases failed, and EXIT_SUCCESS otherwise.
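A minimal sketch of such a runner, assuming the convention above that each test case function returns 0 on success and 1 on failure (the function names here are hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>

static int test_always_passes(void)
{
    return 0;  /* stand-in test case: returns 0 on success, 1 on failure */
}

/* Called from main(); returns the number of failed test cases. */
static int run_all_tests(void)
{
    int num_failed = 0;

    num_failed += test_always_passes();
    /* Add each new test case function here. */

    if (num_failed > 0)
        fprintf(stderr, "%d test cases failed\n", num_failed);
    return num_failed;
}
```

main() can then end with: return run_all_tests() > 0 ? EXIT_FAILURE : EXIT_SUCCESS;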
Disable for Windows (for now)
Until we solve the private-symbol problem on Windows, we will need to wrap our unit test code in the following #ifdef block:
    #if !defined(OPENSSL_SYS_WINDOWS)

    /* All the test code, including main() */

    #else

    int main(int argc, char *argv[])
    {
        return EXIT_SUCCESS;
    }

    #endif /* OPENSSL_SYS_WINDOWS */
Samples
Existing tests that can be used as models for new tests.