How To Write Unit Tests For OpenSSL

This is an outline of the basic process and principles to follow when writing unit tests. This document will evolve quite a bit as the community gains experience.

Mechanics

Forking the Repo

For now, people contributing new unit tests for uncovered code (as opposed to submitting tests with new code changes) should fork Mike Bland's GitHub repository and issue GitHub pull requests. Remember to do all development on a topic branch (not master). Tests committed to this repo will occasionally get merged into the master OpenSSL repo.

Rationale per Matt Caswell: We should think about process here. If you are going off to recruit an army of people writing tests then I wonder if it is worth setting up a separate github repository whilst you are building up the tests. We can then merge in from that on a periodic basis. I wouldn't want availability of openssl team committers to be a bottle neck.

Set up the OpenSSL master repository as your upstream remote as per GitHub's instructions on configuring remotes:

$ cd my-openssl-repo
$ git remote add upstream https://github.com/openssl/openssl.git

You should see the following output from git remote -v (where $USER is your GitHub username):

$ git remote -v
origin  https://github.com/$USER/openssl.git (fetch)
origin  https://github.com/$USER/openssl.git (push)
upstream        https://github.com/openssl/openssl.git (fetch)
upstream        https://github.com/openssl/openssl.git (push)

Check out the Tools and Tips page

Testing and Development Tools and Tips has information on tools that may make navigating and building the code a bit easier.

Use the Test Template Generator

TODO(mbland): Get template generator checked-in. Maybe have a template generator for each library, e.g. ssl/new-test.sh that has additional setup boilerplate specific to the ssl library.

Use the test/new-test.sh script to generate a skeleton test file. (pending in the test-util branch)

Add Makefile Targets

The following instructions use the Makefile targets for ssl/heartbeat_test.c as an example.

In the Makefile for the library containing the test, add the test source file to the TEST variable:

# ssl/Makefile
TEST=ssltest.c heartbeat_test.c

In test/Makefile:

  • add a variable for the test target near the top of the file, right after the existing test variables
  • use the variable to add an executable target to the EXE variable
  • use the variable to add an object file target to the OBJ variable
  • use the variable to add a source file target to the SRC variable
  • add the test target to the alltests target
  • add the target to execute the test
  • add the target to build the test executable
# test/Makefile
HEARTBEATTEST=  heartbeat_test
EXE=  ... $(HEARTBEATTEST)$(EXE_EXT)
OBJ= ... $(HEARTBEATTEST).o
SRC= ... $(HEARTBEATTEST).c
alltests: \
        ... test_heartbeat

test_heartbeat: $(HEARTBEATTEST)$(EXE_EXT)
  ../util/shlib_wrap.sh ./$(HEARTBEATTEST)

$(HEARTBEATTEST)$(EXE_EXT): $(HEARTBEATTEST).o $(DLIBCRYPTO) testutil.o
  @target=$(HEARTBEATTEST) testutil=testutil.o; $(BUILD_CMD)

Run make links && make depend

Finally, run make links && make depend to link the new test into the test/ directory and automatically generate the header file dependencies.

Building and Running the Test

If you're initially developing on Mac OS X or (for now) FreeBSD 10, just use the stock method of building and testing:

$ ./config && make && make test

# To execute just one specific test
$ make TESTS=test_heartbeat test 

Ultimately the test will have to compile and pass with developer flags enabled:

$ ./GitConfigure debug-ben-debug-64-clang
$ ./GitMake -j 2
$ ./GitMake test_heartbeat

The above currently doesn't work on Mac OS X or FreeBSD 10; for now, you can install the 9.1 release of FreeBSD, which uses Clang 3.1, via a virtualization platform such as VirtualBox. (Mac OS X breaks in part because it doesn't use the GNU assembler; -fsanitize appears to be ignored on FreeBSD 9.1/Clang 3.1, while later FreeBSD versions break because the Clang compiler emits -fsanitize symbols but the libasan library has yet to be ported to FreeBSD.)

Keep your repo up-to-date

Periodically run the following to keep your branch up-to-date:

$ git fetch upstream master
$ git rebase upstream/master

This will pull all the updates from the master OpenSSL repository into your repository, then update your branch to apply your changes on top of the latest updates.

Send a pull request

When your test is ready, send a GitHub pull request. Note that the pull request should be based on the tests branch of Mike's repository, not master. (This should be the default; let Mike know if it doesn't appear to be!) We'll review the code, and when it's ready, it'll get merged into Mike's repository. From there, it will eventually get pulled into the master OpenSSL repository.

Style

The Pseudo-xUnit Pattern is the one established by ssl/heartbeat_test.c. This pattern organizes code in a fashion reminiscent of the xUnit family of unit testing frameworks without actually using a testing framework. This should lower the barrier to entry for people wanting to write unit tests, while still enabling a relatively easy migration to an xUnit-based framework if we decide to adopt one someday.

Some of the basic principles to follow are:

#include the header for the code under test first

Having the header file for the code under test appear as the first #include directive ensures that the header is self-contained, i.e. that it includes every header file it depends on rather than relying on client code to include its dependencies.

#include "testutil.h" should come second

test/testutil.h contains the helper macros used in writing OpenSSL tests. Since the tests will be linked into test/ by the make links step, and built in the test/ directory, the "testutil.h" file will appear to be in the same directory as the test file.
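To make the ordering concrete, here is a minimal sketch of the top of a hypothetical test file, widget_test.c, exercising an imaginary widget.c/widget.h in the same library. The file and header names are invented purely for illustration, and the sketches in the following sections continue this hypothetical example:

#include "widget.h"          /* header for the code under test comes first */
#include "testutil.h"        /* OpenSSL test helper macros come second */

#include <openssl/crypto.h>  /* remaining dependencies follow */
#include <openssl/err.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>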

Define a fixture structure

The fixture structure should contain all of the inputs to the code under test and all of the expected result values. It should also contain a const char* for the name of the test case function that created it, to aid in error message formatting. Even though the fixture may contain dynamically-allocated members, the fixture itself should be copied by value to reduce the necessary degree of memory management in a small unit test program.
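For illustration, a fixture for the hypothetical widget example might look like the following; the members are invented, and a real fixture (such as the one in ssl/heartbeat_test.c) will hold whatever inputs and expected results the code under test actually requires:

typedef struct widget_test_fixture
    {
    const char *test_case_name;   /* test case that created the fixture */
    unsigned char *input;         /* inputs to the code under test */
    int input_len;
    int expected_return_value;    /* expected results */
    const char *expected_output;
    } WIDGET_TEST_FIXTURE;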

Define set_up() and tear_down() functions for the fixture

set_up() should return a newly-initialized test fixture structure. It should take the name of the test case as an argument (i.e. __func__) and assign it to the fixture. All of the fixture members should be initialized to sensible defaults, which each test case function can then override as needed.

tear_down() should take the fixture as an argument and release any resources allocated by set_up(). It can also call any library-wide error printing routines (e.g. ERR_print_errors_fp(stderr)).
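Continuing the hypothetical widget example, a sketch of the two functions might look like this (the buffer size and default values are placeholders):

static WIDGET_TEST_FIXTURE set_up(const char *const test_case_name)
    {
    WIDGET_TEST_FIXTURE fixture;
    memset(&fixture, 0, sizeof(fixture));
    fixture.test_case_name = test_case_name;
    fixture.input = OPENSSL_malloc(1024);    /* placeholder size */
    fixture.input_len = 0;
    fixture.expected_return_value = 1;
    fixture.expected_output = "";
    return fixture;                          /* copied by value */
    }

static void tear_down(WIDGET_TEST_FIXTURE fixture)
    {
    ERR_print_errors_fp(stderr);
    OPENSSL_free(fixture.input);
    }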

Use SETUP_TEST_FIXTURE() and EXECUTE_TEST() from test/testutil.h

Each test case function should call set_up() as its first statement, and should call tear_down() just before returning. This is handled in a uniform fashion when using the SETUP_TEST_FIXTURE() and EXECUTE_TEST() helper macros from test/testutil.h. See the comments in test/testutil.h for usage, and ssl/heartbeat_test.c for an example of how to wrap the macros for a specific unit test.
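As a rough, hand-written illustration of what those macros accomplish, a test case in the hypothetical widget example might expand to something like the following; in a real test, prefer SETUP_TEST_FIXTURE() and EXECUTE_TEST() (or thin wrappers around them, as in ssl/heartbeat_test.c) over spelling this out by hand:

static int execute_widget_encode(WIDGET_TEST_FIXTURE fixture)
    {
    /* Call the (imaginary) code under test here and compare the actual
     * results against the expectations stored in the fixture; return
     * zero on success, one on failure. */
    return 0;
    }

static int test_widget_encode_empty_input(void)
    {
    WIDGET_TEST_FIXTURE fixture = set_up(__func__);
    int result;

    fixture.expected_return_value = 0;    /* override defaults as needed */
    result = execute_widget_encode(fixture);
    tear_down(fixture);
    return result;
    }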

Use test case functions, not a table of fixtures

Individual test case functions that call a common execution function are much more readable and maintainable than a loop over a table of fixture structures. Explicit fixture variable assignments aid comprehension when reading a specific test case, which saves time and energy when trying to understand a test or diagnose a failure. When a new member is added to an existing fixture, set_up() can set a default for all test cases, and only the test cases that rely on that new member need to be updated.

Use very descriptive test case names

Give tests long, descriptive names that provide ample context for the details of the test case. Good test names also help produce good error messages.

Group test cases into "suites" by naming convention

Give logically-related tests the same prefix, e.g. test_dtls1_ and test_tls1_ from ssl/heartbeat_test.c. If need be, you can define suite-specific set_up() functions that call the common set_up() and elaborate on it. (This generally shouldn't be necessary for tear_down().)

Keep individual test case functions focused on one thing

If the test name contains the word "and", consider breaking it into two or more separate test case functions.

Write very descriptive error messages

Include the test case function name in each error message, and explain in detail the context for the assertion that failed. Include the expected result (contained in the fixture structure) and the actual result returned from the code under test. Write helper functions to print complex values as needed (e.g. print_payload() in ssl/heartbeat_test.c).
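For example, an assertion inside the hypothetical widget example's execution function might report a failure like this (return_value and result being locals of that function):

    if (return_value != fixture.expected_return_value)
        {
        fprintf(stderr, "%s failed: expected return value %d, received %d\n",
                fixture.test_case_name, fixture.expected_return_value,
                return_value);
        result = 1;
        }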

Return zero on success and one on failure

The return value will be used to tally the number of test cases that failed. Even if multiple assertions fail for a single test case, the result should be exactly one.

Register each test case using ADD_TEST() and execute using run_tests()

Whatever function is used as the test runner, be that main() or a separate function called by main(), add your test case functions to that function using ADD_TEST() and execute them using run_tests().

run_tests() will add up the total number of failed test cases and report that number as the last error message of the test. The return value of run_tests() should be the value returned from main(), which will be EXIT_FAILURE if any test cases failed, EXIT_SUCCESS otherwise.
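A minimal sketch of such a main() for the hypothetical widget example follows; the exact ADD_TEST() and run_tests() arguments are defined in test/testutil.h, so check there (and ssl/heartbeat_test.c) for the precise interface:

int main(int argc, char *argv[])
    {
    ADD_TEST(test_widget_encode_empty_input);
    /* ADD_TEST(...) each remaining test case here */
    return run_tests(argv[0]);    /* assumption: argument list may differ */
    }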

TODO(mbland): create some kind of automated script or editor macros?

Disable for Windows (for now)

Until we solve the private-symbol problem on Windows, we will need to wrap our unit test code in the following #ifdef block, supplying a trivial main() in the #else branch so the test still builds and passes on Windows:

#if !defined(OPENSSL_SYS_WINDOWS)

/* All the test code, including the real main() */

#else /* OPENSSL_SYS_WINDOWS */

int main(int argc, char *argv[])
    {
    return EXIT_SUCCESS;
    }

#endif /* OPENSSL_SYS_WINDOWS */

Samples

Existing tests that can be used as models for new tests.