Until the 1.15 release, NumPy used the nose testing framework; it now uses the pytest framework. The older framework is still maintained in order to support downstream projects that use the old numpy framework, but all tests for NumPy should use pytest.
Our goal is that every module and package in SciPy and NumPy should have a thorough set of unit tests. These tests should exercise the full functionality of a given routine as well as its robustness to erroneous or unexpected input arguments. Long experience has shown that by far the best time to write the tests is before you write or change the code - this is test-driven development. The arguments for this can sound rather abstract, but we can assure you that you will find that writing the tests first leads to more robust and better designed code. Well-designed tests with good coverage make an enormous difference to the ease of refactoring. Whenever a new bug is found in a routine, you should write a new test for that specific case and add it to the test suite to prevent that bug from creeping back in unnoticed.
To run SciPy’s full test suite, use the following:
>>> import scipy
>>> scipy.test()
or from the command line:
$ python runtests.py
SciPy uses the testing framework from NumPy (specifically Test Support (numpy.testing)), so all the SciPy examples shown here are also applicable to NumPy. NumPy's full test suite can be run as follows:
>>> import numpy
>>> numpy.test()
The test method may take two or more arguments; the first, label, is a string specifying what should be tested and the second, verbose, is an integer giving the level of output verbosity. See the docstring for numpy.test for details. The default value for label is 'fast' - which will run the standard tests. The string 'full' will run the full battery of tests, including those identified as being slow to run. If verbose is 1 or less, the tests will just show information messages about the tests that are run; but if it is greater than 1, then the tests will also provide warnings on missing tests. So if you want to run every test and get messages about which modules don't have tests:
>>> scipy.test(label='full', verbose=2)  # or scipy.test('full', 2)
Finally, if you are only interested in testing a subset of SciPy, for example, the integrate module, use the following:
>>> scipy.integrate.test()
or from the command line:
$ python runtests.py -t scipy/integrate/tests
The rest of this page will give you a basic idea of how to add unit tests to modules in SciPy. It is extremely important for us to have extensive unit testing since this code is going to be used by scientists and researchers and is being developed by a large number of people spread across the world. So, if you are writing a package that you'd like to become part of SciPy, please write the tests as you develop the package. Also since much of SciPy is legacy code that was originally written without unit tests, there are still several modules that don't have tests yet. Please feel free to choose one of these modules and develop tests for it as you read through this introduction.
Every Python module, extension module, or subpackage in the SciPy package directory should have a corresponding test_<name>.py file. Pytest examines these files for test methods (named test*) and test classes (named Test*).
Suppose you have a SciPy module scipy/xxx/yyy.py containing a function zzz(). To test this function you would create a test module called test_yyy.py. If you only need to test one aspect of zzz, you can simply add a test function:
def test_zzz():
    assert_(zzz() == 'Hello from zzz')
More often, we need to group a number of tests together, so we create a test class:
from numpy.testing import assert_, assert_raises

# import xxx symbols
from scipy.xxx.yyy import zzz


class TestZzz:
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')

    def test_invalid_parameter(self):
        assert_raises(...)
Within these test methods, assert_() and related functions are used to test whether a certain assumption is valid. If the assertion fails, the test fails. Note that the Python builtin assert should not be used, because it is stripped during compilation with -O.
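For illustration, here is a minimal sketch of some of the commonly used numpy.testing helpers; the values being checked are made up for the example:

import numpy as np
from numpy.testing import (assert_, assert_equal, assert_allclose,
                           assert_raises)

def test_assertion_helpers():
    # bare truth check (preferred over the builtin assert)
    assert_(1 + 1 == 2)
    # exact, elementwise equality for arrays and scalars
    assert_equal(np.arange(3), [0, 1, 2])
    # floating-point comparison with a relative tolerance
    assert_allclose(0.1 + 0.2, 0.3, rtol=1e-12)
    # check that a callable raises the expected exception
    assert_raises(ValueError, int, 'not a number')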
Note that test_ functions or methods should not have a docstring, because that makes it hard to identify the test from the output of running the test suite with verbose=2 (or similar verbosity setting). Use plain comments (#) if necessary.
Sometimes it is convenient to run test_yyy.py by itself, so we add
if __name__ == "__main__":
    run_module_suite()
at the bottom.
As an alternative to pytest.mark.<label>, there are a number of labels you can use.
Unlabeled tests like the ones above are run in the default scipy.test() run. If you want to label your test as slow - and therefore reserved for a full scipy.test(label='full') run, you can label it with a decorator:
# numpy.testing module includes 'import decorators as dec'
from numpy.testing import dec, assert_

@dec.slow
def test_big(self):
    print('Big, slow test')
Similarly for methods:
class TestZzz:
    @dec.slow
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')
Available labels are:
slow: marks a test as taking a long time
setastest(tf): work-around for test discovery when the test name is non-conformant
skipif(condition, msg=None): skips the test when eval(condition) is True
knownfailureif(fail_cond, msg=None): will avoid running the test if eval(fail_cond) is True, useful for tests that conditionally segfault
deprecated(conditional=True): filters deprecation warnings emitted in the test
parametrize(var, input): an alternative to pytest.mark.parametrize

Testing looks for module-level or class-level setup and teardown functions by name; thus:
def setup():
    """Module-level setup"""
    print('doing setup')

def teardown():
    """Module-level teardown"""
    print('doing teardown')


class TestMe(object):
    def setup(self):
        """Class-level setup"""
        print('doing setup')

    def teardown(self):
        """Class-level teardown"""
        print('doing teardown')
Setup and teardown functions attached to individual test functions and methods are known as "fixtures", and their use is not encouraged.
One very nice feature of testing is allowing easy testing across a range of parameters - a nasty problem for standard unit tests. Use the dec.parametrize decorator.
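For illustration, here is a minimal sketch using the equivalent pytest.mark.parametrize marker; add() is a hypothetical routine standing in for the function under test:

import pytest
from numpy.testing import assert_equal

def add(a, b):
    # hypothetical routine under test
    return a + b

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(a, b, expected):
    # one test is generated and run per parameter tuple
    assert_equal(add(a, b), expected)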
Doctests are a convenient way of documenting the behavior of a function and allowing that behavior to be tested at the same time. The output of an interactive Python session can be included in the docstring of a function, and the test framework can run the example and compare the actual output to the expected output.
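For example, a minimal sketch of a function whose docstring carries a doctest (the function itself is hypothetical):

def plus_one(x):
    """Return x + 1.

    >>> plus_one(2)
    3
    """
    return x + 1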
The doctests can be run by adding the doctests argument to the test() call; for example, to run all tests (including doctests) for numpy.lib:
>>> import numpy as np
>>> np.lib.test(doctests=True)
The doctests are run as if they are in a fresh Python instance which has executed import numpy as np. Tests that are part of a SciPy subpackage will have that subpackage already imported. E.g. for a test in scipy/linalg/tests/, the namespace will be created such that from scipy import linalg has already executed.
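A doctest inside NumPy can therefore use np in its example session without an explicit import; a minimal sketch, with double() as a hypothetical function:

def double(x):
    """Return the input array doubled.

    >>> double(np.arange(3))
    array([0, 2, 4])
    """
    return 2 * x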
tests/

Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a tests/ subdirectory. For our example, if it doesn't already exist you will need to create a tests/ directory in scipy/xxx/. So the path for test_yyy.py is scipy/xxx/tests/test_yyy.py.
Once the scipy/xxx/tests/test_yyy.py is written, it's possible to run the tests by going to the tests/ directory and typing:
python test_yyy.py
Or if you add scipy/xxx/tests/ to the Python path, you could run the tests interactively in the interpreter like this:
>>> import test_yyy
>>> test_yyy.test()
__init__.py and setup.py

Usually, however, adding the tests/ directory to the Python path isn't desirable. Instead it would be better to invoke the test straight from the module xxx. To this end, simply place the following lines at the end of your package's __init__.py file:
...
def test(level=1, verbosity=1):
    from numpy.testing import Tester
    return Tester().test(level, verbosity)
You will also need to add the tests directory in the configuration section of your setup.py:
...
def configuration(parent_package='', top_path=None):
    ...
    config.add_data_dir('tests')
    return config
...
Now you can do the following to test your module:
>>> import scipy
>>> scipy.xxx.test()
Also, when invoking the entire SciPy test suite, your tests will be found and run:
>>> import scipy
>>> scipy.test()  # your tests are included and run automatically!
If you have a collection of tests that must be run multiple times with minor variations, it can be helpful to create a base class containing all the common tests, and then create a subclass for each variation. Several examples of this technique exist in NumPy; below are excerpts from one in numpy/linalg/tests/test_linalg.py:
class LinalgTestCase:
    def test_single(self):
        a = array([[1., 2.], [3., 4.]], dtype=single)
        b = array([2., 1.], dtype=single)
        self.do(a, b)

    def test_double(self):
        a = array([[1., 2.], [3., 4.]], dtype=double)
        b = array([2., 1.], dtype=double)
        self.do(a, b)

    ...

class TestSolve(LinalgTestCase):
    def do(self, a, b):
        x = linalg.solve(a, b)
        assert_almost_equal(b, dot(a, x))
        assert_(imply(isinstance(b, matrix), isinstance(x, matrix)))

class TestInv(LinalgTestCase):
    def do(self, a, b):
        a_inv = linalg.inv(a)
        assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0]))
        assert_(imply(isinstance(a, matrix), isinstance(a_inv, matrix)))
In this case, we wanted to test solving a linear algebra problem using matrices of several data types, using linalg.solve and linalg.inv. The common test cases (for single-precision, double-precision, etc. matrices) are collected in LinalgTestCase.
Sometimes you might want to skip a test or mark it as a known failure, such as when the test suite is being written before the code it's meant to test, or if a test only fails on a particular architecture. The decorators from numpy.testing.dec can be used to do this.
To skip a test, simply use skipif:
from numpy.testing import dec

@dec.skipif(SkipMyTest, "Skipping this test because...")
def test_something(foo):
    ...
The test is marked as skipped if SkipMyTest evaluates to nonzero, and the message in verbose test output is the second argument given to skipif. Similarly, a test can be marked as a known failure by using knownfailureif:
from numpy.testing import dec

@dec.knownfailureif(MyTestFails, "This test is known to fail because...")
def test_something_else(foo):
    ...
Of course, a test can be unconditionally skipped or marked as a known failure by passing True as the first argument to skipif or knownfailureif, respectively.
The total number of skipped and known failing tests is displayed at the end of the test run. Skipped tests are marked as 'S' in the test results (or 'SKIPPED' for verbose > 1), and known failing tests are marked as 'K' (or 'KNOWN' if verbose > 1).
Tests on random data are good, but since test failures are meant to expose new bugs or regressions, a test that passes most of the time but fails occasionally with no code changes is not helpful. Make the random data deterministic by setting the random number seed before generating it. Use either Python's random.seed(some_number) or NumPy's numpy.random.seed(some_number), depending on the source of random numbers.
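For illustration, a minimal sketch with a seeded generator; the FFT round-trip property being checked is just a stand-in for a real test:

import numpy as np
from numpy.testing import assert_allclose

def test_fft_roundtrip():
    # Seed NumPy's global generator so the "random" input is identical
    # on every run, making any failure reproducible.
    np.random.seed(1234)
    x = np.random.rand(128)
    # the inverse FFT of the FFT should recover the original data
    assert_allclose(np.fft.ifft(np.fft.fft(x)).real, x)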