Testing Guidelines
==================
.. contents::
Introduction
''''''''''''
Until the 1.15 release, NumPy used the `nose`_ testing framework; it now uses
the `pytest`_ framework. The older framework is still maintained in order to
support downstream projects that use it, but all tests for NumPy should be
written using pytest.
Testing NumPy
'''''''''''''
NumPy can be tested in a number of ways; choose whichever way you feel most
comfortable with. An installed NumPy can be tested from within Python with
`numpy.test`. The test method may take two or more arguments; the first,
``label``, is a string specifying what should be tested, and the second,
``verbose``, is an integer giving the level of output verbosity. See the
docstring of `numpy.test` for details. The default value for ``label`` is
'fast' - which will run the standard tests. The string 'full' will run the
full battery of tests, including those identified as being slow to run. If
``verbose`` is 1 or less, the tests will just show information messages about
the tests that are run; but if it is greater than 1, then the tests will also
provide warnings on missing tests. So if you want to run every test and get
messages about which modules don't have tests::

  >>> numpy.test(label='full', verbose=2)  # or numpy.test('full', 2)

To test only a subpackage, such as ``core``, call its test method directly::

  >>> numpy.core.test()

When working on NumPy itself, the test suite can also be run from the source
tree with::

  $ python runtests.py
Writing your own tests
''''''''''''''''''''''

If you are writing a package that you'd like to become part of NumPy,
please write the tests as you develop the package.
Every Python module, extension module, or subpackage in the NumPy
package directory should have a corresponding ``test_<name>.py`` file.
Pytest examines these files for test methods (named ``test*``) and test
classes (named ``Test*``). Suppose you have a NumPy module
``numpy/xxx/yyy.py`` containing a function ``zzz()``. To test this function
you would create a test module called ``test_yyy.py``. If you only need to
test one aspect of ``zzz``, you can simply add a test function::

  def test_zzz():
      assert zzz() == 'Hello from zzz'

More often, a number of related tests are grouped together in a test class::

  import pytest

  # import the symbols under test
  from numpy.xxx.yyy import zzz

  class TestZzz:
      def test_simple(self):
          assert zzz() == 'Hello from zzz'

      def test_invalid_parameter(self):
          with pytest.raises(ValueError, match='.*some matching regex.*'):
              ...

Within these test methods, ``assert`` and related functions are used to test
whether a certain assumption is valid. If the assertion fails, the test fails.
``pytest`` internally rewrites the ``assert`` statement to give informative
output when it fails, so it should be preferred over the legacy variant
``numpy.testing.assert_``. Whereas plain ``assert`` statements are ignored
when running Python in optimized mode with ``-O``, this is not an issue when
running tests with pytest.
Note that ``test_`` functions or methods should not have a docstring, because
that makes it hard to identify the test from the output of running the test
suite with ``verbose=2`` (or similar verbosity setting). Use plain comments
(``#``) if necessary.
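
For instance, a sketch of a test that documents its intent with a comment
rather than a docstring (the function exercised here, ``numpy.clip``, is
chosen purely for illustration)::

  import numpy as np

  def test_clip_lower_bound():
      # values below the lower bound should be clipped up to it
      assert np.clip(-1, 0, 10) == 0
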
Labeling tests
--------------
Unlabeled tests like the ones above are run in the default
``numpy.test()`` run. If you want to label your test as slow - and
therefore reserved for a full ``numpy.test(label='full')`` run - you
can label it with ``pytest.mark.slow``::
  import pytest

  @pytest.mark.slow
  def test_big():
      print('Big, slow test')

Similarly for methods::

  class TestZzz:
      @pytest.mark.slow
      def test_simple(self):
          assert zzz() == 'Hello from zzz'

Easier setup and teardown functions / methods
---------------------------------------------

Testing looks for module-level or class-level setup and teardown functions
by name; thus::

  def setup_module():
      """Module-level setup"""
      print('doing setup')

  def teardown_module():
      """Module-level teardown"""
      print('doing teardown')


  class TestMe:
      def setup_method(self):
          """Class-level setup"""
          print('doing setup')

      def teardown_method(self):
          """Class-level teardown"""
          print('doing teardown')

Setup and teardown functions attached to functions and methods are known as
"fixtures", and their use is not encouraged.
Parametric tests
----------------
One very nice feature of pytest is the ability to easily test a function
across a range of parameters - a nasty problem for standard unit tests. Use
the ``pytest.mark.parametrize`` decorator; see `parameterization`_ in the
pytest documentation for details.
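
For example, a minimal parametrized test might look like this (the function
exercised and the parameter values are illustrative only)::

  import numpy as np
  import pytest

  @pytest.mark.parametrize('dtype', [np.int32, np.int64, np.float64])
  def test_ones_sum(dtype):
      # The test body runs once for each parameter value.
      a = np.ones(10, dtype=dtype)
      assert a.sum() == 10
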
Doctests
--------
The doctests are run as if they are in a fresh Python instance which
has executed ``import numpy as np``. Tests that are part of a NumPy
subpackage will have that subpackage already imported. E.g. for a test
in ``numpy/linalg/tests/``, the namespace will be created such that
``from numpy import linalg`` has already executed.
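
A hypothetical docstring containing a doctest might look like the following;
because the doctests run in a namespace where ``import numpy as np`` has
already been executed, the example does not need to import it::

  def double(a):
      """
      Double each element of the input array.

      Examples
      --------
      >>> double(np.array([1, 2, 3]))
      array([2, 4, 6])
      """
      return np.asarray(a) * 2
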
``tests/``
----------
Rather than keeping the code and the tests in the same directory, we
put all the tests for a given subpackage in a ``tests/``
subdirectory. For our example, if it doesn't already exist you will
need to create a ``tests/`` directory in ``numpy/xxx/``. So the path
for ``test_yyy.py`` is ``numpy/xxx/tests/test_yyy.py``.
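
For this hypothetical ``xxx``/``yyy`` example, the resulting layout would
look roughly like this::

  numpy/
      xxx/
          __init__.py
          yyy.py
          tests/
              test_yyy.py
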
An individual test file can also be run directly from the command line::

  python test_yyy.py

To make the tests callable from the package itself, place the following
lines at the end of the package's ``__init__.py``::

  ...
  def test(level=1, verbosity=1):
      from numpy.testing import Tester
      return Tester().test(level, verbosity)

You will also need to add the tests directory in the configuration
section of your setup.py::
  ...
  def configuration(parent_package='', top_path=None):
      ...
      config.add_subpackage('tests')
      return config
  ...

Also, when invoking the entire NumPy test suite, your tests will be
found and run::

  >>> import numpy
  >>> numpy.test()
  # your tests are included and run automatically!

If you have a collection of tests that must be run multiple times with
minor variations, it can be helpful to create a base class containing
all the common tests, and then create a subclass for each variation.
Several examples of this technique exist in NumPy; below are excerpts
from one in `numpy/linalg/tests/test_linalg.py
<https://github.com/numpy/numpy/blob/main/numpy/linalg/tests/test_linalg.py>`__::
  class LinalgTestCase:
      def test_single(self):
          a = array([[1., 2.], [3., 4.]], dtype=single)
          b = array([2., 1.], dtype=single)
          self.do(a, b)

      def test_double(self):
          a = array([[1., 2.], [3., 4.]], dtype=double)
          b = array([2., 1.], dtype=double)
          self.do(a, b)

      ...

  class TestSolve(LinalgTestCase):
      def do(self, a, b):
          x = linalg.solve(a, b)
          assert_allclose(b, dot(a, x))
          assert imply(isinstance(b, matrix), isinstance(x, matrix))

  class TestInv(LinalgTestCase):
      def do(self, a, b):
          a_inv = linalg.inv(a)
          assert_allclose(dot(a, a_inv), identity(asarray(a).shape[0]))
          assert imply(isinstance(a, matrix), isinstance(a_inv, matrix))

Tests on random data
--------------------

Tests on random data are good, but since test failures are meant to expose
new bugs or regressions, a test that passes most of the time but fails
occasionally with no code changes is not helpful. Make the random data
deterministic by setting the random number seed before generating it. Use
either Python's ``random.seed(some_number)`` or NumPy's
``numpy.random.seed(some_number)``, depending on the source of random numbers.
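
A minimal sketch of this pattern (the seed value and the property being
checked are arbitrary choices for illustration)::

  import numpy as np

  def test_mean_of_random_data():
      # Seed before generating data so any failure is reproducible.
      np.random.seed(12345)
      data = np.random.rand(100)
      # Values drawn from [0, 1) always satisfy these bounds.
      assert 0.0 <= data.mean() < 1.0
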
Alternatively, you can use `Hypothesis`_ to generate arbitrary data. The
advantages over plain random generation include tools to replay and share
failures without requiring a fixed seed, reporting *minimal* examples for
each failure, and better-than-naive-random techniques for triggering bugs.
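
A minimal sketch of a `Hypothesis`_-based test (the property and the strategy
are chosen purely for illustration)::

  import numpy as np
  from hypothesis import given
  from hypothesis import strategies as st

  @given(st.lists(st.integers(min_value=-1000, max_value=1000), min_size=1))
  def test_sum_matches_python_sum(values):
      # Hypothesis generates many input lists and shrinks any failing
      # case to a minimal example.
      assert np.sum(values) == sum(values)
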
.. autofunction:: numpy.test
.. _nose: https://nose.readthedocs.io/en/latest/
.. _pytest: https://pytest.readthedocs.io
.. _parameterization: https://docs.pytest.org/en/latest/parametrize.html
.. _Hypothesis: https://hypothesis.readthedocs.io/en/latest/