pytest Documentation, Release 2.5.2
1.2.1 Installation
Installation options:
pip install -U pytest # or
easy_install -U pytest
# content of test_sample.py
def test_answer():
    assert func(3) == 5

$ py.test
test_sample.py F

    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
pytest found the test_answer function by following standard test discovery rules, basically detecting the
test_ prefixes. We got a failure report because our little func(3) call did not return 5.
Note: You can simply use the assert statement for asserting test expectations. pytest's Advanced assertion introspection will intelligently report intermediate values of the assert expression, freeing you from the need to learn the
many names of JUnit legacy methods.
If you want to assert that some code raises an exception you can use the raises helper:
# content of test_sysexit.py
import pytest

def f():
    raise SystemExit(1)

def test_mytest():
    with pytest.raises(SystemExit):
        f()
For further ways to assert exceptions see Assertions about expected exceptions below.
Once you start to have more than a few tests it often makes sense to group tests logically, in classes and modules. Let’s
write a class containing two tests:
# content of test_class.py
class TestClass:
    def test_one(self):
        x = "this"
        assert 'h' in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')
The two tests are found because of the standard Conventions for Python test discovery. There is no need to subclass
anything. We can simply run the module by passing its filename:
$ py.test -q test_class.py
.F
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.01 seconds
The first test passed, the second failed. Again we can easily see the intermediate values used in the assertion, helping
us to understand the reason for the failure.
For functional tests one often needs to create some files and pass them to application objects. pytest provides Builtin
fixtures/function arguments which allow you to request arbitrary resources, for example a unique temporary directory:
# content of test_tmpdir.py
def test_needsfiles(tmpdir):
    print tmpdir
    assert 0
We list the name tmpdir in the test function signature and pytest will look up and call a fixture factory to create
the resource before performing the test function call. Let's just run it:
$ py.test -q test_tmpdir.py
F
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-1008/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-1008/test_needsfiles0
1 failed in 0.01 seconds
Before the test runs, a unique-per-test-invocation temporary directory was created. More info at Temporary directories
and files.
You can find out what kind of builtin fixtures exist (see pytest fixtures: explicit, modular, scalable) by typing:
py.test --fixtures # shows builtin and custom fixtures
• Windows: If “easy_install” or “py.test” are not found you need to add the Python script path to your PATH,
see here: Python for Windows. You may alternatively use an ActivePython install which does this for you
automatically.
• Jython2.5.1 on Windows XP: Jython does not create command line launchers so py.test will not work
correctly. You may install py.test on CPython and type py.test --genscript=mytest and then use
jython mytest to run your tests with Jython using pytest.
See Usages and Examples for more complex examples.
New in version 2.0. If you use Python-2.5 or later you can invoke testing through the Python interpreter from the
command line:
python -m pytest [...]
This is equivalent to invoking the command line script py.test [...] directly.
Import ‘pkg’ and use its filesystem location to find and run tests:
py.test --pyargs pkg # run all tests found below the directory of pkg
Python comes with a builtin Python debugger called PDB. pytest allows one to drop into the PDB prompt via a
command line option:
py.test --pdb
This will invoke the Python debugger on every failure. Often you might only want to do this for the first failing test to
understand a certain failure situation:
py.test -x --pdb # drop to PDB on first failure, then end test session
py.test --pdb --maxfail=3 # drop to PDB for the first three failures
If you want to set a breakpoint and enter pdb.set_trace() you can use a helper:
import pytest

def test_function():
    ...
    pytest.set_trace()    # invoke PDB debugger and tracing
In previous versions you could only enter PDB tracing if you disabled capturing on the command line via py.test
-s.
To create result files which can be read by Hudson or other Continuous integration servers, use this invocation:
py.test --junitxml=path
and look at the content at the path location. Such files are used e.g. by the PyPy-test web page to show test results
over several revisions.
Creating a URL for each test failure:
py.test --pastebin=failed
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select
tests as usual or add for example -x if you only want to send one particular failure.
Creating a URL for a whole test session log:
py.test --pastebin=all
To disable loading specific plugins at invocation time, use the -p option together with the prefix no:.
Example: to disable loading the plugin doctest, which is responsible for executing doctest tests from text files,
invoke py.test like this:
py.test -p no:doctest
New in version 2.0. You can invoke pytest from Python code directly:
pytest.main()
this acts as if you would call “py.test” from the command line. It will not raise SystemExit but return the exitcode
instead. You can pass in options and arguments:
pytest.main(['-x', 'mytestdir'])
or pass in a string:
pytest.main("-x mytestdir")
pytest.main("-qq", plugins=[MyPlugin()])
Running it will show that MyPlugin was added and its hook was invoked:
$ python myinvoke.py
*** test run reporting finishing
We recommend using virtualenv environments and installing your application, its dependencies and the pytest package itself
with pip (or easy_install). This way you will get an isolated and reproducible environment.
Given you have installed virtualenv and execute it from the command line, here is an example session for unix or
windows:
virtualenv .          # create a virtualenv directory in the current directory
source bin/activate   # on unix
scripts/activate      # on Windows
Due to the activate step above, pip will come from the virtualenv directory and install any package into the
isolated virtual environment. Typically you would then install your own package in "editable" mode (e.g. pip install -e .),
which is what is referred to below as the editable install mode.
• inlining test directories into your application package, useful if you have a direct relation between (unit-)test and
application modules and want to distribute your tests along with your application:
setup.py    # your distutils/setuptools Python package metadata
mypkg/
    __init__.py
    appmodule.py
    ...
    test/
        test_app.py
        ...
• avoid "__init__.py" files in your test directories. This way your tests can run easily against an installed version
of mypkg, independently of whether the installed package contains the tests or not.
• With inlined tests you might put __init__.py into test directories and make them installable as part of
your application. Using the py.test --pyargs mypkg invocation pytest will discover where mypkg is
installed and collect tests from there. With the "external" test layout you can still distribute tests but they will not be
installed or become importable.
Typically you can run tests by pointing to test directories or modules:
py.test tests/test_app.py # for external test dirs
py.test mypkg/test/test_app.py # for inlined test dirs
py.test mypkg # run all tests in test directories below mypkg
py.test # run all tests below current dir
...
Because of the above editable install mode you can change your source code (both tests and the app) and
rerun tests at will. Once you are done with your work, you can use tox to make sure that the package is really correct
and tests pass in all required configurations.
Note: You can use Python3 namespace packages (PEP420) for your application but pytest will still perform test
package name discovery based on the presence of __init__.py files. If you use one of the two recommended file
system layouts above but leave out the __init__.py files from your directories it should just work on Python3.3
and above. With "inlined tests", however, you will need to use absolute imports for getting at your application code.
Note: If pytest finds an "a/b/test_module.py" test file while recursing into the filesystem it determines the import
name as follows:
• determine basedir: this is the first “upward” (towards the root) directory not containing an __init__.py.
If e.g. both a and b contain an __init__.py file then the parent directory of a will become the basedir.
• perform sys.path.insert(0, basedir) to make the test module importable under the fully qualified
import name.
• import a.b.test_module where the path is determined by converting path separators / into "." characters. This means you must follow the convention of having directory and file names map directly to the import
names.
The reason for this somewhat evolved importing technique is that in larger projects multiple test modules might import
from each other, and deriving a canonical import name helps to avoid surprises such as a test module getting
imported twice.
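The following is a small illustrative sketch of that derivation, not pytest's actual implementation; the derive_import_name helper is purely hypothetical:
import os, sys

def derive_import_name(path):
    # Walk upwards past every directory containing an __init__.py to find the
    # basedir, then turn the remaining relative path into a dotted module name.
    path = os.path.abspath(path)
    basedir = os.path.dirname(path)
    while os.path.exists(os.path.join(basedir, "__init__.py")):
        basedir = os.path.dirname(basedir)
    relative = os.path.relpath(os.path.splitext(path)[0], basedir)
    return basedir, relative.replace(os.sep, ".")

basedir, name = derive_import_name("a/b/test_module.py")
sys.path.insert(0, basedir)   # the module is now importable as a.b.test_module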
If you frequently release code and want to make sure that your actual package passes all tests you may want to look
into tox, the virtualenv test automation tool and its pytest support. Tox helps you to set up virtualenv environments with
pre-defined dependencies and then execute a pre-configured test command with options. It will run tests against the
installed package and not against your source code checkout, helping to detect packaging glitches.
If you want to use Jenkins you can use the --junitxml=PATH option to create a JUnitXML file that Jenkins can
pick up and generate reports.
If you are a maintainer or application developer and want people who don’t deal with python much to easily run tests
you may generate a standalone pytest script:
py.test --genscript=runtests.py
This generates a runtests.py script which is a fully functional basic pytest script, running unchanged under
Python2 and Python3. You can tell people to download the script and then e.g. run it like this:
python runtests.py
You can integrate test runs into your distutils or setuptools based project. Use the genscript method to generate a
standalone pytest script:
py.test --genscript=runtests.py
and make this script part of your distribution and then add this to your setup.py file:
from distutils.core import setup, Command
# (you can also import setup and Command from setuptools)

class PyTest(Command):
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        import sys, subprocess
        errno = subprocess.call([sys.executable, 'runtests.py'])
        raise SystemExit(errno)

setup(
    #...,
    cmdclass = {'test': PyTest},
    #...,
)
Running python setup.py test will then execute your tests using runtests.py. As this is a standalone version of pytest no prior installation
whatsoever is required for calling the test command. You can also pass additional arguments to the subprocess call,
such as your test directory or other options.
Setuptools supports writing your own Test command for invoking pytest. Most often it is better to use tox instead, but
here is how you can get started with setuptools integration:
from setuptools import setup
from setuptools.command.test import test as TestCommand
import sys

class PyTest(TestCommand):
    def finalize_options(self):
        TestCommand.finalize_options(self)
        self.test_args = []
        self.test_suite = True
    def run_tests(self):
        # import here, cause outside the eggs aren't loaded
        import pytest
        errno = pytest.main(self.test_args)
        sys.exit(errno)

setup(
    #...,
    tests_require=['pytest'],
    cmdclass = {'test': PyTest},
)
Running python setup.py test will download pytest if needed and then run your tests as you would expect it to.
Here are some examples of projects using pytest (please send notes via contact):
• PyPy, Python with a JIT compiler, running over 21000 tests
• the MoinMoin Wiki Engine
• sentry, realtime app-maintenance and exception tracking
• tox, virtualenv/Hudson integration tool
• PIDA framework for integrated development
• PyPM ActiveState’s package manager
• Fom a fluid object mapper for FluidDB
• applib cross-platform utilities
• six Python 2 and 3 compatibility utilities
• pediapress MediaWiki articles
• mwlib mediawiki parser and utility library
• The Translate Toolkit for localization and conversion
• execnet rapid multi-Python deployment
• pylib cross-platform path, IO, dynamic code library
• Pacha configuration management in five minutes
• bbfreeze create standalone executables from Python scripts
• pdb++ a fancier version of PDB
• py-s3fuse Amazon S3 FUSE based filesystem
• waskr WSGI Stats Middleware
• guachi global persistent configs for Python modules
• Circuits lightweight Event Driven Framework
• pygtk-helpers easy interaction with PyGTK
• QuantumCore statusmessage and repoze openid plugin
• pydataportability libraries for managing the open web
• XIST extensible HTML/XML generator
Note: This FAQ is kept mostly for historic reasons. Check out the pytest Q&A at Stack Overflow for many questions
and answers related to pytest and/or use the contact channels to get help.
pytest and nose share basic philosophy when it comes to running and writing Python tests. In fact, you can run
many tests written for nose with pytest. nose was originally created as a clone of pytest when pytest was in
the 0.8 release cycle. Note that starting with pytest-2.0 support for running unittest test suites is much improved.
For some time pytest has had builtin support for running tests written using trial. It does not itself start a reactor,
however, and does not handle Deferreds returned from a test in pytest style. If you are using trial's unittest.TestCase
chances are that you can just run your tests even if you return Deferreds. In addition, there is also a dedicated
pytest-twisted plugin which allows you to return Deferreds from pytest-style tests, letting you use pytest fixtures: explicit,
modular, scalable and other features.
In 2012, some work is going into the pytest-django plugin. It substitutes the usage of Django's manage.py test
and allows you to use all pytest features, most of which are not available from Django directly.
Around 2007 (version 0.8) some people thought that pytest was using too much "magic". It had been part of the
pylib which contains a lot of unrelated python library code. Around 2010 there was a major cleanup refactoring,
which removed unused or deprecated code and resulted in the new pytest PyPI package which strictly contains only
test-related code. This release also brought a complete pluginification such that the core is around 300 lines of code
and everything else is implemented in plugins. Thus pytest today is a small, universally runnable and customizable
testing framework for Python. Note, however, that pytest uses metaprogramming techniques and reading its source
is thus likely not something for Python beginners.
A second "magic" issue was the assert statement debugging feature. Nowadays, pytest explicitly rewrites assert
statements in test modules in order to provide more useful assert feedback. This completely avoids previous issues of
confusing assertion-reporting. It also means that you can use Python's -O optimization without losing assertions in
test modules.
pytest contains a second, mostly obsolete, assert debugging technique, invoked via --assert=reinterpret,
activated by default on Python-2.5: When an assert statement fails, pytest re-interprets the expression part to
show intermediate values. This technique suffers from a caveat that the rewriting does not: If your expression has side
effects (better to avoid them anyway!) the intermediate values may not be the same, confusing the reinterpreter and
obfuscating the initial error (this is also explained at the command line if it happens).
You can also turn off all assertion interaction using the --assertmode=off option.
Some of the reasons are historic, others are practical. pytest used to be part of the py package which provided
several developer utilities, all starting with py.<TAB>, thus providing nice TAB-completion. If you pip install
pycmd you get these tools from a separate package. These days the command line tool could be called
pytest but since many people have gotten used to the old name and there is another tool named "pytest" we just
decided to stick with py.test for now.
For simple applications and for people experienced with nose or unittest-style test setup, using xUnit style setup
probably feels natural. For larger test suites, parametrized testing or setup of complex test resources, using funcargs
may feel more natural. Moreover, funcargs are ideal for writing advanced test support code (like e.g. the monkeypatch,
tmpdir or capture funcargs) because the support code can register setup/teardown functions in a managed
class/module/function scope.
There are two conceptual reasons why yielding from a factory function is not possible:
• If multiple factories yielded values there would be no natural place to determine the combination policy - in
real-world examples some combinations often should not run.
• Calling factories for obtaining test function arguments is part of setting up and running a test. At that point it is
not possible to add new test calls to the test collection anymore.
However, with pytest-2.3 you can use the Fixtures as Function arguments decorator and specify params so that all
tests depending on the factory-created resource will run multiple times with different parameters.
You can also use the pytest_generate_tests hook to implement the parametrization scheme of your choice.
On Windows the multiprocess package will instantiate sub processes by pickling and thus implicitly re-import a lot
of local modules. Unfortunately, setuptools-0.6.11 does not protect its generated command line script with an
if __name__ == '__main__' guard. This leads to infinite recursion when running a test that instantiates Processes.
As of middle 2013, there shouldn’t be a problem anymore when you use the standard setuptools (note that distribute
has been merged back into setuptools which is now shipped directly with virtualenv).
PYTEST REFERENCE DOCUMENTATION
main(args=None, plugins=None)
return exit code, after performing an in-process test run.
Parameters
• args – list of command line arguments.
• plugins – list of plugin objects to be auto-registered during initialization.
More examples at Calling pytest from Python code
Similar to caught exception objects in Python, explicitly clearing local references to returned
py.code.ExceptionInfo objects can help the Python interpreter speed up its garbage collection.
Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack
raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python
keep all objects referenced from that cycle (including all local variables in the current frame) alive until the
next cyclic garbage collection run. See the official Python try statement documentation for more detailed
information.
Examples at Assertions about expected exceptions.
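A short sketch of working with the returned ExceptionInfo object, including the optional clearing of the local reference discussed above; the test name and the asserted message fragment are illustrative:
import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError) as excinfo:
        1 / 0
    assert "division" in str(excinfo.value)
    del excinfo   # optionally break the reference cycle described above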
deprecated_call(func, *args, **kwargs)
assert that calling func(*args, **kwargs) triggers a DeprecationWarning.
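For illustration, a minimal sketch of using deprecated_call; the api_call_v1 function is hypothetical:
import warnings
import pytest

def api_call_v1():
    warnings.warn("use api_call_v2 instead", DeprecationWarning)
    return 1

def test_deprecation_is_reported():
    pytest.deprecated_call(api_call_v1)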
You can use the following functions in your test, fixture or setup functions to force a certain test outcome. Note that
most often you can instead use declarative marks, see Skip and xfail: dealing with tests that can not succeed. A short usage sketch follows this list.
fail(msg='', pytrace=True)
explicitly fail a currently-executing test with the given message.
Parameters pytrace – if false the msg represents the full failure information and no python traceback
will be reported.
skip(msg='')
skip an executing test with the given message. Note: it’s usually better to use the pytest.mark.skipif marker
to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. See the
pytest_skipping plugin for details.
importorskip(modname, minversion=None)
return the imported module if it has at least "minversion" as its __version__ attribute. If no minversion is specified,
a skip is only triggered if the module can not be imported. Note that version comparison only works with
simple version strings like "1.2.3" but not "1.2.3.dev1" or others.
xfail(reason='')
xfail an executing test or setup functions with the given reason.
exit(msg)
exit testing process as if KeyboardInterrupt was triggered.
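The sketch below exercises a few of these helpers imperatively; the module name "docutils" and the conditions are purely illustrative:
import sys
import pytest

def test_outcomes():
    docutils = pytest.importorskip("docutils")       # skip if not importable
    if sys.platform == "win32":
        pytest.skip("this check is not meaningful on Windows")
    if not hasattr(docutils, "__version__"):
        pytest.fail("docutils has no __version__ attribute")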
keywords
keywords/markers dictionary for the underlying node.
session
pytest session object.
addfinalizer(finalizer)
add finalizer/teardown function to be called after the last test within the requesting test context finished
execution.
applymarker(marker)
Apply a marker to a single test function invocation. This method is useful if you don’t want to have a
keyword/marker on all function invocations.
Parameters marker – a _pytest.mark.MarkDecorator object created by a call to
pytest.mark.NAME(...).
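As an illustration of applymarker, a fixture could mark only the tests that actually use it; the fixture and marker names here are made up:
import pytest

@pytest.fixture
def flaky_service(request):
    # every test requesting this fixture gets the (hypothetical) "flaky" marker
    request.applymarker(pytest.mark.flaky)
    return object()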
raiseerror(msg)
raise a FixtureLookupError with the given message.
cached_setup(setup, teardown=None, scope=’module’, extrakey=None)
(deprecated) Return a testing resource managed by setup & teardown calls. scope and extrakey
determine when the teardown function will be called so that subsequent calls to setup would recreate
the resource. With pytest-2.3 you often do not need cached_setup() as you can directly declare a
scope on a fixture function and register a finalizer through request.addfinalizer().
Parameters
• teardown – function receiving a previously setup resource.
• setup – a no-argument function creating a resource.
• scope – a string value out of function, class, module or session indicating the
caching lifecycle of the resource.
• extrakey – added to internal caching key of (funcargname, scope).
getfuncargvalue(argname)
Dynamically retrieve a named fixture function argument.
As of pytest-2.3, it is easier and usually better to access other fixture values by stating them as input
arguments in the fixture function. If you can only decide about using another fixture at test setup time, you
may use this function to retrieve it inside a fixture function body.
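A small sketch of such a late decision, assuming two other fixtures named "db" and "memory_store" exist somewhere in the project (both names and the environment variable are hypothetical):
import os
import pytest

@pytest.fixture
def storage(request):
    # only at setup time do we know which concrete backend fixture to use
    if os.environ.get("USE_REAL_DB"):
        return request.getfuncargvalue("db")
    return request.getfuncargvalue("memory_store")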
capfd
enables capturing of writes to file descriptors 1 and 2 and makes captured output available via capfd.readouterr() method calls which return a (out, err) tuple.
monkeypatch
The returned monkeypatch funcarg provides helper methods to safely patch and restore objects, dictionaries and os.environ for the duration of a test.
pytestconfig
the pytest config object with access to command line opts.
recwarn
Return a WarningsRecorder instance that records warnings issued during the test and provides methods to inspect them, e.g. pop(category) and clear().
tmpdir
return a temporary directory path object which is unique to each test function invocation, created as a sub directory of the base temporary directory. The returned object is a py.path.local path object.
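The sketch below exercises a few of these builtin fixtures together; the environment variable name and file name are only illustrative:
import os

def test_builtin_fixtures(capfd, monkeypatch, tmpdir):
    monkeypatch.setenv("MY_SETTING", "42")        # patched only for this test
    p = tmpdir.join("hello.txt")
    p.write("hello")                              # tmpdir is a py.path.local object
    os.write(1, b"captured via fd 1\n")           # goes to file descriptor 1
    out, err = capfd.readouterr()
    assert "captured" in out
    assert p.read() == "hello"
    assert os.environ["MY_SETTING"] == "42"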
You can get help on command line options and values in INI-style configuration files by using the general help option:
py.test -h # prints options _and_ config file settings
This will display command line and configuration file settings which were registered by installed plugins.
pytest searches for the first matching ini-style configuration file in the directories of the command line arguments and
the directories above. It looks for file basenames in this order:
pytest.ini
tox.ini
setup.cfg
Searching stops when the first [pytest] section is found in any of these files. There is no merging of configuration
values from multiple files. Example:
py.test path/to/testdir
This searches upwards starting from path/to/testdir. If no argument is provided to a pytest run, the current working directory is used to start the search.
It can be tedious to type the same series of command line options every time you use pytest. For example, if you
always want to see detailed info on skipped and xfailed tests, as well as have terser “dot” progress output, you can
write it into a configuration file:
# content of pytest.ini
# (or tox.ini or setup.cfg)
[pytest]
addopts = -rsxX -q
From now on, running pytest will add the specified options.
minversion
Specifies a minimal pytest version required for running tests.
minversion = 2.1 # will fail if we run with pytest-2.0
addopts
Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example:
if you have this ini file content:
[pytest]
addopts = --maxfail=2 -rf # exit after 2 failures, report fail info
norecursedirs
Set the directory basename patterns to avoid when recursing for test collection. The individual (fnmatch-style) patterns are applied to the basename of a directory:
*        matches everything
?        matches any single character
[seq]    matches any character in seq
[!seq]   matches any character not in seq
Default patterns are .* _darcs CVS {arch}. Setting norecursedirs replaces the default. Here is an
example of how to avoid certain directories:
# content of setup.cfg
[pytest]
norecursedirs = .svn _build tmp*
This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed
directory.
python_files
One or more Glob-style file patterns determining which python files are considered as test modules.
python_classes
One or more name prefixes determining which classes are considered for test collection.
python_functions
One or more name prefixes determining which test functions and methods are considered as tests. Note
that this has no effect on methods that live on a unittest.TestCase derived class.
See Changing naming conventions for examples.
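For instance, a project using a "check" naming scheme could override the defaults roughly as follows; the specific prefixes are only an illustration of the options above:
# content of pytest.ini
[pytest]
python_files = check_*.py
python_classes = Check
python_functions = check_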
pytest allows you to use the standard python assert for verifying expectations and values in Python tests. For
example, you can write the following:
# content of test_assert1.py
def f():
    return 3

def test_function():
    assert f() == 4
to assert that your function returns a certain value. If this assertion fails you will see the return value of the function
call:
$ py.test test_assert1.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_assert1.py F
def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()
test_assert1.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
pytest has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary operators. (See Demo of Python failure reports with pytest). This allows you to use the
idiomatic python constructs without boilerplate code while not losing introspection information.
However, if you specify a message with the assertion like this:
assert a % 2 == 0, "value was odd, should be even"
then no assertion introspection takes place at all and the message will be simply shown in the traceback.
See Advanced assertion introspection for more information on assertion introspection.
In order to write assertions about raised exceptions, you can use pytest.raises as a context manager like this:
import pytest

with pytest.raises(ZeroDivisionError):
    1 / 0
and if you need to have access to the actual exception info you may use:
with pytest.raises(RuntimeError) as excinfo:
    def f():
        f()
    f()
excinfo is a py.code.ExceptionInfo instance, which is a wrapper around the actual exception raised.
If you want to write test code that works on Python 2.4 as well, you may also use two other ways to test for an expected
exception:
pytest.raises(ExpectedException, func, *args, **kwargs)
pytest.raises(ExpectedException, "func(*args, **kwargs)")
both of which execute the specified function with args and kwargs and assert that the given ExpectedException
is raised. The reporter will provide you with helpful output in case of failures such as no exception or wrong exception.
New in version 2.0. pytest has rich support for providing context-sensitive information when it encounters comparisons. For example:
# content of test_assert2.py
def test_set_comparison():
    set1 = set("1308")
    set2 = set("8035")
    assert set1 == set2
$ py.test test_assert2.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_assert2.py F
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
> assert set1 == set2
E assert set(['0', '1', '3', '8']) == set(['0', '3', '5', '8'])
E Extra items in the left set:
E '1'
E Extra items in the right set:
E '5'
test_assert2.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
It is possible to add your own detailed explanations by implementing the pytest_assertrepr_compare hook.
pytest_assertrepr_compare(config, op, left, right)
return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines
but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention
being that the first line is a summary.
As an example consider adding the following hook in a conftest.py which provides an alternative explanation for Foo
objects:
# content of conftest.py
from test_foocompare import Foo

def pytest_assertrepr_compare(op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return ['Comparing Foo instances:',
                '   vals: %s != %s' % (left.val, right.val)]
# content of test_foocompare.py
class Foo:
    def __init__(self, val):
        self.val = val

def test_compare():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2
you can run the test module and get the custom output defined in the conftest file:
$ py.test -q test_foocompare.py
F
================================= FAILURES =================================
_______________________________ test_compare _______________________________
def test_compare():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E assert Comparing Foo instances:
E vals: 1 != 2
test_foocompare.py:8: AssertionError
1 failed in 0.01 seconds
New in version 2.1. Reporting details about a failing assertion is achieved either by rewriting assert statements before
they are run or re-evaluating the assert expression and recording the intermediate values. Which technique is used
depends on the location of the assert, pytest configuration, and Python version being used to run pytest. Note that
for assert statements with a manually provided message, i.e. assert expr, message, no assertion introspection
takes place and the manually provided message will be rendered in tracebacks.
By default, if the Python version is greater than or equal to 2.6, pytest rewrites assert statements in test modules.
Rewritten assert statements put introspection information into the assertion failure message. pytest only rewrites test
modules directly discovered by its test collection process, so asserts in supporting modules which are not themselves
test modules will not be rewritten.
Note: pytest rewrites test modules on import. It does this by using an import hook to write new pyc files. Most of
the time this works transparently. However, if you are messing with import yourself, the import hook may interfere. If
this is the case, simply use --assert=reinterp or --assert=plain. Additionally, rewriting will fail silently
if it cannot write new pycs, e.g. in a read-only filesystem or a zipfile.
If an assert statement has not been rewritten or the Python version is less than 2.6, pytest falls back on assert
reinterpretation. In assert reinterpretation, pytest walks the frame of the function containing the assert statement
to discover sub-expression results of the failing assert statement. You can force pytest to always use assertion
reinterpretation by passing the --assert=reinterp option.
Assert reinterpretation has a caveat not present with assert rewriting: If evaluating the assert expression has side effects
you may get a warning that the intermediate values could not be determined safely. A common example of this issue
is an assertion which reads from a file:
assert f.read() != '...'
If this assertion fails then the re-evaluation will probably succeed! This is because f.read() will return an empty
string when it is called the second time during the re-evaluation. However, it is easy to rewrite the assertion and avoid
any trouble:
content = f.read()
assert content != '...'
New in version 2.0/2.3/2.4. The purpose of test fixtures is to provide a fixed baseline upon which tests can reliably
and repeatedly execute. pytest fixtures offer dramatic improvements over the classic xUnit style of setup/teardown
functions:
• fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or
whole projects.
• fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself
use other fixtures.
• fixture management scales from simple unit to complex functional testing, allowing you to parametrize fixtures and
tests according to configuration and component options, or to re-use fixtures across class, module or whole test
session scopes.
In addition, pytest continues to support classic xunit-style setup. You can mix both styles, moving incrementally from
classic to new style, as you prefer. You can also start out from existing unittest.TestCase style or nose based projects.
Note: pytest-2.4 introduced an additional experimental yield fixture mechanism for easier context manager integration
and more linear writing of teardown code.
Test functions can receive fixture objects by naming them as an input argument. For each argument name, a fixture function with that name provides the fixture object. Fixture functions are registered by marking them with
@pytest.fixture. Let's look at a simple self-contained test module containing a fixture and a test function
using it:
# content of ./test_smtpsimple.py
import pytest

@pytest.fixture
def smtp():
    import smtplib
    return smtplib.SMTP("merlinux.eu")

def test_ehlo(smtp):
    response, msg = smtp.ehlo()
    assert response == 250
    assert "merlinux" in msg
    assert 0    # for demo purposes
Here, the test_ehlo needs the smtp fixture value. pytest will discover and call the @pytest.fixture marked
smtp fixture function. Running the test looks like this:
$ py.test test_smtpsimple.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_smtpsimple.py F
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert "merlinux" in msg
> assert 0 # for demo purposes
E assert 0
test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.21 seconds =========================
In the failure traceback we see that the test function was called with a smtp argument, the smtplib.SMTP()
instance created by the fixture function. The test function fails on our deliberate assert 0. Here is the exact
protocol used by pytest to call the test function this way:
1. pytest finds the test_ehlo because of the test_ prefix. The test function needs a function argument named
smtp. A matching fixture function is discovered by looking for a fixture-marked function named smtp.
2. smtp() is called to create an instance.
3. test_ehlo(<SMTP instance>) is called and fails in the last line of the test function.
Note that if you misspell a function argument or want to use one that isn’t available, you’ll see an error with a list of
available function arguments.
When injecting fixtures into test functions, pytest-2.0 introduced the term "funcargs" or "funcarg mechanism" which
continues to be present in the docs today. It now refers to the specific case of injecting fixture values as arguments
to test functions. With pytest-2.3 there are more possibilities to use fixtures but "funcargs" remain the main way as
they allow you to directly state the dependencies of a test function.
As the following examples show in more detail, funcargs allow test functions to easily receive and work against specific
pre-initialized application objects without having to care about import/setup/cleanup details. It’s a prime example of
dependency injection where fixture functions take the role of the injector and test functions are the consumers of
fixture objects.
Fixtures requiring network access depend on connectivity and are usually time-expensive to create. Extending the
previous example, we can add a scope=’module’ parameter to the @pytest.fixture invocation to cause the
decorated smtp fixture function to only be invoked once per test module. Multiple test functions in a test module
will thus each receive the same smtp fixture instance. The next example puts the fixture function into a separate
conftest.py file so that tests from multiple test modules in the directory can access the fixture function:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp():
    return smtplib.SMTP("merlinux.eu")
The name of the fixture again is smtp and you can access its result by listing the name smtp as an input parameter in
any test or fixture function (in or below the directory where conftest.py is located):
# content of test_module.py
def test_ehlo(smtp):
    response = smtp.ehlo()
    assert response[0] == 250
    assert "merlinux" in response[1]
    assert 0    # for demo purposes

def test_noop(smtp):
    response = smtp.noop()
    assert response[0] == 250
    assert 0    # for demo purposes
We deliberately insert failing assert 0 statements in order to inspect what is going on and can now run the tests:
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_module.py FF
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
========================= 2 failed in 0.23 seconds =========================
You see the two assert 0 failing and more importantly you can also see that the same (module-scoped) smtp
object was passed into the two test functions because pytest shows the incoming argument values in the traceback. As
a result, the two test functions using smtp run as quickly as a single one because they reuse the same instance.
If you decide that you rather want to have a session-scoped smtp instance, you can simply declare it:
@pytest.fixture(scope="session")
def smtp(...):
    # the returned fixture value will be shared for
    # all tests needing it
pytest supports execution of fixture specific finalization code when the fixture goes out of scope. By accepting a
request object into your fixture function you can call its request.addfinalizer one or multiple times:
# content of conftest.py
import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    smtp = smtplib.SMTP("merlinux.eu")
    def fin():
        print ("teardown smtp")
        smtp.close()
    request.addfinalizer(fin)
    return smtp    # provide the fixture value
The fin function will execute when the last test using the fixture in the module has finished execution.
Let’s execute it:
$ py.test -s -q --tb=no
FFteardown smtp
We see that the smtp instance is finalized after the two tests finished execution. Note that if we decorated our fixture
function with scope='function' then fixture setup and cleanup would occur around each single test. In either
case the test module itself does not need to change or know about these details of fixture setup.
Fixture functions can accept the request object to introspect the "requesting" test function, class or module context.
Further extending the previous smtp fixture example, let's read an optional server URL from the test module which
uses our fixture:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp(request):
    server = getattr(request.module, "smtpserver", "merlinux.eu")
    smtp = smtplib.SMTP(server)
    def fin():
        print ("finalizing %s (%s)" % (smtp, server))
        smtp.close()
    request.addfinalizer(fin)
    return smtp
We use the request.module attribute to optionally obtain an smtpserver attribute from the test module. If we
just execute again, nothing much has changed:
$ py.test -s -q --tb=no
FF
2 failed in 0.59 seconds
Let’s quickly create another test module that actually sets the server URL in its module namespace:
# content of test_anothersmtp.py
smtpserver = "mail.python.org"    # will be read by the smtp fixture

def test_showhelo(smtp):
    assert 0, smtp.helo()
Running it:
$ py.test -qq --tb=short test_anothersmtp.py
F
================================= FAILURES =================================
______________________________ test_showhelo _______________________________
test_anothersmtp.py:5: in test_showhelo
> assert 0, smtp.helo()
E AssertionError: (250, 'mail.python.org')
voila! The smtp fixture function picked up our mail server name from the module namespace.
Fixture functions can be parametrized in which case they will be called multiple times, each time executing the set
of dependent tests, i.e. the tests that depend on this fixture. Test functions usually do not need to be aware of their
re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can
be configured in multiple ways.
Extending the previous example, we can flag the fixture to create two smtp fixture instances which will cause all tests
using the fixture to run twice. The fixture function gets access to each parameter through the special request object:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module",
                params=["merlinux.eu", "mail.python.org"])
def smtp(request):
    smtp = smtplib.SMTP(request.param)
    def fin():
        print ("finalizing %s" % smtp)
        smtp.close()
    request.addfinalizer(fin)
    return smtp
The main change is the declaration of params with @pytest.fixture, a list of values for each of which the
fixture function will execute and can access a value via request.param. No test function code needs to change.
So let’s just do another run:
$ py.test -q test_module.py
FFFF
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
> assert "merlinux" in response[1]
E assert ’merlinux’ in ’mail.python.org\nSIZE 25600000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8B
test_module.py:5: AssertionError
----------------------------- Captured stdout ------------------------------
finalizing <smtplib.SMTP instance at 0x21f3e60>
________________________ test_noop[mail.python.org] ________________________
def test_noop(smtp):
response = smtp.noop()
test_module.py:11: AssertionError
4 failed in 6.06 seconds
We see that our two test functions each ran twice, against the different smtp instances. Note also, that with the
mail.python.org connection the second test fails in test_ehlo because a different server string is expected
than what arrived.
You can not only use fixtures in test functions but fixture functions can use other fixtures themselves. This contributes
to a modular design of your fixtures and allows re-use of framework-specific fixtures across many projects. As a
simple example, we can extend the previous example and instantiate an object app where we stick the already defined
smtp resource into it:
# content of test_appsetup.py
import pytest

class App:
    def __init__(self, smtp):
        self.smtp = smtp

@pytest.fixture(scope="module")
def app(smtp):
    return App(smtp)

def test_smtp_exists(app):
    assert app.smtp
Here we declare an app fixture which receives the previously defined smtp fixture and instantiates an App object
with it. Let’s run it:
$ py.test -v test_appsetup.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/pyt
collecting ... collected 2 items
Due to the parametrization of smtp the test will run twice with two different App instances and respective smtp
servers. There is no need for the app fixture to be aware of the smtp parametrization as pytest will fully analyse the
fixture dependency graph.
Note, that the app fixture has a scope of module and uses a module-scoped smtp fixture. The example would still
work if smtp was cached on a session scope: it is fine for fixtures to use “broader” scoped fixtures but not the
other way round: A session-scoped fixture could not use a module-scoped one in a meaningful way.
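As an illustration of that rule, the following sketch would be rejected by pytest because a session-scoped fixture tries to use a function-scoped one; the fixture names are made up:
import pytest

@pytest.fixture(scope="function")
def per_test_value():
    return object()

@pytest.fixture(scope="session")
def broken(per_test_value):
    # pytest reports a scope mismatch error when a test requests this fixture
    return per_test_value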
pytest minimizes the number of active fixtures during test runs. If you have a parametrized fixture, then all the tests
using it will first execute with one instance and then finalizers are called before the next fixture instance is created.
Among other things, this eases testing of applications which create and use global state.
The following example uses two parametrized funcargs, one of which is scoped on a per-module basis, and all the
functions perform print calls to show the setup/teardown flow:
# content of test_module.py
import pytest

@pytest.fixture(scope="module", params=["mod1", "mod2"])
def modarg(request):
    param = request.param
    print "create", param
    def fin():
        print "fin", param
    request.addfinalizer(fin)
    return param

@pytest.fixture(scope="function", params=[1, 2])
def otherarg(request):
    return request.param

def test_0(otherarg):
    print " test0", otherarg
def test_1(modarg):
    print " test1", modarg
def test_2(otherarg, modarg):
    print " test2", otherarg, modarg
Let's run the tests in verbose mode and look at the print output:
$ py.test -v -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/pyt
collecting ... collected 8 items
You can see that the parametrized module-scoped modarg resource caused an ordering of test execution that leads
to the fewest possible "active" resources. The finalizer for the mod1 parametrized resource was executed before the
mod2 resource was set up.
Sometimes test functions do not directly need access to a fixture object. For example, tests may require to operate
with an empty directory as the current working directory but otherwise do not care about the concrete directory. Here is
how you can use the standard tempfile module and pytest fixtures to achieve it. We separate the creation of the fixture into
a conftest.py file:
# content of conftest.py
import pytest
import tempfile
import os

@pytest.fixture()
def cleandir():
    newpath = tempfile.mkdtemp()
    os.chdir(newpath)
and declare its use in a test module via a usefixtures marker:
# content of test_setenv.py
import os
import pytest

@pytest.mark.usefixtures("cleandir")
class TestDirectoryInit:
    def test_cwd_starts_empty(self):
        assert os.listdir(os.getcwd()) == []
        with open("myfile", "w") as f:
            f.write("hello")

    def test_cwd_again_starts_empty(self):
        assert os.listdir(os.getcwd()) == []
Due to the usefixtures marker, the cleandir fixture will be required for the execution of each test method, just
as if you specified a “cleandir” function argument to each of them. Let’s run it to verify our fixture is activated and the
tests pass:
$ py.test -q
..
2 passed in 0.01 seconds
and you may specify fixture usage at the test module level, using a generic feature of the mark mechanism:
pytestmark = pytest.mark.usefixtures("cleandir")
Lastly you can put fixtures required by all tests in your project into an ini-file:
# content of pytest.ini
[pytest]
usefixtures = cleandir
Occasionally, you may want to have fixtures get invoked automatically without a usefixtures or funcargs reference.
As a practical example, suppose we have a database fixture which has a begin/rollback/commit architecture and we
want to automatically surround each test method by a transaction and a rollback. Here is a dummy self-contained
implementation of this idea:
# content of test_db_transact.py
import pytest

class DB:
    def __init__(self):
        self.intransaction = []
    def begin(self, name):
        self.intransaction.append(name)
    def rollback(self):
        self.intransaction.pop()

@pytest.fixture(scope="module")
def db():
    return DB()

class TestClass:
    @pytest.fixture(autouse=True)
    def transact(self, request, db):
        db.begin(request.function.__name__)
        request.addfinalizer(db.rollback)

    def test_method1(self, db):
        assert db.intransaction == ["test_method1"]

    def test_method2(self, db):
        assert db.intransaction == ["test_method2"]
The class-level transact fixture is marked with autouse=True which implies that all test methods in the class will
use this fixture without a need to state it in the test function signature or with a class-level usefixtures decorator.
If we run it, we get two passing tests:
$ py.test -q
..
2 passed in 0.01 seconds
Note that the above transact fixture may very well be a fixture that you want to make available in your project
without having it generally active. The canonical way to do that is to put the transact definition into a conftest.py file
without using autouse:
# content of conftest.py
import pytest

@pytest.fixture()
def transact(request, db):
    db.begin()
    request.addfinalizer(db.rollback)
and then e.g. have a TestClass declare its need by using the usefixtures marker:
@pytest.mark.usefixtures("transact")
class TestClass:
    ...
All test methods in this TestClass will use the transaction fixture while other test classes or functions in the module
will not use it unless they also add a transact reference.
If during implementing your tests you realize that you want to use a fixture function from multiple test files you can
move it to a conftest.py file or even separately installable plugins without changing test code. The discovery of fixture
functions starts at test classes, then test modules, then conftest.py files and finally builtin and third party plugins.
New in version 2.4. pytest-2.4 allows fixture functions to seamlessly use a yield instead of a return statement to
provide a fixture value while otherwise fully supporting all other fixture features.
Note: “yielding” fixture values is an experimental feature and its exact declaration may change later but earliest in a
2.5 release. You can thus safely use this feature in the 2.4 series but may need to adapt later. Test functions themselves
will not need to change (as a general feature, they are ignorant of how fixtures are setup).
# content of test_yield.py
import pytest

@pytest.yield_fixture
def passwd():
    print ("\nsetup before yield")
    f = open("/etc/passwd")
    yield f.readlines()
    print ("teardown after yield")
    f.close()

def test_has_lines(passwd):
    print ("test called")
    assert passwd
In contrast to finalization through registering callbacks, our fixture function used a yield statement to provide the
lines of the /etc/passwd file. The code after the yield statement serves as the teardown code, avoiding the
indirection of registering a teardown callback function.
Let’s run it with output capturing disabled:
$ py.test -q -s test_yield.py
We can also seamlessly use the new syntax with with statements. Let's simplify the above passwd fixture:
# content of test_yield2.py
import pytest

@pytest.yield_fixture
def passwd():
    with open("/etc/passwd") as f:
        yield f.readlines()

def test_has_lines(passwd):
    assert len(passwd) >= 1
The file f will be closed after the test has finished execution because the Python file object supports finalization when
the with statement ends.
Note that the new syntax is fully integrated with using scope, params and other fixture features. Changing existing
fixture functions to use yield is thus straightforward.
The yield syntax has been discussed by pytest users extensively. In general, the advantages of using a yield
fixture syntax are:
• easy provision of fixtures in conjunction with context managers.
• no need to register a callback, providing for more synchronous control flow in the fixture function. Also there is
no need to accept the request object into the fixture function just for providing finalization code.
However, there are also limitations or foreseeable irritations:
• usually yield is used for producing multiple values. But fixture functions can only yield exactly one value.
Yielding a second fixture value will get you an error. It's possible we can evolve pytest to allow for producing multiple values as an alternative to current parametrization. For now, you can just use the normal fixture
parametrization mechanisms together with yield-style fixtures.
• the yield syntax is similar to what contextlib.contextmanager() decorated functions provide. With
pytest fixture functions, the “after yield” part will always be invoked, independently from the exception status
of the test function which uses the fixture. The pytest behaviour makes sense if you consider that many different
test functions might use a module or session scoped fixture. Some test functions might raise exceptions and
others not, so how could pytest re-raise a single exception at the yield point in the fixture function?
• lastly yield introduces more than one way to write fixture functions, so what's the obvious way for a newcomer?
Newcomers reading the docs will see feature examples using the return style, so they should use that if in doubt.
Others can start experimenting with writing yield-style fixtures and possibly help evolving them further.
If you want to give feedback or participate in the ongoing discussion, please join our contact channels. You are most
welcome.
New in version 2.2, improved in 2.4. The builtin pytest.mark.parametrize decorator enables parametrization
of arguments for a test function. Here is a typical example of a test function that implements checking that a certain
input leads to an expected output:
# content of test_expectation.py
import pytest

@pytest.mark.parametrize("input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(input, expected):
    assert eval(input) == expected
Here, the @parametrize decorator defines three different (input,expected) tuples so that the test_eval
function will run three times using them in turn:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 3 items
test_expectation.py ..F
@pytest.mark.parametrize("input,expected", [
("3+5", 8),
("2+4", 6),
("6*9", 42),
])
def test_eval(input, expected):
> assert eval(input) == expected
E assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.01 seconds ====================
As designed in this example, only one pair of input/output values fails the simple test function. And as usual with test
function arguments, you can see the input and output values in the traceback.
Note that you could also use the parametrize marker on a class or a module (see Marking test functions with attributes)
which would invoke several functions with the argument sets.
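For instance, applying the decorator at class level runs each test method once per parameter set (a small sketch):
import pytest

@pytest.mark.parametrize("n,expected", [(1, 2), (3, 4)])
class TestIncrement:
    def test_simple_case(self, n, expected):
        assert n + 1 == expected

    def test_weird_simple_case(self, n, expected):
        assert (n * 1) + 1 == expected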
It is also possible to mark individual test instances within parametrize, for example with the builtin mark.xfail:
# content of test_expectation.py
import pytest
@pytest.mark.parametrize("input,expected", [
("3+5", 8),
("2+4", 6),
pytest.mark.xfail(("6*9", 42)),
])
def test_eval(input, expected):
assert eval(input) == expected
test_expectation.py ..x
The one parameter set which caused a failure previously now shows up as an “xfailed (expected to fail)” test.
Note: In versions prior to 2.4 one needed to specify the argument names as a tuple. This remains valid but the
simpler "name1,name2,..." comma-separated-string syntax is now advertised first because it’s easier to write
and produces less line noise.
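For illustration, the two spellings below are equivalent (a minimal sketch):
import pytest

# comma-separated string form (advertised since 2.4)
@pytest.mark.parametrize("input,expected", [("3+5", 8)])
def test_eval_string_form(input, expected):
    assert eval(input) == expected

# tuple-of-names form (pre-2.4 style, still valid)
@pytest.mark.parametrize(("input", "expected"), [("3+5", 8)])
def test_eval_tuple_form(input, expected):
    assert eval(input) == expected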
Sometimes you may want to implement your own parametrization scheme or implement some dynamism for deter-
mining the parameters or scope of a fixture. For this, you can use the pytest_generate_tests hook which
is called when collecting a test function. Through the passed in metafunc object you can inspect the requesting test
context and, most importantly, you can call metafunc.parametrize() to cause parametrization.
For example, let’s say we want to run a test taking string inputs which we want to set via a new pytest command
line option. Let’s first write a simple test accepting a stringinput fixture function argument:
# content of test_strings.py
def test_valid_string(stringinput):
assert stringinput.isalpha()
Now we add a conftest.py file containing the addition of a command line option and the parametrization of our
test function:
# content of conftest.py
def pytest_addoption(parser):
parser.addoption("--stringinput", action="append", default=[],
help="list of stringinputs to pass to test functions")
def pytest_generate_tests(metafunc):
if ’stringinput’ in metafunc.fixturenames:
metafunc.parametrize("stringinput",
metafunc.config.option.stringinput)
If we now pass two stringinput values, our test will run twice:
$ py.test -q --stringinput="hello" --stringinput="world" test_strings.py
..
2 passed in 0.01 seconds
Let’s also run with a stringinput that will lead to a failing test:
$ py.test -q --stringinput="!" test_strings.py
F
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________
stringinput = ’!’
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E assert <built-in method isalpha of str object at 0x2b869b32b148>()
E + where <built-in method isalpha of str object at 0x2b869b32b148> = ’!’.isalpha
test_strings.py:3: AssertionError
1 failed in 0.01 seconds
For further examples, you might want to look at more parametrization examples.
metafunc objects are passed to the pytest_generate_tests hook. They help to inspect a test function and to
generate tests according to test configuration or values specified in the class or module where a test function is defined:
metafunc.fixturenames: set of required function arguments for given function
metafunc.function: underlying python test function
metafunc.cls: class object where the test function is defined in or None.
metafunc.module: the module object where the test function is defined in.
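As a small, hypothetical sketch, a pytest_generate_tests implementation could combine these attributes with metafunc.parametrize(), for example reading parameter sets from a (made-up) db_params attribute on the test's class:
# content of conftest.py (sketch)
def pytest_generate_tests(metafunc):
    if "db" in metafunc.fixturenames:
        params = getattr(metafunc.cls, "db_params", ["sqlite"])
        metafunc.parametrize("db", params, scope="class")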
This section describes a classic and popular way to implement fixtures (setup and teardown test state) on a
per-module/class/function basis. pytest started supporting these methods around 2005 and subsequently nose and the
standard library introduced them (under slightly different names). While these setup/teardown methods are and will
remain fully supported you may also use pytest’s more powerful fixture mechanism which leverages the concept of
dependency injection, allowing for a more modular and more scalable approach for managing test state, especially for
larger projects and for functional testing. You can mix both fixture mechanisms in the same file but unittest-based test
methods cannot receive fixture arguments.
Note: As of pytest-2.4, teardownX functions are not called if setupX existed and failed/was skipped. This harmonizes
behaviour across all major python testing tools.
If you have multiple test functions and test classes in a single module you can optionally implement the following
fixture methods which will usually be called once for all the functions:
def setup_module(module):
""" setup any state specific to the execution of the given module."""
def teardown_module(module):
""" teardown any state that was previously setup with a setup_module
method.
"""
Similarly, the following methods are called at class level before and after all test methods of the class are called:
@classmethod
def setup_class(cls):
""" setup any state specific to the execution of the given class (which
usually contains tests).
"""
@classmethod
def teardown_class(cls):
""" teardown any state that was previously setup with a call to
setup_class.
"""
Similarly, the following methods are called around each method invocation:
def setup_method(self, method):
""" setup any state tied to the execution of the given method in a
class. setup_method is invoked for every test method of a class.
"""
If you would rather define test functions directly at module level you can also use the following functions to implement
fixtures:
def setup_function(function):
""" setup any state tied to the execution of the given function.
Invoked for every test function in the module.
"""
def teardown_function(function):
""" teardown any state that was previously setup with a setup_function
call.
"""
Note that it is possible for setup/teardown pairs to be invoked multiple times per testing process.
During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, the
corresponding captured output will usually be shown along with the failure traceback.
In addition, stdin is set to a “null” object which will fail on attempts to read from it because it is rarely desired to
wait for interactive input when running automated tests.
By default capturing is done by intercepting writes to low level file descriptors. This allows capturing output from
simple print statements as well as output from a subprocess started by a test.
One primary benefit of the default capturing of stdout/stderr output is that you can use print statements for debugging:
# content of test_module.py
def setup_function(function):
print ("setting up %s" % function)
def test_func1():
assert True
def test_func2():
assert False
and running this module will show you precisely the output of the failing function and hide the other one:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_module.py .F
def test_func2():
> assert False
E assert False
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x1ec25f0>
==================== 1 failed, 1 passed in 0.01 seconds ====================
The Fixtures as Function arguments mechanism gives test functions an easy way to access the captured output: simply
use the names capsys or capfd in the test function signature. Here is an example test function that performs
some output related checks:
import sys

def test_myoutput(capsys): # or use "capfd" for fd-level
    print ("hello")
    sys.stderr.write("world\n")
    out, err = capsys.readouterr()
    assert out == "hello\n"
    assert err == "world\n"
    print ("next")
    out, err = capsys.readouterr()
    assert out == "next\n"
The readouterr() call snapshots the output so far - and capturing will be continued. After the test function finishes
the original streams will be restored. Using capsys this way frees your test from having to care about setting/resetting
output streams and also interacts well with pytest’s own per-test capturing.
If you want to capture on fd level you can use the capfd function argument which offers the exact same interface.
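For example, because capfd intercepts the underlying file descriptors, output written by a child process is captured too (a minimal sketch):
import os

def test_system_echo(capfd):
    os.system('echo "hello"')          # the subprocess writes directly to fd 1
    out, err = capfd.readouterr()
    assert out == "hello\n"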
Sometimes tests need to invoke functionality which depends on global settings or which invokes code which cannot
be easily tested such as network access. The monkeypatch function argument helps you to safely set/delete an
attribute, dictionary item or environment variable or to modify sys.path for importing. See the monkeypatch blog
post for some introduction material and a discussion of its motivation.
If you want to pretend that os.path.expanduser returns a certain directory, you can use the
monkeypatch.setattr() method to patch this function before calling into a function which uses it:
# content of test_module.py
import os.path
def getssh(): # pseudo application code
return os.path.join(os.path.expanduser("~admin"), ’.ssh’)
def test_mytest(monkeypatch):
def mockreturn(path):
return ’/abc’
monkeypatch.setattr(os.path, ’expanduser’, mockreturn)
x = getssh()
assert x == ’/abc/.ssh’
Here our test function monkeypatches os.path.expanduser and then calls into a function that calls it. After the
test function finishes the os.path.expanduser modification will be undone.
If you want to prevent the “requests” library from performing http requests in all your tests, you can do:
# content of conftest.py
import pytest
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
monkeypatch.delattr("requests.session.Session.request")
This autouse fixture will be executed for each test function and it will delete the method
requests.session.Session.request so that any attempts within tests to create http requests will fail.
class monkeypatch
object keeping a record of setattr/item/env/syspath changes.
setattr(target, name, value=<notset>, raising=True)
set attribute value on target, memorizing the old value. By default raise AttributeError if the attribute did
not exist.
For convenience you can specify a string as target which will be interpreted as a dotted import path,
with the last part being the attribute name. Example: monkeypatch.setattr("os.getcwd",
lambda x: "/") would set the getcwd function of the os module.
The raising value determines if the setattr should fail if the attribute is not already present (defaults to
True which means it will raise).
delattr(target, name=<notset>, raising=True)
delete attribute name from target, by default raise AttributeError if the attribute did not previously exist.
If no name is specified and target is a string it will be interpreted as a dotted import path with the last
part being the attribute name.
If raising is set to false, the attribute is allowed to not pre-exist.
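Besides setattr/delattr, the monkeypatch object also offers helpers for the environment variables, dictionary items and sys.path mentioned earlier; a small sketch:
def test_other_monkeypatch_helpers(monkeypatch):
    monkeypatch.setenv("HOME", "/tmp/fakehome")          # restored after the test
    monkeypatch.delenv("SSH_AUTH_SOCK", raising=False)   # ok if the variable is absent
    config = {"retries": 3}
    monkeypatch.setitem(config, "retries", 0)            # change a dict entry for this test only
    monkeypatch.syspath_prepend("/opt/extra-libs")       # prepend a directory for imports
    assert config["retries"] == 0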
The pytest-xdist plugin extends pytest with some unique test execution modes:
• Looponfail: run your tests repeatedly in a subprocess. After each run, pytest waits until a file in your project
changes and then re-runs the previously failing tests. This is repeated until all tests pass. At this point a full run
is again performed.
• multiprocess Load-balancing: if you have multiple CPUs or hosts you can use them for a combined test run.
This allows you to speed up development or to use special resources of remote machines.
• Multi-Platform coverage: you can specify different Python interpreters or different platforms and run tests in
parallel on all of them.
Before running tests remotely, pytest efficiently “rsyncs” your program source code to the remote place. All test
results are reported back and displayed to your local terminal. You may specify different Python versions and inter-
preters.
Installation of the xdist plugin:
pip install pytest-xdist
# or
easy_install pytest-xdist
or use the package in develop/in-place mode with a checkout of the pytest-xdist repository:
python setup.py develop
To send tests to multiple CPUs, type:
py.test -n NUM
Especially for longer running tests or tests requiring a lot of I/O this can lead to considerable speed ups.
To instantiate a Python-2.4 subprocess and send tests to it, you may type:
py.test -d --tx popen//python=python2.4
This will start a subprocess which is run with the “python2.4” Python interpreter, found in your system binary lookup
path.
If you prefix the --tx option value like this:
py.test -d --tx 3*popen//python=python2.4
then three subprocesses would be created and the tests will be distributed to the three subprocesses and run simultaneously.
For refactoring a project with a medium or large test suite you can use the looponfailing mode. Simply add the -f
option:
py.test -f
and pytest will run your tests. Assuming you have failures it will then wait for file changes and re-run the failing
test set. File changes are detected by looking at looponfailroots root directories and all of their contents
(recursively). If the default for this value does not work for you, you can change it in your project by setting a
configuration option:
# content of a pytest.ini, setup.cfg or tox.ini file
[pytest]
looponfailroots = mypkg testdir
This would lead to only looking for file changes in the respective directories, specified relatively to the ini-file’s
directory.
Suppose you have a package mypkg which contains some tests that you can successfully run locally. And you also
have a ssh-reachable machine myhost. Then you can ad-hoc distribute your tests by typing:
py.test -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg
This will synchronize your mypkg package directory with a remote ssh account and then collect and run your tests at
the remote side.
You can specify multiple --rsyncdir directories to be sent to the remote side.
Download the single-module socketserver.py Python program and run it like this:
python socketserver.py
It will tell you that it starts listening on the default port. You can now on your home machine specify this new socket
host with something like this:
py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
If you specify a windows host, an OSX host and a Linux environment this command will send each test to all platforms
- and report back failures from all platforms at once. The specification strings use the xspec syntax.
pytest (since version 2.0) supports ini-style configuration. For example, you could make running with three subpro-
cesses your default:
[pytest]
addopts = -n3
In a tox.ini or setup.cfg file in your root project directory you may specify directories to include or to exclude
in synchronisation:
[pytest]
rsyncdirs = . mypkg helperpkg
rsyncignore = .hg
These directory specifications are relative to the directory where the configuration file was found.
You can use the tmpdir function argument which will provide a temporary directory unique to the test invocation,
created in the base temporary directory.
tmpdir is a py.path.local object which offers os.path methods and more. Here is an example test usage:
# content of test_tmpdir.py
import os
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
p.write("content")
assert p.read() == "content"
assert len(tmpdir.listdir()) == 1
assert 0
Running this would result in a passed test except for the last assert 0 line which we use to look at values:
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_tmpdir.py F
tmpdir = local(’/tmp/pytest-1009/test_create_file0’)
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
p.write("content")
assert p.read() == "content"
assert len(tmpdir.listdir()) == 1
> assert 0
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.01 seconds =========================
Temporary directories are by default created as sub-directories of the system temporary directory. The base name
will be pytest-NUM where NUM will be incremented with each test run. Moreover, entries older than 3 temporary
directories will be removed.
You can override the default temporary directory setting like this:
py.test --basetemp=mydir
When distributing tests on the local machine, pytest takes care to configure a basetemp directory for the sub pro-
cesses such that all temporary data lands below a single per-test run basetemp directory.
By using the pytest.mark helper you can easily set metadata on your test functions. There are some builtin
markers, for example:
• skipif - skip a test function if a certain condition is met
• xfail - produce an “expected failure” outcome if a certain condition is met
• parametrize to perform multiple calls to the same test function.
It’s easy to create custom markers or to apply markers to whole test classes or modules. See Working with custom
markers for examples which also serve as documentation.
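For example, a custom marker can be applied on the fly and then used to select tests on the command line (a small sketch):
import pytest

@pytest.mark.webtest           # markers are created on first use
def test_send_http():
    pass

def test_something_quick():
    pass

# select only marked tests:    py.test -m webtest
# deselect them:               py.test -m "not webtest"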
class MarkGenerator
Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance. Example:
import pytest
@pytest.mark.slowtest
def test_function():
pass
Note: The rules above prevent MarkDecorator objects from storing only a single function or class reference as
their positional argument with no additional keyword or positional arguments.
class MarkInfo(name, args, kwargs)
Marking object created by MarkDecorator instances.
name = None
name of attribute
args = None
positional argument list, empty if none specified
kwargs = None
keyword argument dictionary, empty if nothing specified
add(args, kwargs)
add a MarkInfo with the given args and kwargs.
2.13 Skip and xfail: dealing with tests that can not succeed
If you have test functions that cannot be run on certain platforms or that you expect to fail you can mark them
accordingly or you may call helper functions during execution of setup or test functions.
A skip means that you expect your test to pass unless the environment (e.g. wrong Python interpreter, missing de-
pendency) prevents it to run. And xfail means that your test can run but you expect it to fail because there is an
implementation problem.
pytest counts and lists skip and xfail tests separately. Detailed information about skipped/xfailed tests is not shown
by default to avoid cluttering the output. You can use the -r option to see details corresponding to the “short” letters
shown in the test progress:
py.test -rxs # show extra info on skips and xfails
New in version 2.0, improved in 2.4.
Here is an example of marking a test function to be skipped when run on a Python3.3 interpreter:
import pytest
import sys
@pytest.mark.skipif(sys.version_info >= (3,3),
reason="requires python3.3")
def test_function():
...
During test function setup the condition (“sys.version_info >= (3,3)”) is checked. If it evaluates to True, the test
function will be skipped with the specified reason. Note that pytest enforces specifying a reason in order to report
meaningful “skip reasons” (e.g. when using -rs). If the condition is a string, it will be evaluated as python expression.
You can share skipif markers between modules. Consider this test module:
# content of test_mymodule.py
import pytest
import mymodule
minversion = pytest.mark.skipif(mymodule.__versioninfo__ >= (1,1),
reason="at least mymodule-1.1 required")
@minversion
def test_function():
...
# test_myothermodule.py
from test_mymodule import minversion
@minversion
def test_anotherfunction():
...
For larger test suites it’s usually a good idea to have one file where you define the markers which you then consistently
apply throughout your test suite.
Alternatively, the pre pytest-2.4 way to specify condition strings instead of booleans will remain fully supported in
future versions of pytest. It couldn’t be easily used for importing markers between test modules so it’s no longer
advertised as the primary method.
As with all function marking you can skip test functions at the whole class- or module level. If your code targets
python2.6 or above you can use the skipif decorator (and any other marker) on classes:
@pytest.mark.skipif(sys.platform == ’win32’,
reason="requires windows")
class TestPosixCalls:
def test_function(self):
"will not be setup or run under ’win32’ platform"
If the condition is true, this marker will produce a skip result for each of the test methods.
If your code targets python2.5 where class-decorators are not available, you can set the pytestmark attribute of a
class:
class TestPosixCalls:
pytestmark = pytest.mark.skipif(sys.platform == ’win32’,
reason="requires Windows")
def test_function(self):
"will not be setup or run under ’win32’ platform"
As with the class-decorator, the pytestmark special name tells pytest to apply it to each test function in the class.
If you want to skip all test functions of a module, you must use the pytestmark name on the global level:
# test_module.py
pytestmark = pytest.mark.skipif(...)
If multiple “skipif” decorators are applied to a test function, it will be skipped if any of the skip conditions is true.
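For instance (a sketch), both conditions below are checked independently and the test is skipped if either one holds:
import sys
import pytest

@pytest.mark.skipif(sys.platform == 'win32', reason="requires a posix platform")
@pytest.mark.skipif(sys.version_info < (2, 6), reason="requires python2.6 or higher")
def test_function():
    pass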
You can use the xfail marker to indicate that you expect the test to fail:
@pytest.mark.xfail
def test_function():
...
This test will be run but no traceback will be reported when it fails. Instead terminal reporting will list it in the
“expected to fail” or “unexpectedly passing” sections.
By specifying the --runxfail option on the command line you can force the running and reporting of an xfail marked test as if it weren't marked at all.
As with skipif you can also mark your expectation of a failure on a particular platform:
@pytest.mark.xfail(sys.version_info >= (3,3),
reason="python3.3 api changes")
def test_function():
...
You can furthermore prevent the running of an “xfail” test or specify a reason such as a bug ID or similar. Here is a
simple test file with several usages:
import pytest
xfail = pytest.mark.xfail
@xfail
def test_hello():
assert 0
@xfail(run=False)
def test_hello2():
assert 0
@xfail("hasattr(os, ’sep’)")
def test_hello3():
assert 0
@xfail(reason="bug 110")
def test_hello4():
assert 0
@xfail(’pytest.__version__[0] != "17"’)
def test_hello5():
assert 0
def test_hello6():
pytest.xfail("reason")
Running it with the report-on-xfail option gives this output:
$ py.test -rx xfail_demo.py
xfail_demo.py xxxxxx
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, ’sep’)
XFAIL xfail_demo.py::test_hello4
bug 110
XFAIL xfail_demo.py::test_hello5
condition: pytest.__version__[0] != "17"
XFAIL xfail_demo.py::test_hello6
reason: reason
It is possible to apply markers like skip and xfail to individual test instances when using parametrize:
import pytest
@pytest.mark.parametrize(("n", "expected"), [
(1, 2),
pytest.mark.xfail((1, 0)),
pytest.mark.xfail(reason="some bug")((1, 3)),
(2, 3),
(3, 4),
(4, 5),
pytest.mark.skipif("sys.version_info >= (3,0)")((10, 11)),
])
def test_increment(n, expected):
assert n + 1 == expected
If you cannot declare xfail or skipif conditions at import time you can also produce an according outcome
imperatively, in test or setup code:
def test_function():
if not valid_config():
pytest.xfail("failing configuration (but should work)")
# or
pytest.skip("unsupported configuration")
You can use the following import helper at module level or within a test or test setup function:
docutils = pytest.importorskip("docutils")
If docutils cannot be imported here, this will lead to a skip outcome of the test. You can also skip based on the
version number of a library:
docutils = pytest.importorskip("docutils", minversion="0.3")
The version will be read from the specified module’s __version__ attribute.
Prior to pytest-2.4 the only way to specify skipif/xfail conditions was to use strings:
import pytest
import sys
@pytest.mark.skipif("sys.version_info >= (3,3)")
def test_function():
...
During test function setup the skipif condition is evaluated by calling eval(’sys.version_info >=
(3,3)’, namespace). The namespace contains all the module globals, and os and sys as a minimum.
Since pytest-2.4 condition booleans are considered preferable because markers can then be freely imported between
test modules. With strings you need to import not only the marker but all variables used by the marker,
which violates encapsulation.
The reason for specifying the condition as a string was that pytest can report a summary of skip conditions based
purely on the condition string. With conditions as booleans you are required to specify a reason string.
Note that string conditions will remain fully supported and you are free to use them if you have no need for cross-
importing markers.
The evaluation of a condition string in pytest.mark.skipif(conditionstring) or
pytest.mark.xfail(conditionstring) takes place in a namespace dictionary which is constructed
as follows:
• the namespace is initialized by putting the sys and os modules and the pytest config object into it.
• updated with the module globals of the test function for which the expression is applied.
The pytest config object allows you to skip based on a test configuration value which you might have added:
@pytest.mark.skipif("not config.getvalue(’db’)")
def test_function(...):
...
You can use the recwarn funcarg to assert that code triggers warnings through the Python warnings system. Here is
a simple self-contained test:
# content of test_recwarn.py
def test_hello(recwarn):
from warnings import warn
warn("hello", DeprecationWarning)
w = recwarn.pop(DeprecationWarning)
assert issubclass(w.category, DeprecationWarning)
assert ’hello’ in str(w.message)
assert w.filename
assert w.lineno
You can also call a global helper for checking that a certain function call triggers a DeprecationWarning:
import pytest
def test_global():
pytest.deprecated_call(myfunction, 17)
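Here myfunction stands for your own code; a hypothetical sketch of such a function could be:
import warnings

def myfunction(x):
    warnings.warn("myfunction is deprecated, use newfunction instead",
                  DeprecationWarning)
    return x + 1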
pytest has support for running Python unittest.py style tests. It’s meant for leveraging existing unittest-style
projects to use pytest features. Concretely, pytest will automatically collect unittest.TestCase subclasses
and their test methods in test files. It will invoke typical setup/teardown methods and generally try to make test
suites written to run on unittest also run using pytest. We assume here that you are familiar with writing
unittest.TestCase style tests and rather focus on integration aspects.
2.15.1 Usage
After installation, type:
py.test
and you should be able to run your unittest-style tests if they are contained in test_* modules. If that works for
you then you can make use of most pytest features, for example --pdb debugging in failures, using plain assert-
statements, more informative tracebacks, stdout-capturing or distributing tests to multiple CPUs via the -nNUM option
if you installed the pytest-xdist plugin. Please refer to the general pytest documentation for many more
examples.
Running your unittest with pytest allows you to use its fixture mechanism with unittest.TestCase style tests.
Assuming you have at least skimmed the pytest fixture features, let’s jump-start into an example that integrates a pytest
db_class fixture, setting up a class-cached database object, and then reference it from a unittest-style test:
# content of conftest.py
import pytest
@pytest.fixture(scope="class")
def db_class(request):
class DummyDB:
pass
# set a class attribute on the invoking test context
request.cls.db = DummyDB()
This defines a fixture function db_class which - if used - is called once for each test class and which sets the class-
level db attribute to a DummyDB instance. The fixture function achieves this by receiving a special request object
which gives access to the requesting test context such as the cls attribute, denoting the class from which the fixture is
used. This architecture de-couples fixture writing from actual test code and allows re-use of the fixture by a minimal
reference, the fixture name. So let’s write an actual unittest.TestCase class using our fixture definition:
# content of test_unittest_db.py
import unittest
import pytest
@pytest.mark.usefixtures("db_class")
class MyTest(unittest.TestCase):
def test_method1(self):
assert hasattr(self, "db")
assert 0, self.db # fail for demo purposes
def test_method2(self):
assert 0, self.db # fail for demo purposes
The @pytest.mark.usefixtures("db_class") class-decorator makes sure that the pytest fixture function
db_class is called once per class. Due to the deliberately failing assert statements, we can take a look at the
self.db values in the traceback:
$ py.test test_unittest_db.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_unittest_db.py FF
def test_method1(self):
assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x12124d0>
test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x12124d0>
test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.01 seconds =========================
This default pytest traceback shows that the two test methods share the same self.db instance which was our
intention when writing the class-scoped fixture function above.
Although it’s usually better to explicitly declare the fixtures you need for a given test, you may sometimes want to
have fixtures that are automatically used in a given context. After all, the traditional style of unittest-setup mandates
the use of this implicit fixture writing and chances are, you are used to it or like it.
You can flag fixture functions with @pytest.fixture(autouse=True) and define the fixture function in the
context where you want it used. Let’s look at an initdir fixture which makes all test methods of a TestCase
class execute in a temporary directory with a pre-initialized samplefile.ini. Our initdir fixture itself uses
the pytest builtin tmpdir fixture to delegate the creation of a per-test temporary directory:
# content of test_unittest_cleandir.py
import pytest
import unittest
class MyTest(unittest.TestCase):
@pytest.fixture(autouse=True)
def initdir(self, tmpdir):
tmpdir.chdir() # change to pytest-provided temporary directory
tmpdir.join("samplefile.ini").write("# testdata")
def test_method(self):
s = open("samplefile.ini").read()
assert "testdata" in s
Due to the autouse flag the initdir fixture function will be used for all methods of the class where it is de-
fined. This is a shortcut for using a @pytest.mark.usefixtures("initdir") marker on the class like in
the previous example.
Running this test module ...:
$ py.test -q test_unittest_cleandir.py
.
1 passed in 0.01 seconds
... gives us one passed test because the initdir fixture function was executed ahead of the test_method.
Note: While pytest supports receiving fixtures via test function arguments for non-unittest test methods,
unittest.TestCase methods cannot directly receive fixture function arguments as implementing that is likely to
interfere with the ability to run general unittest.TestCase test suites. Maybe optional support would be possible, though.
If unittest finally grows a plugin system that should help as well. In the meantime, the above usefixtures and
autouse examples should help to mix pytest fixtures into unittest suites. And of course you can also start to
selectively leave out the unittest.TestCase subclassing, use plain asserts and get the unlimited pytest feature
set.
pytest has basic support for running tests written for nose.
2.16.1 Usage
After installation, type:
python setup.py develop # make sure tests can import our package
py.test # instead of ’nosetests’
and you should be able to run your nose style tests and make use of pytest’s capabilities.
By default all files matching the test*.txt pattern will be run through the python standard doctest module. You
can change the pattern by issuing:
py.test --doctest-glob=’*.rst’
on the command line. You can also trigger running of doctests from docstrings in all python modules (including
regular python test modules):
py.test --doctest-modules
You can make these changes permanent in your project by putting them into a pytest.ini file like this:
# content of pytest.ini
[pytest]
addopts = --doctest-modules
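For illustration, a minimal module of the kind such a run would collect might look like this (a sketch matching the mymodule.py shown in the output below):
# content of mymodule.py
def something():
    """ a doctest in a docstring
    >>> something()
    42
    """
    return 42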
Given a module like the one sketched above (and optionally text files such as example.rst containing doctests),
you can just invoke py.test without command line options:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
mymodule.py .
Also, fixtures from classes, modules or projects and autouse fixtures (xUnit setup on steroids) are supported
when executing text doctest files.
pytest implements all aspects of configuration, collection, running and reporting by calling well specified hooks.
Virtually any Python module can be registered as a plugin. It can implement any number of hook functions (usually
two or three) which all have a pytest_ prefix, making hook functions easy to distinguish and find. There are three
basic location types:
• builtin plugins: loaded from pytest’s internal _pytest directory.
• external plugins: modules discovered through setuptools entry points
• conftest.py plugins: modules auto-discovered in test directories
local conftest.py plugins contain directory-specific hook implementations. Session and test running activities
will invoke all hooks defined in conftest.py files closer to the root of the filesystem. Example: Assume the
following layout and content of files:
a/conftest.py:
def pytest_runtest_setup(item):
# called for running each test in ’a’ directory
print ("setting up", item)
a/test_in_subdir.py:
def test_sub():
pass
test_flat.py:
def test_flat():
pass
Note: If you have conftest.py files which do not reside in a python package directory (i.e. one containing an
__init__.py) then “import conftest” can be ambiguous because there might be other conftest.py files as well
on your PYTHONPATH or sys.path. It is thus good practice for projects to either put conftest.py under a
package scope or to never import anything from a conftest.py file.
Installing a plugin happens through any usual Python installation tool, for example:
pip install pytest-NAME
pip uninstall pytest-NAME
If a plugin is installed, pytest automatically finds and integrates it; there is no need to activate it. We have a beta
page listing all 3rd party plugins and their status, and here is a little annotated list of some popular plugins:
• pytest-django: write tests for django apps, using pytest integration.
• pytest-twisted: write tests for twisted apps, starting a reactor and processing deferreds from test functions.
• pytest-capturelog: to capture and assert about messages from the logging module
• pytest-cov: coverage reporting, compatible with distributed testing
• pytest-xdist: to distribute tests to CPUs and remote hosts, to run in boxed mode which allows surviving seg-
mentation faults, to run in looponfailing mode, automatically re-running failing tests on file changes, see also
xdist: pytest distributed testing plugin
• pytest-instafail: to report failures while the test run is happening.
• pytest-bdd and pytest-konira to write tests using behaviour-driven testing.
• pytest-timeout: to timeout tests based on function marks or global definitions.
• pytest-cache: to interactively re-run failing tests and help other plugins to store test run information across
invocations.
• pytest-pep8: a --pep8 option to enable PEP8 compliance checking.
• oejskit: a plugin to run javascript unittests in live browsers
You may discover more plugins through a pytest- pypi.python.org search.
If you want to write a plugin, there are many real-life examples you can copy from:
• a custom collection example plugin: A basic example for specifying tests in Yaml files
• around 20 builtin plugins which provide pytest’s own functionality
• many external plugins providing additional features
All of these plugins implement the documented well specified hooks to extend and add functionality.
If you want to make your plugin externally available, you may define a so-called entry point for your distribution so
that pytest finds your plugin module. Entry points are a feature that is provided by setuptools or Distribute. pytest
looks up the pytest11 entrypoint to discover its plugins and you can thus make your plugin available by defining it
in your setuptools/distribute-based setup-invocation:
setup(
    name="myproject",
    packages = ['myproject'],
    # the following makes a plugin available to pytest
    entry_points = {
        'pytest11': [
            'name_of_plugin = myproject.pluginmodule',
        ]
    },
)
If a package is installed this way, pytest will load myproject.pluginmodule as a plugin which can define
well specified hooks.
You can require plugins in a test module or a conftest file like this:
pytest_plugins = "name1", "name2",
When the test module or conftest plugin is loaded the specified plugins will be loaded as well. You can also use dotted
path like this:
pytest_plugins = "myapp.testsupport.myplugin"
If a plugin wants to collaborate with code from another plugin it can obtain a reference through the plugin manager
like this:
plugin = config.pluginmanager.getplugin("name_of_plugin")
If you want to look at the names of existing plugins, use the --traceconfig option.
If you want to find out which plugins are active in your environment you can type:
py.test --traceconfig
and you will get an extended test header which shows activated plugins and their names. It will also print local plugins aka
conftest.py files when they are loaded.
You can also prevent a plugin from loading by using the -p no:NAME option. This means that any subsequent try to
activate/load the named plugin will not work. See Finding out which plugins are active for how to obtain the name of a plugin.
You can find the source code for the following plugins in the pytest repository.
pytest calls hook functions to implement initialization, running, test execution and reporting. When pytest loads
a plugin it validates that each hook function conforms to its respective hook specification. Each hook function name
and its argument names need to match a hook specification. However, a hook function may accept fewer parameters
by simply not specifying them. If you mistype argument names or the hook name itself you get an error showing the
available arguments.
pytest_cmdline_preparse(config, args)
(deprecated) modify command line arguments before option parsing.
pytest_cmdline_parse(pluginmanager, args)
return initialized config object, parsing the specified args.
pytest_namespace()
return dict of name->object to be made globally available in the pytest namespace. This hook is called before
command line options are parsed.
pytest_addoption(parser)
register argparse-style options and ini-style config values.
This function must be implemented in a plugin and is called once at the beginning of a test run.
Parameters parser – To add command line options, call parser.addoption(...). To add
ini-file values call parser.addini(...).
Options can later be accessed through the config object, respectively:
•config.getoption(name) to retrieve the value of a command line option.
•config.getini(name) to retrieve a value read from an ini-style file.
The config object is passed around on many internal objects via the .config attribute or can be retrieved as
the pytestconfig fixture or accessed via (deprecated) pytest.config.
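For example (a small sketch), a test can read such values through the pytestconfig fixture; here "cmdopt" is assumed to have been registered by your own pytest_addoption hook:
def test_show_options(pytestconfig):
    cmdopt = pytestconfig.getoption("cmdopt")      # assumed custom option
    verbosity = pytestconfig.getoption("verbose")  # built-in -v counter
    assert verbosity >= 0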
pytest_cmdline_main(config)
called for performing the main command line action. The default implementation will invoke the configure
hooks and runtest_mainloop.
pytest_configure(config)
called after command line options have been parsed and all plugins and initial conftest files been loaded.
pytest_unconfigure(config)
called before test process is exited.
pytest calls the following hooks for collecting files and directories:
pytest_ignore_collect(path, config)
return True to prevent considering this path for collection. This hook is consulted for all files and directories
prior to calling more specific hooks.
pytest_collect_directory(path, parent)
called before traversing a directory for collection files.
pytest_collect_file(path, parent)
return collection Node or None for the given path. Any new node needs to have the specified parent as a
parent.
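As a sketch of how this hook can be used, a conftest.py might turn custom files into collection nodes; YamlFile here is a hypothetical subclass of pytest.File implementing collect():
# content of conftest.py (sketch)
def pytest_collect_file(path, parent):
    if path.ext == ".yml" and path.basename.startswith("test"):
        return YamlFile(path, parent)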
For influencing the collection of objects in Python modules you can use the following hooks:
pytest_pycollect_makeitem(collector, name, obj)
return custom item/collector for a Python object in a module, or None.
pytest_generate_tests(metafunc)
generate (multiple) parametrized calls to a test function.
There are a few hooks which can be used for special reporting or interaction with exceptions:
pytest_internalerror(excrepr, excinfo)
called for internal errors.
pytest_keyboard_interrupt(excinfo)
called for keyboard interrupt.
pytest_exception_interact(node, call, report)
(experimental, new in 2.4) called when an exception was raised which can potentially be interactively handled.
This hook is only called if an exception was raised that is not an internal exception like “skip.Exception”.
class Config
access to configuration values, pluginmanager and plugin hooks.
option = None
access to command line option as attributes. (deprecated), use getoption() instead
pluginmanager = None
a pluginmanager instance
classmethod fromdictargs(option_dict, args)
constructor usable for subprocesses.
addinivalue_line(name, line)
add a line to an ini-file option. The option must have been declared but might not yet be set in which case
the line becomes the first line in its value.
getini(name)
return configuration value from an ini file. If the specified name hasn’t been registered through a prior
parser.addini call (usually from a plugin), a ValueError is raised.
getoption(name)
return command line option value.
Parameters name – name of the option. You may also specify the literal --OPT option instead
of the “dest” option name.
getvalue(name, path=None)
return command line option value.
Parameters name – name of the command line option
(deprecated) if we can’t find the option also lookup the name in a matching conftest file.
getvalueorskip(name, path=None)
(deprecated) return getvalue(name) or call pytest.skip if no value exists.
class Parser
Parser for command line arguments and ini-file values.
getgroup(name, description='', after=None)
get (or create) a named option Group.
Parameters name – name of the option group; description – long description for --help output.
get_marker(name)
get a marker object from this node or None if the node doesn’t have a marker with that name.
listextrakeywords()
Return a set of all extra keywords in self and any parents.
addfinalizer(fin)
register a function to be called when this node is finalized.
This method can only be called when this node is active in a setup chain, for example during self.setup().
getparent(cls)
get the next parent node (including ourself) which is an instance of the given class
class Collector
Bases: _pytest.main.Node
Collector instances create children through collect() and thus iteratively build a tree.
exception CollectError
Bases: exceptions.Exception
an error during collection, contains a custom message.
Collector.collect()
returns a list of children (items and collectors) for this collection node.
Collector.repr_failure(excinfo)
represent a collection failure.
class Item
Bases: _pytest.main.Node
a basic test invocation item. Note that for a single function there might be multiple test invocation items.
class Module
Bases: _pytest.main.File, _pytest.python.PyCollector
Collector for test classes and functions.
class Class
Bases: _pytest.python.PyCollector
Collector for test methods.
class Function
Bases: _pytest.python.FunctionMixin, _pytest.main.Item,
_pytest.python.FuncargnamesCompatAttr
a Function Item is responsible for setting up and executing a Python test function.
function
underlying python ‘function’ object
runtest()
execute the underlying test function.
class CallInfo
Result/Exception info of a function invocation.
when = None
context of invocation: one of “setup”, “call”, “teardown”, “memocollect”
excinfo = None
None or ExceptionInfo object.
class TestReport
Basic test report object (also used for setup and teardown calls if they fail).
nodeid = None
normalized collection node id
location = None
a (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be
different from the collected one e.g. if a method is inherited from a different module.
keywords = None
a name -> value dictionary containing all keywords and markers associated with a test invocation.
outcome = None
test outcome, always one of “passed”, “failed”, “skipped”.
longrepr = None
None or a failure representation.
when = None
one of ‘setup’, ‘call’, ‘teardown’ to indicate runtest phase.
sections = None
list of (secname, data) extra information which needs to be marshallable
duration = None
time it took to run just the test
Here is a (growing) list of examples. Contact us if you need more examples or have questions. Also take a look at
the comprehensive documentation which contains many example snippets as well. Also, pytest on stackoverflow.com
often comes with example answers.
For basic examples, see
• Installation and Getting Started for basic introductory examples
• Asserting with the assert statement for basic assertion examples
• pytest fixtures: explicit, modular, scalable for basic fixture/setup examples
• Parametrizing fixtures and test functions for basic test function parametrization
• Support for unittest.TestCase / Integration of fixtures for basic unittest integration
• Running tests written for nose for basic nosetests integration
The following examples aim at various use cases you might encounter.
Here is a nice run of several tens of failures and how pytest presents things (unfortunately not showing the nice
colors here in the HTML that you get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
param1 = 3, param2 = 6
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
def test_simple(self):
def f():
return 42
def g():
return 43
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
def test_simple_multiline(self):
otherfunc_multi(
42,
> 6*9)
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
def otherfunc_multi(a,b):
> assert (a ==
b)
E assert 42 == 54
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x296ac08>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
def test_eq_text(self):
> assert ’spam’ == ’eggs’
E assert ’spam’ == ’eggs’
E - spam
E + eggs
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
def test_eq_similar_text(self):
> assert ’foo 1 bar’ == ’foo 2 bar’
E assert ’foo 1 bar’ == ’foo 2 bar’
E - foo 1 bar
E ? ^
E + foo 2 bar
E ? ^
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
def test_eq_multiline_text(self):
> assert ’foo\nspam\nbar’ == ’foo\neggs\nbar’
E assert ’foo\nspam\nbar’ == ’foo\neggs\nbar’
E foo
E - spam
E + eggs
E bar
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
def test_eq_long_text(self):
a = ’1’*100 + ’a’ + ’2’*100
b = ’1’*100 + ’b’ + ’2’*100
> assert a == b
E assert ’111111111111...2222222222222’ == ’1111111111111...2222222222222’
E Skipping 90 identical leading characters in diff, use -v to show
E Skipping 91 identical trailing characters in diff, use -v to show
E - 1111111111a222222222
E ? ^
E + 1111111111b222222222
E ? ^
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
def test_eq_long_text_multiline(self):
a = ’1\n’*100 + ’a’ + ’2\n’*100
b = ’1\n’*100 + ’b’ + ’2\n’*100
> assert a == b
E assert ’1\n1\n1\n1\n...n2\n2\n2\n2\n’ == ’1\n1\n1\n1\n1...n2\n2\n2\n2\n’
E Skipping 190 identical leading characters in diff, use -v to show
E Skipping 191 identical trailing characters in diff, use -v to show
E 1
E 1
E 1
E 1
E 1
E - a2
E + b2
E 2
E 2
E 2
E 2
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
E assert [0, 1, 2] == [0, 1, 3]
E At index 2 diff: 2 != 3
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
b = [0]*100 + [2] + [3]*100
> assert a == b
E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E At index 100 diff: 1 != 2
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
def test_eq_dict(self):
> assert {’a’: 0, ’b’: 1, ’c’: 0} == {’a’: 0, ’b’: 2, ’d’: 0}
E assert {’a’: 0, ’b’: 1, ’c’: 0} == {’a’: 0, ’b’: 2, ’d’: 0}
E Omitting 1 identical items, use -v to show
E Differing items:
E {’b’: 1} != {’b’: 2}
E Left contains more items:
E {’c’: 0}
E Right contains more items:
E {’d’: 0}
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
E assert set([0, 10, 11, 12]) == set([0, 20, 21])
E Extra items in the left set:
E 10
E 11
E 12
E Extra items in the right set:
E 20
E 21
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
E assert [1, 2] == [1, 2, 3]
E Right contains more items, first extra item: 3
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
def test_not_in_text_multiline(self):
text = ’some multiline\ntext\nwhich\nincludes foo\nand a\ntail’
> assert ’foo’ not in text
E assert ’foo’ not in ’some multiline\ntext\nw...ncludes foo\nand a\ntail’
E ’foo’ is contained here:
E some multiline
E text
E which
E includes foo
E ? +++
E and a
E tail
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
def test_not_in_text_single(self):
text = ’single foo line’
> assert ’foo’ not in text
E assert ’foo’ not in ’single foo line’
E ’foo’ is contained here:
E single foo line
E ? +++
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
def test_not_in_text_single_long(self):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
def test_not_in_text_single_long_term(self):
text = ’head ’ * 50 + ’f’*70 + ’tail ’ * 20
> assert ’f’*70 not in text
E assert ’fffffffffff...ffffffffffff’ not in ’head head he...l tail tail ’
E ’ffffffffffffffffff...fffffffffffffffffff’ is contained here:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:94: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
class Foo(object):
b = 1
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x29c77d0>.b
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
class Foo(object):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x29e5f10>.b
E + where <failure_demo.Foo object at 0x29e5f10> = <class ’failure_demo.Foo’>()
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
class Foo(object):
def _get_b(self):
raise Exception(’Failed to get attrib’)
b = property(_get_b)
i = Foo()
> assert i.b == 2
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _get_b(self):
> raise Exception(’Failed to get attrib’)
E Exception: Failed to get attrib
failure_demo.py:113: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
class Foo(object):
b = 1
class Bar(object):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x29c3b10>.b
E + where <failure_demo.Foo object at 0x29c3b10> = <class ’failure_demo.Foo’>()
E + and 2 = <failure_demo.Bar object at 0x29c3350>.b
E + where <failure_demo.Bar object at 0x29c3350> = <class ’failure_demo.Bar’>()
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
def test_raises(self):
s = ’qwe’
> raises(TypeError, "int(s)")
failure_demo.py:133:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s)
E ValueError: invalid literal for int() with base 10: ’qwe’
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:999>:1:
______________________ TestRaises.test_raises_doesnt _______________________
def test_raises_doesnt(self):
> raises(IOError, "int(’3’)")
E Failed: DID NOT RAISE
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
def test_raise(self):
> raise ValueError("demo error")
E ValueError: demo error
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
def test_tupleerror(self):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
print ("l is %r" % l)
> a,b = l.pop()
E TypeError: ’int’ object is not iterable
failure_demo.py:147: TypeError
----------------------------- Captured stdout ------------------------------
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
def test_some_error(self):
> if namenotexi:
E NameError: global name ’namenotexi’ is not defined
failure_demo.py:150: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
src = ’def foo():\n assert 1 == 0\n’
name = ’abc-123’
module = py.std.imp.new_module(name)
code = py.code.compile(src, name, ’exec’)
py.builtin.exec_(code, module.__dict__)
py.std.sys.modules[name] = module
> module.foo()
failure_demo.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E assert 1 == 0
def test_complex_error(self):
def f():
return 44
def g():
return 43
> somefunc(f(), g())
failure_demo.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
x = 44, y = 43
def somefunc(x,y):
> otherfunc(x,y)
failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
def otherfunc(a,b):
> assert a==b
E assert 44 == 43
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
def test_z1_unpack_error(self):
l = []
> a,b = l
E ValueError: need more than 0 values to unpack
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
def test_z2_type_error(self):
l = 3
> a,b = l
E TypeError: ’int’ object is not iterable
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x29ea328>(’456’)
E + where <built-in method startswith of str object at 0x29ea328> = ’123’.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
def test_startswith_nested(self):
def f():
return "123"
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x29ea328>(’456’)
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x2aaf050>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
def test_compare(self):
> assert globf(10) < 5
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
def test_try_finally(self):
x = 1
try:
> assert x == 0
E assert 1 == 0
failure_demo.py:210: AssertionError
======================== 39 failed in 0.20 seconds =========================
8.2.1 Pass different values to a test function, depending on command line options
Suppose we want to write a test that depends on a command line option. Here is a basic pattern for achieving this:
# content of test_sample.py
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
elif cmdopt == "type2":
print ("second")
assert 0 # to see what was printed
For this to work we need to add a command line option and provide the cmdopt through a fixture function:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("--cmdopt", action="store", default="type1",
help="my option: type1 or type2")
@pytest.fixture
def cmdopt(request):
return request.config.getoption("--cmdopt")
Let's run this without supplying our new option:
$ py.test -q test_sample.py
cmdopt = 'type1'
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
elif cmdopt == "type2":
print ("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
first
1 failed in 0.01 seconds
And now when supplying a command line option:
$ py.test -q --cmdopt=type2
cmdopt = 'type2'
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
elif cmdopt == "type2":
print ("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
second
1 failed in 0.01 seconds
You can see that the command line option arrived in our test. This completes the basic pattern. However, you often want to process command line options outside of the test and instead pass in different or more complex objects.
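For instance, instead of returning the raw option string, the cmdopt fixture could hand the test a richer object built from it. A minimal sketch (the RunConfig class is purely illustrative and not part of pytest):
# content of conftest.py
import pytest

class RunConfig:
    """Illustrative container built from the command line option."""
    def __init__(self, opt):
        self.opt = opt
        self.label = "first" if opt == "type1" else "second"

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
                     help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    # build and return a richer object instead of the raw string
    return RunConfig(request.config.getoption("--cmdopt"))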
Through addopts you can statically add command line options for your project. You can also dynamically modify
the command line arguments before they get processed:
# content of conftest.py
import sys
def pytest_cmdline_preparse(args):
if ’xdist’ in sys.modules: # pytest-xdist plugin
import multiprocessing
num = max(multiprocessing.cpu_count() / 2, 1)
args[:] = ["-n", str(num)] + args
If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to your CPU count. Running in an empty directory with the above conftest.py:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 0 items
Here is a conftest.py file adding a --runslow command line option to control skipping of slow marked tests:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("--runslow", action="store_true",
help="run slow tests")
def pytest_runtest_setup(item):
if ’slow’ in item.keywords and not item.config.getoption("--runslow"):
pytest.skip("need --runslow option to run")
# content of test_module.py
import pytest
slow = pytest.mark.slow
def test_func_fast():
pass
@slow
def test_func_slow():
pass
Running it without the option skips the slow test:
$ py.test -rs    # "-rs" reports the skip reasons
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-70/conftest.py:9: need --runslow option to run
Or run it including the slow marked test:
$ py.test --runslow
test_module.py ..
If you have a test helper function called from a test, you can use pytest.fail to fail the test with a certain
message. The helper function will not show up in the traceback if you set the __tracebackhide__ option
somewhere in it. Example:
# content of test_checkconfig.py
import pytest
def checkconfig(x):
__tracebackhide__ = True
if not hasattr(x, "config"):
pytest.fail("not configured: %s" %(x,))
def test_something():
checkconfig(42)
The __tracebackhide__ setting influences how pytest shows tracebacks: the checkconfig function will
not be shown unless the --fulltrace command line option is specified. Let's run our little function:
$ py.test -q test_checkconfig.py
F
================================= FAILURES =================================
______________________________ test_something ______________________________
def test_something():
> checkconfig(42)
E Failed: not configured: 42
test_checkconfig.py:8: Failed
1 failed in 0.01 seconds
Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must
find out if your application code is running from a test you can do something like this:
# content of conftest.py
import sys

def pytest_configure(config):
    sys._called_from_test = True

def pytest_unconfigure(config):
    del sys._called_from_test
and then check for the sys._called_from_test flag in your application code where needed. It's also a good idea to use your own application module rather than sys for holding the flag.
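A minimal sketch of that advice, assuming a hypothetical myapp package with a config module that holds the flag:
# content of conftest.py ("myapp" is a hypothetical application package)
import myapp.config

def pytest_configure(config):
    myapp.config.called_from_test = True

def pytest_unconfigure(config):
    myapp.config.called_from_test = False

Application code can then check getattr(myapp.config, "called_from_test", False) without touching sys at all.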
It's easy to present extra information in a test run header using the pytest_report_header hook:
# content of conftest.py
def pytest_report_header(config):
    return "project deps: mylib-1.1"
You can also return a list of strings which will be considered as several lines of information. You can of course also
make the amount of reporting information depend on e.g. the value of config.option.verbose so that you present
more information appropriately:
# content of conftest.py
def pytest_report_header(config):
if config.option.verbose > 0:
return ["info1: did you know that ...", "did you?"]
If you have a slow running large test suite you might want to find out which tests are the slowest. Let's make an
artificial test suite:
# content of test_some_are_slow.py
import time
def test_funcfast():
pass
def test_funcslow1():
time.sleep(0.1)
def test_funcslow2():
time.sleep(0.2)
test_some_are_slow.py ...
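The slowest test phases can then be reported with pytest's --durations option, for example:
$ py.test --durations=3
which, for the module above, would be expected to list test_funcslow2 and test_funcslow1 at the top (exact timings will of course vary).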
Sometimes you may have a testing situation which consists of a series of test steps. If one step fails it makes no sense
to execute further steps as they are all expected to fail anyway and their tracebacks add no insight. Here is a simple
conftest.py file which introduces an incremental marker which is to be used on classes:
# content of conftest.py
import pytest
def pytest_runtest_setup(item):
if "incremental" in item.keywords:
previousfailed = getattr(item.parent, "_previousfailed", None)
if previousfailed is not None:
pytest.xfail("previous test failed (%s)" %previousfailed.name)
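The companion hook, which remembers a failing test on its class so that the setup hook above can see it, is not reproduced here; it looks roughly like this:
def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # remember the failing test item on the parent (the class)
            item.parent._previousfailed = item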
These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module
example:
# content of test_step.py
import pytest
@pytest.mark.incremental
class TestUserHandling:
def test_login(self):
pass
def test_modification(self):
assert 0
def test_deletion(self):
pass
def test_normal():
pass
If we run this:
$ py.test -rx
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 4 items
test_step.py .Fx.
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
We’ll see that test_deletion was not executed because test_modification failed. It is reported as an
“expected failure”.
If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a
conftest.py file in that directory. You can use all types of fixtures including autouse fixtures which are the equiv-
alent of xUnit's setup/teardown concept. It's however recommended to have explicit fixture references in your tests or
test classes rather than relying on implicitly executed setup/teardown functions, especially if they are far away from
the actual tests.
Here is an example for making a db fixture available in a directory:
# content of a/conftest.py
import pytest
class DB:
pass
@pytest.fixture(scope="session")
def db():
return DB()
and then a module in a sister directory which will not see the db fixture:
# content of b/test_error.py
def test_root(db): # no db here, will error out
pass
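The two test modules in the a directory are not reproduced here; judging from the failure output below they look essentially like this:
# content of a/test_db.py
def test_a1(db):
    assert 0, db  # to show value

# content of a/test_db2.py
def test_a2(db):
    assert 0, db  # to show value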
test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E
/tmp/doc-exec-70/b/test_error.py:1
================================= FAILURES =================================
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x23f9998>
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x23f9998>
a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
The two test modules in the a directory see the same db fixture instance while the one test in the sister directory b
doesn't see it. We could of course also define a db fixture in that sister directory's conftest.py file. Note that
each fixture is only instantiated if there is a test actually needing it (unless you use "autouse" fixtures which are always
executed ahead of the first test executing).
If you want to postprocess test reports and need access to the executing environment you can implement a hook that
gets called when the test "report" object is about to be created. Here we write out all failing test calls and also access
a fixture (if it was used by the test) in case you want to query/look at it during your postprocessing. In our case we
just write some information out to a failures file:
# content of conftest.py
import pytest
import os.path
@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
# execute all other hooks to obtain the report object
rep = __multicall__.execute()
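    # (the remainder of this hook is cut off above; the sketch below is
    #  consistent with the "failures" file contents shown further down)
    # only look at actual failing test calls, not setup/teardown failures
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode) as f:
            # also record the tmpdir fixture value if the test used it
            if "tmpdir" in item.fixturenames:
                extra = " (%s)" % item.funcargs["tmpdir"]
            else:
                extra = ""
            f.write(rep.nodeid + extra + "\n")
    return rep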
test_module.py FF
tmpdir = local(’/tmp/pytest-1012/test_fail10’)
def test_fail1(tmpdir):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:4: AssertionError
========================= 2 failed in 0.01 seconds =========================
you will have a “failures” file which contains the failing test ids:
$ cat failures
test_module.py::test_fail1 (/tmp/pytest-1012/test_fail10)
test_module.py::test_fail2
If you want to make test result reports available in fixture finalizers here is a little example implemented via a local
plugin:
# content of conftest.py
import pytest
@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
# execute all other hooks to obtain the report object
rep = __multicall__.execute()
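    # (cut off above: store the report for each call phase -- "setup", "call",
    #  "teardown" -- on the item so fixture finalizers can inspect it later
    #  as rep_setup / rep_call / rep_teardown)
    setattr(item, "rep_" + rep.when, rep)
    return rep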
@pytest.fixture
def something(request):
def fin():
# request.node is an "item" because we use the default
# "function" scope
if request.node.rep_setup.failed:
print "setting up a test failed!", request.node.nodeid
elif request.node.rep_setup.passed:
if request.node.rep_call.failed:
print "executing test failed", request.node.nodeid
request.addfinalizer(fin)
# content of test_module.py
import pytest
@pytest.fixture
def other():
assert 0
def test_call_fails(something):
assert 0
def test_fail2():
assert 0
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.01 seconds =====================
You’ll see that the fixture finalizers could use the precise reporting information.
pytest allows you to easily parametrize test functions. For basic docs, see Parametrizing fixtures and test functions.
In the following we provide some examples using the builtin mechanisms.
Let’s say we want to execute a test with different computation parameters and the parameter range shall be determined
by a command line argument. Let’s first write a simple (do-nothing) computation test:
# content of test_compute.py
def test_compute(param1):
assert param1 < 4
# content of conftest.py
def pytest_addoption(parser):
parser.addoption("--all", action="store_true",
help="run all combinations")
def pytest_generate_tests(metafunc):
if ’param1’ in metafunc.fixturenames:
if metafunc.config.option.all:
end = 5
else:
end = 2
metafunc.parametrize("param1", range(end))
We run only two computations by default, so we see two dots. Let's run the full monty:
$ py.test -q --all
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
param1 = 4
def test_compute(param1):
> assert param1 < 4
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.01 seconds
As expected when running the full range of param1 values we’ll get an error on the last one.
Here is a quick port to run tests configured with test scenarios, an add-on from Robert Collins for the
standard unittest framework. We only have to work a bit to construct the correct arguments for pytest’s
Metafunc.parametrize():
# content of test_scenarios.py
def pytest_generate_tests(metafunc):
idlist = []
argvalues = []
for scenario in metafunc.cls.scenarios:
idlist.append(scenario[0])
items = scenario[1].items()
argnames = [x[0] for x in items]
argvalues.append(([x[1] for x in items]))
metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
class TestSampleWithScenarios:
scenarios = [scenario1, scenario2]
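    # (the rest of the module is cut off above; the two test methods, judging
    #  from the collection output below -- "attribute" is an illustrative key)
    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)

# the scenario tuples would be defined near the top of the module, for example:
# scenario1 = ("basic", {"attribute": "value"})
# scenario2 = ("advanced", {"attribute": "value2"})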
test_scenarios.py ....
If you just collect tests you’ll also nicely see ‘advanced’ and ‘basic’ as variants for the test function:
$ py.test --collect-only test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 4 items
<Module ’test_scenarios.py’>
<Class ’TestSampleWithScenarios’>
<Instance ’()’>
<Function ’test_demo1[basic]’>
<Function ’test_demo2[basic]’>
<Function ’test_demo1[advanced]’>
<Function ’test_demo2[advanced]’>
Note that we told metafunc.parametrize() that your scenario values should be considered class-scoped. With
pytest-2.3 this leads to a resource-based ordering.
The parametrization of test functions happens at collection time. It is a good idea to set up expensive resources like
DB connections or subprocesses only when the actual test is run. Here is a simple example of how you can achieve that;
first the actual test, requiring a db object:
# content of test_backends.py
import pytest
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
pytest.fail("deliberately failing for demo purposes")
We can now add a test configuration that generates two invocations of the test_db_initialized function and
also implements a factory that creates a database object for the actual test invocations:
# content of conftest.py
import pytest
def pytest_generate_tests(metafunc):
if ’db’ in metafunc.fixturenames:
metafunc.parametrize("db", [’d1’, ’d2’], indirect=True)
class DB1:
"one database object"
class DB2:
"alternative database object"
@pytest.fixture
def db(request):
if request.param == "d1":
return DB1()
elif request.param == "d2":
return DB2()
else:
raise ValueError("invalid internal test config")
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
1 failed, 1 passed in 0.01 seconds
The first invocation with db == "DB1" passed while the second with db == "DB2" failed. Our db fixture func-
tion has instantiated each of the DB values during the setup phase while pytest_generate_tests generated
two corresponding calls to test_db_initialized during the collection phase.
Here is an example pytest_generate_tests function implementing a per-class parametrization scheme:
# content of test_parametrize.py
def pytest_generate_tests(metafunc):
# called once per each test function
funcarglist = metafunc.cls.params[metafunc.function.__name__]
argnames = list(funcarglist[0])
metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
for funcargs in funcarglist])
class TestClass:
# a map specifying multiple argument sets for a test method
params = {
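        # (cut off above; argument sets chosen to match the test ids and the
        #  pass/fail counts in the run below)
        "test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
        "test_zerodivision": [dict(a=1, b=0)],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        import pytest  # assumed available; a module-level import works as well
        with pytest.raises(ZeroDivisionError):
            a / b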
Our test generator looks up a class-level definition which specifies which argument sets to use for each test function.
Let’s run it:
$ py.test -q
F..
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.01 seconds
Here is a stripped down real-life example of using parametrized testing for testing serialization of objects between
different python interpreters. We define a test_basic_objects function which is to be run with different sets of
arguments for its three arguments:
• python1: first python interpreter, run to pickle-dump an object to a file
• python2: second interpreter, run to pickle-load an object from a file
• obj: object to be dumped/loaded
"""
module containing parametrized tests for cross-python
serialization via the pickle module.
"""
import py
import pytest
@pytest.fixture(params=pythonlist)
def python2(request, python1):
return Python(request.param, python1.picklefile)
class Python:
def __init__(self, version, picklefile):
self.pythonpath = py.path.local.sysfind(version)
if not self.pythonpath:
pytest.skip("%r not found" %(version,))
self.picklefile = picklefile
def dumps(self, obj):
dumpfile = self.picklefile.dirpath("dump.py")
dumpfile.write(py.code.Source("""
import pickle
f = open(%r, ’wb’)
s = pickle.dump(%r, f)
f.close()
""" % (str(self.picklefile), obj)))
py.process.cmdexec("%s %s" %(self.pythonpath, dumpfile))
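Not reproduced above are the interpreter list, the python1 fixture and the test function itself; a sketch of the missing pieces (the exact interpreter names and objects are illustrative, and load_and_is_true is a further helper on the Python class, not shown, which runs the second interpreter to unpickle the file and evaluate the comparison):
pythonlist = ["python2.5", "python2.6", "python2.7", "python2.8"]

@pytest.fixture(params=pythonlist)
def python1(request, tmpdir):
    picklefile = tmpdir.join("data.pickle")
    return Python(request.param, picklefile)

@pytest.mark.parametrize("obj", [42, {}, {1: 3}])
def test_basic_objects(python1, python2, obj):
    python1.dumps(obj)                           # dump with one interpreter
    python2.load_and_is_true("obj == %s" % obj)  # load and compare with the other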
Running it results in some skips if we don’t have all the python interpreters installed and otherwise runs all combina-
tions (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):
$ py.test -rs -q multipython.py
............sss............sss............sss............ssssssssssssssssss
========================= short test summary info ==========================
SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:22: ’python2.8’ not found
48 passed, 27 skipped in 1.30 seconds
If you want to compare the outcomes of several implementations of a given API, you can write test functions that
receive the already imported implementations and get skipped in case the implementation is not importable/available.
Let’s say we have a “base” implementation and the other (possibly optimized ones) need to provide similar results:
# content of conftest.py
import pytest
@pytest.fixture(scope="session")
def basemod(request):
return pytest.importorskip("base")
@pytest.fixture(scope="session", params=["opt1", "opt2"])
def optmod(request):
return pytest.importorskip(request.param)
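The example modules and the test function that go with these fixtures are not shown above; a sketch (the function bodies are illustrative):
# content of base.py
def func1():
    return 1

# content of opt1.py
def func1():
    return 1.0001

# content of test_module.py
def test_func1(basemod, optmod):
    assert round(basemod.func1(), 3) == round(optmod.func1(), 3)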
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-67/conftest.py:10: could not import ’opt2’
You'll see that we don't have an opt2 module and thus the second test run of our test_func1 was skipped. A few
notes:
• the fixture functions in the conftest.py file are “session-scoped” because we don’t need to import more
than once
• if you have multiple test functions and a skipped import, you will see the [1] count increasing in the report
• you can put @pytest.mark.parametrize style parametrization on the test functions to parametrize input/output
values as well.
Here are some examples using the Marking test functions with attributes mechanism.
You can “mark” a test function with custom metadata like this:
# content of test_server.py
import pytest
@pytest.mark.webtest
def test_send_http():
pass # perform some webtest test for your app
def test_something_quick():
pass
def test_another():
pass
New in version 2.2. You can then restrict a test run to only run tests marked with webtest:
$ py.test -v -m webtest
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/pyt
collecting ... collected 3 items
You can use the -k command line option to specify an expression which implements a substring match on the test
names instead of the exact match on markers that -m provides. This makes it easy to select tests based on their names:
$ py.test -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/pyt
collecting ... collected 3 items
And you can also run all tests except the ones that match the keyword:
$ py.test -k "not send_http" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/pyt
collecting ... collected 3 items
Note: If you are using expressions such as “X and Y” then both X and Y need to be simple non-keyword names. For
example, “pass” or “from” will result in SyntaxErrors because “-k” evaluates the expression.
However, if the “-k” argument is a simple string, no such restrictions apply. Also “-k ’not STRING’” has no restric-
tions. You can also specify numbers like “-k 1.3” to match tests which are parametrized with the float “1.3”.
New in version 2.2. Registering markers for your test suite is simple:
# content of pytest.ini
[pytest]
markers =
webtest: mark a test as a webtest.
You can ask which markers exist for your test suite - the list includes our just defined webtest marker:
$ py.test --markers
@pytest.mark.webtest: mark a test as a webtest.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True val
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failu
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to
For an example on how to add and work with markers from a plugin, see Custom marker and command line option to
control test runs.
• typos in function markers are treated as an error if you use the --strict option. Future versions of pytest
are probably going to start treating non-registered markers as errors at some point.
If you are programming with Python 2.6 or later you may use pytest.mark decorators with classes to apply markers
to all of their test methods:
# content of test_mark_classlevel.py
import pytest
@pytest.mark.webtest
class TestClass:
def test_startup(self):
pass
def test_startup_and_more(self):
pass
This is equivalent to directly applying the decorator to the two test functions.
To remain backward-compatible with Python 2.4 you can also set a pytestmark attribute on a TestClass like this:
import pytest
class TestClass:
pytestmark = pytest.mark.webtest
or, if you need to apply multiple markers, a list:
class TestClass:
    pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
You can also set a module-level marker:
import pytest
pytestmark = pytest.mark.webtest
in which case it will be applied to all functions and methods defined in the module.
When using parametrize, applying a mark will make it apply to each individual test. However it is also possible to
apply a marker to an individual test instance:
import pytest
@pytest.mark.foo
@pytest.mark.parametrize(("n", "expected"), [
(1, 2),
pytest.mark.bar((1, 3)),
(2, 3),
])
def test_increment(n, expected):
assert n + 1 == expected
In this example the mark “foo” will apply to each of the three tests, whereas the “bar” mark is only applied to the
second test. Skip and xfail marks can also be applied in this way, see Skip/xfail with parametrize.
8.4.6 Custom marker and command line option to control test runs
Plugins can provide custom markers and implement specific behaviour based on them. This is a self-contained example
which adds a command line option and a parametrized test function marker to run tests specified via named environ-
ments:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("-E", action="store", metavar="NAME",
help="only run tests matching the environment NAME.")
def pytest_configure(config):
# register an additional marker
config.addinivalue_line("markers",
"env(name): mark test to run only on named environment")
def pytest_runtest_setup(item):
envmarker = item.get_marker("env")
if envmarker is not None:
envname = envmarker.args[0]
if envname != item.config.getoption("-E"):
pytest.skip("test requires env %r" % envname)
A test file using this local plugin:
# content of test_someenv.py
import pytest
@pytest.mark.env("stage1")
def test_basic_db_operation():
pass
and an example invocation specifying a different environment than what the test needs:
$ py.test -E stage2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_someenv.py s
and here is one specifying exactly the environment needed:
$ py.test -E stage1
test_someenv.py .
The --markers option always gives you the list of available markers, now including our newly defined env marker:
$ py.test --markers
@pytest.mark.env(name): mark test to run only on named environment
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True val
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failu
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times
to a test function. From plugin code you can read over all such settings. Example:
# content of test_mark_three_times.py
import pytest
pytestmark = pytest.mark.glob("module", x=1)
@pytest.mark.glob("class", x=2)
class TestClass:
@pytest.mark.glob("function", x=3)
def test_something(self):
pass
Here we have the marker “glob” applied three times to the same test function. From a conftest file we can read it like
this:
# content of conftest.py
import sys
def pytest_runtest_setup(item):
g = item.get_marker("glob")
if g is not None:
for info in g:
print ("glob args=%s kwargs=%s" %(info.args, info.kwargs))
sys.stdout.flush()
Let’s run this without capturing output and see what we get:
$ py.test -q -s
glob args=(’function’,) kwargs={’x’: 3}
glob args=(’class’,) kwargs={’x’: 2}
glob args=(’module’,) kwargs={’x’: 1}
.
1 passed in 0.01 seconds
Consider you have a test suite which marks tests for particular platforms, namely pytest.mark.osx,
pytest.mark.win32 etc. and you also have tests that run on all platforms and have no specific marker. If you
now want to have a way to only run the tests for your particular platform, you could use the following plugin:
# content of conftest.py
#
import sys
import pytest

ALL = set("osx linux2 win32".split())

def pytest_runtest_setup(item):
    if isinstance(item, pytest.Function):
        plat = sys.platform
        if not item.get_marker(plat):
            if ALL.intersection(item.keywords):
                pytest.skip("cannot run on platform %s" % (plat))
then tests will be skipped if they were specified for a different platform. Let's write a little test file to show what this
looks like:
# content of test_plat.py
import pytest
@pytest.mark.osx
def test_if_apple_is_evil():
pass
@pytest.mark.linux2
def test_if_linux_works():
pass
@pytest.mark.win32
def test_if_win32_crashes():
pass
def test_runs_everywhere():
pass
then you will see two tests skipped and two tests executed as expected:
$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /tmp/doc-exec-65/conftest.py:12: cannot run on platform linux2
Note that if you specify a platform via the marker-command line option like this:
$ py.test -m linux2
=========================== test session starts ============================
test_plat.py .
then the unmarked tests will not be run. It is thus a way to restrict the run to the specific tests.
If you have a test suite where test function names indicate a certain type of test, you can implement a hook that automatically
defines markers so that you can use the -m option with it. Let's look at this test module:
# content of test_module.py
def test_interface_simple():
assert 0
def test_interface_complex():
assert 0
def test_event_simple():
assert 0
def test_something_else():
assert 0
We can dynamically define the two markers in a conftest.py plugin:
# content of conftest.py
import pytest
def pytest_collection_modifyitems(items):
for item in items:
if "interface" in item.nodeid:
item.add_marker(pytest.mark.interface)
elif "event" in item.nodeid:
item.add_marker(pytest.mark.event)
We can now use the -m option to select one set:
$ py.test -q -m interface --tb=short
test_module.py FF
================== 2 tests deselected by "-m ’interface’" ==================
================== 2 failed, 2 deselected in 0.01 seconds ==================
or to select both "event" and "interface" tests:
$ py.test -q -m "interface or event" --tb=short
test_module.py FFF
A session-scoped fixture effectively has access to all collected test items. Here is an example of a fixture function
which walks all collected tests and looks if their test class defines a callme method and calls it:
# content of conftest.py
import pytest
@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests(request):
print "callattr_ahead_of_alltests called"
seen = set([None])
session = request.node
for item in session.items:
cls = item.getparent(pytest.Class)
if cls not in seen:
if hasattr(cls.obj, "callme"):
cls.obj.callme()
seen.add(cls)
test classes may now define a callme method which will be called ahead of running any tests:
# content of test_module.py
class TestHello:
@classmethod
def callme(cls):
print "callme called!"
def test_method1(self):
print "test_method1 called"
def test_method2(self):
print "test_method2 called"
class TestOther:
@classmethod
def callme(cls):
print "callme other called"
def test_other(self):
print "test other"
import unittest
class SomeTest(unittest.TestCase):
@classmethod
def callme(self):
print "SomeTest callme called"
def test_unit1(self):
print "test_unit1 method called"
You can set the norecursedirs option in an ini-file, for example your setup.cfg in the project root directory:
# content of setup.cfg
[pytest]
norecursedirs = .svn _build tmp*
This would tell pytest to not recurse into typical subversion or sphinx-build directories or into any tmp prefixed
directory.
You can configure different naming conventions by setting the python_files, python_classes and
python_functions configuration options. Example:
# content of setup.cfg
# can also be defined in tox.ini or pytest.ini file
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=check
This would make pytest look for check_ prefixes in Python filenames, Check prefixes in classes and check
prefixes in functions and methods. For example, if we have:
# content of check_myapp.py
class CheckMyApp:
def check_simple(self):
pass
def check_complex(self):
pass
Note: the python_functions and python_classes options have no effect for unittest.TestCase test discovery
because pytest delegates detection of test case methods to unittest code.
You can use the --pyargs option to make pytest try interpreting arguments as python package names, deriving
their file system path and then running the test. For example if you have unittest2 installed you can type:
py.test --pyargs unittest2.test.test_skipping -q
which would run the respective test module. As with other options, you can make this change more permanent through
an ini-file and the addopts option:
# content of pytest.ini
[pytest]
addopts = --pyargs
Now a simple invocation of py.test NAME will check if NAME exists as an importable package/module and
otherwise treat it as a filesystem path.
You can always peek at the collection tree without running tests like this:
$ py.test --collect-only pythoncollection.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 3 items
<Module ’pythoncollection.py’>
<Function ’test_function’>
<Class ’TestClass’>
<Instance ’()’>
<Function ’test_method’>
<Function ’test_anothermethod’>
You can easily instruct pytest to discover tests from every python file:
# content of pytest.ini
[pytest]
python_files = *.py
However, many projects will have a setup.py which they don't want to be imported. Moreover, there may be files only
importable by a specific python version. For such cases you can dynamically define files to be ignored by listing them
in a conftest.py file:
# content of conftest.py
import sys
collect_ignore = ["setup.py"]
if sys.version_info[0] > 2:
collect_ignore.append("pkg/module_py2.py")
then a pytest run with a Python 2 interpreter will find the one test and will leave out the setup.py
file:
$ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
<Module ’pkg/module_py2.py’>
<Function ’test_only_on_python2’>
If you run with a Python 3 interpreter the module added through the conftest.py file will not be considered for test
collection.
Here is an example conftest.py (extracted from Ali Afshar's special purpose pytest-yamlwsgi plugin). This
conftest.py will collect test*.yml files and will execute the yaml-formatted content as custom tests:
# content of conftest.py
import pytest
class YamlFile(pytest.File):
def collect(self):
import yaml # we need a yaml parser, e.g. PyYAML
raw = yaml.safe_load(self.fspath.open())
for name, spec in raw.items():
yield YamlItem(name, self, spec)
class YamlItem(pytest.Item):
def __init__(self, name, parent, spec):
super(YamlItem, self).__init__(name, parent)
self.spec = spec
def runtest(self):
for name, value in self.spec.items():
# some custom test execution (dumb example follows)
if name != value:
raise YamlException(self, name, value)
def reportinfo(self):
return self.fspath, 0, "usecase: %s" % self.name
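    # (not shown above but referenced in the Note below: a custom failure
    #  representation for YamlException errors raised by runtest(); a sketch)
    def repr_failure(self, excinfo):
        """ called when self.runtest() raises an exception. """
        if isinstance(excinfo.value, YamlException):
            return "\n".join([
                "usecase execution failed",
                "   spec failed: %r: %r" % excinfo.value.args[1:3],
                "   no further details known at this point.",
            ])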
class YamlException(Exception):
""" custom exception for error reporting. """
# test_simple.yml
ok:
sub1: sub1
hello:
world: world
some: other
and if you installed PyYAML or a compatible YAML-parser you can now execute the test specification:
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_simple.yml F.
You get one dot for the passing sub1: sub1 check and one failure. Obviously in the above conftest.py you’ll
want to implement a more interesting interpretation of the yaml-values. You can easily write your own domain specific
testing language this way.
Note: repr_failure(excinfo) is called for representing test failures. If you create custom collection nodes
you can return an error representation string of your choice. It will be reported as a (red) string.
reportinfo() is used for representing the test location and is also consulted when reporting in verbose mode:
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/pyt
collecting ... collected 2 items
While developing your custom test collection and execution it’s also interesting to just look at the collection tree:
nonpython $ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
<YamlFile ’test_simple.yml’>
<YamlItem ’hello’>
<YamlItem ’ok’>
CONTRIBUTING
Contributions are highly welcomed and appreciated. Every little help counts, so do not hesitate!
Do you like pytest? Share some love on Twitter or in your blog posts!
We’d also like to hear about your propositions and suggestions. Feel free to submit them as issues and:
• Set the “kind” to “enhancement” or “proposal” so that we can quickly find out about them.
• Explain in detail how they should work.
• Keep the scope as narrow as possible. This will make it easier to implement.
• If you have the required skills and/or knowledge, we are very happy for pull requests.
Look through the BitBucket issues for bugs. Here is a sample filter you can use:
https://bitbucket.org/hpk42/pytest/issues?status=new&status=open&kind=bug
Talk to developers to find out how you can fix specific bugs.
Look through the BitBucket issues for enhancements. Here is a sample filter you can use:
https://bitbucket.org/hpk42/pytest/issues?status=new&status=open&kind=enhancement
Talk to developers to find out how you can implement specific features.
Note: What is a “pull request”? It informs the project's core developers about the changes you want to have reviewed and merged.
Pull requests are stored on BitBucket servers. Once you send a pull request, we can discuss its potential modifications
and even add more commits to it later on.
The primary development platform for pytest is BitBucket. You can find all the issues there and submit your pull
requests.
1. Fork the pytest BitBucket repository. It’s fine to use pytest as your fork repository name because it will live
under your user.
2. Create and activate a fork-specific virtualenv (http://www.virtualenv.org/en/latest/):
$ virtualenv pytest-venv
$ source pytest-venv/bin/activate
3. Clone your fork locally using Mercurial (hg) and create a branch:
$ hg clone ssh://[email protected]/YOUR_BITBUCKET_USERNAME/pytest
$ cd pytest
$ hg branch your-branch-name
If you need some help with Mercurial, follow this quick start guide:
http://mercurial.selenic.com/wiki/QuickStart
4. You can now edit your local working copy. To test you need to install the “tox” tool into your virtualenv:
$ pip install tox
You need to have Python 2.7 and 3.3 available in your system. Now running tests is as simple as issuing
this command:
$ python runtox.py -e py27,py33,flakes
This command will run tests via the “tox” tool against Python 2.7 and 3.3 and also perform “flakes”
coding-style checks. runtox.py is a thin wrapper around tox which installs from a development
package index where newer (not yet released to pypi) versions of dependencies (especially py) might be
present.
To run tests on py27 and pass options (e.g. enter pdb on failure) to pytest you can do:
$ python runtox.py -e py27 -- --pdb
5. Commit and push once your tests pass and you are happy with your change(s):
$ hg commit -m"<commit message>"
$ hg push -b .
6. Finally, make a pull request through the BitBucket website, filling in:
source: YOUR_BITBUCKET_USERNAME/pytest
branch: your-branch-name
target: hpk42/pytest
branch: default
There used to be a pytest GitHub mirror. It was removed in favor of the Mercurial repository, to avoid confusion about
where issues and pull requests should go, and because the mirroring process was not easy to automate.
However, it is still possible to use git to contribute to pytest with tools like gitifyhg, which lets you clone and
work with the Mercurial repository while still using git.
Warning: Remember that git is not the primary version control system for pytest, so you need to be careful when using
it.
Target audience: Reading this document requires basic knowledge of python testing, xUnit setup methods and the
(previous) basic pytest funcarg mechanism, see http://pytest.org/2.2.4/funcargs.html. If you are new to pytest, you
can simply ignore this section and read the other sections.
The pre pytest-2.3 funcarg mechanism calls a factory each time a funcarg for a test function is required. If a factory
wanted to re-use a resource across different scopes, it often used the request.cached_setup() helper to manage
caching of resources. Here is a basic example of how we could implement a per-session Database object:
# content of conftest.py
class Database:
def __init__(self):
print ("database instance created")
def destroy(self):
print ("database instance destroyed")
def pytest_funcarg__db(request):
    return request.cached_setup(setup=Database,
                                teardown=lambda db: db.destroy(),
                                scope="session")
The limitations of this approach are addressed by pytest-2.3 and its improved fixture mechanism.
Instead of calling cached_setup() with a cache scope, you can use the @pytest.fixture decorator and directly state the
scope:
@pytest.fixture(scope="session")
def db(request):
# factory will only be invoked once per session -
db = DataBase()
request.addfinalizer(db.destroy) # destroy when session is finished
return db
This factory implementation does not need to call cached_setup() anymore because it will only be invoked once
per session. Moreover, the request.addfinalizer() registers a finalizer according to the specified resource
scope on which the factory function is operating.
Previously, funcarg factories could not directly cause parametrization. You needed to specify a @parametrize
decorator on your test function or implement a pytest_generate_tests hook to perform parametrization, i.e.
calling a test multiple times with different value sets. pytest-2.3 introduces a decorator for use on the factory itself:
@pytest.fixture(params=["mysql", "pg"])
def db(request):
... # use request.param
Here the factory will be invoked twice (with the respective "mysql" and "pg" values set as request.param at-
tributes) and all of the tests requiring "db" will run twice as well. The "mysql" and "pg" values will also be used
for reporting the test-invocation variants.
This new way of parametrizing funcarg factories should in many cases allow you to re-use already written facto-
ries, because request.param was effectively already used when test functions/classes were parametrized via
metafunc.parametrize(..., indirect=True) calls.
Of course it’s perfectly fine to combine parametrization and scoping:
@pytest.fixture(scope="session", params=["mysql", "pg"])
def db(request):
if request.param == "mysql":
db = MySQL()
elif request.param == "pg":
db = PG()
request.addfinalizer(db.destroy) # destroy when session is finished
return db
This would execute all tests requiring the per-session “db” resource twice, receiving the values created by the two
respective invocations to the factory function.
When using the @fixture decorator the name of the function denotes the name under which the resource can be
accessed as a function argument:
@pytest.fixture()
def db(request):
...
The name under which the funcarg resource can be requested is db.
You can still use the “old” non-decorator way of specifying funcarg factories, i.e.:
def pytest_funcarg__db(request):
...
But it is then not possible to define scoping and parametrization. It is thus recommended to use the factory decorator.
pytest for a long time offered a pytest_configure and a pytest_sessionstart hook which are often used to set up global
resources. This approach suffers from several problems:
1. in distributed testing the master process would set up test resources that are never needed because it only co-
ordinates the test run activities of the slave processes.
2. if you only perform a collection (with --collect-only) resource-setup will still be executed.
3. If a pytest_sessionstart is contained in some subdirectory's conftest.py file, it will not be called. This stems
from the fact that this hook is actually used for reporting, in particular the test-header with platform/custom
information.
Moreover, it was not easy to define a scoped setup from plugins or conftest files other than implementing a
pytest_runtest_setup() hook and taking care of scoping/caching yourself. And it's virtually impossible to do
this with parametrization, as pytest_runtest_setup() is called during test execution while parametrization hap-
pens at collection time.
It follows that pytest_configure/session/runtest_setup are often not appropriate for implementing common fixture
needs. Therefore, pytest-2.3 introduces autouse fixtures (xUnit setup on steroids) which fully integrate with the generic
fixture mechanism and obsolete many prior uses of pytest hooks.
pytest-2.3 takes care to discover fixture/funcarg factories at collection time. This is more efficient especially for large
test suites. Moreover, a call to “py.test --collect-only” should in the future be able to show a lot of setup information
and thus presents a nice way to get an overview of fixture management in your project.
funcargs were originally introduced in pytest-2.0. In pytest-2.3 the mechanism was extended and refined and is now
described as fixtures:
• previously funcarg factories were specified with a special pytest_funcarg__NAME prefix instead of using
the @pytest.fixture decorator.
• Factories received a request object which managed caching through request.cached_setup() calls
and allowed using other funcargs via request.getfuncargvalue() calls. These intricate APIs made it
hard to do proper parametrization and implement resource caching. The new pytest.fixture() decorator
allows you to declare the scope and lets pytest figure things out for you.
• if you used parametrization and funcarg factories which made use of request.cached_setup() it is
recommended to invest a few minutes and simplify your fixture function code to use the Fixtures as Function
arguments decorator instead. This will also allow you to take advantage of the automatic per-resource grouping of
tests.
RELEASE ANNOUNCEMENTS
pytest is a mature Python testing tool with more than 1000 tests against itself, passing on many different interpreters
and platforms.
The 2.5.2 release fixes a few bugs with two maybe-bugs remaining and actively being worked on (and waiting for the
bug reporter’s input). We also have a new contribution guide thanks to Piotr Banaszkiewicz and others.
See docs at:
http://pytest.org
As usual, you can upgrade from pypi via:
pip install -U pytest
12.1.1 2.5.2
• fix issue409 – better interoperate with cx_freeze by not trying to import from collections.abc which causes
problems for py27/cx_freeze. Thanks Wolfgang L. for reporting and tracking it down.
• fixed docs and code to use “pytest” instead of “py.test” almost everywhere. Thanks Jurko Gospodnetic for the
complete PR.
• fix issue425: mention at end of “py.test -h” that –markers and –fixtures work according to specified test path (or
current dir)
• fix issue413: exceptions with unicode attributes are now printed correctly also on python2 and with pytest-xdist
runs. (the fix requires py-1.4.20)
• copy, cleanup and integrate py.io capture from pylib 1.4.20.dev2 (rev 13d9af95547e)
• address issue416: clarify docs as to conftest.py loading semantics
• fix issue429: comparing byte strings with non-ascii chars in assert expressions now work better. Thanks Floris
Bruynooghe.
• make capfd/capsys.capture private, it's unused and shouldn't be exposed
pytest is a mature Python testing tool with more than 1000 tests against itself, passing on many different interpreters
and platforms.
The 2.5.1 release maintains the “zero-reported-bugs” promise by fixing the three bugs reported since the last release a
few days ago. It also features a new home page styling implemented by Tobias Bieniek, based on the flask theme from
Armin Ronacher:
http://pytest.org
If you have anything more to improve styling and docs, we’d be very happy to merge further pull requests.
On the coding side, the release also contains a little enhancement to fixture decorators allowing to directly influence
generation of test ids, thanks to Floris Bruynooghe. Other thanks for helping with this release go to Anatoly Bubenkoff
and Ronny Pfannschmidt.
As usual, you can upgrade from pypi via:
pip install -U pytest
have fun and a nice remaining “bug-free” time of the year :) holger krekel
12.2.1 2.5.1
pytest-2.5.0 is a big fixing release, the result of two community bug fixing days plus numerous additional works from
many people and reporters. The release should be fully compatible to 2.4.2, existing plugins and test suites. We aim at
maintaining this level of ZERO reported bugs because it’s no fun if your testing tool has bugs, is it? Under a condition,
though: when submitting a bug report please provide clear information about the circumstances and a simple example
which reproduces the problem.
The issue tracker is of course not empty now. We have many remaining “enhancement” issues which we hopefully
can tackle in 2014 with your help.
For those who use older Python versions, please note that pytest is not automatically tested on python2.5 due to
virtualenv, setuptools and tox not supporting it anymore. Manual verification shows that it mostly works fine but it’s
not going to be part of the automated release process and thus likely to break in the future.
As usual, current docs are at
http://pytest.org
and you can upgrade from pypi via:
Particular thanks for helping with this release go to Anatoly Bubenkoff, Floris Bruynooghe, Marc Abramowitz, Ralph
Schmitt, Ronny Pfannschmidt, Donald Stufft, James Lan, Rob Dennis, Jason R. Coombs, Mathieu Agopian, Virgil
Dupras, Bruno Oliveira, Alex Gaynor and others.
have fun, holger krekel
12.3.1 2.5.0
• dropped python2.5 from automated release testing of pytest itself which means it’s probably going to break soon
(but still works with this release we believe).
• simplified and fixed implementation for calling finalizers when parametrized fixtures or function arguments are
involved. finalization is now performed lazily at setup time instead of in the “teardown phase”. While this
might sound odd at first, it helps to ensure that we are correctly handling setup/teardown even in complex code.
User-level code should not be affected unless it’s implementing the pytest_runtest_teardown hook and expecting
certain fixture instances are torn down within (very unlikely and would have been unreliable anyway).
• PR90: add --color=yes|no|auto option to force terminal coloring mode (“auto” is default). Thanks Marc
Abramowitz.
• fix issue319 - correctly show unicode in assertion errors. Many thanks to Floris Bruynooghe for the complete
PR. Also means we depend on py>=1.4.19 now.
• fix issue396 - correctly sort and finalize class-scoped parametrized tests independently from number of methods
on the class.
• refix issue323 in a better way – parametrization should now never cause Runtime Recursion errors because
the underlying algorithm for re-ordering tests per-scope/per-fixture is not recursive anymore (it was tail-call
recursive before which could lead to problems for more than >966 non-function scoped parameters).
• fix issue290 - there is preliminary support now for parametrizing with repeated same values (sometimes useful
to test if calling a second time works as with the first time).
• close issue240 - document precisely how pytest module importing works, discuss the two common test directory
layouts, and how it interacts with PEP420-namespace packages.
• fix issue246 fix finalizer order to be LIFO on independent fixtures depending on a parametrized higher-than-
function scoped fixture. (was quite some effort so please bear with the complexity of this sentence :) Thanks
Ralph Schmitt for the precise failure example.
• fix issue244 by implementing special index for parameters to only use indices for parametrized test ids
• fix issue287 by running all finalizers but saving the exception from the first failing finalizer and re-raising it
so teardown will still have failed. We reraise the first failing exception because it might be the cause for other
finalizers to fail.
• fix ordering when mock.patch or other standard decorator-wrappings are used with test methods. This fixes
issue346 and should help with random “xdist” collection failures. Thanks to Ronny Pfannschmidt and Donald
Stufft for helping to isolate it.
• fix issue357 - special case “-k” expressions to allow for filtering with simple strings that are not valid python
expressions. Examples: “-k 1.3” matches all tests parametrized with 1.3. “-k None” filters all tests that have
“None” in their name and conversely “-k ’not None’”. Previously these examples would raise syntax errors.
• fix issue384 by removing the trial support code since the unittest compat enhancements allow trial to handle it
on its own
• don’t hide an ImportError when importing a plugin produces one. fixes issue375.
• fix issue275 - allow usefixtures and autouse fixtures for running doctest text files.
• fix issue380 by making --resultlog only rely on longrepr instead of the “reprcrash” attribute which only exists
sometimes.
• address issue122: allow @pytest.fixture(params=iterator) by exploding into a list early on.
• fix pexpect-3.0 compatibility for pytest’s own tests. (fixes issue386)
• allow nested parametrize-value markers, thanks James Lan for the PR.
• fix unicode handling with new monkeypatch.setattr(import_path, value) API. Thanks Rob Dennis. Fixes is-
sue371.
• fix unicode handling with junitxml, fixes issue368.
• In assertion rewriting mode on Python 2, fix the detection of coding cookies. See issue #330.
• make “--runxfail” turn imperative pytest.xfail calls into no ops (it already did neutralize pytest.mark.xfail markers)
• refine pytest / pkg_resources interactions: The AssertionRewritingHook PEP302 compliant loader now registers
itself with setuptools/pkg_resources properly so that the pkg_resources.resource_stream method works properly.
Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.
• pytestconfig fixture is now session-scoped as it is the same object during the whole test run. Fixes issue370.
• avoid one surprising case of marker malfunction/confusion:
@pytest.mark.some(lambda arg: ...)
def test_function():
would not work correctly because pytest assumes @pytest.mark.some gets a function to be decorated already.
We now at least detect if this arg is an lambda and thus the example will work. Thanks Alex Gaynor for bringing
it up.
• xfail a test on pypy that checks wrong encoding/ascii (pypy does not error out). fixes issue385.
• internally make varnames() deal with classes' __init__, although it's not needed by pytest itself atm. Also fix
caching. Fixes issue376.
• fix issue221 - handle importing of namespace-package with no __init__.py properly.
• refactor internal FixtureRequest handling to avoid monkeypatching. One of the positive user-facing effects is
that the “request” object can now be used in closures.
• fixed version comparison in pytest.importorskip(modname, minverstring)
• fix issue377 by clarifying in the nose-compat docs that pytest does not duplicate the unittest-API into the “plain”
namespace.
• fix verbose reporting for @mock’d test functions
• avoid tmpdir fixture to create too long filenames especially when parametrization is used (issue354)
• fix pytest-pep8 and pytest-flakes / pytest interactions (collection names in mark plugin was assuming an item
always has a function which is not true for those plugins etc.) Thanks Andi Zeidler.
• introduce node.get_marker/node.add_marker API for plugins like pytest-pep8 and pytest-flakes to avoid the
messy details of the node.keywords pseudo-dicts. Adapted docs.
• remove attempt to “dup” stdout at startup as it’s icky. the normal capturing should catch enough possibilities of
tests messing up standard FDs.
• add pluginmanager.do_configure(config) as a link to config.do_configure() for plugin-compatibility
as usual, docs at http://pytest.org and upgrades via:
pip install -U pytest
pytest-2.4.1 is a quick follow up release to fix three regressions compared to 2.3.5 before they hit more people:
• When using parser.addoption() unicode arguments to the “type” keyword should also be converted to the re-
spective types. thanks Floris Bruynooghe, @dnozay. (fixes issue360 and issue362)
• fix dotted filename completion when using argcomplete, thanks Anthon van der Neut. (fixes issue361)
• fix regression when a 1-tuple (“arg”,) is used for specifying parametrization (the values of the parametrization
were passed nested in a tuple). Thanks Donald Stufft.
• also merge doc typo fixes, thanks Andy Dirnberger
as usual, docs at http://pytest.org and upgrades via:
pip install -U pytest
The just released pytest-2.4.0 brings many improvements and numerous bug fixes while remaining plugin- and test-
suite compatible apart from a few supposedly very minor incompatibilities. See below for a full list of details. A few
feature highlights:
• new yield-style fixtures via pytest.yield_fixture, allowing the use of existing with-style context managers in fixture functions (see the sketch after this list).
• improved pdb support: import pdb ; pdb.set_trace() now works without requiring prior disabling of stdout/stderr capturing. Also the --pdb option now works on collection and internal errors, and we introduced a new experimental hook for IDEs/plugins to intercept debugging: pytest_exception_interact(node, call, report).
• shorter monkeypatch variant to allow specifying an import path as a target, for example:
monkeypatch.setattr("requests.get", myfunc)
• better unittest/nose compatibility: all teardown methods are now only called if the corresponding setup method
succeeded.
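A minimal sketch of a yield-style fixture built around a with-style context manager; it uses the built-in tmpdir fixture, and the file name is arbitrary:
import pytest

@pytest.yield_fixture
def config_file(tmpdir):
    # setup runs up to the yield; leaving the with-block is the teardown
    with tmpdir.join("config.ini").open("w") as f:
        yield f

def test_write_config(config_file):
    config_file.write("[section]\n")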
Many thanks to all who helped, including Floris Bruynooghe, Brianna Laugher, Andreas Pelme, Anthon van
der Neut, Anatoly Bubenkoff, Vladimir Keleshev, Mathieu Agopian, Ronny Pfannschmidt, Christian Theunert
and many others.
may passing tests be with you,
holger krekel
known incompatibilities:
• if calling --genscript from python2.7 or above, you only get a standalone script which works on python2.7 or above. Use Python2.6 to also get a python2.5 compatible version.
• all xunit-style teardown methods (nose-style, pytest-style, unittest-style) will not be called if the corresponding
setup method failed, see issue322 below.
• the pytest_plugin_unregister hook wasn’t ever properly called and there is no known implementation of the hook
- so it got removed.
• pytest.fixture-decorated functions cannot be generators (i.e. use yield) anymore. This change might be reversed in 2.4.1 if it causes unforeseen real-life issues. However, you can always write and return an inner function/generator and change the fixture consumer to iterate over the returned generator. This change was made in favour of the new pytest.yield_fixture decorator, see below.
new features:
• experimentally introduce a new pytest.yield_fixture decorator which accepts exactly the same pa-
rameters as pytest.fixture but mandates a yield statement instead of a return statement from fixture
functions. This allows direct integration with “with-style” context managers in fixture functions and generally
avoids registering of finalization callbacks in favour of treating the “after-yield” as teardown code. Thanks
Andreas Pelme, Vladimir Keleshev, Floris Bruynooghe, Ronny Pfannschmidt and many others for discussions.
• allow boolean expression directly with skipif/xfail if a “reason” is also specified. Rework skipping documen-
tation to recommend “condition as booleans” because it prevents surprises when importing markers between
modules. Specifying conditions as strings will remain fully supported.
• reporting: color the last line red or green depending on whether failures/errors occurred or everything passed. Thanks Christian Theunert.
• make “import pdb ; pdb.set_trace()” work natively wrt capturing (no “-s” needed anymore), making
pytest.set_trace() a mere shortcut.
• fix issue181: --pdb now also works on collect errors (and on internal errors). This was implemented by a slight internal refactoring and the introduction of a new pytest_exception_interact hook (see next item).
• fix issue341: introduce new experimental hook for IDEs/terminals to intercept debugging:
pytest_exception_interact(node, call, report).
• new monkeypatch.setattr() variant to provide a shorter invocation for patching out classes/functions from mod-
ules:
monkeypatch.setattr("requests.get", myfunc)
will replace the “get” function of the “requests” module with myfunc.
• fix issue322: tearDownClass is not run if setUpClass failed. Thanks Mathieu Agopian for the initial fix. Also make all of pytest/nose finalizers mimic the same generic behaviour: if a setupX exists and fails, don't run teardownX. This internally introduces a new helper method "node.addfinalizer()" which can only be called during the setup phase of a node.
• simplify pytest.mark.parametrize() signature: allow passing a comma-separated string to specify argnames. For example: pytest.mark.parametrize("input,expected", [(1,2), (2,3)]) works as well as the previous: pytest.mark.parametrize(("input", "expected"), ...). (See the sketch after this list.)
• add support for setUpModule/tearDownModule detection, thanks Brian Okken.
• integrate tab-completion on options through use of “argcomplete”. Thanks Anthon van der Neut for the PR.
• change option names to be hyphen-separated long options but keep the old spelling backward compatible. py.test
-h will only show the hyphenated version, for example “–collect-only” but “–collectonly” will remain valid as
well (for backward-compat reasons). Many thanks to Anthon van der Neut for the implementation and to Hynek
Schlawack for pushing us.
• fix issue 308 - allow to mark/xfail/skip individual parameter sets when parametrizing. Thanks Brianna Laugher.
• call new experimental pytest_load_initial_conftests hook to allow 3rd party plugins to do something before a
conftest is loaded.
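As a small illustration of the simplified parametrize signature mentioned above, both spellings below produce the same two test invocations:
import pytest

@pytest.mark.parametrize("input,expected", [(1, 2), (2, 3)])
def test_increment(input, expected):
    assert input + 1 == expected

# equivalent to the older tuple spelling:
# @pytest.mark.parametrize(("input", "expected"), [(1, 2), (2, 3)])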
Bug fixes:
• fix issue358 - capturing options are now parsed more properly by using a new parser.parse_known_args method.
• pytest now uses argparse instead of optparse (thanks Anthon) which means that “argparse” is added as a depen-
dency if installing into python2.6 environments or below.
• fix issue333: fix a case of bad unittest/pytest hook interaction.
• PR27: correctly handle nose.SkipTest during collection. Thanks Antonio Cuni, Ronny Pfannschmidt.
• fix issue355: junitxml puts name="pytest" attribute to testsuite tag.
• fix issue336: autouse fixture in plugins should work again.
• fix issue279: improve object comparisons on assertion failure for standard datatypes and recognise collec-
tions.abc. Thanks to Brianna Laugher and Mathieu Agopian.
• fix issue317: assertion rewriter support for the is_package method
• fix issue335: document py.code.ExceptionInfo() object returned from pytest.raises(), thanks Mathieu Agopian.
• remove implicit distribute_setup support from setup.py.
• fix issue305: ignore any problems when writing pyc files.
• SO-17664702: call fixture finalizers even if the fixture function partially failed (finalizers would not always be
called before)
• fix issue320 - fix class scope for fixtures when mixed with module-level functions. Thanks Anatoly Bubenkoff.
• you can specify “-q” or “-qq” to get different levels of “quieter” reporting (thanks Katarzyna Jachim)
• fix issue300 - Fix order of conftest loading when starting py.test in a subdirectory.
• fix issue323 - sorting of many module-scoped arg parametrizations
• make sessionfinish hooks execute with the same cwd-context as at session start (helps fix the behaviour of plugins which write output files with relative paths, such as pytest-cov)
• fix issue316 - properly reference collection hooks in docs
• fix issue 306 - cleanup of -k/-m options to only match markers/test names/keywords respectively. Thanks Wouter
van Ackooy.
• improved doctest counting for doctests in python modules – files without any doctest items will not show up
anymore and doctest examples are counted as separate test items. thanks Danilo Bellini.
• fix issue245 by depending on the released py-1.4.14 which fixes py.io.dupfile to work with files with no mode.
Thanks Jason R. Coombs.
• fix junitxml generation when test output contains control characters, addressing issue267, thanks Jaap
Broekhuizen
• fix issue338: honor --tb style for setup/teardown errors as well. Thanks Maho.
• fix issue307 - use yaml.safe_load in example, thanks Mark Eichin.
• better parametrize error messages, thanks Brianna Laugher
• pytest_terminal_summary(terminalreporter) hooks can now use ".section(title)" and ".line(msg)" methods to print extra information at the end of a test run, as sketched below.
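A minimal conftest.py sketch of these summary helpers; the section title and message are arbitrary examples:
def pytest_terminal_summary(terminalreporter):
    terminalreporter.section("project notes")          # prints a separator line with this title
    terminalreporter.line("docs: http://pytest.org")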
pytest-2.3.5 is a maintenance release with many bug fixes and little improvements. See the changelog below for details.
No backward compatibility issues are foreseen and all plugins which worked with the prior version are expected to
work unmodified. Speaking of which, a few interesting new plugins saw the light last month:
• pytest-instafail: show failure information while tests are running
• pytest-qt: testing of GUI applications written with QT/Pyside
• pytest-xprocess: managing external processes across test runs
• pytest-random: randomize test ordering
And several others like pytest-django saw maintenance releases. For a more complete list, check out
https://pypi.python.org/pypi?%3Aaction=search&term=pytest&submit=search.
For general information see:
http://pytest.org/
To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
Particular thanks to Floris, Ronny, Benjamin and the many bug reporters and fix providers.
may the fixtures be with you, holger krekel
pytest-2.3.4 is a small stabilization release of the py.test tool which offers uebersimple assertions, scalable fixture
mechanisms and deep customization for testing with Python. This release comes with the following fixes and features:
• make the "-k" option accept an expression the same as with "-m" so that one can write: -k "name1 or name2" etc. (see the examples after this list). This is a slight usage incompatibility if you used special syntax like "TestClass.test_method", which you now need to write as -k "TestClass and test_method" to match a certain method in a certain test class.
• allow to dynamically define markers via item.keywords[...]=assignment integrating with “-m” option
• yielded test functions will now have autouse-fixtures active but cannot accept fixtures as funcargs
- it’s anyway recommended to rather use the post-2.0 parametrize features instead of yield, see:
http://pytest.org/latest/example/parametrize.html
• fix autouse issue where autouse fixtures would not be discovered if defined in an a/conftest.py file with tests in a/tests/test_some.py
• fix issue226 - LIFO ordering for fixture teardowns
• fix issue224 - invocations with >256 char arguments now work
• fix issue91 - add/discuss package/directory level setups in example
• fixes related to autouse discovery and calling
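For illustration, assuming a suite that contains tests named test_name1/test_name2 and a TestClass.test_method, the new expression syntax looks like:
py.test -k "name1 or name2"              # run tests whose names contain either word
py.test -k "TestClass and test_method"   # select one method of one test class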
Thanks in particular to Thomas Waldmann for spotting and reporting issues.
See
http://pytest.org/
for general information. To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
best, holger krekel
pytest-2.3.3 is another stabilization release of the py.test tool which offers uebersimple assertions, scalable fixture mechanisms and deep customization for testing with Python. Particularly, this release provides:
• integration fixes and improvements related to flask, numpy, nose, unittest, mock
• makes pytest work on py24 again (yes, people sometimes still need to use it)
• show *,** args in pytest tracebacks
Thanks to Manuel Jacob, Thomas Waldmann, Ronny Pfannschmidt, Pavel Repin and Andreas Taumoefolau for pro-
viding patches and all for the issues.
See
http://pytest.org/
for general information. To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
best, holger krekel
• fix issue214 - parse modules that contain special objects like e.g. flask's request object which blows up on getattr access if no request is active. Thanks Thomas Waldmann.
• fix issue213 - allow to parametrize with values like numpy arrays that do not support an __eq__ operator
• fix issue215 - split test_python.org into multiple files
• fix issue148 - @unittest.skip on classes is now recognized and avoids calling setUpClass/tearDownClass, thanks
Pavel Repin
• fix issue209 - reintroduce python2.4 support by depending on newer pylib which re-introduced statement-finding
for pre-AST interpreters
• nose support: only call setup if it's a callable, thanks Andrew Taumoefolau
• fix issue219 - add py2.4-3.3 classifiers to TROVE list
• in tracebacks, * and ** arg values are now shown next to normal arguments (thanks Manuel Jacob)
• fix issue217 - support mock.patch with pytest’s fixtures - note that you need either mock-1.0.1 or the python3.3
builtin unittest.mock.
• fix issue127 - improve documentation for pytest_addoption() and add a config.getoption(name) helper
function for consistency.
• fix issue208 and issue29 - use new py version to avoid long pauses when printing tracebacks in long modules
• fix issue205 - conftests in subdirs customizing pytest_pycollect_makemodule and pytest_pycollect_makeitem
now work properly
• fix teardown-ordering for parametrized setups
• fix issue127 - better documentation for pytest_addoption and related objects.
• fix unittest behaviour: TestCase.runtest only called if there are test methods defined
• improve trial support: don’t collect its empty unittest.TestCase.runTest() method
• “python setup.py test” now works with pytest itself
• fix/improve internal/packaging related bits:
– exception message check of test_nose.py now passes on python33 as well
• fix issue202 - fix regression: using “self” from fixture functions now works as expected (it’s the same “self”
instance that a test method which uses the fixture sees)
• skip pexpect using tests (test_pdb.py mostly) on freebsd* systems due to pexpect not supporting it properly
(hanging)
• link to web pages from --markers output which provides help for pytest.mark.* usage.
pytest-2.3 comes with many major improvements for fixture/funcarg management and parametrized testing in Python. It is now easier, more efficient and more predictable to re-run the same tests with different fixture instances. Also, you can directly declare the caching "scope" of fixtures so that dependent tests throughout your whole test suite can re-use database or other expensive fixture objects with ease. Lastly, it's possible for fixture functions (formerly known as funcarg factories) to use other fixtures, allowing for a completely modular and reusable fixture design.
For detailed info and tutorial-style examples, see:
http://pytest.org/latest/fixture.html
Moreover, there is now support for using pytest fixtures/funcargs with unittest-style suites, see here for examples:
http://pytest.org/latest/unittest.html
Besides, more unittest-test suites are now expected to “simply work” with pytest.
All changes are backward compatible and you should be able to continue to run your test suites and 3rd party plugins
that worked with pytest-2.2.4.
If you are interested in the precise reasoning (including examples) of the pytest-2.3 fixture evolution, please consult
http://pytest.org/latest/funcarg_compare.html
For general info on installation and getting started:
http://pytest.org/latest/getting-started.html
Docs and PDF access as usual at:
http://pytest.org
and more details for those already familiar with pytest can be found in the CHANGELOG below.
Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko, Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping to get the new features right and well integrated. Ronny and Floris also helped to fix a number of bugs and yet more people helped by providing bug reports.
have fun, holger krekel
• make request.keywords and node.keywords writable. All descendant collection nodes will see keyword values.
Keywords are dictionaries containing markers and other info.
• fix issue 178: xml binary escapes are now wrapped in py.xml.raw
• fix issue 176: correctly catch the builtin AssertionError even when we replaced AssertionError with a subclass
on the python level
• factory discovery no longer fails with magic global callables that provide no sane __code__ object (mock.call
for example)
• fix issue 182: testdir.inprocess_run now considers passed plugins
• fix issue 188: ensure sys.exc_info is clear on python2 before calling into a test
• fix issue 191: add unittest TestCase runTest method support
• fix issue 156: monkeypatch correctly handles class level descriptors
• reporting refinements:
– pytest_report_header now receives a "startdir" so that you can use startdir.bestrelpath(yourpath) to show a nice relative path (see the sketch after this list)
– allow plugins to implement both pytest_report_header and pytest_sessionstart (sessionstart is invoked
first).
– don’t show deselected reason line if there is none
– py.test -vv will show all assert comparisons instead of truncating
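A small conftest.py sketch of the startdir argument; the tests/data path is hypothetical:
def pytest_report_header(config, startdir):
    data_dir = startdir.join("tests", "data")          # hypothetical project path
    return "test data dir: " + startdir.bestrelpath(data_dir)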
pytest-2.2.4 is a minor backward-compatible release of the versatile py.test testing tool. It contains bug fixes and a few
refinements to junitxml reporting, better unittest- and python3 compatibility.
For general information see here:
http://pytest.org/
To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
Special thanks for helping on this release to Ronny Pfannschmidt and Benjamin Peterson and the contributors of issues.
best, holger krekel
• fix issue #143: call unconfigure/sessionfinish always when configure/sessionstart where called
• fix issue #144: better mangle test ids to junitxml classnames
• upgrade distribute_setup.py to 0.6.27
pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor backward-compatible release of the versatile py.test testing tool. It contains bug fixes and a few refinements particularly to reporting with "--collectonly", see below for details.
For general information see here:
http://pytest.org/
To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
Special thanks for helping on this release to Ronny Pfannschmidt and Ralf Schmitt and the contributors of issues.
best, holger krekel
• fix issue101: wrong args to unittest.TestCase test function now produce better output
• fix issue102: report more useful errors and hints for when a test directory was renamed and some
pyc/__pycache__ remain
• fix issue106: allow parametrize to be applied multiple times e.g. from module, class and at function level.
• fix issue107: actually perform session scope finalization
• don’t check in parametrize if indirect parameters are funcarg names
• add chdir method to monkeypatch funcarg (see the sketch after this list)
• fix crash resulting from calling monkeypatch undo a second time
• fix issue115: make --collectonly robust against early failure (missing files/directories)
• "-qq --collectonly" now shows only files and the number of tests in them
• "-q --collectonly" now shows test ids
• allow adding of attributes to test reports such that it also works with distributed testing (no upgrade of pytest-
xdist needed)
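A minimal sketch of the chdir helper together with the built-in tmpdir fixture; the change of working directory is undone automatically at teardown:
import os

def test_runs_in_tmpdir(tmpdir, monkeypatch):
    monkeypatch.chdir(tmpdir)
    assert os.path.samefile(os.getcwd(), str(tmpdir))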
pytest-2.2.1 is a minor backward-compatible release of the py.test testing tool. It contains bug fixes and little improvements, including documentation fixes. If you are using the distributed testing plugin, make sure to upgrade it to pytest-xdist-1.8.
For general information see here:
http://pytest.org/
• fix issue99 (in pytest and py): internal errors with resultlog now produce better output - fixed by normalizing pytest_internalerror input arguments.
• fix issue97 / traceback issues (in pytest and py) improve traceback output in conjunction with jinja2 and cython
which hack tracebacks
• fix issue93 (in pytest and pytest-xdist) avoid “delayed teardowns”: the final test in a test node will now run
its teardown directly instead of waiting for the end of the session. Thanks Dave Hunt for the good reporting
and feedback. The pytest_runtest_protocol as well as the pytest_runtest_teardown hooks now have “nextitem”
available which will be None indicating the end of the test run.
• fix collection crash due to unknown-source collected items, thanks to Ralf Schmitt (fixed by depending on a
more recent pylib)
pytest-2.2.0 is a test-suite compatible release of the popular py.test testing tool. Plugins might need upgrades. It comes
with these improvements:
• easier and more powerful parametrization of tests:
– new @pytest.mark.parametrize decorator to run tests with different arguments
– new metafunc.parametrize() API for parametrizing arguments independently (see the sketch after this list)
– see examples at http://pytest.org/latest/example/parametrize.html
– NOTE that parametrize() related APIs are still a bit experimental and might change in future releases.
• improved handling of test markers and refined marking mechanism:
– “-m markexpr” option for selecting tests according to their mark
– a new “markers” ini-variable for registering test markers for your project
– the new "--strict" option bails out with an error if using unregistered markers.
– see examples at http://pytest.org/latest/example/markers.html
• duration profiling: new "--durations=N" option showing the N slowest test execution or setup/teardown calls. This is most useful if you want to find out where your slowest test code is.
• also 2.2.0 performs more eager calling of teardown/finalizers functions resulting in better and more accurate
reporting when they fail
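A minimal conftest.py sketch of the metafunc.parametrize() API referenced above; the db_backend argument name and its two values are made up for illustration:
# conftest.py
def pytest_generate_tests(metafunc):
    if "db_backend" in metafunc.fixturenames:
        metafunc.parametrize("db_backend", ["sqlite", "postgres"])

# test_db.py
def test_connect(db_backend):
    assert db_backend in ("sqlite", "postgres")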
Besides there is the usual set of bug fixes along with a cleanup of pytest’s own test suite allowing it to run on a wider
range of environments.
For general information, see extensive docs with examples here:
http://pytest.org/
If you want to install or upgrade pytest you might just type:
pip install -U pytest # or
easy_install -U pytest
Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, Alfredo Deza and all who gave feedback or
sent bug reports.
best, holger krekel
While test suites should work unchanged you might need to upgrade plugins:
• You need a new version of the pytest-xdist plugin (1.7) for distributing test runs.
• Other plugins might need an upgrade if they implement the pytest_runtest_logreport hook which
now is called unconditionally for the setup/teardown fixture phases of a test. You may choose to ignore
setup/teardown failures by inserting “if rep.when != ‘call’: return” or something similar. Note that most code
probably “just” works because the hook was already called for failing setup/teardown phases of a test so a plugin
should have been ready to grok such reports already.
• fix issue90: introduce eager tearing down of test items so that teardown function are called earlier.
• add an all-powerful metafunc.parametrize function which allows to parametrize test function arguments in mul-
tiple steps and therefore from independent plugins and places.
• add a @pytest.mark.parametrize helper which allows to easily call a test function with different argument values.
• Add examples to the “parametrize” example page, including a quick port of Test scenarios and the new
parametrize function and decorator.
• introduce registration for "pytest.mark.*" helpers via ini-files or through plugin hooks. Also introduce a "--strict" option which will treat unregistered markers as errors, allowing to avoid typos and maintain a well described set of markers for your test suite. See examples at http://pytest.org/latest/mark.html and its links.
• issue50: introduce "-m marker" option to select tests based on markers (this is a stricter and more predictable version of "-k" in that "-m" only matches complete markers and has more obvious rules for and/or semantics).
• new feature to help optimize the speed of your tests: --durations=N option for displaying the N slowest test calls and setup/teardown methods.
• fix issue87: --pastebin now works with python3
• fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
• fix and cleanup pytest’s own test suite to not leak FDs
• fix issue83: link to generated funcarg list
• fix issue74: pyarg module names are now checked against imp.find_module false positives
• fix compatibility with twisted/trial-11.1.0 use cases
pytest-2.1.3 is a minor backward compatible maintenance release of the popular py.test testing tool. It is commonly
used for unit, functional- and integration testing. See extensive docs with examples here:
http://pytest.org/
The release contains another fix to the perfected assertions introduced with the 2.1 series as well as the new possibility
to customize reporting for assertion expressions on a per-directory level.
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
Thanks to the bug reporters and to Ronny Pfannschmidt, Benjamin Peterson and Floris Bruynooghe who implemented
the fixes.
best, holger krekel
pytest-2.1.2 is a minor backward compatible maintenance release of the popular py.test testing tool. pytest is commonly
used for unit, functional- and integration testing. See extensive docs with examples here:
http://pytest.org/
Most bug fixes address remaining issues with the perfected assertions introduced in the 2.1 series - many thanks to the
bug reporters and to Benjamin Peterson for helping to fix them. pytest should also work better with Jython-2.5.1 (and
Jython trunk).
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
• fix assertion rewriting on files with windows newlines on some Python versions
• refine test discovery by package/module name (--pyargs), thanks Florian Mayer
• fix issue69 / assertion rewriting fixed on some boolean operations
pytest-2.1.1 is a backward compatible maintenance release of the popular py.test testing tool. See extensive docs with
examples here:
http://pytest.org/
Most bug fixes address remaining issues with the perfected assertions introduced with 2.1.0 - many thanks to the bug
reporters and to Benjamin Peterson for helping to fix them. Also, junitxml output now produces system-out/err tags
which lead to better displays of tracebacks with Jenkins.
Also a quick note to package maintainers and others interested: there now is a “pytest” man page which can be
generated with “make man” in doc/.
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
Welcome to the release of pytest-2.1, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest
PyPy interpreters. See the improved extensive docs (now also as PDF!) with tested examples here:
http://pytest.org/
The single biggest news about this release are perfected assertions courtesy of Benjamin Peterson. You can now
safely use assert statements in test modules without having to worry about side effects or python optimization
(“-OO”) options. This is achieved by rewriting assert statements in test modules upon import, using a PEP302 hook.
See http://pytest.org/assert.html#advanced-assertion-introspection for detailed information. The work has been partly
sponsored by my company, merlinux GmbH.
For further details on bug fixes and smaller enhancements see below.
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
Welcome to pytest-2.0.3, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting
CPython 2.4-3.2, Jython and latest PyPy interpreters. See the extensive docs with tested examples here:
http://pytest.org/
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
There also is a bugfix release 1.6 of pytest-xdist, the plugin that enables seamless distributed and "looponfail" testing for Python.
best, holger krekel
• fix issue38: nicer tracebacks on calls to hooks, particularly early configure/sessionstart ones
• fix missing skip reason/meta information in junitxml files, reported via http://lists.idyll.org/pipermail/testing-in-
python/2011-March/003928.html
• fix issue34: avoid collection failure with “test” prefixed classes deriving from object.
• don't require zlib (and other libs) for the genscript plugin unless --genscript is actually used.
• speed up skips (by not doing a full traceback representation internally)
• fix issue37: avoid invalid characters in junitxml’s output
Welcome to pytest-2.0.2, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting
CPython 2.4-3.2, Jython and latest PyPy interpreters. See the extensive docs with tested examples here:
http://pytest.org/
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
Many thanks to all issue reporters and people asking questions or complaining, particularly Jurko for his insistence, Laura, Victor and Brianna for helping with improvements and Ronny for his general advice.
best, holger krekel
• tackle issue32 - speed up test runs of very quick test functions by reducing the relative overhead
• fix issue30 - extended xfail/skipif handling and improved reporting. If you have a syntax error in your skip/xfail
expressions you now get nice error reports.
Also you can now access module globals from xfail/skipif expressions so that this for example works now:
import pytest
import mymodule
@pytest.mark.skipif("mymodule.__version__[0] != '1'")
def test_function():
    pass
This will not run the test function if the module's version string does not start with a "1". Note that specifying a string instead of a boolean expression allows py.test to report meaningful information when summarizing a test run as to what conditions led to skipping (or xfail-ing) tests.
• fix issue28 - setup_method and pytest_generate_tests work together The setup_method fixture method now gets
called also for test function invocations generated from the pytest_generate_tests hook.
• fix issue27 - collectonly and keyword-selection (-k) now work together. Also, if you do "py.test --collectonly -q" you now get a flat list of test ids that you can paste to the py.test command line in order to execute a particular test.
• fix issue25 avoid reported problems with --pdb and python3.2/encodings output
• fix issue23 - tmpdir argument now works on Python3.2 and WindowsXP Starting with Python3.2 os.symlink
may be supported. By requiring a newer py lib version the py.path.local() implementation acknowledges this.
• fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular thanks to Laura Creighton who also reviewed parts of the documentation.
• fix slightly wrong output of verbose progress reporting for classes (thanks Amaury)
• more precise (avoiding of) deprecation warnings for node.Class|Function accesses
• avoid std unittest assertion helper code in tracebacks (thanks Ronny)
Welcome to pytest-2.0.1, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting
CPython 2.4-3.2, Jython and latest PyPy interpreters. See extensive docs with tested examples here:
http://pytest.org/
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or
easy_install -U pytest
Many thanks to all issue reporters and people asking questions or complaining. Particular thanks to Floris Bruynooghe
and Ronny Pfannschmidt for their great coding contributions and many others for feedback and help.
best, holger krekel
• refine and unify initial capturing so that it works nicely even if the logging module is used on an early-loaded
conftest.py file or plugin.
• fix issue12 - show plugin versions with "--version" and "--traceconfig" and also document how to add extra information to the reporting test header
• fix issue17 (import-* reporting issue on python3) by requiring py>1.4.0 (1.4.1 is going to include it)
• fix issue10 (numpy arrays truth checking) by refining assertion interpretation in py lib
• fix issue15: make nose compatibility tests compatible with python3 (now that nose-1.0 supports python3)
• remove somewhat surprising “same-conftest” detection because it ignores conftest.py when they appear in sev-
eral subdirs.
• improve assertions (“not in”), thanks Floris Bruynooghe
• improve behaviour/warnings when running on top of “python -OO” (assertions and docstrings are turned off,
leading to potential false positives)
• introduce a pytest_cmdline_processargs(args) hook to allow dynamic computation of command line arguments. This fixes a regression because py.test prior to 2.0 allowed setting command line options from conftest.py files, which pytest-2.0 so far only allowed from ini-files.
• fix issue7: assert failures in doctest modules. Unexpected failures in doctests will now generally show nicer, i.e. within the doctest failing context.
• fix issue9: setup/teardown functions for an xfail-marked test will report as xfail if they fail but report as nor-
mally passing (not xpassing) if they succeed. This only is true for “direct” setup/teardown invocations because
teardown_class/ teardown_module cannot closely relate to a single test.
• fix issue14: no logging errors at process exit
• refinements to “collecting” output on non-ttys
• refine internal plugin registration and --traceconfig output
• introduce a mechanism to prevent/unregister plugins from the command line, see
http://pytest.org/latest/plugins.html#cmdunregister
• activate resultlog plugin by default
• fix regression wrt yielded tests which due to the collection-before-running semantics were not setup as with
pytest 1.3.4. Note, however, that the recommended and much cleaner way to do test parametrization remains
the “pytest_generate_tests” mechanism, see the docs.
Welcome to pytest-2.0.0, a major new release of “py.test”, the rapid easy Python testing tool. There are many new
features and enhancements, see below for summary and detailed lists. A lot of long-deprecated code has been removed,
resulting in a much smaller and cleaner implementation. See the new docs with examples here:
http://pytest.org/2.0.0/index.html
A note on packaging: pytest used to be part of the "py" distribution up until version py-1.3.4 but this has changed now: pytest-2.0.0 only contains py.test related code and is expected to be backward-compatible to existing test code. If you want to install pytest, just type one of:
pip install -U pytest
easy_install -U pytest
Many thanks to all issue reporters and people asking questions or complaining. Particular thanks to Floris Bruynooghe
and Ronny Pfannschmidt for their great coding contributions and many others for feedback and help.
best, holger krekel
[pytest]
norecursedirs = .hg data* # don’t ever recurse in such dirs
addopts = -x --pyargs # add these command line options by default
see http://pytest.org/2.0.0/customize.html
• improved standard unittest support. In general py.test should now be better able to run custom unittest.TestCases like twisted trial or Django based TestCases. Also you can now run the tests of an installed 'unittest' package with py.test:
py.test --pyargs unittest
• new “-q” option which decreases verbosity and prints a more nose/unittest-style “dot” output.
• many, many more detailed improvements
12.24.2 Fixes
• fix issue126 - introduce py.test.set_trace() to trace execution via PDB during the running of tests even if capturing
is ongoing.
• fix issue124 - make reporting more resilient against tests opening files on filedescriptor 1 (stdout).
• fix issue109 - sibling conftest.py files will not be loaded. (and Directory collectors cannot be customized any-
more from a Directory’s conftest.py - this needs to happen at least one level up).
• fix issue88 (finding custom test nodes from command line arg)
• fix issue93 stdout/stderr is captured while importing conftest.py
• fix bug: unittest collected functions now also can have “pytestmark” applied at class/module level
• The usual way in pre-2.0 times to use py.test in python code was to import “py” and then e.g. use “py.test.raises”
for the helper. This remains valid and is not planned to be deprecated. However, in most examples and internal
code you’ll find “import pytest” and “pytest.raises” used as the recommended default way.
• pytest now first performs collection of the complete test suite before running any test. This changes for example
the semantics of when pytest_collectstart/pytest_collectreport are called. Some plugins may need upgrading.
• The pytest package consists of a 400 LOC core.py and about 20 builtin plugins, summing up to roughly 5000
LOCs, including docstrings. To be fair, it also uses generic code from the “pylib”, and the new “py” package to
help with filesystem and introspection/code manipulation.
– removed the “disabled” attribute in test classes. Use the skipping and pytestmark mechanism to skip or
xfail a test class.
• py.test.collect.Directory does not exist anymore and it is not possible to provide an own “Directory” object. If
you have used this and don’t know what to do, get in contact. We’ll figure something out.
Note that pytest_collect_directory() is still called but any return value will be ignored. This allows to keep old
code working that performed for example “py.test.skip()” in collect() to prevent recursion into directory trees if
a certain dependency or command line option is missing.
see Changelog history for more detailed changes.
THIRTEEN
CHANGELOG HISTORY
13.1 2.5.2
• fix issue409 – better interoperate with cx_freeze by not trying to import from collections.abc which causes
problems for py27/cx_freeze. Thanks Wolfgang L. for reporting and tracking it down.
• fixed docs and code to use “pytest” instead of “py.test” almost everywhere. Thanks Jurko Gospodnetic for the
complete PR.
• fix issue425: mention at end of "py.test -h" that --markers and --fixtures work according to specified test path (or current dir)
• fix issue413: exceptions with unicode attributes are now printed correctly also on python2 and with pytest-xdist
runs. (the fix requires py-1.4.20)
• copy, cleanup and integrate py.io capture from pylib 1.4.20.dev2 (rev 13d9af95547e)
• address issue416: clarify docs as to conftest.py loading semantics
• fix issue429: comparing byte strings with non-ascii chars in assert expressions now work better. Thanks Floris
Bruynooghe.
• make capfd/capsys.capture private, it's unused and shouldn't be exposed
13.2 2.5.1
13.3 2.5.0
• dropped python2.5 from automated release testing of pytest itself which means it’s probably going to break soon
(but still works with this release we believe).
• simplified and fixed implementation for calling finalizers when parametrized fixtures or function arguments are
involved. finalization is now performed lazily at setup time instead of in the “teardown phase”. While this
might sound odd at first, it helps to ensure that we are correctly handling setup/teardown even in complex code.
User-level code should not be affected unless it’s implementing the pytest_runtest_teardown hook and expecting
certain fixture instances are torn down within (very unlikely and would have been unreliable anyway).
• PR90: add --color=yes|no|auto option to force terminal coloring mode ("auto" is the default). Thanks Marc Abramowitz.
• fix issue319 - correctly show unicode in assertion errors. Many thanks to Floris Bruynooghe for the complete
PR. Also means we depend on py>=1.4.19 now.
• fix issue396 - correctly sort and finalize class-scoped parametrized tests independently from number of methods
on the class.
• refix issue323 in a better way – parametrization should now never cause Runtime Recursion errors because the underlying algorithm for re-ordering tests per-scope/per-fixture is not recursive anymore (it was tail-call recursive before, which could lead to problems for more than 966 non-function scoped parameters).
• fix issue290 - there is preliminary support now for parametrizing with repeated same values (sometimes useful to test whether calling a second time works the same as the first time).
• close issue240 - document precisely how pytest module importing works, discuss the two common test directory
layouts, and how it interacts with PEP420-namespace packages.
• fix issue246 fix finalizer order to be LIFO on independent fixtures depending on a parametrized higher-than-
function scoped fixture. (was quite some effort so please bear with the complexity of this sentence :) Thanks
Ralph Schmitt for the precise failure example.
• fix issue244 by implementing a special index for parameters to only use indices for parametrized test ids
• fix issue287 by running all finalizers but saving the exception from the first failing finalizer and re-raising it
so teardown will still have failed. We reraise the first failing exception because it might be the cause for other
finalizers to fail.
• fix ordering when mock.patch or other standard decorator-wrappings are used with test methods. This fixes issue346 and should help with random "xdist" collection failures. Thanks to Ronny Pfannschmidt and Donald Stufft for helping to isolate it.
• fix issue357 - special case "-k" expressions to allow filtering with simple strings that are not valid python expressions. Examples: "-k 1.3" matches all tests parametrized with 1.3. "-k None" filters all tests that have "None" in their name, and conversely "-k 'not None'". Previously these examples would raise syntax errors.
• fix issue384 by removing the trial support code since the unittest compat enhancements allow trial to handle it
on its own
• don’t hide an ImportError when importing a plugin produces one. fixes issue375.
• fix issue275 - allow usefixtures and autouse fixtures for running doctest text files.
• fix issue380 by making --resultlog only rely on longrepr instead of the "reprcrash" attribute which only exists sometimes.
• address issue122: allow @pytest.fixture(params=iterator) by exploding into a list early on (see the sketch after this list).
• fix pexpect-3.0 compatibility for pytest’s own tests. (fixes issue386)
• allow nested parametrize-value markers, thanks James Lan for the PR.
• fix unicode handling with new monkeypatch.setattr(import_path, value) API. Thanks Rob Dennis. Fixes is-
sue371.
• fix unicode handling with junitxml, fixes issue368.
• In assertion rewriting mode on Python 2, fix the detection of coding cookies. See issue #330.
• make "--runxfail" turn imperative pytest.xfail calls into no-ops (it already neutralized pytest.mark.xfail markers)
• refine pytest / pkg_resources interactions: The AssertionRewritingHook PEP302 compliant loader now registers
itself with setuptools/pkg_resources properly so that the pkg_resources.resource_stream method works properly.
Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.
• pytestconfig fixture is now session-scoped as it is the same object during the whole test run. Fixes issue370.
• avoid one surprising case of marker malfunction/confusion:
@pytest.mark.some(lambda arg: ...)
def test_function():
    ...
would not work correctly because pytest assumes @pytest.mark.some gets a function to be decorated already. We now at least detect if this argument is a lambda, so the example above works. Thanks Alex Gaynor for bringing it up.
• xfail a test on pypy that checks wrong encoding/ascii (pypy does not error out). fixes issue385.
• internally make varnames() deal with a class's __init__, although it's not needed by pytest itself at the moment. Also fix caching. Fixes issue376.
• fix issue221 - handle importing of namespace-package with no __init__.py properly.
• refactor internal FixtureRequest handling to avoid monkeypatching. One of the positive user-facing effects is
that the “request” object can now be used in closures.
• fixed version comparison in pytest.importorskip(modname, minverstring)
• fix issue377 by clarifying in the nose-compat docs that pytest does not duplicate the unittest-API into the “plain”
namespace.
• fix verbose reporting for @mock’d test functions
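A tiny sketch of the params=iterator form referenced above; the generator expression is expanded into a list when the fixture is defined:
import pytest

@pytest.fixture(params=(n * n for n in range(3)))    # a generator, exploded into a list
def square(request):
    return request.param

def test_square_is_nonnegative(square):
    assert square >= 0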
13.4 v2.4.2
• on Windows require colorama and a newer py lib so that py.io.TerminalWriter() now uses colorama instead of
its own ctypes hacks. (fixes issue365) thanks Paul Moore for bringing it up.
• fix “-k” matching of tests where “repr” and “attr” and other names would cause wrong matches because of an
internal implementation quirk (don’t ask) which is now properly implemented. fixes issue345.
• prevent the tmpdir fixture from creating overly long filenames, especially when parametrization is used (issue354)
• fix pytest-pep8 and pytest-flakes / pytest interactions (the collection/mark code assumed an item always has a function, which is not true for those plugins). Thanks Andi Zeidler.
• introduce the node.get_marker/node.add_marker API for plugins like pytest-pep8 and pytest-flakes to avoid the messy details of the node.keywords pseudo-dicts. Adapted docs.
• remove the attempt to "dup" stdout at startup as it's icky. The normal capturing should catch enough possibilities of tests messing up standard FDs.
• add pluginmanager.do_configure(config) as a link to config.do_configure() for plugin-compatibility
13.5 v2.4.1
• When using parser.addoption(), unicode arguments to the "type" keyword are now also converted to the respective types. Thanks Floris Bruynooghe, @dnozay. (fixes issue360 and issue362)
• fix dotted filename completion when using argcomplete, thanks Anthon van der Neut. (fixes issue361)
• fix regression when a 1-tuple (“arg”,) is used for specifying parametrization (the values of the parametrization
were passed nested in a tuple). Thanks Donald Stufft.
• merge doc typo fixes, thanks Andy Dirnberger
13.6 v2.4
known incompatibilities:
• if calling --genscript from python2.7 or above, you only get a standalone script which works on python2.7 or above. Use Python2.6 to also get a python2.5 compatible version.
• all xunit-style teardown methods (nose-style, pytest-style, unittest-style) will not be called if the corresponding
setup method failed, see issue322 below.
• the pytest_plugin_unregister hook wasn’t ever properly called and there is no known implementation of the hook
- so it got removed.
• pytest.fixture-decorated functions cannot be generators (i.e. use yield) anymore. This change might be reversed in 2.4.1 if it causes unforeseen real-life issues. However, you can always write and return an inner function/generator and change the fixture consumer to iterate over the returned generator. This change was made in favour of the new pytest.yield_fixture decorator, see below.
new features:
• experimentally introduce a new pytest.yield_fixture decorator which accepts exactly the same pa-
rameters as pytest.fixture but mandates a yield statement instead of a return statement from fixture
functions. This allows direct integration with “with-style” context managers in fixture functions and generally
avoids registering of finalization callbacks in favour of treating the “after-yield” as teardown code. Thanks
Andreas Pelme, Vladimir Keleshev, Floris Bruynooghe, Ronny Pfannschmidt and many others for discussions.
• allow boolean expressions directly with skipif/xfail if a "reason" is also specified (see the sketch after this list). Rework skipping documentation to recommend "condition as booleans" because it prevents surprises when importing markers between modules. Specifying conditions as strings will remain fully supported.
• reporting: color the last line red or green depending on whether failures/errors occurred or everything passed. Thanks Christian Theunert.
• make “import pdb ; pdb.set_trace()” work natively wrt capturing (no “-s” needed anymore), making
pytest.set_trace() a mere shortcut.
• fix issue181: --pdb now also works on collect errors (and on internal errors). This was implemented by a slight internal refactoring and the introduction of a new pytest_exception_interact hook (see next item).
• fix issue341: introduce new experimental hook for IDEs/terminals to intercept debugging:
pytest_exception_interact(node, call, report).
• new monkeypatch.setattr() variant to provide a shorter invocation for patching out classes/functions from mod-
ules:
monkeypatch.setattr("requests.get", myfunc)
will replace the “get” function of the “requests” module with myfunc.
• fix issue322: tearDownClass is not run if setUpClass failed. Thanks Mathieu Agopian for the initial fix. Also make all of pytest/nose finalizers mimic the same generic behaviour: if a setupX exists and fails, don't run teardownX. This internally introduces a new helper method "node.addfinalizer()" which can only be called during the setup phase of a node.
• simplify pytest.mark.parametrize() signature: allow passing a comma-separated string to specify argnames. For example: pytest.mark.parametrize("input,expected", [(1,2), (2,3)]) works as well as the previous: pytest.mark.parametrize(("input", "expected"), ...).
• add support for setUpModule/tearDownModule detection, thanks Brian Okken.
• integrate tab-completion on options through use of “argcomplete”. Thanks Anthon van der Neut for the PR.
• change option names to be hyphen-separated long options but keep the old spelling backward compatible. py.test
-h will only show the hyphenated version, for example “–collect-only” but “–collectonly” will remain valid as
well (for backward-compat reasons). Many thanks to Anthon van der Neut for the implementation and to Hynek
Schlawack for pushing us.
• fix issue 308 - allow to mark/xfail/skip individual parameter sets when parametrizing. Thanks Brianna Laugher.
• call new experimental pytest_load_initial_conftests hook to allow 3rd party plugins to do something before a
conftest is loaded.
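A minimal sketch of the boolean-condition form of skipif with a mandatory reason, as recommended above:
import sys
import pytest

@pytest.mark.skipif(sys.version_info < (2, 7), reason="requires Python 2.7 or newer")
def test_uses_modern_stdlib():
    assert True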
Bug fixes:
• fix issue358 - capturing options are now parsed more properly by using a new parser.parse_known_args method.
• pytest now uses argparse instead of optparse (thanks Anthon) which means that “argparse” is added as a depen-
dency if installing into python2.6 environments or below.
• fix issue333: fix a case of bad unittest/pytest hook interaction.
• PR27: correctly handle nose.SkipTest during collection. Thanks Antonio Cuni, Ronny Pfannschmidt.
• fix issue355: junitxml puts name="pytest" attribute to testsuite tag.
• fix issue336: autouse fixture in plugins should work again.
• fix issue279: improve object comparisons on assertion failure for standard datatypes and recognise collec-
tions.abc. Thanks to Brianna Laugher and Mathieu Agopian.
• fix issue317: assertion rewriter support for the is_package method
• fix issue335: document py.code.ExceptionInfo() object returned from pytest.raises(), thanks Mathieu Agopian.
• remove implicit distribute_setup support from setup.py.
• fix issue305: ignore any problems when writing pyc files.
• SO-17664702: call fixture finalizers even if the fixture function partially failed (finalizers would not always be
called before)
• fix issue320 - fix class scope for fixtures when mixed with module-level functions. Thanks Anatoly Bubenkoff.
• you can specify “-q” or “-qq” to get different levels of “quieter” reporting (thanks Katarzyna Jachim)
• fix issue300 - Fix order of conftest loading when starting py.test in a subdirectory.
• fix issue323 - sorting of many module-scoped arg parametrizations
• make sessionfinish hooks execute with the same cwd-context as at session start (helps fix the behaviour of plugins which write output files with relative paths, such as pytest-cov)
• fix issue316 - properly reference collection hooks in docs
• fix issue 306 - cleanup of -k/-m options to only match markers/test names/keywords respectively. Thanks Wouter
van Ackooy.
• improved doctest counting for doctests in python modules – files without any doctest items will not show up
anymore and doctest examples are counted as separate test items. thanks Danilo Bellini.
• fix issue245 by depending on the released py-1.4.14 which fixes py.io.dupfile to work with files with no mode.
Thanks Jason R. Coombs.
• fix junitxml generation when test output contains control characters, addressing issue267, thanks Jaap
Broekhuizen
• fix issue338: honor --tb style for setup/teardown errors as well. Thanks Maho.
• fix issue307 - use yaml.safe_load in example, thanks Mark Eichin.
• better parametrize error messages, thanks Brianna Laugher
• pytest_terminal_summary(terminalreporter) hooks can now use ".section(title)" and ".line(msg)" methods to print extra information at the end of a test run.
13.7 v2.3.5
• allow to specify prefixes starting with “_” when customizing python_functions test discovery. (thanks Graham
Horler)
• improve PYTEST_DEBUG tracing output by putting extra data on new lines with additional indent
• ensure OutcomeExceptions like skip/fail have initialized exception attributes
• issue 260 - don’t use nose special setup on plain unittest cases
• fix issue134 - print the collect errors that prevent running specified test items
• fix issue266 - accept unicode in MarkEvaluator expressions
13.8 v2.3.4
• yielded test functions will now have autouse-fixtures active but cannot accept fixtures as funcargs
- it’s anyway recommended to rather use the post-2.0 parametrize features instead of yield, see:
http://pytest.org/latest/example/parametrize.html
• fix autouse issue where autouse fixtures would not be discovered if defined in an a/conftest.py file with tests in a/tests/test_some.py
• fix issue226 - LIFO ordering for fixture teardowns
• fix issue224 - invocations with >256 char arguments now work
• fix issue91 - add/discuss package/directory level setups in example
• allow to dynamically define markers via item.keywords[...]=assignment integrating with “-m” option
• make "-k" accept an expression the same as with "-m" so that one can write: -k "name1 or name2" etc. This is a slight incompatibility if you used special syntax like "TestClass.test_method", which you now need to write as -k "TestClass and test_method" to match a certain method in a certain test class.
13.9 v2.3.3
• fix issue214 - parse modules that contain special objects like e.g. flask's request object which blows up on getattr access if no request is active. Thanks Thomas Waldmann.
• fix issue213 - allow to parametrize with values like numpy arrays that do not support an __eq__ operator
• fix issue215 - split test_python.org into multiple files
• fix issue148 - @unittest.skip on classes is now recognized and avoids calling setUpClass/tearDownClass, thanks
Pavel Repin
• fix issue209 - reintroduce python2.4 support by depending on newer pylib which re-introduced statement-finding
for pre-AST interpreters
• nose support: only call setup if it's a callable, thanks Andrew Taumoefolau
• fix issue219 - add py2.4-3.3 classifiers to TROVE list
• in tracebacks, * and ** arg values are now shown next to normal arguments (thanks Manuel Jacob)
• fix issue217 - support mock.patch with pytest’s fixtures - note that you need either mock-1.0.1 or the python3.3
builtin unittest.mock.
• fix issue127 - improve documentation for pytest_addoption() and add a config.getoption(name) helper
function for consistency.
13.10 v2.3.2
• fix issue208 and issue29 - use new py version to avoid long pauses when printing tracebacks in long modules
• fix issue205 - conftests in subdirs customizing pytest_pycollect_makemodule and pytest_pycollect_makeitem
now work properly
• fix teardown-ordering for parametrized setups
• fix issue127 - better documentation for pytest_addoption and related objects.
• fix unittest behaviour: TestCase.runtest only called if there are test methods defined
• improve trial support: don’t collect its empty unittest.TestCase.runTest() method
• “python setup.py test” now works with pytest itself
• fix/improve internal/packaging related bits:
– exception message check of test_nose.py now passes on python33 as well
– issue206 - fix test_assertrewrite.py to work when a global PYTHONDONTWRITEBYTECODE=1 is
present
– add tox.ini to pytest distribution so that ignore-dirs and others config bits are properly distributed for
maintainers who run pytest-own tests
13.11 v2.3.1
• fix issue202 - fix regression: using “self” from fixture functions now works as expected (it’s the same “self”
instance that a test method which uses the fixture sees)
• skip pexpect using tests (test_pdb.py mostly) on freebsd* systems due to pexpect not supporting it properly
(hanging)
• link to web pages from --markers output which provides help for pytest.mark.* usage.
13.12 v2.3.0
13.13 v2.2.4
• fix issue 140: properly get the real functions of bound classmethods for setup/teardown_class
• fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
• fix issue #143: call unconfigure/sessionfinish always when configure/sessionstart where called
• fix issue #144: better mangle test ids to junitxml classnames
• upgrade distribute_setup.py to 0.6.27
13.14 v2.2.3
13.15 v2.2.2
• fix issue101: wrong args to unittest.TestCase test function now produce better output
• fix issue102: report more useful errors and hints for when a test directory was renamed and some
pyc/__pycache__ remain
• fix issue106: allow parametrize to be applied multiple times, e.g. from module, class and function level; see the sketch at the end of this section.
• fix issue107: actually perform session scope finalization
• don’t check in parametrize if indirect parameters are funcarg names
• add chdir method to monkeypatch funcarg
• fix crash resulting from calling monkeypatch undo a second time
• fix issue115: make --collectonly robust against early failure (missing files/directories)
• “-qq --collectonly” now shows only files and the number of tests in them
• “-q --collectonly” now shows test ids
• allow adding of attributes to test reports such that it also works with distributed testing (no upgrade of pytest-
xdist needed)
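A minimal sketch of stacking parametrize (issue106); the argument names are made up for illustration:
import pytest

@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_cross_product(x, y):
    # collected four times, once per (x, y) combination
    assert x * y == y * x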
13.16 v2.2.1
• fix issue99 (in pytest and py): internal errors with resultlog now produce better output - fixed by normalizing pytest_internalerror input arguments.
• fix issue97 / traceback issues (in pytest and py) improve traceback output in conjunction with jinja2 and cython
which hack tracebacks
• fix issue93 (in pytest and pytest-xdist) avoid “delayed teardowns”: the final test in a test node will now run its teardown directly instead of waiting for the end of the session. Thanks Dave Hunt for the good reporting and feedback. The pytest_runtest_protocol as well as the pytest_runtest_teardown hooks now have “nextitem” available, which will be None to indicate the end of the test run; see the sketch at the end of this section.
• fix collection crash due to unknown-source collected items, thanks to Ralf Schmitt (fixed by depending on a
more recent pylib)
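A rough conftest.py sketch of the new “nextitem” argument (the print statement is only for illustration):
# content of conftest.py
def pytest_runtest_teardown(item, nextitem):
    if nextitem is None:
        # the last test on this node: its teardown now runs right away
        print("tearing down after the final test: %s" % item.nodeid)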
13.17 v2.2.0
• fix issue90: introduce eager tearing down of test items so that teardown functions are called earlier.
• add an all-powerful metafunc.parametrize function which allows to parametrize test function arguments in multiple steps and therefore from independent plugins and places; see the sketch at the end of this section.
• add a @pytest.mark.parametrize helper which allows to easily call a test function with different argument values
• Add examples to the “parametrize” example page, including a quick port of Test scenarios and the new
parametrize function and decorator.
• introduce registration for “pytest.mark.*” helpers via ini-files or through plugin hooks. Also introduce a “--strict” option which will treat unregistered markers as errors, allowing to avoid typos and maintain a well described set of markers for your test suite. See examples at http://pytest.org/latest/mark.html and its links.
• issue50: introduce “-m marker” option to select tests based on markers (this is a stricter and more predictable version of “-k” in that “-m” only matches complete markers and has more obvious rules for and/or semantics).
• new feature to help optimizing the speed of your tests: --durations=N option for displaying N slowest test calls and setup/teardown methods.
• fix issue87: --pastebin now works with python3
• fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
• fix and cleanup pytest’s own test suite to not leak FDs
• fix issue83: link to generated funcarg list
• fix issue74: pyarg module names are now checked against imp.find_module false positives
• fix compatibility with twisted/trial-11.1.0 use cases
• simplify Node.listchain
• simplify junitxml output code by relying on py.xml
• add support for skip properties on unittest classes and functions
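A rough sketch of the new metafunc.parametrize API used from pytest_generate_tests (“db_backend” is a made-up argument name):
# content of conftest.py
def pytest_generate_tests(metafunc):
    if "db_backend" in metafunc.funcargnames:
        metafunc.parametrize("db_backend", ["sqlite", "postgres"])

# content of test_db.py
def test_backend_name(db_backend):
    assert db_backend in ("sqlite", "postgres")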
13.18 v2.1.3
13.19 v2.1.2
• fix assertion rewriting on files with windows newlines on some Python versions
• refine test discovery by package/module name (--pyargs), thanks Florian Mayer
• fix issue69 / assertion rewriting fixed on some boolean operations
• fix issue68 / packages now work with assertion rewriting
• fix issue66: use different assertion rewriting caches when the -O option is passed
• don’t try assertion rewriting on Jython, use reinterp
13.20 v2.1.1
13.21 v2.1.0
13.22 v2.0.3
• fix issue38: nicer tracebacks on calls to hooks, particularly early configure/sessionstart ones
• fix missing skip reason/meta information in junitxml files, reported via http://lists.idyll.org/pipermail/testing-in-
python/2011-March/003928.html
• fix issue34: avoid collection failure with “test” prefixed classes deriving from object.
• don’t require zlib (and other libs) for genscript plugin without --genscript actually being used.
• speed up skips (by not doing a full traceback representation internally)
• fix issue37: avoid invalid characters in junitxml’s output
13.23 v2.0.2
• tackle issue32 - speed up test runs of very quick test functions by reducing the relative overhead
• fix issue30 - extended xfail/skipif handling and improved reporting. If you have a syntax error in your skip/xfail
expressions you now get nice error reports.
Also you can now access module globals from xfail/skipif expressions so that this for example works now:
import pytest
import mymodule

@pytest.mark.skipif("mymodule.__version__[0] != '1'")
def test_function():
    pass
This will not run the test function if the module’s version string does not start with a “1”. Note that specifying a string instead of a boolean expression allows py.test to report meaningful information when summarizing a test run as to what conditions led to skipping (or xfail-ing) tests.
• fix issue28 - setup_method and pytest_generate_tests work together. The setup_method fixture method now gets
called also for test function invocations generated from the pytest_generate_tests hook.
• fix issue27 - collectonly and keyword-selection (-k) now work together. Also, if you do “py.test --collectonly -q” you now get a flat list of test ids that you can use to paste to the py.test commandline in order to execute a particular test.
• fix issue25 avoid reported problems with --pdb and python3.2/encodings output
• fix issue23 - tmpdir argument now works on Python3.2 and WindowsXP. Starting with Python3.2, os.symlink may be supported. By requiring a newer py lib version the py.path.local() implementation acknowledges this.
• fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular thanks to Laura Creighton who also reviewed parts of the documentation.
• fix slightly wrong output of verbose progress reporting for classes (thanks Amaury)
• more precise (avoiding of) deprecation warnings for node.Class|Function accesses
• avoid std unittest assertion helper code in tracebacks (thanks Ronny)
13.24 v2.0.1
• refine and unify initial capturing so that it works nicely even if the logging module is used on an early-loaded
conftest.py file or plugin.
• allow to omit “()” in test ids to allow for uniform test ids as produced by Alfredo’s nice pytest.vim plugin.
• fix issue12 - show plugin versions with “--version” and “--traceconfig” and also document how to add extra
information to reporting test header
• fix issue17 (import-* reporting issue on python3) by requiring py>1.4.0 (1.4.1 is going to include it)
• fix issue10 (numpy arrays truth checking) by refining assertion interpretation in py lib
• fix issue15: make nose compatibility tests compatible with python3 (now that nose-1.0 supports python3)
• remove somewhat surprising “same-conftest” detection because it ignores conftest.py files when they appear in several subdirs.
• improve assertions (“not in”), thanks Floris Bruynooghe
• improve behaviour/warnings when running on top of “python -OO” (assertions and docstrings are turned off,
leading to potential false positives)
• introduce a pytest_cmdline_processargs(args) hook to allow dynamic computation of command line arguments. This fixes a regression because py.test prior to 2.0 allowed setting command line options from conftest.py files, which pytest-2.0 so far only allowed from ini-files.
• fix issue7: assert failures in doctest modules. Unexpected failures in doctests will now generally show nicer, i.e. within the doctest failing context.
• fix issue9: setup/teardown functions for an xfail-marked test will report as xfail if they fail but report as nor-
mally passing (not xpassing) if they succeed. This only is true for “direct” setup/teardown invocations because
teardown_class/ teardown_module cannot closely relate to a single test.
• fix issue14: no logging errors at process exit
• refinements to “collecting” output on non-ttys
• refine internal plugin registration and --traceconfig output
• introduce a mechanism to prevent/unregister plugins from the command line, see
http://pytest.org/plugins.html#cmdunregister
• activate resultlog plugin by default
• fix regression wrt yielded tests which due to the collection-before-running semantics were not setup as with pytest 1.3.4. Note, however, that the recommended and much cleaner way to do test parametrization remains the “pytest_generate_tests” mechanism, see the docs.
13.25 v2.0.0
• fix issue109 - sibling conftest.py files will not be loaded. (and Directory collectors cannot be customized any-
more from a Directory’s conftest.py - this needs to happen at least one level up).
• introduce (customizable) assertion failure representations and enhance output on assertion failures for compar-
isons and other cases (Floris Bruynooghe)
• nose-plugin: pass through type-signature failures in setup/teardown functions instead of not calling them (Ed
Singleton)
• remove py.test.collect.Directory (follows from a major refactoring and simplification of the collection process)
• majorly reduce py.test core code, shift function/python testing to own plugin
• fix issue88 (finding custom test nodes from command line arg)
• refine ‘tmpdir’ creation, will now create basenames better associated with test names (thanks Ronny)
• “xpass” (unexpected pass) tests don’t cause exitcode!=0
• fix issue131 / issue60 - importing doctests in __init__ files used as namespace packages
• fix issue93 stdout/stderr is captured while importing conftest.py
• fix bug: unittest collected functions now also can have “pytestmark” applied at class/module level
• add ability to use “class” level for cached_setup helper
• fix strangeness: mark.* objects are now immutable, create new instances
13.26 v1.3.4
13.27 v1.3.3
• fix issue113: assertion representation problem with triple-quoted strings (and possibly other cases)
• make conftest loading detect that a conftest file with the same content was already loaded, avoids surprises
in nested directory structures which can be produced e.g. by Hudson. It probably removes the need to use
--confcutdir in most cases.
• fix terminal coloring for win32 (thanks Michael Foord for reporting)
• fix weirdness: make terminal width detection work on stdout instead of stdin (thanks Armin Ronacher for
reporting)
• remove trailing whitespace in all py/text distribution files
13.28 v1.3.2
• improved error reporting on collection and import errors. This makes use of a more general mechanism, namely
that for custom test item/collect nodes node.repr_failure(excinfo) is now uniformly called so that
you can override it to return a string error representation of your choice which is going to be reported as a (red)
string.
• introduce ‘--junitprefix=STR’ option to prepend a prefix to all reports in the junitxml file.
• make tests and the pytest_recwarn plugin in particular fully compatible with Python2.7 (if you use the recwarn funcarg, warnings will be enabled so that you can properly check for their existence in a cross-python manner).
• refine --pdb: ignore xfailed tests, unify its TB-reporting and don’t display failures again at the end.
• fix assertion interpretation with the ** operator (thanks Benjamin Peterson)
• fix issue105 assignment on the same line as a failing assertion (thanks Benjamin Peterson)
• fix issue104 proper escaping for test names in junitxml plugin (thanks anonymous)
• fix issue57 -f|--looponfail to work with xpassing tests (thanks Ronny)
• fix issue92 collectonly reporter and --pastebin (thanks Benjamin Peterson)
• fix py.code.compile(source) to generate unique filenames
• fix assertion re-interp problems on PyPy, by deferring code compilation to the (overridable) Frame.eval class. (thanks Amaury Forgeot)
• fix py.path.local.pyimport() to work with directories
• streamline py.path.local.mkdtemp implementation and usage
13.29 v1.3.1
• issue91: introduce new py.test.xfail(reason) helper to imperatively mark a test as expected to fail. Can be used from within setup and test functions. This is useful especially for parametrized tests when certain configurations are expected-to-fail. In this case the declarative approach with the @py.test.mark.xfail decorator cannot be used as it would mark all configurations as xfail; see the sketch at the end of this section.
• issue102: introduce new --maxfail=NUM option to stop test runs after NUM failures. This is a generalization of the ‘-x’ or ‘--exitfirst’ option which is now equivalent to ‘--maxfail=1’. Both ‘-x’ and ‘--maxfail’ will now also print a line near the end indicating the interruption.
• issue89: allow py.test.mark decorators to be used on classes (class decorators were introduced with python2.6)
and also allow to have multiple markers applied at class/module level by specifying a list.
• improve and refine letter reporting in the progress bar:
– . pass
– f failed test
– s skipped tests (reminder: use for dependency/platform mismatch only)
– x xfailed test (test that was expected to fail)
– X xpassed test (test that was expected to fail but passed)
You can use any combination of ‘fsxX’ with the ‘-r’ extended reporting option. The xfail/xpass results will show up as skipped tests in the junitxml output - which also fixes issue99.
• make py.test.cmdline.main() return the exitstatus instead of raising SystemExit and also allow it to be called multiple times. This of course requires that your application and tests are properly torn down and don’t have global state.
• improved traceback presentation:
– improved and unified reporting for the “--tb=short” option
– Errors during test module imports are much shorter (using --tb=short style)
– raises shows shorter, more relevant tracebacks
– --fulltrace now more systematically makes traces longer / inhibits cutting
• improve support for raises and other dynamically compiled code by manipulating python’s linecache.cache instead of the previous rather hacky way of creating custom code objects. This makes it seamlessly work on Jython and PyPy where it previously didn’t.
• fix issue96: make capturing more resilient against Control-C interruptions (involved somewhat substantial refac-
toring to the underlying capturing functionality to avoid race conditions).
• fix chaining of conditional skipif/xfail decorators - using multiple @py.test.mark.skipif(condition) decorators now works as expected, including specific reporting of which condition led to skipping.
• fix issue95: late-import zlib so that it’s not required for general py.test startup.
• fix issue94: make reporting more robust against bogus source code (and internally be more careful when pre-
senting unexpected byte sequences)
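A minimal sketch of the imperative xfail helper from issue91 (spelled pytest.xfail in current releases; the backend check is a made-up condition):
import pytest

def test_unicode_handling():
    backend = "legacy"  # hypothetical: would normally come from configuration
    if backend == "legacy":
        pytest.xfail("the legacy backend is known to mangle unicode")
    assert "snowman".upper() == "SNOWMAN"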
13.30 v1.3.0
• deprecate --report option in favour of a new shorter and easier to remember -r option: it takes a string argument consisting of any combination of ‘xfsX’ characters. They relate to the single chars you see during the dotted progress printing and will print an extra line per test at the end of the test run. This extra line indicates the exact position or test ID that you can directly paste to the py.test cmdline in order to re-run a particular test.
• allow external plugins to register new hooks via the new pytest_addhooks(pluginmanager) hook. The new
release of the pytest-xdist plugin for distributed and looponfailing testing requires this feature.
• add a new pytest_ignore_collect(path, config) hook to allow projects and plugins to define exclusion behaviour
for their directory structure - for example you may define in a conftest.py this method:
def pytest_ignore_collect(path):
    return path.check(link=1)
13.31 v1.2.0
• add a new option “py.test --funcargs” which shows available funcargs and their help strings (docstrings on their
respective factory function) for a given test path
• display a short and concise traceback if a funcarg lookup fails
• early-load “conftest.py” files in non-dot first-level sub directories. This allows to conveniently keep and access test-related options in a test subdir and still add command line options.
• fix issue67: new super-short traceback-printing option: “--tb=line” will print a single line for each failing
(python) test indicating its filename, lineno and the failure value
• fix issue78: always call python-level teardown functions even if the according setup failed. This includes refinements for calling setup_module/class functions which will now only be called once instead of the previous behaviour where they’d be called multiple times if they raise an exception (including a Skipped exception). Any exception will be recorded and associated with all tests in the according module/class scope.
• fix issue63: assume <40 columns to be a bogus terminal width, default to 80
• fix pdb debugging to be in the correct frame on raises-related errors
• update apipkg.py to fix an issue where recursive imports might unnecessarily break importing
• fix plugin links
13.32 v1.1.1
• moved dist/looponfailing from py.test core into a new separately released pytest-xdist plugin.
• new junitxml plugin: --junitxml=path will generate a junit style xml file which is processable e.g. by the Hudson CI system.
• new option: --genscript=path will generate a standalone py.test script which will not need any libraries installed. thanks to Ralf Schmitt.
• new option: --ignore will prevent specified path from collection. Can be specified multiple times.
• new option: --confcutdir=dir will make py.test only consider conftest files that are relative to the specified dir.
• new funcarg: “pytestconfig” is the pytest config object for access to command line args and can now be easily used in a test; see the sketch at the end of this section.
• install ‘py.test’ and py.which with a -$VERSION suffix to disambiguate between Python3, python2.X, Jython
and PyPy installed versions.
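A rough sketch of the “pytestconfig” funcarg; “verbose” is the dest name of the standard -v option:
def test_read_an_option(pytestconfig):
    # the funcarg is the same Config object the command line was parsed into
    assert isinstance(pytestconfig.option.verbose, int)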
13.33 v1.1.0
13.34 v1.0.2
13.35 v1.0.2
• fixing packaging issues, triggered by fedora redhat packaging, also added doc, examples and contrib dirs to the
tarball.
• added a documentation link to the new django plugin.
13.36 v1.0.1
13.37 v1.0.0
• more terse reporting: try to show filesystem paths relative to the current dir
• improve xfail output a bit
13.38 v1.0.0b9
• setup/teardown or collection problems now show as ERRORs or with big “E”s in the progress lines; they are reported and counted separately.
• dist-testing: properly handle test items that get locally collected but cannot be collected on the remote side -
often due to platform/dependency reasons
• simplified py.test.mark API - see keyword plugin documentation
• integrate better with logging: capturing now by default captures test functions and their immediate
setup/teardown in a single stream
• capsys and capfd funcargs now have a readouterr() and a close() method (underneath, py.io.StdCapture/FD objects are used, which grew a readouterr() method as well to return snapshots of captured out/err); see the sketch at the end of this section.
• make assert-reinterpretation work better with comparisons not returning bools (reported with numpy, thanks Maciej Fijalkowski)
• reworked per-test output capturing into the pytest_iocapture.py plugin and thus removed capturing code from
config object
• item.repr_failure(excinfo) instead of item.repr_failure(excinfo, outerr)
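A minimal sketch of the capsys readouterr() snapshot API:
def test_output_is_captured(capsys):
    print("hello")
    out, err = capsys.readouterr()
    # readouterr() returns a snapshot of everything captured so far
    assert out == "hello\n"
    assert err == ""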
13.39 v1.0.0b8
13.40 v1.0.0b7
• renamed py.test.xfail back to py.test.mark.xfail to avoid two ways to decorate for xfail
• re-added py.test.mark decorator for setting keywords on functions (it was actually documented so removing it
was not nice)
• remove scope-argument from request.addfinalizer() because request.cached_setup has the scope arg.
TOOWTDI.
• perform setup finalization before reporting failures
• apply modified patches from Andreas Kloeckner to allow test functions to have no func_code (#22) and to make
“-k” and function keywords work (#20)
• apply patch from Daniel Peolzleithner (issue #23)
13.41 v1.0.0b3
• plugin classes are removed: one now defines hooks directly in conftest.py or global pytest_*.py files.
• added new pytest_namespace(config) hook that allows to inject helpers directly to the py.test.* namespace.
• documented and refined many hooks
• added new style of generative tests via pytest_generate_tests hook that integrates well with function arguments.
13.42 v1.0.0b1
13.43 v0.9.2
• refined installation and metadata, created new setup.py, now based on setuptools/ez_setup (thanks to Ralf
Schmitt for his support).
• improved the way of making py.* scripts available in windows environments, they are now added to the Scripts
directory as ”.cmd” files.
• py.path.svnwc.status() now is more complete and uses xml output from the ‘svn’ command if available (Guido
Wesdorp)
• fix for py.path.svn* to work with svn 1.5 (Chris Lamb)
• fix path.relto(otherpath) method on windows to use normcase for checking if a path is relative.
• py.test’s traceback is better parseable from editors (follows the filename:LINENO: MSG convention) (thanks
to Osmo Salomaa)
• fix to javascript-generation, “py.test --runbrowser” should work more reliably now
• removed previously accidentally added py.test.broken and py.test.notimplemented helpers.
• there now is a py.__version__ attribute
13.44 v0.9.1
This is a fairly complete list of v0.9.1, which can serve as a reference for developers.
• allowing + signs in py.path.svn urls [39106]
• fixed support for Failed exceptions without excinfo in py.test [39340]
• added support for killing processes for Windows (as well as platforms that support os.kill) in py.misc.killproc
[39655]
• added setup/teardown for generative tests to py.test [40702]
• added detection of FAILED TO LOAD MODULE to py.test [40703, 40738, 40739]
• fixed problem with calling .remove() on wcpaths of non-versioned files in py.path [44248]
• fixed some import and inheritance issues in py.test [41480, 44648, 44655]
• fail to run greenlet tests when pypy is available, but without stackless [45294]
• small fixes in rsession tests [45295]
• fixed issue with 2.5 type representations in py.test [45483, 45484]
• made that internal reporting issues displaying is done atomically in py.test [45518]
• made that non-existing files are ignored by the py.lookup script [45519]
• improved exception name creation in py.test [45535]
• made that less threads are used in execnet [merge in 45539]
• removed lock required for atomical reporting issue displaying in py.test [45545]
• removed globals from execnet [45541, 45547]
• refactored cleanup mechanics, made that setDaemon is set to 1 to make atexit get called in 2.5 (py.execnet)
[45548]
• fixed bug in joining threads in py.execnet’s servemain [45549]
• refactored py.test.rsession tests to not rely on exact output format anymore [45646]
• using repr() on test outcome [45647]
• added ‘Reason’ classes for py.test.skip() [45648, 45649]
• killed some unnecessary sanity check in py.test.collect [45655]
• avoid using os.tmpfile() in py.io.fdcapture because on Windows it’s only usable by Administrators [45901]
• added support for locking and non-recursive commits to py.path.svnwc [45994]
• locking files in py.execnet to prevent CPython from segfaulting [46010]
• added export() method to py.path.svnurl
• fixed -d -x in py.test [47277]
• fixed argument concatenation problem in py.path.svnwc [49423]
• restore py.test behaviour that it exits with code 1 when there are failures [49974]
• don’t fail on html files that don’t have an accompanying .txt file [50606]
• fixed ‘utestconvert.py < input’ [50645]
• small fix for code indentation in py.code.source [50755]