Pytest
Release 2.1.0
CONTENTS

1 Getting started basics
  1.1 Welcome to pytest!
  1.2 Installation and Getting Started
  1.3 Usage and Invocations
  1.4 Good Integration Practises
  1.5 Project examples
  1.6 Some Issues and Questions

2 Usages and Examples
  2.1 Demo of Python failure reports with py.test
  2.2 basic patterns and examples
  2.3 mysetup pattern: application specific test fixtures
  2.4 parametrizing tests
  2.5 Changing standard (Python) test discovery
  2.6 Working with non-python tests

3 py.test reference documentation
  3.1 pytest builtin helpers
  3.2 basic test configuration
  3.3 The writing and reporting of assertions in tests
  3.4 Injecting objects into test functions (funcargs)
  3.5 extended xUnit style setup fixtures
  3.6 Capturing of the stdout/stderr output
  3.7 monkeypatching/mocking modules and environments
  3.8 xdist: pytest distributed testing plugin
  3.9 temporary directories and files
  3.10 skip and xfail: dealing with tests that can not succeed
  3.11 mark test functions with attributes
  3.12 asserting deprecation and other warnings
  3.13 unittest.TestCase support
  3.14 Running tests written for nose
  3.15 doctest integration for modules and test files

4 Working with plugins and conftest files
  4.1 conftest.py: local per-directory plugins
  4.2 Installing External Plugins / Searching
  4.3 Writing a plugin by looking at examples
  4.4 Making your plugin installable by others
  4.5 Plugin discovery order at tool startup
  4.6 Requiring/Loading plugins in a test module or conftest file
  4.7 Accessing another plugin by name
  4.8 Finding out which plugins are active
  4.9 deactivate / unregister a plugin by name

5 py.test default plugin reference

6 py.test hook reference
  6.1 hook specification and validation
  6.2 initialisation, command line and configuration hooks
  6.3 generic "runtest" hooks
  6.4 collection hooks
  6.5 reporting hooks

7 Reference of important objects involved in hooks

8 Talks and Tutorials
  8.1 tutorial examples and blog postings
  8.2 conference talks and tutorials

9 Feedback and contribute to py.test
  9.1 Contact channels
  9.2 Working from version control or a tarball

10 Release announcements
  10.1 py.test 2.1.0: perfected assertions and bug fixes
  10.2 py.test 2.0.3: bug fixes and speed ups
  10.3 py.test 2.0.2: bug fixes, improved xfail/skip expressions, speedups
  10.4 py.test 2.0.1: bug fixes
  10.5 py.test 2.0.0: asserts++, unittest++, reporting++, config++, docs++

11 Changelog history
  11.1 Changes between 2.0.3 and 2.1.0.DEV
  11.2 Changes between 2.0.2 and 2.0.3
  11.3 Changes between 2.0.1 and 2.0.2
  11.4 Changes between 2.0.0 and 2.0.1
  11.5 Changes between 1.3.4 and 2.0.0
  11.6 Changes between 1.3.3 and 1.3.4
  11.7 Changes between 1.3.2 and 1.3.3
  11.8 Changes between 1.3.1 and 1.3.2
  11.9 Changes between 1.3.0 and 1.3.1
  11.10 Changes between 1.2.1 and 1.3.0
  11.11 Changes between 1.2.1 and 1.2.0
  11.12 Changes between 1.2 and 1.1.1
  11.13 Changes between 1.1.1 and 1.1.0
  11.14 Changes between 1.1.0 and 1.0.2
  11.15 Changes between 1.0.1 and 1.0.2
  11.16 Changes between 1.0.0 and 1.0.1
  11.17 Changes between 1.0.0b9 and 1.0.0
  11.18 Changes between 1.0.0b8 and 1.0.0b9
  11.19 Changes between 1.0.0b7 and 1.0.0b8
  11.20 Changes between 1.0.0b3 and 1.0.0b7
  11.21 Changes between 1.0.0b1 and 1.0.0b3
  11.22 Changes between 0.9.2 and 1.0.0b1
  11.23 Changes between 0.9.1 and 0.9.2
  11.24 Changes between 0.9.0 and 0.9.1

12 New pytest names in 2.0 (flat is better than nested)

13 example: specifying and selecting acceptance tests

14 example: decorating a funcarg in a test module

Python Module Index

Index
CHAPTER ONE: GETTING STARTED BASICS
Among pytest's features:

- can integrate nose, unittest.py and doctest.py style tests, including running test cases made for Django and trial
- supports extended xUnit style setup
- supports domain-specific non-python tests (see Working with non-python tests)
- supports the generation of testing coverage reports
- Javascript unit- and functional testing
- extensive plugin and customization system:
  - all collection, reporting, running aspects are delegated to hook functions
  - customizations can be per-directory, per-project or per PyPI released plugins
  - it is easy to add command line options or do other kinds of add-ons and customizations.
1.2.1 Installation
Installation options:
pip install -U pytest # or
easy_install -U pytest
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 1 items

test_sample.py F

================================= FAILURES =================================
_______________________________ test_answer ________________________________

    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)
py.test found the test_answer function by following standard test discovery rules, basically detecting the test_ prefixes. We got a failure report because our little func(3) call did not return 5.

Note: You can simply use the assert statement for asserting test expectations. pytest's advanced assertion introspection will intelligently report intermediate values of the assert expression, freeing you from the need to learn the many names of JUnit legacy methods.
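For reference, the test module exercised above can be reconstructed from the report; a minimal sketch (the body of func is an assumption):

# content of test_sample.py (a sketch; func's body is assumed)
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5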
The two tests are found because of the standard Conventions for Python test discovery. There is no need to subclass anything. We can simply run the module by passing its filename:
$ py.test -q test_class.py
collecting ... collected 2 items
.F
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass instance at 0x142c320>

    def test_two(self):
        x = "hello"
>       assert hasattr(x, 'check')
E       assert False
E        +  where False = hasattr('hello', 'check')
The first test passed, the second failed. Again we can easily see the intermediate values used in the assertion, helping us to understand the reason for the failure.
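A class-based module producing a report like the one above might look like this sketch (the body of the passing test is illustrative):

# content of test_class.py (a sketch; the passing test body is assumed)
class TestClass:
    def test_one(self):
        x = "this"
        assert 'h' in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')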
We list the name tmpdir in the test function signature and py.test will look up and call a factory to create the resource before performing the test function call. Let's just run it:
$ py.test -q test_tmpdir.py
collecting ... collected 1 items
F
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('/tmp/pytest-10/test_needsfiles0')

    def test_needsfiles(tmpdir):
        print tmpdir
>       assert 0
E       assert 0
test_tmpdir.py:3: AssertionError
Before the test runs, a unique-per-test-invocation temporary directory was created. More info at temporary directories and files. You can find out which builtin function arguments (see Dependency injection through function arguments) exist by typing:
py.test --funcargs # shows builtin and custom function arguments
Calling pytest through the Python interpreter (python -m pytest [...]) is equivalent to invoking the command line script py.test [...] directly.
Import pkg and use its filesystem location to find and run tests:
py.test --pyargs pkg # run all tests found below the directory of pkg
This will invoke the Python debugger on every failure. Often you might only want to do this for the first failing test to understand a certain failure situation:
py.test -x --pdb            # drop to PDB on first failure, then end test session
py.test --pdb --maxfail=3   # drop to PDB for the first three failures
In previous versions you could only enter PDB tracing if you disabled capturing on the command line via py.test -s.
and look at the content at the path location. Such files are used e.g. by the PyPy-test web page to show test results over several revisions.
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure. Creating a URL for a whole test session log:
py.test --pastebin=all
This acts as if you would call py.test from the command line. It will not raise SystemExit but return the exit code instead. You can pass in options and arguments:
pytest.main(['-x', 'mytestdir'])
or pass in a string:
pytest.main("-x mytestdir")
Invoking py.test --genscript=runtests.py generates a runtests.py script which is a fully functional basic py.test script, running unchanged under Python2 and Python3. You can tell people to download the script and then e.g. run it like this:
python runtests.py
and make this script part of your distribution and then add this to your setup.py file:
from distutils.core import setup, Command
# you can also import from setuptools

class PyTest(Command):
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        import sys, subprocess
        errno = subprocess.call([sys.executable, 'runtest.py'])
        raise SystemExit(errno)

setup(
    #...,
    cmdclass = {'test': PyTest},
    #...,
)
This will execute your tests using runtest.py. As this is a standalone version of py.test, no prior installation whatsoever is required for calling the test command. You can also pass additional arguments to the subprocess-calls such as your test directory or other options.
test_app.py ...
putting tests into an extra directory outside your actual application code, useful if you have many functional tests or want to keep tests separate from actual application code:
mypkg/
    __init__.py
    appmodule.py
tests/
    test_app.py
    ...
Note: Test modules are imported under their fully qualified name as follows:

- find basedir: this is the first "upward" (towards the root) directory not containing an __init__.py
- perform sys.path.insert(0, basedir) to make the fully qualified test module path importable.
- import path.to.test_module where the path is determined by converting path separators into dots. This means you must follow the convention of having directory and file names map to the import names.
- bbfreeze: create standalone executables from Python scripts
- pdb++: a fancier version of PDB
- py-s3fuse: Amazon S3 FUSE based filesystem
- waskr: WSGI Stats Middleware
- guachi: global persistent configs for Python modules
- Circuits: lightweight Event Driven Framework
- pygtk-helpers: easy interaction with PyGTK
- QuantumCore: statusmessage and repoze openid plugin
- pydataportability: libraries for managing the open web
- XIST: extensible HTML/XML generator
- tiddlyweb: optionally headless, extensible RESTful datastore
- fancycompleter: for colorful tab-completion
- Paludis: tools for Gentoo Paludis package manager
- Gerald: schema comparison tool
- abjad: Python API for Formalized Score control
- bu: a microscopic build system
- katcp: Telescope communication protocol over Twisted
- kss plugin timer
Can I yield multiple values from a funcarg factory function?

There are two conceptual reasons why yielding from a factory function is not possible:

- Calling factories for obtaining test function arguments is part of setting up and running a test. At that point it is not possible to add new test calls to the test collection anymore.
- If multiple factories yielded values there would be no natural place to determine the combination policy; in real-world examples some combinations often should not run.

Use the pytest_generate_tests hook to solve both issues and implement the parametrization scheme of your choice.
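A minimal sketch of that alternative, using the hooks documented later in this manual (the argument name and values are illustrative):

# instead of yielding several values from a funcarg factory,
# parametrize via the pytest_generate_tests hook
def pytest_generate_tests(metafunc):
    if "myarg" in metafunc.funcargnames:
        for value in [1, 2, 3]:
            metafunc.addcall(funcargs=dict(myarg=value))

def test_uses_myarg(myarg):
    assert myarg in (1, 2, 3)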
CHAPTER TWO: USAGES AND EXAMPLES
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________

self = <failure_demo.TestFailing object at 0x14b9890>

    def test_simple(self):
        def f():
            return 42
        def g():
            return 43

>       assert f() == g()
E       assert 42 == 43
E        +  where 42 = <function f at 0x14a5e60>()
E        +  and   43 = <function g at 0x14bc1b8>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________

self = <failure_demo.TestFailing object at 0x14b9b50>

    def test_simple_multiline(self):
>       otherfunc_multi(
                  42,
                  6*9)

failure_demo.py:12: AssertionError
___________________________ TestFailing.test_not ___________________________

self = <failure_demo.TestFailing object at 0x14b9790>

    def test_not(self):
        def f():
            return 42
>       assert not f()
E       assert not 42
E        +  where 42 = <function f at 0x14bc398>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________

self = <failure_demo.TestSpecialisedExplanations object at 0x14aa810>

    def test_eq_text(self):
>       assert 'spam' == 'eggs'
E       assert 'spam' == 'eggs'
E         - spam
E         + eggs

failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________

self = <failure_demo.TestSpecialisedExplanations object at 0x1576190>

    def test_eq_similar_text(self):
>       assert 'foo 1 bar' == 'foo 2 bar'
E       assert 'foo 1 bar' == 'foo 2 bar'
E         - foo 1 bar
E         ?     ^
E         + foo 2 bar
E         ?     ^

failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

self = <failure_demo.TestSpecialisedExplanations object at 0x14a7450>

    def test_eq_multiline_text(self):
>       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E           foo
E         - spam
E         + eggs
E           bar
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________

self = <failure_demo.TestSpecialisedExplanations object at 0x14b9350>

    def test_eq_long_text(self):
        a = '1'*100 + 'a' + '2'*100
        b = '1'*100 + 'b' + '2'*100
>       assert a == b
E       assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E         Skipping 90 identical leading characters in diff
E         Skipping 91 identical trailing characters in diff
E         - 1111111111a222222222
E         ?           ^
E         + 1111111111b222222222
E         ?           ^

failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0x15764d0>

    def test_eq_long_text_multiline(self):
        a = '1\n'*100 + 'a' + '2\n'*100
        b = '1\n'*100 + 'b' + '2\n'*100
>       assert a == b
E       assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E         Skipping 190 identical leading characters in diff
E         Skipping 191 identical trailing characters in diff
E           1
E           1
E           1
E           1
E           1
E         - a2
E         + b2
E           2
E           2
E           2
E           2
    def test_eq_list(self):
>       assert [0, 1, 2] == [0, 1, 3]
E       assert [0, 1, 2] == [0, 1, 3]
E         At index 2 diff: 2 != 3

failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________

self = <failure_demo.TestSpecialisedExplanations object at 0x1576f10>

    def test_eq_list_long(self):
        a = [0]*100 + [1] + [3]*100
        b = [0]*100 + [2] + [3]*100
>       assert a == b
E       assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E         At index 100 diff: 1 != 2

failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________

self = <failure_demo.TestSpecialisedExplanations object at 0x1576390>

    def test_eq_dict(self):
>       assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
E       assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
E         - {'a': 0, 'b': 1}
E         ?               ^
E         + {'a': 0, 'b': 2}
E         ?               ^

failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________

self = <failure_demo.TestSpecialisedExplanations object at 0x14bd790>

    def test_eq_set(self):
>       assert set([0, 10, 11, 12]) == set([0, 20, 21])
E       assert set([0, 10, 11, 12]) == set([0, 20, 21])
E         Extra items in the left set:
E         10
E         11
E         12
E         Extra items in the right set:
E         20
E         21

failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________

self = <failure_demo.TestSpecialisedExplanations object at 0x157a7d0>

    def test_eq_longer_list(self):
>       assert [1,2] == [1,2,3]
E       assert [1, 2] == [1, 2, 3]
E         Right contains more items, first extra item: 3
self = <failure_demo.TestSpecialisedExplanations object at 0x157ab50>

    def test_in_list(self):
>       assert 1 in [0, 2, 3, 4, 5]
E       assert 1 in [0, 2, 3, 4, 5]

failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0x157a090>

    def test_not_in_text_multiline(self):
        text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
>       assert 'foo' not in text
E       assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E         'foo' is contained here:
E           some multiline
E           text
E           which
E           includes foo
E         ?           +++
E           and a
E           tail

failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________

self = <failure_demo.TestSpecialisedExplanations object at 0x14aaa50>

    def test_not_in_text_single(self):
        text = 'single foo line'
>       assert 'foo' not in text
E       assert 'foo' not in 'single foo line'
E         'foo' is contained here:
E           single foo line
E         ?        +++

failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________

self = <failure_demo.TestSpecialisedExplanations object at 0x157ab90>

    def test_not_in_text_single_long(self):
        text = 'head ' * 50 + 'foo ' + 'tail ' * 20
>       assert 'foo' not in text
E       assert 'foo' not in 'head head head head hea...ail tail tail tail tail'
E         'foo' is contained here:
E           head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E         ?           +++

failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

self = <failure_demo.TestSpecialisedExplanations object at 0x1576ed0>

    def test_not_in_text_single_long_term(self):
        text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
>       assert 'f'*70 not in text
E       assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail'
E         'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E           head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail
E         ?           ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:94: AssertionError
______________________________ test_attribute ______________________________

    def test_attribute():
        class Foo(object):
            b = 1
        i = Foo()
>       assert i.b == 2
E       assert 1 == 2
E        +  where 1 = <failure_demo.Foo object at 0x157a910>.b

failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________

    def test_attribute_instance():
        class Foo(object):
            b = 1
>       assert Foo().b == 2
E       assert 1 == 2
E        +  where 1 = <failure_demo.Foo object at 0x1584610>.b
E        +    where <failure_demo.Foo object at 0x1584610> = <class 'failure_demo.Foo'>()

failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________

    def test_attribute_failure():
        class Foo(object):
            def _get_b(self):
                raise Exception('Failed to get attrib')
            b = property(_get_b)
        i = Foo()
>       assert i.b == 2

failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <failure_demo.Foo object at 0x157a3d0>

    def _get_b(self):
>       raise Exception('Failed to get attrib')
E       Exception: Failed to get attrib
failure_demo.py:113: Exception
_________________________ test_attribute_multiple __________________________

    def test_attribute_multiple():
        class Foo(object):
            b = 1
        class Bar(object):
            b = 2
>       assert Foo().b == Bar().b
E       assert 1 == 2
E        +  where 1 = <failure_demo.Foo object at 0x157a1d0>.b
E        +    where <failure_demo.Foo object at 0x157a1d0> = <class 'failure_demo.Foo'>()
E        +  and   2 = <failure_demo.Bar object at 0x157a9d0>.b
E        +    where <failure_demo.Bar object at 0x157a9d0> = <class 'failure_demo.Bar'>()

failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________

self = <failure_demo.TestRaises instance at 0x157d7e8>

    def test_raises(self):
        s = 'qwe'
>       raises(TypeError, "int(s)")

failure_demo.py:133:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   int(s)
E   ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/_pytest/python.py:831>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________

self = <failure_demo.TestRaises instance at 0x158ae60>

    def test_raises_doesnt(self):
>       raises(IOError, "int(3)")
E       Failed: DID NOT RAISE

failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________

self = <failure_demo.TestRaises instance at 0x158bb90>

    def test_raise(self):
>       raise ValueError("demo error")
E       ValueError: demo error

failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________

self = <failure_demo.TestRaises instance at 0x157cd40>

    def test_tupleerror(self):
>       a,b = [1]
E       ValueError: need more than 1 value to unpack

failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

self = <failure_demo.TestRaises instance at 0x157d488>

    def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
        l = [1,2,3]
        print ("l is %r" % l)
>       a,b = l.pop()
E       TypeError: 'int' object is not iterable

failure_demo.py:147: TypeError
----------------------------- Captured stdout ------------------------------
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________

self = <failure_demo.TestRaises instance at 0x158a7e8>

    def test_some_error(self):
>       if namenotexi:
E       NameError: global name 'namenotexi' is not defined

failure_demo.py:150: NameError
____________________ test_dynamic_compile_shows_nicely _____________________

    def test_dynamic_compile_shows_nicely():
        src = 'def foo():\n assert 1 == 0\n'
        name = 'abc-123'
        module = py.std.imp.new_module(name)
        code = py.code.compile(src, name, 'exec')
        py.builtin.exec_(code, module.__dict__)
        py.std.sys.modules[name] = module
>       module.foo()

<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________

self = <failure_demo.TestMoreErrors instance at 0x158f8c0>

    def test_complex_error(self):
        def f():
            return 44
        def g():
            return 43
>       somefunc(f(), g())
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________

self = <failure_demo.TestMoreErrors instance at 0x158c998>

    def test_z1_unpack_error(self):
        l = []
>       a,b = l
E       ValueError: need more than 0 values to unpack

failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________

self = <failure_demo.TestMoreErrors instance at 0x15854d0>

    def test_z2_type_error(self):
        l = 3
>       a,b = l
E       TypeError: 'int' object is not iterable

failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________

self = <failure_demo.TestMoreErrors instance at 0x14b65a8>

    def test_startswith(self):
        s = "123"
        g = "456"
>       assert s.startswith(g)
E       assert False
E        +  where False = <built-in method startswith of str object at 0x14902a0>('456')
E        +    where <built-in method startswith of str object at 0x14902a0> = '123'.startswith

failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________

self = <failure_demo.TestMoreErrors instance at 0x158d518>

    def test_startswith_nested(self):
        def f():
            return "123"
        def g():
            return "456"
>       assert f().startswith(g())
E       assert False
E        +  where False = <built-in method startswith of str object at 0x14902a0>('456')
E        +    where <built-in method startswith of str object at 0x14902a0> = '123'.startswith
E        +      where '123' = <function f at 0x15806e0>()
E        +    and   '456' = <function g at 0x1580aa0>()

failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________

self = <failure_demo.TestMoreErrors instance at 0x1593440>

    def test_global_func(self):
>       assert isinstance(globf(42), float)
E       assert False

failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________

self = <failure_demo.TestMoreErrors instance at 0x15952d8>

    def test_instance(self):
        self.x = 6*7
>       assert self.x != 42
E       assert 42 != 42
E        +  where 42 = <failure_demo.TestMoreErrors instance at 0x15952d8>.x

failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________

self = <failure_demo.TestMoreErrors instance at 0x1593758>

    def test_compare(self):
>       assert globf(10) < 5
E       assert 11 < 5
E        +  where 11 = globf(10)

failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________

self = <failure_demo.TestMoreErrors instance at 0x157cd88>

    def test_try_finally(self):
        x = 1
        try:
>           assert x == 0
E           assert 1 == 0
For this to work we need to add a command line option and provide the cmdopt through a function argument factory:
# content of conftest.py
def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
        help="my option: type1 or type2")

def pytest_funcarg__cmdopt(request):
    return request.config.option.cmdopt
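The test module run below is not reproduced above; reconstructed from the failure report that follows, it looks roughly like this:

# content of test_sample.py (a sketch matching the report below)
def test_answer(cmdopt):
    if cmdopt == "type1":
        print ("first")
    elif cmdopt == "type2":
        print ("second")
    assert 0  # to see what was printed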
Let's run this without supplying our new command line option:
$ py.test -q test_sample.py
collecting ... collected 1 items
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
>       assert 0  # to see what was printed
E       assert 0
Ok, this completes the basic pattern. However, one often wants to process command line options outside of the test and pass in different or more complex objects instead. See the next example or refer to mysetup pattern: application specific test fixtures for more information on real-life examples.
If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to your CPU count. Running in an empty directory with the above conftest.py:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
gw0 I / gw1 I / gw2 I / gw3 I
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]

scheduling tests via LoadScheduling

=============================  in 0.52 seconds =============================
$ py.test -rs    # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 2 items

test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-42/conftest.py:9: need --runslow option to run

=================== 1 passed, 1 skipped in 0.01 seconds ====================
The __tracebackhide__ setting influences py.test's showing of tracebacks: the checkconfig function will not be shown unless the --fulltrace command line option is specified. Let's run our little function:
$ py.test -q test_checkconfig.py
collecting ... collected 1 items
F
================================= FAILURES =================================
______________________________ test_something ______________________________

    def test_something():
>       checkconfig(42)
E       Failed: not configured: 42
accordingly in your application. It's also a good idea to use your own application module rather than sys for handling the flag.
You can also return a list of strings which will be considered as several lines of information. You can of course also make the amount of reporting information depend, for example, on the value of config.option.verbose so that you present more information appropriately:
# content of conftest.py
def pytest_report_header(config):
    if config.option.verbose > 0:
        return ["info1: did you know that ...", "did you?"]
info1: did you know that ...
did you?
collecting ... collected 0 items

=============================  in 0.00 seconds =============================
To run this test, py.test needs to find and call a factory to obtain the required mysetup function argument. To make an according factory findable we write down a specifically named factory method in a local plugin:
# content of conftest.py
from myapp import MyApp

def pytest_funcarg__mysetup(request):  # "mysetup" factory function
    return MySetup()

class MySetup:  # instances of this are seen by test functions
    def myapp(self):
        return MyApp()
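The test module run below is not reproduced here; reconstructed from the failure report that follows, it looks roughly like this sketch:

# content of test_sample.py (a sketch matching the report below)
def test_answer(mysetup):
    app = mysetup.myapp()
    answer = app.question()
    assert answer == 42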
collecting ... collected 1 items

test_sample.py F

================================= FAILURES =================================
_______________________________ test_answer ________________________________

mysetup = <conftest.MySetup instance at 0x2c1b128>

    def test_answer(mysetup):
        app = mysetup.myapp()
        answer = app.question()
>       assert answer == 42
E       assert 54 == 42
This means that our mysetup object was successfully instantiated and mysetup.myapp() returned an initialized MyApp instance. We can ask it about the question and if you are confused as to what the concrete question or answers actually mean, please see here.
class MySetup:
    def __init__(self, request):
        self.config = request.config

    def myapp(self):
        return MyApp()

    def getsshconnection(self):
        host = self.config.option.ssh
        if host is None:
            pytest.skip("specify ssh host with --ssh")
        return execnet.SshGateway(host)
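For this extended class to work, the surrounding conftest.py also has to register the --ssh option and the mysetup factory; these pieces are not shown above, so the following is only a sketch following the addoption/funcarg patterns used earlier:

# assumed accompanying conftest.py pieces (a sketch)
import pytest
import execnet
from myapp import MyApp

def pytest_addoption(parser):
    parser.addoption("--ssh", action="store", default=None,
        help="specify ssh host to run tests with")

def pytest_funcarg__mysetup(request):
    return MySetup(request)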
Now any test function can use the mysetup.getsshconnection() method like this:
# content of test_ssh.py
class TestClass:
    def test_function(self, mysetup):
        conn = mysetup.getsshconnection()
        # work with conn (the body is truncated in this excerpt)
Running it yields:
$ py.test test_ssh.py -rs
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 1 items

test_ssh.py s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-37/conftest.py:22: specify ssh host with --ssh

======================== 1 skipped in 0.01 seconds =========================
If you specify a command line option like py.test --ssh=python.org the test will execute as expected. Note that neither the TestClass nor the test_function need to know anything about how to set up the test state. It is handled separately in your test setup glue code in the conftest.py file. It is easy to extend the mysetup object for further needs in the test code - and for use by any other test functions in the files and directories below the conftest.py file.
We run only two computations, so we see two dots. Let's run the full monty:
$ py.test -q --all
collecting ... collected 5 items
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________

param1 = 4

    def test_compute(param1):
>       assert param1 < 4
E       assert 4 < 4
As expected, when running the full range of param1 values we'll get an error on the last one.
Now we add a test configuration that takes care to generate two invocations of the test_db_initialized function and furthermore a factory that creates a database object when each test is actually run:
# content of conftest.py
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.funcargnames:
        metafunc.addcall(param="d1")
        metafunc.addcall(param="d2")

class DB1:
    "one database object"

class DB2:
    "alternative database object"

def pytest_funcarg__db(request):
    if request.param == "d1":
        return DB1()
    elif request.param == "d2":
        return DB2()
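The corresponding test module is not included here; a sketch reconstructed from the failure report further below:

# content of test_backends.py (a sketch matching the report below)
import pytest

def test_db_initialized(db):
    # a dummy test
    if db.__class__.__name__ == "DB2":
        pytest.fail("deliberately failing for demo purposes")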
Now you see that one invocation of the test passes and another fails, as is to be expected.
Running it means we run two tests for each test function, using the respective settings:
$ py.test -q
collecting ... collected 6 items
.FF..F
================================= FAILURES =================================
__________________________ test_db_initialized[1] __________________________

db = <conftest.DB2 instance at 0x19bcb90>

    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
>           pytest.fail("deliberately failing for demo purposes")
E           Failed: deliberately failing for demo purposes

test_backends.py:6: Failed
_________________________ TestClass.test_equals[0] _________________________

self = <test_parametrize.TestClass instance at 0x19ca8c0>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:17: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________

self = <test_parametrize.TestClass instance at 0x19cd4d0>, a = 3, b = 2

    def test_zerodivision(self, a, b):
>       pytest.raises(ZeroDivisionError, "a/b")
E       Failed: DID NOT RAISE
def pytest_generate_tests(metafunc):
    for funcargs in getattr(metafunc.function, 'funcarglist', ()):
        metafunc.addcall(funcargs=funcargs)

# actual test code
class TestClass:
    @params([dict(a=1, b=2), dict(a=3, b=3), ])
    def test_equals(self, a, b):
        assert a == b

    @params([dict(a=1, b=0), dict(a=3, b=2)])
    def test_zerodivision(self, a, b):
        pytest.raises(ZeroDivisionError, "a/b")
test_parametrize2.py:19: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________

self = <test_parametrize2.TestClass instance at 0x1d02170>, a = 3, b = 2

    @params([dict(a=1, b=0), dict(a=3, b=2)])
    def test_zerodivision(self, a, b):
>       pytest.raises(ZeroDivisionError, "a/b")
E       Failed: DID NOT RAISE
pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7', 'python2.8']

def pytest_generate_tests(metafunc):
    if 'python1' in metafunc.funcargnames:
        assert 'python2' in metafunc.funcargnames
        for obj in metafunc.function.multiarg.kwargs['obj']:
            for py1 in pythonlist:
                for py2 in pythonlist:
                    metafunc.addcall(id="%s-%s-%s" % (py1, py2, obj),
                                     param=(py1, py2, obj))

@py.test.mark.multiarg(obj=[42, {}, {1:3},])
def test_basic_objects(python1, python2, obj):
    python1.dumps(obj)
    python2.load_and_is_true("obj == %s" % obj)

def pytest_funcarg__python1(request):
    tmpdir = request.getfuncargvalue("tmpdir")
    picklefile = tmpdir.join("data.pickle")
    return Python(request.param[0], picklefile)

def pytest_funcarg__python2(request):
    python1 = request.getfuncargvalue("python1")
    return Python(request.param[1], python1.picklefile)

def pytest_funcarg__obj(request):
    return request.param[2]

class Python:
    def __init__(self, version, picklefile):
        self.pythonpath = py.path.local.sysfind(version)
        if not self.pythonpath:
            py.test.skip("%r not found" % (version,))
        self.picklefile = picklefile

    def dumps(self, obj):
        dumpfile = self.picklefile.dirpath("dump.py")
        dumpfile.write(py.code.Source("""
            import pickle
            f = open(%r, 'wb')
            s = pickle.dump(%r, f)
            f.close()
        """ % (str(self.picklefile), obj)))
        py.process.cmdexec("%s %s" % (self.pythonpath, dumpfile))

    def load_and_is_true(self, expression):
        loadfile = self.picklefile.dirpath("load.py")
        loadfile.write(py.code.Source("""
            import pickle
            f = open(%r, 'rb')
            obj = pickle.load(f)
            f.close()
            res = eval(%r)
            if not res:
                raise SystemExit(1)
        """ % (str(self.picklefile), expression)))
        print (loadfile)
        py.process.cmdexec("%s %s" % (self.pythonpath, loadfile))
This would tell py.test to not recurse into typical subversion or sphinx-build directories or into any tmp prefixed directory.
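The ini-file being referred to is not reproduced above; a sketch using the norecursedirs option documented in the basic test configuration chapter below:

# content of setup.cfg or tox.ini (a sketch)
[pytest]
norecursedirs = .svn _build tmp*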
This would make py.test look for check_ prefixes in Python filenames, Check prefixes in classes and check prefixes in functions and classes. For example, if we have:
# content of check_myapp.py
class CheckMyApp:
    def check_simple(self):
        pass
    def check_complex(self):
        pass
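The ini-file settings that switch to these check_ naming conventions would use the python_files/python_classes/python_functions options from the reference chapter; a sketch:

# content of pytest.ini (a sketch of changed naming conventions)
[pytest]
python_files = check_*.py
python_classes = Check
python_functions = check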
which would run the respective test module. Like with other options, through an ini-file and the addopts option you can make this change more permanently:
# content of pytest.ini
[pytest]
addopts = --pyargs
Now a simple invocation of py.test NAME will check if NAME exists as an importable package/module and otherwise treat it as a filesystem path.
class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super(YamlItem, self).__init__(name, parent)
        self.spec = spec

    def runtest(self):
        for name, value in self.spec.items():
            # some custom test execution (dumb example follows)
            if name != value:
                raise YamlException(self, name, value)

    def repr_failure(self, excinfo):
        """ called when self.runtest() raises an exception. """
        if isinstance(excinfo.value, YamlException):
            return "\n".join([
                "usecase execution failed",
                "   spec failed: %r: %r" % excinfo.value.args[1:3],
                "   no further details known at this point."
            ])

    def reportinfo(self):
        return self.fspath, 0, "usecase: %s" % self.name

class YamlException(Exception):
    """ custom exception for error reporting. """
and if you installed PyYAML or a compatible YAML-parser you can now execute the test specification:
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 2 items

test_simple.yml .F

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
   spec failed: 'some': 'other'
   no further details known at this point.
==================== 1 failed, 1 passed in 0.24 seconds ====================
You get one dot for the passing sub1: sub1 check and one failure. Obviously in the above conftest.py you'll want to implement a more interesting interpretation of the yaml-values. You can easily write your own domain specific testing language this way.

Note: repr_failure(excinfo) is called for representing test failures. If you create custom collection nodes you can return an error representation string of your choice. It will be reported as a (red) string.
reportinfo() is used for representing the test location and is also consulted for reporting in verbose mode:
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
collecting ... collected 2 items

test_simple.yml:1: usecase: ok PASSED
test_simple.yml:1: usecase: hello FAILED

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
   spec failed: 'some': 'other'
   no further details known at this point.
==================== 1 failed, 1 passed in 0.07 seconds ====================
While developing your custom test collection and execution it's also interesting to just look at the collection tree:
nonpython $ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 2 items
<YamlFile 'test_simple.yml'>
  <YamlItem 'ok'>
  <YamlItem 'hello'>

=============================  in 0.07 seconds =============================
CHAPTER THREE: PY.TEST REFERENCE DOCUMENTATION
to get an overview on the globally available helpers.

pytest: unit and functional testing with Python.

pytest.main(args=None, plugins=None)
    return exit code integer, after an in-process testing run with the given command line arguments, preloading an optional list of passed in plugin objects.

pytest.fail(msg='', pytrace=True)
    explicitly fail a currently-executing test with the given message. If @pytrace is not True the msg represents the full failure information.

pytest.skip(msg='')
    skip an executing test with the given message. Note: it's usually better to use the py.test.mark.skipif marker to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. See the pytest_skipping plugin for details.

pytest.exit(msg)
    exit testing process as if KeyboardInterrupt was triggered.

pytest.importorskip(modname, minversion=None)
    return imported module if it has a higher __version__ than the optionally specified minversion - otherwise call py.test.skip() with a message detailing the mismatch.

pytest.raises(ExpectedException, *args, **kwargs)
    assert that a code block/function call raises @ExpectedException and raise a failure exception otherwise. If using Python 2.5 or above, you may use this function as a context manager:
>>> with raises(ZeroDivisionError):
...     1/0
pytest.xfail(reason='')
    xfail an executing test or setup functions with the given reason.

pytest.deprecated_call(func, *args, **kwargs)
    assert that calling func(*args, **kwargs) triggers a DeprecationWarning.
All modifications will be undone after the requesting test function has finished. The raising parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation has no target.

recwarn
    Return a WarningsRecorder instance that provides these methods:

    * pop(category=None): return last warning matching the category.
    * clear(): clear list of warnings

    See http://docs.python.org/library/warnings.html for information on warning categories.
This will display command line and configuration file settings which were registered by installed plugins.
Searching stops when the first [pytest] section is found. There is no merging of configuration values from multiple files. Example:
py.test path/to/testdir
If no argument is provided to a py.test run, the current working directory is used to start the search.
From now on, running py.test will add the specied options.
Default is to add no options.

norecursedirs
    Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide if to recurse into it. Pattern matching characters:
*        matches everything
?        matches any single character
[seq]    matches any character in seq
[!seq]   matches any char not in seq
Default patterns are .* _* CVS {args}. Setting a norecursedirs value replaces the default. Here is an example of how to avoid certain directories:
# content of setup.cfg [pytest] norecursedirs = .svn _build tmp*
This would tell py.test to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.

python_files
    One or more glob-style file patterns determining which python files are considered as test modules.

python_classes
    One or more name prefixes determining which classes are considered as test classes.

python_functions
    One or more name prefixes determining which functions and methods are considered as test functions.

See change naming conventions for examples.
to assert that your function returns a certain value. If this assertion fails you will see the return value of the function call:
$ py.test test_assert1.py
============================= test session starts ==============================
platform linux2 -- Python 2.6.6 -- pytest-2.1.0.dev6
collecting ... collected 1 items

test_assert1.py F

=================================== FAILURES ===================================
________________________________ test_function _________________________________

    def test_function():
>       assert f() == 4
E       assert 3 == 4
E        +  where 3 = f()
py.test has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary operators. (See Demo of Python failure reports with py.test). This allows you to use the idiomatic python constructs without boilerplate code while not losing introspection information. However, if you specify a message with the assertion like this:
assert a % 2 == 0, "value was odd, should be even"
then no assertion introspection takes place at all and the message will simply be shown in the traceback. See Advanced assertion introspection for more information on assertion introspection.
and if you need to have access to the actual exception info you may use:
with pytest.raises(RuntimeError) as excinfo:
    def f():
        f()
    f()

# do checks related to excinfo.type, excinfo.value, excinfo.traceback
If you want to write test code that works on Python2.4 as well, you may also use two other ways to test for an expected exception:
pytest.raises(ExpectedException, func, *args, **kwargs)
pytest.raises(ExpectedException, "func(*args, **kwargs)")
both of which execute the specified function with args and kwargs and assert that the given ExpectedException is raised. The reporter will provide you with helpful output in case of failures such as no exception or wrong exception.
Special comparisons are done for a number of cases:

- comparing long strings: a context diff is shown
- comparing long sequences: first failing indices
- comparing dicts: different entries

See the reporting demo for many more examples.
you can run the test module and get the custom output defined in the conftest file:
$ py.test -q test_foocompare.py
collecting ... collected 1 items
F
=================================== FAILURES ===================================
_________________________________ test_compare _________________________________

    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       assert Comparing Foo instances:
E            vals: 1 != 2
If this assertion fails then the re-evaluation will probably succeed! This is because f.read() will return an empty string when it is called the second time during the re-evaluation. However, it is easy to rewrite the assertion and avoid any trouble:
content = f.read()
assert content != ...
All assert introspection can be turned off by passing --assert=plain.

New in version 2.1: Add assert rewriting as an alternate introspection technique.

Changed in version 2.1: Introduce the --assert option. Deprecate --no-assert and --nomagic.
A test function may be invoked multiple times, in which case we speak of parametrized testing. This can be very useful if you want to test e.g. against different database backends or with multiple numerical argument sets and want to reuse the same set of test functions.

Basic injection example

Let's look at a simple self-contained test module:
# content of ./test_simplefactory.py
def pytest_funcarg__myfuncarg(request):
    return 42

def test_function(myfuncarg):
    assert myfuncarg == 17
This test function needs an injected object named myfuncarg. py.test will discover and call the factory named pytest_funcarg__myfuncarg within the same module in this case. Running the test looks like this:
$ py.test test_simplefactory.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 1 items

test_simplefactory.py F

================================= FAILURES =================================
______________________________ test_function _______________________________

myfuncarg = 42

    def test_function(myfuncarg):
>       assert myfuncarg == 17
E       assert 42 == 17
This means that indeed the test function was called with a myfuncarg argument value of 42 and the assert fails. Here is how py.test comes to call the test function this way:

1. py.test finds the test_function because of the test_ prefix. The test function needs a function argument named myfuncarg. A matching factory function is discovered by looking for the name pytest_funcarg__myfuncarg.
2. pytest_funcarg__myfuncarg(request) is called and returns the value for myfuncarg.
3. the test function can now be called: test_function(42). This results in the above exception because of the assertion mismatch.

Note that if you misspell a function argument or want to use one that isn't available, you'll see an error with a list of available function arguments. You can always issue:
py.test --funcargs test_simplefactory.py
to see available function arguments (which you can also think of as resources).
Basic generated test example

Let's consider a test module which uses the pytest_generate_tests hook to generate several calls to the same test function:
# content of test_example.py
def pytest_generate_tests(metafunc):
    if "numiter" in metafunc.funcargnames:
        for i in range(10):
            metafunc.addcall(funcargs=dict(numiter=i))

def test_func(numiter):
    assert numiter < 9
Running this:
$ py.test test_example.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 10 items

test_example.py .........F

================================= FAILURES =================================
_______________________________ test_func[9] _______________________________

numiter = 9

    def test_func(numiter):
>       assert numiter < 9
E       assert 9 < 9
Note that the pytest_generate_tests(metafunc) hook is called during the test collection phase which is separate from the actual test running. Let's just look at what is collected:
$ py.test --collectonly test_example.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 10 items
<Module 'test_example.py'>
  <Function 'test_func[0]'>
  <Function 'test_func[1]'>
  <Function 'test_func[2]'>
  <Function 'test_func[3]'>
  <Function 'test_func[4]'>
  <Function 'test_func[5]'>
  <Function 'test_func[6]'>
  <Function 'test_func[7]'>
  <Function 'test_func[8]'>
  <Function 'test_func[9]'>

=============================  in 0.00 seconds =============================
If you want to select only the run with the value 7 you could do:
$ py.test -v -k 7 test_example.py   # or -k test_func[7]
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
collecting ... collected 10 items

test_example.py:6: test_func[7] PASSED

======================== 9 tests deselected by '7' =========================
================== 1 passed, 9 deselected in 0.01 seconds ==================
You might want to look at more parametrization examples.

The metafunc object

metafunc objects are passed to the pytest_generate_tests hook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined:

- metafunc.funcargnames: set of required function arguments for given function
- metafunc.function: underlying python test function
- metafunc.cls: class object where the test function is defined in, or None.
- metafunc.module: the module object where the test function is defined in.
- metafunc.config: access to command line opts and general config

Metafunc.addcall(funcargs=None, id=_notexists, param=_notexists)
    add a new call to the underlying test function during the collection phase of a test run. Note that request.addcall() is called during the test collection phase prior to and independently of actual test execution. Therefore you should perform setup of resources in a funcarg factory which can be instrumented with the param.

    Parameters:
    - funcargs: argument keyword dictionary used when invoking the test function.
    - id: used for reporting and identification purposes. If you don't supply an id, the length of the current list of calls to the test function will be used.
    - param: will be exposed to a later funcarg factory invocation through the request.param attribute. It allows to defer test fixture setup activities to when an actual test is run.
def setup_module(module):
    """ setup up any state specific to the execution of the given module. """

def teardown_module(module):
    """ teardown any state that was previously setup with a setup_module method. """
If you would rather define test functions directly at module level you can also use the following functions to implement fixtures:
def setup_function(function):
    """ setup up any state tied to the execution of the given function.
    Invoked for every test function in the module.
    """

def teardown_function(function):
    """ teardown any state that was previously setup with a setup_function call. """
Note that it is possible that setup/teardown pairs are invoked multiple times per testing process.
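pytest also supports analogous xUnit-style hooks on test classes; they are not shown in this excerpt, so the following is only a sketch of their usual shape:

class TestSomething:
    def setup_method(self, method):
        """ setup any state tied to the execution of the given method. """

    def teardown_method(self, method):
        """ teardown any state that was previously setup with setup_method. """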
and running this module will show you precisely the output of the failing function and hide the other one:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 2 items

test_module.py .F

================================= FAILURES =================================
________________________________ test_func2 ________________________________
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x238c410>
==================== 1 failed, 1 passed in 0.02 seconds ====================
The readouterr() call snapshots the output so far - and capturing will be continued. After the test function finishes the original streams will be restored. Using capsys this way frees your test from having to care about setting/resetting output streams and also interacts well with py.test's own per-test capturing. If you want to capture on fd level you can use the capfd function argument which offers the exact same interface.
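For illustration, a test using capsys might look like this minimal sketch (the printed strings are arbitrary):

import sys

def test_output(capsys):
    print("hello")
    sys.stderr.write("world\n")
    out, err = capsys.readouterr()   # snapshot of captured stdout/stderr so far
    assert out == "hello\n"
    assert err == "world\n"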
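The following sentence refers to a test that monkeypatches os.path.expanduser; a minimal sketch of such a test (the mocked return value is made up):

import os.path

def test_expanduser(monkeypatch):
    def mockreturn(path):
        return "/abc"   # illustrative replacement value
    monkeypatch.setattr(os.path, "expanduser", mockreturn)
    assert os.path.expanduser("~/sub") == "/abc"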
After the test function finishes the os.path.expanduser modification will be undone.
or use the package in develop/in-place mode with a checkout of the pytest-xdist repository
python setup.py develop
Especially for longer running tests or tests requiring a lot of I/O this can lead to considerable speed ups.

Running tests in a Python subprocess

To instantiate a Python-2.4 subprocess and send tests to it, you may type:
py.test -d --tx popen//python=python2.4
This will start a subprocess which is run with the python2.4 Python interpreter, found in your system binary lookup path. If you prefix the tx option value like this:
py.test -d --tx 3*popen//python=python2.4
then three subprocesses will be created and the tests will be distributed to them and run simultaneously.

Running tests in looponfailing mode

For refactoring a project with a medium or large test suite you can use the looponfailing mode. Simply add the -f option:
py.test -f
and py.test will run your tests. Assuming you have failures it will then wait for file changes and re-run the failing test set. File changes are detected by looking at the looponfailroots root directories and all of their contents (recursively). If the default for this value does not work for you, you can change it in your project by setting a configuration option:
# content of a pytest.ini, setup.cfg or tox.ini file
[pytest]
looponfailroots = mypkg testdir
This would lead to only looking for file changes in the respective directories, specified relative to the ini-file's directory.
Sending tests to remote SSH accounts

Suppose you have a package mypkg which contains some tests that you can successfully run locally. And you also have an ssh-reachable machine myhost. Then you can ad-hoc distribute your tests by typing:
py.test -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg
This will synchronize your mypkg package directory with a remote ssh account and then collect and run your tests at the remote side. You can specify multiple --rsyncdir directories to be sent to the remote side.

Sending tests to remote Socket Servers

Download the single-module socketserver.py Python program and run it like this:
python socketserver.py
It will tell you that it starts listening on the default port. You can now on your home machine specify this new socket host with something like this:
py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
Running tests on many platforms at once

The basic command to run tests on multiple platforms is:
py.test --dist=each --tx=spec1 --tx=spec2
If you specify a Windows host, an OS X host and a Linux environment this command will send each test to all platforms - and report back failures from all platforms at once. The specification strings use the xspec syntax.

Specifying test exec environments in an ini file

pytest (since version 2.0) supports ini-style configuration. For example, you could make running with three subprocesses your default:
[pytest]
addopts = -n3
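Presumably you can put default test execution environments into addopts in the same way; a sketch (the host name and interpreter paths are placeholders):

[pytest]
addopts = --dist=each --tx ssh=myhost//python=python2.5 --tx ssh=myhost//python=python2.6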
and then a plain py.test invocation will run tests in each of the environments.

Specifying rsync dirs in an ini-file

In a tox.ini or setup.cfg file in your root project directory you may specify directories to include in or exclude from synchronisation:
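A sketch of such a configuration (the directory names are examples):

[pytest]
rsyncdirs = . mypkg helperpkg
rsyncignore = .hg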
These directory specifications are relative to the directory where the configuration file was found.
Running this would result in a passed test except for the last assert 0 line which we use to look at values:
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 1 items
test_tmpdir.py F
================================= FAILURES =================================
_____________________________ test_create_file _____________________________

tmpdir = local('/tmp/pytest-11/test_create_file0')

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
>       assert 0
E       assert 0
You can override the default temporary directory setting like this:
py.test --basetemp=mydir
When distributing tests on the local machine, py.test takes care to configure a basetemp directory for the subprocesses such that all temporary data lands below a single per-test run basetemp directory.
3.10 skip and xfail: dealing with tests that can not succeed
If you have test functions that cannot be run on certain platforms or that you expect to fail you can mark them accordingly or you may call helper functions during execution of setup or test functions. A skip means that you expect your test to pass unless a certain configuration or condition (e.g. wrong Python interpreter, missing dependency) prevents it from running. And xfail means that your test can run but you expect it to fail because there is an implementation problem. py.test counts and lists skip and xfail tests separately. However, detailed information about skipped/xfailed tests is not shown by default to avoid cluttering the output. You can use the -r option to see details corresponding to the short letters shown in the test progress:
py.test -rxs # show extra info on skips and xfails
During test function setup the skipif condition is evaluated by calling eval('sys.version_info >= (3,0)', namespace). (New in version 2.0.2) The namespace contains all the module globals of the test function so that you can for example check for versions of a module you are using:
import mymodule

@pytest.mark.skipif("mymodule.__version__ < '1.2'")
def test_function():
    ...
The test function will not be run (skipped) if mymodule is below the specified version. The reason for specifying the condition as a string is mainly that py.test can report a summary of skip conditions. For information on the construction of the namespace see evaluation of skipif/xfail conditions. You can of course create a shortcut for your conditional skip decorator at module level like this:
win32only = pytest.mark.skipif("sys.platform != 'win32'")

@win32only
def test_function():
    ...
The pytestmark special name tells py.test to apply it to each test function in the class. If your code targets python2.6 or above you can more naturally use the skipif decorator (and any other marker) on classes:
@pytest.mark.skipif("sys.platform == 'win32'")
class TestPosixCalls:
    def test_function(self):
        "will not be setup or run under 'win32' platform"
Using multiple skipif decorators on a single function is generally fine - it means that if any of the conditions apply the function execution will be skipped.
This test will be run but no traceback will be reported when it fails. Instead terminal reporting will list it in the expected to fail or unexpectedly passing sections. By specifying on the commandline:
pytest --runxfail
you can force the running and reporting of an xfail marked test as if it weren't marked at all. As with skipif you can also mark your expectation of a failure on a particular platform:
@pytest.mark.xfail("sys.version_info >= (3,0)")
def test_function():
    ...
You can furthermore prevent the running of an xfail test or specify a reason such as a bug ID or similar. Here is a simple test file with several usages:
import pytest
xfail = pytest.mark.xfail

@xfail
def test_hello():
    assert 0

@xfail(run=False)
def test_hello2():
    assert 0
@xfail("hasattr(os, 'sep')")
def test_hello3():
    assert 0

@xfail(reason="bug 110")
def test_hello4():
    assert 0

@xfail(pytest.__version__[0] != "17")
def test_hello5():
    assert 0

def test_hello6():
    pytest.xfail("reason")
If docutils cannot be imported here, this will lead to a skip outcome of the test. You can also skip based on the version number of a library:
docutils = pytest.importorskip("docutils", minversion="0.3")
The version will be read from the specified module's __version__ attribute.
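As a small usage sketch (the attribute check is illustrative), importorskip is typically called at module import time:

import pytest

# skip all tests in this module if docutils (>= 0.3) is not installed
docutils = pytest.importorskip("docutils", minversion="0.3")

def test_docutils_available():
    # only runs when docutils was importable in a sufficient version
    assert hasattr(docutils, "__version__")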
This will set the function attribute webtest to a MarkInfo instance. You can also specify parametrized metadata like this:
# content of test_mark.py
import pytest

@pytest.mark.webtest(firefox=30)
def test_receive():
    pass
This is equivalent to directly applying the decorator to the two test functions. To remain compatible with Python2.5 you can also set a pytestmark attribute on a TestClass like this:
import pytest

class TestClass:
    pytestmark = pytest.mark.webtest
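The sentence that follows presumably refers to assigning the marker at module level, along these lines:

# content of test_module.py -- module-level marker (sketch)
import pytest

pytestmark = pytest.mark.webtest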
in which case it will be applied to all functions and methods defined in the module.
And you can also run all tests except the ones that match the keyword:
$ py.test -k-webtest
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 4 items
===================== 4 tests deselected by -webtest =====================
======================= 4 deselected in 0.01 seconds =======================
will set a slowtest MarkInfo object on the test_function object.

class _pytest.mark.MarkDecorator(name, args=None, kwargs=None)
    A decorator for test functions and test classes. When applied it will create MarkInfo objects which may be retrieved by hooks as item keywords. MarkDecorator instances are often created like this:
mark1 = py.test.mark.NAME              # simple MarkDecorator
mark2 = py.test.mark.NAME(name1=value) # parametrized MarkDecorator
class _pytest.mark.MarkInfo(name, args, kwargs)
    Marking object created by MarkDecorator instances.
    args
        positional argument list, empty if none specified
    kwargs
        keyword argument dictionary, empty if nothing specified
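For illustration, a conftest.py hook could look up such keyword entries on a collected item; a sketch (the marker name "webtest" is just an example):

# content of conftest.py -- sketch of reading mark data from item keywords
def pytest_runtest_setup(item):
    if "webtest" in item.keywords:
        info = item.keywords["webtest"]
        # info.args and info.kwargs hold the data given to the mark decorator
        print("webtest mark: args=%r kwargs=%r" % (info.args, info.kwargs))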
The recwarn function argument provides these methods:

pop(category=None): return last warning matching the category.
clear(): clear the list of warnings
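A short sketch of a test using recwarn:

import warnings

def test_hello(recwarn):
    warnings.warn("hello", DeprecationWarning)
    w = recwarn.pop(DeprecationWarning)   # last warning matching the category
    assert issubclass(w.category, DeprecationWarning)
    assert "hello" in str(w.message)

The run shown below under "Running it yields:" comes from a unittest-style test module; a sketch consistent with that failure report (the setUp print is inferred from the captured output):

# content of test_unittest.py -- sketch reconstructed from the report below
import unittest

class MyTest(unittest.TestCase):
    def setUp(self):
        print("hello")   # captured by py.test

    def test_method(self):
        x = 1
        self.assertEquals(x, 3)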
Running it yields:
$ py.test test_unittest.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 1 items
test_unittest.py F
================================= FAILURES =================================
____________________________ MyTest.test_method ____________________________

self = <test_unittest.MyTest testMethod=test_method>

    def test_method(self):
        x = 1
>       self.assertEquals(x, 3)
E       AssertionError: 1 != 3

test_unittest.py:8: AssertionError
----------------------------- Captured stdout ------------------------------
hello
========================= 1 failed in 0.02 seconds =========================
3.14.1 Usage
type:
py.test # instead of nosetests
and you should be able to run your nose style tests and make use of py.test's capabilities.
on the command line. You can also trigger running of doctests from docstrings in all python modules (including regular python test modules):
py.test --doctest-modules
You can make these changes permanent in your project by putting them into a pytest.ini file like this:
# content of pytest.ini
[pytest]
addopts = --doctest-modules
then you can just invoke py.test without command line options:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
collecting ... collected 1 items
mymodule.py .
========================= 1 passed in 0.40 seconds =========================
CHAPTER FOUR
Note: If you have conftest.py files which do not reside in a python package directory (i.e. one containing an __init__.py) then "import conftest" can be ambiguous because there might be other conftest.py files as well on your PYTHONPATH or sys.path. It is thus good practise for projects to either put conftest.py under a package scope or to never import anything from a conftest.py file.
If a plugin is installed, py.test automatically finds and integrates it; there is no need to activate it. Here is a list of known plugins:

pytest-capturelog: to capture and assert about messages from the logging module
pytest-xdist: to distribute tests to CPUs and remote hosts, looponfailing mode, see also xdist: pytest distributed testing plugin
pytest-cov: coverage reporting, compatible with distributed testing
pytest-pep8: a --pep8 option to enable PEP8 compliance checking
oejskit: a plugin to run javascript unittests in live browsers (version 0.8.9 not compatible with pytest-2.0)

You may discover more plugins by searching for pytest- on pypi.python.org.
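To make a plugin installable by others, a setup.py can register the plugin module through the pytest11 entry point; a minimal sketch (the project and module names mirror the sentence below):

# setup.py -- minimal sketch, names are illustrative
from setuptools import setup

setup(
    name="myproject",
    packages=["myproject"],
    # py.test discovers installable plugins via the "pytest11" entry point group
    entry_points={
        "pytest11": ["myproject = myproject.pluginmodule"],
    },
)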
If a package is installed this way, py.test will load myproject.pluginmodule as a plugin which can define well specified hooks.
When the test module or conftest plugin is loaded the specified plugins will be loaded as well. You can also use a dotted path like this:
pytest_plugins = "myapp.testsupport.myplugin"
If you want to look at the names of existing plugins, use the --traceconfig option.
and will get an extended test header which shows activated plugins and their names. It will also print local plugins aka conftest.py files when they are loaded.
This means that any subsequent try to activate/load the named plugin will not work. See Finding out which plugins are active for how to obtain the name of a plugin.
CHAPTER FIVE
CHAPTER SIX
_pytest.hookspec.pytest_runtest_protocol(item)
    implements the standard runtest_setup/call/teardown protocol including capturing exceptions and calling reporting hooks on the results accordingly. Return boolean True if no further hook implementations should be invoked.

_pytest.hookspec.pytest_runtest_setup(item)
    called before pytest_runtest_call(item).

_pytest.hookspec.pytest_runtest_call(item)
    called to execute the test item.

_pytest.hookspec.pytest_runtest_teardown(item)
    called after pytest_runtest_call.

_pytest.hookspec.pytest_runtest_makereport(item, call)
    return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.
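For illustration, a conftest.py could implement one of these hooks like this minimal sketch:

# content of conftest.py -- sketch of hooking into the runtest protocol
def pytest_runtest_setup(item):
    # called before each test item is run
    print("setting up", item)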
For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs. The _pytest.terminal reporter specifically uses the reporting hook to print information about a test run.
CHAPTER SEVEN
Class
    deprecated attribute, use pytest.Class
File
    deprecated attribute, use pytest.File
Function
    deprecated attribute, use pytest.Function
Instance
    deprecated attribute, use pytest.Instance
Item
    deprecated attribute, use pytest.Item
Module
    deprecated attribute, use pytest.Module
config
    the test config object
fspath
    filesystem path where this node was collected from
listchain()
    return list of all parent collectors up to self, starting from root of collection tree.
name
    a unique name within the scope of the parent
parent
    the parent collector node.
session
    the collection session this node is part of

class _pytest.runner.CallInfo(func, when)
    Result/Exception info of a function invocation.
    excinfo
        None or ExceptionInfo object.
    when
        context of invocation: one of "setup", "call", "teardown", "memocollect"

class _pytest.runner.TestReport(nodeid, location, keywords, outcome, longrepr, when)
    Basic test report object (also used for setup and teardown calls if they fail).
    keywords
        a name -> value dictionary containing all keywords and markers associated with a test invocation.
    location
        a (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module.
    longrepr
        None or a failure representation.
    nodeid
        normalized collection node id
    outcome
        test outcome, always one of "passed", "failed", "skipped".
CHAPTER EIGHT
ep2009-pytest.pdf: 60 minute py.test talk, highlighting unique features and a roadmap (July 2009)

pycon2009-pytest-introduction.zip: slides and files, extended version of the py.test basic introduction, discusses more options, also introduces old-style xUnit setup, looponfailing and other features.

pycon2009-pytest-advanced.pdf: contains a slightly older version of funcargs and distributed testing, compared to the EuroPython 2009 slides.
CHAPTER NINE
You can also go to the Python package index and download and unpack a TAR file:
http://pypi.python.org/pypi/pytest/
in order to work inline with the tools and the lib of your checkout. If this command complains that it could not find the required version of py then you need to use the development pypi repository:
CHAPTER TEN
RELEASE ANNOUNCEMENTS
10.1 py.test 2.1.0: perfected assertions and bug fixes
Welcome to the release of pytest-2.1, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See the improved extensive docs (now also as PDF!) with tested examples here: http://pytest.org/ The single biggest news about this release is perfected assertions, courtesy of Benjamin Peterson. You can now safely use assert statements in test modules without having to worry about side effects or python optimization (-OO) options. This is achieved by rewriting assert statements in test modules upon import, using a PEP302 hook. See http://pytest.org/assert.html#advanced-assertion-introspection for detailed information. The work has been partly sponsored by my company, merlinux GmbH. For further details on bug fixes and smaller enhancements see below. If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or easy_install -U pytest
fix issue 35 - provide PDF doc version and download link from index page
There also is a bugfix release 1.6 of pytest-xdist, the plugin that enables seamless distributed and looponfail testing for Python.

best,
holger krekel
Welcome to pytest-2.0.2, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See the extensive docs with tested examples here: http://pytest.org/ If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or easy_install -U pytest
Many thanks to all issue reporters and people asking questions or complaining, particularly Jurko for his insistence, Laura, Victor and Brianna for helping with improving and Ronny for his general advice.

best,
holger krekel
This will not run the test function if the module's version string does not start with a "1". Note that specifying a string instead of a boolean expression allows py.test to report meaningful information when summarizing a test run as to what conditions led to skipping (or xfail-ing) tests.

- fix issue28 - setup_method and pytest_generate_tests work together. The setup_method fixture method now gets called also for test function invocations generated from the pytest_generate_tests hook.
- fix issue27 - collectonly and keyword-selection (-k) now work together. Also, if you do py.test --collectonly -q you now get a flat list of test ids that you can use to paste to the py.test command line in order to execute a particular test.
- fix issue25 - avoid reported problems with pdb and python3.2/encodings output
- fix issue23 - tmpdir argument now works on Python3.2 and WindowsXP. Starting with Python3.2 os.symlink may be supported. By requiring a newer py lib version the py.path.local() implementation acknowledges this.
- fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular thanks to Laura Creighton who also reviewed parts of the documentation.
- fix slightly wrong output of verbose progress reporting for classes (thanks Amaury)
- more precise (avoiding of) deprecation warnings for node.Class|Function accesses
- avoid std unittest assertion helper code in tracebacks (thanks Ronny)
Many thanks to all issue reporters and people asking questions or complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt for their great coding contributions and many others for feedback and help. best, holger krekel
Welcome to pytest-2.0.0, a major new release of py.test, the rapid easy Python testing tool. There are many new features and enhancements, see below for summary and detailed lists. A lot of long-deprecated code has been removed, resulting in a much smaller and cleaner implementation. See the new docs with examples here: http://pytest.org/2.0.0/index.html A note on packaging: pytest used to be part of the py distribution up until version py-1.3.4 but this has changed now: pytest-2.0.0 only contains py.test related code and is expected to be backward-compatible to existing test code. If you want to install pytest, just type one of:
Many thanks to all issue reporters and people asking questions or complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt for their great coding contributions and many others for feedback and help. best, holger krekel
see http://pytest.org/2.0.0/usage.html for details.

- new and better reporting information in assert expressions if comparing lists, sequences or strings. see http://pytest.org/2.0.0/assert.html#newreport
- new configuration through ini-files (setup.cfg or tox.ini recognized), for example:
[pytest]
norecursedirs = .hg data*  # don't ever recurse in such dirs
addopts = -x --pyargs      # add these command line options by default
see http://pytest.org/2.0.0/customize.html

- improved standard unittest support. In general py.test should now be better able to run custom unittest.TestCases like twisted trial or Django based TestCases. Also you can now run the tests of an installed unittest package with py.test:
py.test --pyargs unittest
- new -q option which decreases verbosity and prints a more nose/unittest-style dot output.
- many, many more detailed improvements
10.5.2 Fixes
- fix issue126 - introduce py.test.set_trace() to trace execution via PDB during the running of tests even if capturing is ongoing.
- fix issue124 - make reporting more resilient against tests opening files on filedescriptor 1 (stdout).
- fix issue109 - sibling conftest.py files will not be loaded. (and Directory collectors cannot be customized anymore from a Directory's conftest.py - this needs to happen at least one level up).
- fix issue88 (finding custom test nodes from command line arg)
- fix issue93 - stdout/stderr is captured while importing conftest.py
- fix bug: unittest collected functions now also can have pytestmark applied at class/module level
CHAPTER ELEVEN
CHANGELOG HISTORY
11.1 Changes between 2.0.3 and 2.1.0.DEV
- fix issue53 - call nose-style setup functions with correct ordering
- fix issue58 and issue59: new assertion code fixes
- merge Benjamin's assertionrewrite branch: now assertions for test modules on python 2.6 and above are done by rewriting the AST and saving the pyc file before the test module is imported. see doc/assert.txt for more info.
- fix issue43: improve doctests with better traceback reporting on unexpected exceptions
- fix issue47: timing output in junitxml for test cases is now correct
- fix issue48: typo in MarkInfo repr leading to exception
- fix issue49: avoid confusing error when initialization partially fails
- fix issue44: env/username expansion for junitxml file path
- show releaselevel information in test runs for pypy
- reworked doc pages for better navigation and PDF generation
- report KeyboardInterrupt even if interrupted during session startup
- fix issue 35 - provide PDF doc version and download link from index page
This will not run the test function if the module's version string does not start with a "1". Note that specifying a string instead of a boolean expression allows py.test to report meaningful information when summarizing a test run as to what conditions led to skipping (or xfail-ing) tests.

- fix issue28 - setup_method and pytest_generate_tests work together. The setup_method fixture method now gets called also for test function invocations generated from the pytest_generate_tests hook.
- fix issue27 - collectonly and keyword-selection (-k) now work together. Also, if you do py.test --collectonly -q you now get a flat list of test ids that you can use to paste to the py.test command line in order to execute a particular test.
- fix issue25 - avoid reported problems with pdb and python3.2/encodings output
- fix issue23 - tmpdir argument now works on Python3.2 and WindowsXP. Starting with Python3.2 os.symlink may be supported. By requiring a newer py lib version the py.path.local() implementation acknowledges this.
- fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular thanks to Laura Creighton who also reviewed parts of the documentation.
- fix slightly wrong output of verbose progress reporting for classes (thanks Amaury)
- more precise (avoiding of) deprecation warnings for node.Class|Function accesses
- avoid std unittest assertion helper code in tracebacks (thanks Ronny)
- improve behaviour/warnings when running on top of python -OO (assertions and docstrings are turned off, leading to potential false positives)
- introduce a pytest_cmdline_processargs(args) hook to allow dynamic computation of command line arguments. This fixes a regression because py.test prior to 2.0 allowed to set command line options from conftest.py files which so far pytest-2.0 only allowed from ini-files.
- fix issue7: assert failures in doctest modules. unexpected failures in doctests will not generally show nicer, i.e. within the doctest failing context.
- fix issue9: setup/teardown functions for an xfail-marked test will report as xfail if they fail but report as normally passing (not xpassing) if they succeed. This only is true for direct setup/teardown invocations because teardown_class/teardown_module cannot closely relate to a single test.
- fix issue14: no logging errors at process exit
- refinements to collecting output on non-ttys
- refine internal plugin registration and --traceconfig output
- introduce a mechanism to prevent/unregister plugins from the command line, see http://pytest.org/plugins.html#cmdunregister
- activate resultlog plugin by default
- fix regression wrt yielded tests which due to the collection-before-running semantics were not setup as with pytest 1.3.4. Note, however, that the recommended and much cleaner way to do test parametrization remains the pytest_generate_tests mechanism, see the docs.
- remove py.test.collect.Directory (follows from a major refactoring and simplification of the collection process)
- majorly reduce py.test core code, shift function/python testing to its own plugin
- fix issue88 (finding custom test nodes from command line arg)
- refine tmpdir creation, will now create basenames better associated with test names (thanks Ronny)
- xpass (unexpected pass) tests don't cause exitcode!=0
- fix issue131 / issue60 - importing doctests in __init__ files used as namespace packages
- fix issue93 - stdout/stderr is captured while importing conftest.py
- fix bug: unittest collected functions now also can have pytestmark applied at class/module level
- add ability to use "class" level for the cached_setup helper
- fix strangeness: mark.* objects are now immutable, create new instances
(thanks Ronny Pfannschmidt) Funcarg factories can now dynamically apply a marker to a test invocation. This is for example useful if a factory provides parameters to a test which are expected-to-fail:
def pytest_funcarg__arg(request):
    request.applymarker(py.test.mark.xfail(reason="flaky config"))
    ...

def test_function(arg):
    ...
- improved error reporting on collection and import errors. This makes use of a more general mechanism, namely that for custom test item/collect nodes node.repr_failure(excinfo) is now uniformly called so that you can override it to return a string error representation of your choice which is going to be reported as a (red) string.
- introduce a --junitprefix=STR option to prepend a prefix to all reports in the junitxml file.
97
- make initial conftest discovery ignore "--"-prefixed arguments
- fix resultlog plugin when used in a multicpu/multihost xdist situation (thanks Jakub Gustak)
- perform distributed testing related reporting in the xdist-plugin rather than having dist-related code in the generic py.test distribution
- fix homedir detection on Windows
- ship distribute_setup.py version 0.6.13
- fix issue94: make reporting more robust against bogus source code (and internally be more careful when presenting unexpected byte sequences)
to prevent even a collection try of any tests in symlinked dirs.
- new pytest_pycollect_makemodule(path, parent) hook for allowing customization of the Module collection object for a matching test module.
- extend and refine the xfail mechanism:
  @py.test.mark.xfail(run=False) - do not run the decorated test
  @py.test.mark.xfail(reason="...") - prints the reason string in xfail summaries
  specifying --runxfail on the command line virtually ignores xfail markers
- expose (previously internal) commonly useful methods:
  py.io.get_terminal_width() -> return terminal width
  py.io.ansi_print(...) -> print colored/bold text on linux/win32
  py.io.saferepr(obj) -> return limited representation string
- expose test outcome related exceptions as py.test.skip.Exception, py.test.raises.Exception etc., useful mostly for plugins doing special outcome interpretation/tweaking
- (issue85) fix junitxml plugin to handle tests with non-ascii output
- fix/refine python3 compatibility (thanks Benjamin Peterson)
- fixes for making the jython/win32 combination work, note however: jython2.5.1/win32 does not provide a command line launcher, see http://bugs.jython.org/issue1491 . See pylib install documentation for how to work around.
- fixes for handling of unicode exception values and unprintable objects (issue87)
- fix unboundlocal error in assertionold code (issue86)
- improve documentation for looponfailing
- refine IO capturing: stdin-redirect pseudo-file now has a NOP close() method
- ship distribute_setup.py version 0.6.10
- added links to the new capturelog and coverage plugins
# remove "*.pyc" and "*$py.class" (jython) files -e .swp -e .cache # also remove files with these extensions -s # remove "build" and "dist" directory next to setup.py files -d # also remove empty directories -a # synonym for "-s -d -e pip-log.txt" -n # dry run, only show what would be removed
- add a new option: py.test --funcargs which shows available funcargs and their help strings (docstrings on their respective factory function) for a given test path
- display a short and concise traceback if a funcarg lookup fails
- early-load conftest.py files in non-dot first-level sub directories. allows to conveniently keep and access test-related options in a test subdir and still add command line options.
- fix issue67: new super-short traceback-printing option: --tb=line will print a single line for each failing (python) test indicating its filename, lineno and the failure value
- fix issue78: always call python-level teardown functions even if the according setup failed. This includes refinements for calling setup_module/class functions which will now only be called once instead of the previous behaviour where they'd be called multiple times if they raise an exception (including a Skipped exception). Any exception will be recorded and associated with all tests in the according module/class scope.
- fix issue63: assume <40 columns to be a bogus terminal width, default to 80
- fix pdb debugging to be in the correct frame on raises-related errors
- update apipkg.py to fix an issue where recursive imports might unnecessarily break importing
- fix plugin links
- allow pytest_generate_tests to be defined in classes as well
- deprecate usage of the disabled attribute in favour of pytestmark
- deprecate definition of Directory, Module, Class and Function nodes in conftest.py files. Use pytest collect hooks instead.
- collection/item node specific runtest/collect hooks are only called exactly on matching conftest.py files, i.e. ones which are exactly below the filesystem path of an item
- change: the first pytest_collect_directory hook to return something will now prevent further hooks to be called.
- change: figleaf plugin now requires --figleaf to run. Also change its long command line options to be a bit shorter (see py.test -h).
- change: pytest doctest plugin is now enabled by default and has a new option --doctest-glob to set a pattern for file matches.
- change: remove internal py._* helper vars, only keep py._pydir
- robustify capturing to survive if custom pytest_runtest_setup code failed and prevented the capturing setup code from running.
- make py.test.* helpers provided by default plugins visible early - works transparently both for pydoc and for interactive sessions which will regularly see e.g. py.test.mark and py.test.importorskip.
- simplify internal plugin manager machinery
- simplify internal collection tree by introducing a RootCollector node
- fix assert reinterpretation that sees a call containing keyword=...
- fix issue66: invoke pytest_sessionstart and pytest_sessionfinish hooks on slaves during dist-testing, report module/session teardown hooks correctly.
- fix issue65: properly handle dist-testing if no execnet/py lib is installed remotely.
- skip some install-tests if no execnet is available
- fix docs, fix internal bin/ script generation
- fix py.test dist-testing to work with execnet >= 1.0.0b4
- re-introduce py.test.cmdline.main() for better backward compatibility
- svn paths: fix a bug with path.check(versioned=True) for svn paths, allow '%' in svn paths, make svnwc.update() default to interactive mode like in 1.0.x and add svnwc.update(interactive=False) to inhibit interaction.
- refine distributed tarball to contain test and no pyc files
- try harder to have deprecation warnings for py.compat.* accesses report a correct location
- remove py.rest tool and internal namespace - it was never really advertised and can still be used with the old release if needed. If there is interest it could be revived into its own tool i guess.
- fix issue48 and issue59: raise an Error if the module from an imported test file does not seem to come from the filepath - avoids same-name confusion that has been reported repeatedly
- merged Ronny's nose-compatibility hacks: now nose-style setup_module() and setup() functions are supported
- introduce generalized py.test.mark function marking
- reshuffle / refine command line grouping
- deprecate parser.addgroup in favour of getgroup which creates option group
- add --report command line option that allows to control showing of skipped/xfailed sections
- generalized skipping: a new way to mark python functions with skipif or xfail at function, class and modules level based on platform or sys-module attributes.
- extend py.test.mark decorator to allow for positional args
- introduce and test "py.cleanup -d" to remove empty directories
- fix issue #59 - robustify unittest test collection
- make bpython/help interaction work by adding an __all__ attribute to ApiModule, cleanup initpkg
- use MIT license for pylib, add some contributors
- remove py.execnet code and substitute all usages with execnet proper
- fix issue50 - cached_setup now caches more to expectations for test functions with multiple arguments.
- merge Jarko's fixes, issue #45 and #46
- add the ability to specify a path for py.lookup to search in
- fix a funcarg cached_setup bug probably only occurring in distributed testing and module scope with teardown.
- many fixes and changes for making the code base python3 compatible, many thanks to Benjamin Peterson for helping with this.
- consolidate builtins implementation to be compatible with >=2.3, add helpers to ease keeping 2 and 3k compatible code
- deprecate py.compat.doctest|subprocess|textwrap|optparse
- deprecate py.magic.autopath, remove py/magic directory
- move pytest assertion handling to py/code and a pytest_assertion plugin, add --no-assert option, deprecate py.magic namespaces in favour of (less) py.code ones.
- consolidate and cleanup py/code classes and files
- cleanup py/misc, move tests to bin-for-dist
- introduce delattr/delitem/delenv methods to py.test's monkeypatch funcarg
- consolidate py.log implementation, remove old approach.
- introduce py.io.TextIO and py.io.BytesIO for distinguishing between text/unicode and byte-streams (uses underlying standard lib io.* if available)
- make py.unittest_convert helper script available which converts unittest.py style files into the simpler assert/direct-test-classes py.test/nosetests style. The script was written by Laura Creighton.
- simplified internal localpath implementation
- capturing of unicode writes or encoded strings to sys.stdout/err work better, also terminalwriting was adapted and somewhat unified between windows and linux.
- improved documentation layout and content a lot
- added a --help-config option to show conftest.py / ENV-var names for all longopt cmdline options, and some special conftest.py variables. renamed conf_capture conftest setting to option_capture accordingly.
- fix issue #27: better reporting on non-collectable items given on commandline (e.g. pyc files)
- fix issue #33: added --version flag (thanks Benjamin Peterson)
- fix issue #32: adding support for incomplete paths to wcpath.status()
- "Test" prefixed classes are not collected by default anymore if they have an __init__ method
- monkeypatch setenv() now accepts a prepend parameter
- improved reporting of collection error tracebacks
- simplified multicall mechanism and plugin architecture, renamed some internal methods and argnames
- capsys and capfd funcargs now have a readouterr() and a close() method (underlyingly py.io.StdCapture/FD objects are used which grew a readouterr() method as well to return snapshots of captured out/err)
- make assert-reinterpretation work better with comparisons not returning bools (reported with numpy from thanks maciej fijalkowski)
- reworked per-test output capturing into the pytest_iocapture.py plugin and thus removed capturing code from the config object
- item.repr_failure(excinfo) instead of item.repr_failure(excinfo, outerr)
- added new pytest_namespace(config) hook that allows to inject helpers directly into the py.test.* namespace.
- documented and refined many hooks
- added a new style of generative tests via the pytest_generate_tests hook that integrates well with function arguments.
- added detection of FAILED TO LOAD MODULE to py.test [40703, 40738, 40739]
- fixed problem with calling .remove() on wcpaths of non-versioned files in py.path [44248]
- fixed some import and inheritance issues in py.test [41480, 44648, 44655]
- fail to run greenlet tests when pypy is available, but without stackless [45294]
- small fixes in rsession tests [45295]
- fixed issue with 2.5 type representations in py.test [45483, 45484]
- made that internal reporting issues displaying is done atomically in py.test [45518]
- made that non-existing files are ignored by the py.lookup script [45519]
- improved exception name creation in py.test [45535]
- made that less threads are used in execnet [merge in 45539]
- removed lock required for atomic reporting issue displaying in py.test [45545]
- removed globals from execnet [45541, 45547]
- refactored cleanup mechanics, made that setDaemon is set to 1 to make atexit get called in 2.5 (py.execnet) [45548]
- fixed bug in joining threads in py.execnet's servemain [45549]
- refactored py.test.rsession tests to not rely on exact output format anymore [45646]
- using repr() on test outcome [45647]
- added Reason classes for py.test.skip() [45648, 45649]
- killed some unnecessary sanity check in py.test.collect [45655]
- avoid using os.tmpfile() in py.io.fdcapture because on Windows it's only usable by Administrators [45901]
- added support for locking and non-recursive commits to py.path.svnwc [45994]
- locking files in py.execnet to prevent CPython from segfaulting [46010]
- added export() method to py.path.svnurl
- fixed -d -x in py.test [47277]
- fixed argument concatenation problem in py.path.svnwc [49423]
- restore py.test behaviour that it exits with code 1 when there are failures [49974]
- don't fail on html files that don't have an accompanying .txt file [50606]
- fixed utestconvert.py < input [50645]
- small fix for code indentation in py.code.source [50755]
- fix _docgen.py documentation building [51285]
- improved checks for source representation of code blocks in py.test [51292]
- added support for passing authentication to py.path.svn* objects [52000, 52001]
- removed sorted() call for py.apigen tests in favour of [].sort() to support Python 2.3 [52481]
CHAPTER TWELVE
NEW PYTEST NAMES IN 2.0 (FLAT IS BETTER THAN NESTED)
The old py.test.* ways to access functionality remain valid but you are encouraged to do global renaming according to the above rules in your test code.
CHAPTER THIRTEEN
If you run this test without specifying a command line option the test will get skipped with an appropriate message. Otherwise you can start to add convenience and test support methods to your AcceptFuncarg and drive running of tools or applications and provide ways to do assertions about the output.
CHAPTER FOURTEEN
Our module level factory will be invoked first and it can ask its request object to call the next factory and then decorate its result. This mechanism allows us to stay ignorant of how/where the function argument is provided - in our example from a conftest plugin. Sidenote: the temporary directories used here are instances of the py.path.local class which provides many of the os.path methods in a convenient way.