Reference¶
This page contains the full reference to pytest’s API.
Functions¶
pytest.skip¶
skip(msg[, allow_module_level=False])¶
Skip an executing test with the given message.
This function should be called only during testing (setup, call or teardown) or during collection by using the allow_module_level flag.
Parameters: allow_module_level (bool) – allows this function to be called at module level, skipping the rest of the module. Defaults to False.
Note
It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies.
pytest.xfail¶
xfail(reason='')¶
Imperatively xfail an executing test or setup function with the given reason.
This function should be called only during testing (setup, call or teardown).
Note
It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain conditions like known bugs or missing features.
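A sketch of the imperative form; backend_available is a hypothetical capability probe invented for this example:

```python
import pytest


def backend_available():
    # Hypothetical capability probe; always False in this sketch.
    return False


def test_optional_feature():
    if not backend_available():
        pytest.xfail("feature is known to fail without the optional backend")
    assert 1 + 1 == 2
```

Like pytest.skip, pytest.xfail raises an internal control-flow exception, so the remainder of the test body is not executed.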
pytest.freeze_includes¶
Tutorial: Freezing pytest.
Marks¶
Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.
pytest.mark.filterwarnings¶
Tutorial: @pytest.mark.filterwarnings.
Add warning filters to marked test items.
pytest.mark.filterwarnings(filter)¶
Parameters: filter (str) – A warning specification string, which is composed of the contents of the tuple (action, message, category, module, lineno) as specified in The Warnings Filter section of the Python documentation, separated by ":". Optional fields can be omitted. Module names passed for filtering are not regex-escaped.
For example:
@pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning")
def test_foo():
    ...
pytest.mark.skipif¶
Tutorial: Skipping test functions.
Skip a test function if a condition is True.
pytest.mark.skipif(condition, *, reason=None)¶
Parameters:
- condition (bool or str) – True/False whether the test should be skipped, or a condition string.
- reason (str) – Reason why the test function is being skipped.
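A sketch showing both parameter forms, a boolean condition and a condition string (the string is evaluated by pytest at collection time):

```python
import sys

import pytest


# Boolean condition with a mandatory reason:
@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires Python 3.6+")
def test_fstring_support():
    assert eval('f"{1 + 1}"') == "2"


# Equivalent condition-string form; no reason is required here:
@pytest.mark.skipif("sys.version_info < (3, 6)")
def test_fstring_support_again():
    assert eval('f"{2 * 2}"') == "4"
```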
pytest.mark.usefixtures¶
Tutorial: Using fixtures from classes, modules or projects.
Mark a test function as using the given fixture names.
Warning
This mark has no effect when applied to a fixture function.
pytest.mark.usefixtures(*names)¶
Parameters: names – the names of the fixtures to use, as strings.
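A minimal sketch; clean_db is a hypothetical fixture that stands in for real setup/teardown work:

```python
import pytest


@pytest.fixture
def clean_db():
    # Hypothetical fixture that would reset a database around each test.
    yield


# The test body never references clean_db, but the fixture still runs.
@pytest.mark.usefixtures("clean_db")
def test_insert_row():
    assert True
```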
pytest.mark.xfail¶
Tutorial: XFail: mark test functions as expected to fail.
Marks a test function as expected to fail.
pytest.mark.xfail(condition=None, *, reason=None, raises=None, run=True, strict=False)¶
Parameters:
- condition (bool or str) – True/False whether the test should be marked as xfail, or a condition string.
- reason (str) – Reason why the test function is marked as xfail.
- raises (Exception) – Exception subclass expected to be raised by the test function; other exceptions will fail the test.
- run (bool) – Whether the test function should actually be executed. If False, the function will always xfail and will not be executed (useful if a function is segfaulting).
- strict (bool) –
  - If False (the default) the function will be shown in the terminal output as xfailed if it fails and as xpass if it passes. In both cases this will not cause the test suite to fail as a whole. This is particularly useful to mark flaky tests (tests that fail at random) to be tackled later.
  - If True, the function will be shown in the terminal output as xfailed if it fails, but if it unexpectedly passes then it will fail the test suite. This is particularly useful to mark functions that are always failing, so that there is a clear indication if they unexpectedly start to pass (for example when a new release of a library fixes a known bug).
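A sketch of the raises and strict parameters; the "bug" scenarios here are invented for illustration:

```python
import pytest


# Expect a specific exception; any other exception fails the test.
@pytest.mark.xfail(raises=ZeroDivisionError, reason="division bug not fixed yet")
def test_division_bug():
    1 / 0


# strict=True: an unexpected pass (XPASS) fails the whole suite.
@pytest.mark.xfail(strict=True, reason="always fails until upstream ships a fix")
def test_upstream_bug():
    assert False
```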
Custom marks¶
Marks are created dynamically using the factory object pytest.mark
and applied as a decorator.
For example:
@pytest.mark.timeout(10, "slow", method="thread")
def test_function():
...
Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with Node.iter_markers. The mark object will have the following attributes:
mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}
Fixtures¶
Tutorial: pytest fixtures: explicit, modular, scalable.
Fixtures are requested by test functions or other fixtures by declaring them as argument names.
Example of a test requiring a fixture:
def test_output(capsys):
print("hello")
out, err = capsys.readouterr()
assert out == "hello\n"
Example of a fixture requiring another fixture:
@pytest.fixture
def db_session(tmpdir):
fn = tmpdir / "db.file"
return connect(str(fn))
For more details, consult the full fixtures docs.
config.cache¶
Tutorial: Cache: working with cross-testrun state.
The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures, request pytestconfig into your fixture and get it with pytestconfig.cache.
Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.
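A sketch of a fixture that memoizes a value across runs via the cache. The key name and the get_or_compute helper are inventions of this example; only the cache.get/cache.set API comes from pytest:

```python
import pytest


def get_or_compute(cache, key="example/expensive_value"):
    # Look the value up first; compute and store it only on a cache miss.
    value = cache.get(key, None)
    if value is None:
        value = 42  # stand-in for an expensive computation
        cache.set(key, value)
    return value


@pytest.fixture
def expensive_value(pytestconfig):
    # pytestconfig.cache persists across runs (.pytest_cache by default).
    return get_or_compute(pytestconfig.cache)
```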
capsys¶
Tutorial: Capturing of the stdout/stderr output.
capfd¶
Tutorial: Capturing of the stdout/stderr output.
request¶
Tutorial: Pass different values to a test function, depending on command line options.
The request fixture is a special fixture providing information on the requesting test function.
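A common use is reading the current parameter of a parametrized fixture; the backend names here are arbitrary examples:

```python
import pytest


@pytest.fixture(params=["sqlite", "postgres"])
def backend(request):
    # request.param holds the current parameter; request.node, request.config
    # and request.fixturename are also available on the request object.
    return request.param


def test_backend_name(backend):
    assert backend in ("sqlite", "postgres")
```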
record_property¶
Tutorial: record_property.
testdir¶
This fixture provides a Testdir
instance useful for black-box testing of test files, making it ideal to
test plugins.
To use it, include in your top-most conftest.py
file:
pytest_plugins = 'pytester'
recwarn¶
Tutorial: Asserting warnings with the warns function
Each recorded warning is an instance of warnings.WarningMessage
.
Note
RecordedWarning
was changed from a plain class to a namedtuple in pytest 3.1
Note
DeprecationWarning
and PendingDeprecationWarning
are treated
differently; see Ensuring code triggers a deprecation warning.
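A sketch of recording a warning with recwarn; deprecated_api is a hypothetical function invented for this example:

```python
import warnings


def deprecated_api():
    # Hypothetical function that still works but warns on use.
    warnings.warn("deprecated, use new_api() instead", DeprecationWarning)
    return 3


def test_deprecation_is_warned(recwarn):
    assert deprecated_api() == 3
    # Each recorded entry is a warnings.WarningMessage instance.
    w = recwarn.pop(DeprecationWarning)
    assert "new_api" in str(w.message)
```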
tmpdir¶
Tutorial: Temporary directories and files
tmpdir_factory¶
Tutorial: The ‘tmpdir_factory’ fixture
tmpdir_factory
instances have the following methods:
Hooks¶
Tutorial: Writing plugins.
Reference to all hooks which can be implemented by conftest.py files and plugins.
Bootstrapping hooks¶
Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).
Initialization hooks¶
Initialization hooks called for plugins and conftest.py
files.
Test running hooks¶
All runtest related hooks receive a pytest.Item
object.
For deeper understanding you may look at the default implementation of
these hooks in _pytest.runner
and maybe also
in _pytest.pdb
which interacts with _pytest.capture
and its input/output capturing in order to immediately drop
into interactive debugging when a test failure occurs.
The _pytest.terminal reporter specifically uses
the reporting hook to print information about a test run.
Collection hooks¶
pytest
calls the following hooks for collecting files and directories:
For influencing the collection of objects in Python modules you can use the following hook:
After collection is complete, you can modify the order of items, delete or otherwise amend the test items:
Reporting hooks¶
Session related reporting hooks:
And here is the central hook for reporting about test execution:
You can also use this hook to customize assertion representation for some types:
Debugging/Interaction hooks¶
There are few hooks which can be used for special reporting or interaction with exceptions:
Special Variables¶
pytest treats some global variables in a special manner when defined in a test module.
pytest_plugins¶
Tutorial: Requiring/Loading plugins in a test module or conftest file
Can be declared at the global level in test modules and conftest.py files to register additional plugins.
Can be either a str
or Sequence[str]
.
pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")
pytestmark¶
Tutorial: Marking whole classes or modules
Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can be either a single mark or a sequence of marks.
import pytest
pytestmark = pytest.mark.webtest
import pytest
pytestmark = (pytest.mark.integration, pytest.mark.slow)
PYTEST_DONT_REWRITE (module docstring)¶
The text PYTEST_DONT_REWRITE can be added to any module docstring to disable
assertion rewriting for that module.
Environment Variables¶
Environment variables that can be used to change pytest’s behavior.
PYTEST_ADDOPTS¶
This contains a command line (parsed by the shlex module) that will be prepended to the command line given by the user; see How to change command line options defaults for more information.
PYTEST_DEBUG¶
When set, pytest will print tracing and debug information.
PYTEST_PLUGINS¶
Contains a comma-separated list of modules that should be loaded as plugins:
export PYTEST_PLUGINS=mymodule.plugin,xdist
PYTEST_DISABLE_PLUGIN_AUTOLOAD¶
When set, disables plugin auto-loading through setuptools entrypoints. Only explicitly specified plugins will be loaded.
PYTEST_CURRENT_TEST¶
This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see PYTEST_CURRENT_TEST environment variable for more information.
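A sketch of how a monitoring process might read the variable; the test id shown is a made-up example of the format pytest uses:

```python
import os


def currently_running_test():
    # pytest sets the variable to e.g. "path/to/test_file.py::test_name (call)";
    # outside of a test run it is typically unset.
    return os.environ.get("PYTEST_CURRENT_TEST")
```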
Configuration Options¶
Here is a list of builtin configuration options that may be written in a pytest.ini
, tox.ini
or setup.cfg
file, usually located at the root of your repository. All options must be under a [pytest]
section
([tool:pytest]
for setup.cfg
files).
Configuration file options may be overridden on the command line by using -o/--override-ini, which can also be passed multiple times. The expected format is name=value. For example:
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
addopts¶
Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:
# content of pytest.ini
[pytest]
addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info
issuing pytest test_hello.py actually means:
pytest --maxfail=2 -rf test_hello.py
Default is to add no options.
cache_dir¶
New in version 3.2.
Sets a directory where the content of the cache plugin is stored. The default directory is .pytest_cache, which is created in rootdir. The directory may be a relative or absolute path. If a relative path is set, the directory is created relative to rootdir. Additionally, the path may contain environment variables, which will be expanded. For more information about the cache plugin please refer to Cache: working with cross-testrun state.
confcutdir¶
Sets a directory in which the upward search for conftest.py files stops. By default, pytest will stop searching for conftest.py files upwards from pytest.ini/tox.ini/setup.cfg of the project if any, or up to the file-system root.
console_output_style¶
New in version 3.3.
Sets the console output style while running tests:
classic: classic pytest output.
progress: like classic pytest output, but with a progress indicator.
count: like progress, but shows progress as the number of tests completed instead of a percent.
The default is progress, but you can fall back to classic if you prefer or if the new mode is causing unexpected problems:
# content of pytest.ini
[pytest]
console_output_style = classic
doctest_encoding¶
New in version 3.1.
Default encoding to use to decode text files with docstrings. See how pytest handles doctests.
doctest_optionflags¶
One or more doctest flag names from the standard doctest module. See how pytest handles doctests.
empty_parameter_set_mark¶
New in version 3.4.
Allows picking the action for empty parameter sets in parametrization:
skip: skips tests with an empty parameter set (default)
xfail: marks tests with an empty parameter set as xfail(run=False)
fail_at_collect: raises an exception if parametrize collects an empty parameter set
# content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail
Note
The default value of this option is planned to change to xfail in future releases as this is considered less error prone; see #3155 for more details.
filterwarnings¶
New in version 3.1.
Sets a list of filters and actions that should be taken for matched warnings. By default all warnings emitted during the test session will be displayed in a summary at the end of the test session.
# content of pytest.ini
[pytest]
filterwarnings =
    error
    ignore::DeprecationWarning
This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to Warnings Capture.
junit_suite_name¶
New in version 3.1.
To set the name of the root test suite xml item, you can configure the junit_suite_name option in your config file:
[pytest]
junit_suite_name = my_suite
log_cli_date_format¶
New in version 3.3.
Sets a time.strftime()-compatible string that will be used when formatting dates for live logging.
[pytest]
log_cli_date_format = %Y-%m-%d %H:%M:%S
For more information, see Live Logs.
log_cli_format¶
New in version 3.3.
Sets a logging-compatible string used to format live logging messages.
[pytest]
log_cli_format = %(asctime)s %(levelname)s %(message)s
For more information, see Live Logs.
log_cli_level¶
New in version 3.3.
Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.
[pytest]
log_cli_level = INFO
For more information, see Live Logs.
log_date_format¶
New in version 3.3.
Sets a time.strftime()-compatible string that will be used when formatting dates for logging capture.
[pytest]
log_date_format = %Y-%m-%d %H:%M:%S
For more information, see Logging.
log_file¶
New in version 3.3.
Sets a file name, relative to the pytest.ini file, where log messages should be written to, in addition to the other logging facilities that are active.
[pytest]
log_file = logs/pytest-logs.txt
For more information, see Logging.
log_file_date_format¶
New in version 3.3.
Sets a time.strftime()-compatible string that will be used when formatting dates for the logging file.
[pytest]
log_file_date_format = %Y-%m-%d %H:%M:%S
For more information, see Logging.
log_file_format¶
New in version 3.3.
Sets a logging-compatible string used to format logging messages redirected to the logging file.
[pytest]
log_file_format = %(asctime)s %(levelname)s %(message)s
For more information, see Logging.
log_file_level¶
New in version 3.3.
Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.
[pytest]
log_file_level = INFO
For more information, see Logging.
log_format¶
New in version 3.3.
Sets a logging-compatible string used to format captured logging messages.
[pytest]
log_format = %(asctime)s %(levelname)s %(message)s
For more information, see Logging.
log_level¶
New in version 3.3.
Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.
[pytest]
log_level = INFO
For more information, see Logging.
log_print¶
New in version 3.3.
If set to False, will disable displaying captured logging messages for failed tests.
[pytest]
log_print = False
For more information, see Logging.
markers¶
List of markers that are allowed in test functions, enforced when the --strict command-line argument is used. You can use a marker name per line, indented from the option name.
[pytest]
markers =
    slow
    serial
minversion¶
Specifies a minimal pytest version required for running tests.
# content of pytest.ini
[pytest]
minversion = 3.0  # will fail if we run with pytest-2.8
norecursedirs¶
Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:
*        matches everything
?        matches any single character
[seq]    matches any character in seq
[!seq]   matches any char not in seq
Default patterns are '.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'. Setting a norecursedirs replaces the default. Here is an example of how to avoid certain directories:
[pytest]
norecursedirs = .svn _build tmp*
This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.
Additionally, pytest will attempt to intelligently identify and ignore a virtualenv by the presence of an activation script. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless --collect-in-virtualenv is given. Note also that norecursedirs takes precedence over --collect-in-virtualenv; e.g. if you intend to run tests in a virtualenv with a base directory that matches '.*' you must override norecursedirs in addition to using the --collect-in-virtualenv flag.
python_classes¶
One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class prefixed with Test as a test collection. Here is an example of how to collect tests from classes that end in Suite:
[pytest]
python_classes = *Suite
Note that unittest.TestCase derived classes are always collected regardless of this option, as unittest's own collection framework is used to collect those tests.
python_files¶
One or more Glob-style file patterns determining which python files are considered as test modules. Search for multiple glob patterns by adding a space between patterns:
[pytest]
python_files = test_*.py check_*.py example_*.py
Or one per line:
[pytest]
python_files =
    test_*.py
    check_*.py
    example_*.py
By default, files matching test_*.py and *_test.py will be considered test modules.
python_functions¶
One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with test as a test. Here is an example of how to collect test functions and methods that end in _test:
[pytest]
python_functions = *_test
Note that this has no effect on methods that live on a unittest.TestCase derived class, as unittest's own collection framework is used to collect those tests.
See Changing naming conventions for more detailed examples.
testpaths¶
New in version 2.8.
Sets a list of directories that should be searched for tests when no specific directories, files or test ids are given on the command line when executing pytest from the rootdir directory. Useful when all project tests are in a known location to speed up test collection and to avoid picking up undesired tests by accident.
[pytest]
testpaths = testing doc
This tells pytest to only look for tests in the testing and doc directories when executing from the root directory.
usefixtures¶
List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.
[pytest]
usefixtures = clean_db
xfail_strict¶
If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test suite. For more information, see the strict parameter.
[pytest]
xfail_strict = True