# pytest-check

A pytest plugin that allows multiple failures per test.
Normally, a test function will fail and stop running with the first failed assert.
That's totally fine for tons of kinds of software tests.
However, there are times where you'd like to check more than one thing, and you'd really like to know the results of each check, even if one of them fails.
pytest-check allows multiple failed "checks" per test function, so you can see the whole picture of what's going wrong.
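As a quick sketch of the difference (values are made up for illustration; the `check` helpers used here are covered below):

```python
from pytest_check import check

def test_plain_asserts():
    # stops at the first failed assert; the second is never evaluated
    assert 1 == 2
    assert 3 == 4

def test_with_check():
    # both failures are recorded, and both show up in the report
    check.equal(1, 2)
    check.equal(3, 4)
```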
## Installation

From PyPI:

```
$ pip install pytest-check
```

From conda (conda-forge):

```
$ conda install -c conda-forge pytest-check
```
## Example

Quick example of where you might want multiple checks:

```python
import httpx
from pytest_check import check

def test_httpx_get():
    r = httpx.get('https://www.example.org/')
    # bail if bad status code
    assert r.status_code == 200
    # but if we get to here
    # then check everything else without stopping
    with check:
        assert r.is_redirect is False
    with check:
        assert r.encoding == 'utf-8'
    with check:
        assert 'Example Domain' in r.text
```
## Import vs fixture

The example above used an import: `from pytest_check import check`.
You can also grab `check` as a fixture, with no import:

```python
def test_httpx_get(check):
    r = httpx.get('https://www.example.org/')
    ...
    with check:
        assert r.is_redirect is False
    ...
```
## Validation functions

`check` also has helper functions for common checks.
These helpers do NOT need to be inside of a `with check:` block.
| Function | Meaning | Notes |
|---|---|---|
| equal(a, b, msg="") | a == b | |
| not_equal(a, b, msg="") | a != b | |
| is_(a, b, msg="") | a is b | |
| is_not(a, b, msg="") | a is not b | |
| is_true(x, msg="") | bool(x) is True | |
| is_false(x, msg="") | bool(x) is False | |
| is_none(x, msg="") | x is None | |
| is_not_none(x, msg="") | x is not None | |
| is_in(a, b, msg="") | a in b | |
| is_not_in(a, b, msg="") | a not in b | |
| is_instance(a, b, msg="") | isinstance(a, b) | |
| is_not_instance(a, b, msg="") | not isinstance(a, b) | |
| is_nan(x, msg="") | math.isnan(x) | math.isnan |
| is_not_nan(x, msg="") | not math.isnan(x) | math.isnan |
| almost_equal(a, b, rel=None, abs=None, msg="") | a == pytest.approx(b, rel, abs) | pytest.approx |
| not_almost_equal(a, b, rel=None, abs=None, msg="") | a != pytest.approx(b, rel, abs) | pytest.approx |
| greater(a, b, msg="") | a > b | |
| greater_equal(a, b, msg="") | a >= b | |
| less(a, b, msg="") | a < b | |
| less_equal(a, b, msg="") | a <= b | |
| between(b, a, c, msg="", ge=False, le=False) | a < b < c | |
| between_equal(b, a, c, msg="") | a <= b <= c | same as between(b, a, c, msg, ge=True, le=True) |
| raises(expected_exception, *args, **kwargs) | raises given exception | similar to pytest.raises |
| fail(msg) | log a failure | |
Note: This is a list of relatively common logic operators. I'm reluctant to add to the list too much, as it's easy to add your own.
The httpx example can be rewritten with helper functions:

```python
import httpx
from pytest_check import check

def test_httpx_get_with_helpers():
    r = httpx.get('https://www.example.org/')
    assert r.status_code == 200
    check.is_false(r.is_redirect)
    check.equal(r.encoding, 'utf-8')
    check.is_in('Example Domain', r.text)
```
Which you use is personal preference.
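Each helper also accepts an optional `msg` that is included in the failure report. A minimal sketch (the `result` value and messages are made up for illustration):

```python
from pytest_check import check

def test_with_messages():
    result = 7  # hypothetical value under test
    check.greater(result, 10, "result should be above the threshold")
    check.between(result, 10, 20, "result should be in the expected range")
```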
## Defining your own check functions

### Using @check.check_func

The `@check.check_func` decorator allows you to wrap any test helper that has an assert statement in it, turning it into a non-blocking assert function.

```python
from pytest_check import check

@check.check_func
def is_four(a):
    assert a == 4

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)
```
### Built-in check functions return bool

All of the check functions that come pre-written with pytest-check return a bool: `True` if the check passed, `False` if it failed.
So, if you want to perform some action based on success or failure, you can do something like:

```python
from pytest_check import check

def test_something():
    ...
    if check.equal(a, b):
        # they are equal
        ...
    else:
        # they are not equal
        # and a failure was registered by the check method
        ...
```
### Using check.fail()

Using `@check.check_func` is probably the easiest.
However, it does have a bit of overhead in the passing cases that can affect large loops of checks.
If you need a bit of a speedup, use the following style with the help of `check.fail()`:

```python
from pytest_check import check

def is_four(a):
    # hide this helper from pytest tracebacks
    __tracebackhide__ = True
    if a == 4:
        return True
    else:
        check.fail(f"check {a} == 4")
        return False

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)
```
## Using raises as a context manager

`raises` is used as a context manager, much like `pytest.raises`. The main difference is that a failure to raise the right exception won't stop the execution of the test method.

```python
from pytest_check import check

def test_raises():
    with check.raises(AssertionError):
        x = 3
        assert 1 < x < 4

def test_raises_exception_value():
    with check.raises(ValueError) as e:
        raise ValueError("This is a ValueError")
    check.equal(str(e.value), "This is a ValueError")
```
If the exception raised isn't the expected type, the error message reported for the test failure will describe the actual exception:

```python
def test_raises_fail():
    with check.raises(ValueError):
        x = 1 / 0  # raises ZeroDivisionError, NOT ValueError
        assert x == 0
```
If you want a custom message instead, you can supply one.
Note that this doesn't check that `msg` matches the exception string; it is simply a custom failure message for your test.

```python
def test_raises_and_custom_fail_message():
    with check.raises(ValueError, msg="custom"):
        x = 1 / 0  # raises ZeroDivisionError, NOT ValueError
        assert x == 0
```
## Pseudo-tracebacks

With check, tests can have multiple failures per test.
This could make for extensive output if we included the full traceback for every failure.
To make the output a little more concise, pytest-check implements a shorter version, which we call pseudo-tracebacks.
To further trim the output, and to speed up the test run, only the first pseudo-traceback is turned on by default.

For example, take this test:

```python
from pytest_check import check

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")
```
This will result in:

```
$ pytest test_check.py
...
================================= FAILURES =================================
_______________________________ test_example _______________________________
FAILURE: check 1 > 2
test_check.py:7 in test_example() -> check.greater(a, b)
FAILURE: check 2 <= 1
FAILURE: check 1 in [2, 4, 6]: Is 1 in the list
FAILURE: check 2 not in [2, 4, 6]: make sure 2 isn't in list
------------------------------------------------------------
Failed Checks: 4
========================= short test summary info ==========================
FAILED test_check.py::test_example - check 1 > 2
============================ 1 failed in 0.01s =============================
```
If you wish to see more pseudo-tracebacks than just the first, you can set `--check-max-tb=5` or something larger:

```
(.venv) $ pytest test_check.py --check-max-tb=5
=========================== test session starts ============================
collected 1 item

test_check.py F                                                       [100%]

================================= FAILURES =================================
_______________________________ test_example _______________________________
FAILURE: check 1 > 2
test_check.py:7 in test_example() -> check.greater(a, b)
FAILURE: check 2 <= 1
test_check.py:8 in test_example() -> check.less_equal(b, a)
FAILURE: check 1 in [2, 4, 6]: Is 1 in the list
test_check.py:9 in test_example() -> check.is_in(a, c, "Is 1 in the list")
FAILURE: check 2 not in [2, 4, 6]: make sure 2 isn't in list
test_check.py:10 in test_example() -> check.is_not_in(b, c, "make sure 2 isn't in list")
------------------------------------------------------------
Failed Checks: 4
========================= short test summary info ==========================
FAILED test_check.py::test_example - check 1 > 2
============================ 1 failed in 0.01s =============================
```
## Red output

The failures will also be red, unless you turn that off with pytest's `--color=no`.

## No output

You can turn off the failure reports with pytest's `--tb=no`.
## Stop on Fail (maxfail behavior)

Setting `-x` or `--maxfail=1` will cause this plugin to abort testing after the first failed check.
Setting `--maxfail=2` or greater will turn off any handling of maxfail within this plugin, and the behavior is handled by pytest.
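For example, using the test file from the Pseudo-tracebacks section, this stops the session at the first failed check:

```
$ pytest -x test_check.py
```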