Python testing cheat sheet
Some notes on the tools I’ve used to manage test suites in Python. I believe unittest and hypothesis are sufficient for any project, and doctest is very useful to write short usage examples in function docstrings. Pytest is very popular, so I’ve written down what I’ve learnt about it.
Unittest⌗
unittest is the standard Python unit-testing module. The official documentation is full of useful examples to learn how to use it. The basic example is:
import unittest


class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())
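The suite can then be run with python -m unittest discover, which collects the test*.py files under the current directory, or by passing a specific module, class, or test method on the command line.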
Hypothesis⌗
Hypothesis can be used with unittest:
import unittest

import hypothesis
import hypothesis.strategies as st

from python.fisher_yates.shuffle import fisher_yates_shuffle


class TestFisherYatesShuffle(unittest.TestCase):
    @hypothesis.given(st.lists(st.integers(), unique=True))
    @hypothesis.example([])
    def test_property_length(self, integers):
        """Test simple properties of the function.

        - Length does not change after shuffling
        - Content doesn't change after shuffling
        """
        shuffled = fisher_yates_shuffle(integers)
        self.assertEqual(len(integers), len(shuffled))
        self.assertEqual(set(integers), set(shuffled))

    @hypothesis.given(st.lists(st.integers(), unique=True, min_size=20))
    def test_property_order(self, integers):
        """Test more properties of the function.

        - With larger lists, we expect that the order changes
          after a shuffle
        """
        shuffled = fisher_yates_shuffle(integers)
        self.assertEqual(set(integers), set(shuffled))
        self.assertNotEqual(integers, shuffled)
Doctest⌗
Doctest lets us write tests within a function’s docstring. This is, in my opinion, very useful to show how a function is intended to be used. Pandas uses doctest extensively, for example this is a doctest in their code, and this is the output in the documentation. Such examples are guaranteed to be up-to-date since they are run as tests.
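A minimal sketch, with a made-up add function to show the syntax:

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

The examples can be run with python -m doctest mymodule.py (add -v to see each example as it runs), or by calling doctest.testmod() from the module itself.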
Pytest⌗
Pytest is, in my opinion, a bit slow to run, but many people prefer writing tests as standalone functions. It also has a few plugins that bring other quality-of-life improvements.
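A pytest test is just a function whose name starts with test_ and that uses plain assert statements, for example:

def test_upper():
    # pytest rewrites the assert so a failure shows both sides of the comparison
    assert 'foo'.upper() == 'FOO'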
SetUp/TearDown⌗
We can use yield in pytest fixtures to emulate the setUp/tearDown methods of class-based tests. It works particularly well with decorators: the example below runs test_foo with a patched module.function, then restores the patched function after the test finishes:
from unittest.mock import patch

import pytest


@pytest.fixture
def my_fixture():
    with patch('module.function', return_value=42):
        yield


@pytest.mark.usefixtures('my_fixture')
def test_foo():
    import module
    assert module.function() == 42
Fixture scope and autouse⌗
Fixtures can have a scope that is one of:
- function
- class
- module
- package
- session (the whole test run)
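For instance, a session-scoped fixture is built once for the whole run and shared by every test that requests it; a small sketch with made-up configuration values:

import pytest


@pytest.fixture(scope='session')
def config():
    # Built once per test run, then cached and injected into every
    # test that declares `config` as an argument.
    return {'database_url': 'sqlite://', 'debug': True}


def test_debug_enabled(config):
    assert config['debug'] is True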
With the autouse parameter, we can automatically apply a fixture to all the tests in a module, for example:
import pytest

from mymodels import SomeModel


@pytest.fixture(scope='module', autouse=True)
def fixture_populate_database(django_db_setup, django_db_blocker):
    # django_db_blocker lets a non-function-scoped fixture touch the test database
    with django_db_blocker.unblock():
        SomeModel.objects.create(foo='Bar')


@pytest.mark.django_db
def test_foo_is_bar():
    assert list(SomeModel.objects.values_list('foo', flat=True)) == ['Bar']
Sadly, in a Django project we can’t use session-level fixtures that require database access. There is a GitHub issue open about it.
Testing logs⌗
We can use the caplog fixture, then filter its output:
import logging


def test_my_logs(caplog):
    fixture = some_fixture()
    assert some_function() == some_result

    # filter by level and logger
    info_logs = [
        line
        for logger, level, line in caplog.record_tuples
        if level == logging.INFO and logger == 'some_module.some_function'
    ]

    # assert the logs are what we expected
    assert info_logs == [
        'first message',
        'second_message',
        'third_message: %d' % fixture.id,
    ]
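If the INFO records don’t show up (the root logger defaults to WARNING), calling caplog.set_level(logging.INFO) at the start of the test raises the capture level so they are recorded.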
Parametrized tests⌗
It can be useful to run a single test over multiple inputs. Using the parametrize decorator, we can list inputs that we expect to succeed or fail.
import pytest


@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        ("2+4", 6),
        pytest.param("6*9", 42, marks=pytest.mark.xfail),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Hypothesis⌗
Hypothesis integrates well with Pytest. This example is from the official documentation:
from hypothesis import example, given, strategies as st


@given(st.text())
@example("")
def test_decode_inverts_encode(s):
    assert decode(encode(s)) == s
Other plugins⌗
- pytest-django helps use pytest within a Django project.
- pytest-xdist helps run tests in parallel.
- pytest-cov measures code coverage.
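For example, pytest -n auto (pytest-xdist) spreads the tests across all available CPU cores, and pytest --cov=mypackage (pytest-cov) prints a coverage report for your package at the end of the run.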