Pytest Tests#
Pytest is a testing framework for Python. In this document we’ll discuss how to write tests for Pytest as well as some of the extra features that Pytest provides.
For installation and other usage information, see the Pytest tool guide.
Part 1: Writing tests#
In this section we’ll discuss how to write tests for Pytest.
Part 1.1: Hello World#
Pytest expects tests to be located in files whose names begin with test_ or end with _test.py.
Individual tests are written in functions that begin with test_ and contain one or more assert statements which determine if the test passes or fails.
Here’s a simple example test that will always pass.
1def test_truth():
2 assert True
Important
Do not duplicate test names. If you do, only one of them will be collected (the later definition shadows the earlier one) and the rest will be silently ignored.
To run the test, type pytest followed by the filename on the command line.
$ pytest test_hello_world.py
The result will look something like this.
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 1 item
test_hello_world.py . [100%]
====================== 1 passed in 0.00s =======================
Congratulations, you’ve run your first Pytest test!
Part 1.2: Test failures#
Let’s look at an example of a failing test. In the following test_lies() function the assert statement fails, which will in turn cause that test to fail.
1def test_truth():
2 assert True
3
4def test_lies():
5 assert False
Here is what your test output looks like now.
$ pytest test_hello_world.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 2 items
test_hello_world.py .F [100%]
=========================== FAILURES ===========================
__________________________ test_lies ___________________________
def test_lies():
> assert False
E assert False
test_hello_world.py:5: AssertionError
=================== short test summary info ====================
FAILED test_hello_world.py::test_lies - assert False
================= 1 failed, 1 passed in 0.05s ==================
Part 1.3: Reading test output#
Let’s take a closer look at that test output.
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 2 items
test_hello_world.py .F [100%]
The last line shows the progress of each test file: each test in the file is represented by a single character following the filename.
Test No | Symbol | Means… |
---|---|---|
1 | . | passing test |
2 | F | failing test |
| [100%] | all tests were run |
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 2 items
test_hello_world.py .F [100%]
=========================== FAILURES ===========================
__________________________ test_lies ___________________________
def test_lies():
> assert False
E assert False
The FAILURES section of the output shows you detailed information about each failing test.
Output | Means… |
---|---|
___ test_lies ___ | start of info about test_lies |
> assert False | the line where the failure happened, indicated by > |
E assert False | more information about the error, indicated by a red E |
If this were an error in your code instead of a failing assert, there might be multiple lines beginning with E and a lot more information. In this case though, there’s no additional information so it looks the same as the line above.
$ pytest test_hello_world.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 2 items
test_hello_world.py .F [100%]
=========================== FAILURES ===========================
__________________________ test_lies ___________________________
def test_lies():
> assert False
E assert False
test_hello_world.py:5: AssertionError
The test_hello_world.py:5: AssertionError line is probably the most useful of all. It tells you the three most important pieces of information:
Output | Means… |
---|---|
test_hello_world.py | the file where the error occurred |
5 | the line number that it happened on |
AssertionError | the exception class that was raised |
$ pytest test_hello_world.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 2 items
test_hello_world.py .F [100%]
=========================== FAILURES ===========================
__________________________ test_lies ___________________________
def test_lies():
> assert False
E assert False
test_hello_world.py:5: AssertionError
=================== short test summary info ====================
FAILED test_hello_world.py::test_lies - assert False
================= 1 failed, 1 passed in 0.05s ==================
Finally, the last two lines show a summary of each failure and the overall totals.
Output | Means… |
---|---|
test_hello_world.py | the file where the error occurred |
test_lies | the test that failed |
assert False | the code that failed |
1 failed | total failing tests |
1 passed | total passing tests |
Part 1.4: Testing functions#
Let’s take a look at what a test for a piece of code might look like.
Below we add to the test_hello_world.py file an increment() function which returns number incremented by one.
The test_increment() test calls the increment() function with an argument of 5 and assigns the result to the variable answer. Then it asserts that the answer should be 6.
1def increment(number):
2 return number + 1
3
4def test_truth():
5 assert True
6
7def test_increment():
8 answer = increment(5)
9 assert answer == 6
Part 1.5: Importing functions#
We typically do not keep our functions in the same file as our tests. This means in order to test our functions, we’ll need to import them into the test file.
Let’s say we have a file called my_project.py.
1def can_drink(age):
2 return age >= 21
We would typically name our test file test_my_project.py. Then we’d import the can_drink function from the my_project module before defining the test for it.
1from my_project import can_drink
2
3def test_can_drink():
4 is_allowed = can_drink(5)
5 assert not is_allowed
Part 1.6: More imports#
In larger projects it is common to break things into multiple files and directories. A common directory structure looks like this:
.
├── README.md
├── my_project
│ ├── __init__.py
│ └── main.py
├── setup.py
└── tests
└── test_main.py
In a setup like this your code would be in the my_project directory and your tests in the tests directory.
If we imagine that we renamed the my_project.py file to my_project/main.py, then we would need to modify the import statement in our test file.
1from my_project.main import can_drink
2
3def test_can_drink():
4 is_allowed = can_drink(5)
5 assert not is_allowed
To run the tests you would then run the command
$ pytest tests/test_main.py
Or to run all tests in the tests directory, simply:
$ pytest tests
Important
You must run pytest from the root directory of your project for the imports to work properly.
Part 2: Skipping tests#
In this section we’ll discuss how to skip tests.
Part 2.1: Skipping a test#
Sometimes we have a test that is not currently working for some reason–maybe it’s a work in progress or represents something you want to implement at a later date. Here’s an example in our old test_hello_world.py file.
First you need to import the pytest module in your test. Then, above the test function, add the line @pytest.mark.skip(reason="EXPLANATION") with a brief explanation of why it is being skipped.
import pytest
def test_truth():
assert True
@pytest.mark.skip(reason="an example of a test failure")
def test_lies():
assert False
def test_increment():
answer = increment(5)
assert answer == 6
When you run the tests, your output will show a yellow s for skipped tests.
$ pytest test_hello_world.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 3 items
test_hello_world.py .s. [100%]
================= 2 passed, 1 skipped in 0.00s =================
Part 2.2: Skipping a failure#
Another way to do the same thing is to mark it as an expected failure. It is the same as above, except use xfail instead of skip.
import pytest
def test_truth():
assert True
@pytest.mark.xfail(reason="an example of a test failure")
def test_lies():
assert False
def test_increment():
answer = increment(5)
assert answer == 6
In the output, the expected failure will be marked with a yellow x.
$ pytest test_hello_world.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 3 items
test_hello_world.py .x. [100%]
================= 2 passed, 1 xfailed in 0.03s =================
Part 2.3: Skipping sometimes#
Sometimes you may want to skip certain tests that only work in certain environments, for example based on a developer’s operating system or the version of Python they are running.
In these cases the skipif decorator is useful. The first argument is the condition under which to skip the test, then use the reason keyword argument as usual.
import sys
import pytest
@pytest.mark.skipif(sys.platform != "darwin", reason="mac-specific testing")
def test_macos():
...
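The condition can be any expression that evaluates to a boolean. For example, a test could be skipped on older Python versions. This is a minimal sketch; the test name and its body are just placeholders:
import sys

import pytest

# Skip unless the interpreter is at least Python 3.10.
@pytest.mark.skipif(sys.version_info < (3, 10), reason="requires Python 3.10+")
def test_needs_new_python():
    ...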
Part 3: Exception Handling#
Sometimes our program may raise exceptions on purpose, in which case we need to be able to test those exceptions without causing the tests to fail.
Part 3.1: pytest.raises#
To test for a raised exception use pytest.raises as a context manager, with the code that is being tested inside of the context manager.
For example, the following code effectively asserts that when you call the do_quit() function a SystemExit exception is raised. (Which is how most Python programs exit, under the hood.)
import pytest
from my_game import do_quit
def test_do_quit():
with pytest.raises(SystemExit):
# code to test
do_quit()
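The do_quit() function itself is not shown in this document. As a rough sketch (an assumption, not the real my_game code), it might simply call sys.exit(), which raises SystemExit:
import sys

def do_quit():
    """Hypothetical sketch: say goodbye and exit the program."""
    print("Goodbye!")
    sys.exit()  # raises SystemExit, which pytest.raises can catch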
Part 3.2: Exception messages#
Sometimes just checking the exception class isn’t enough – we have to test for a specific exception message. For example, our program may raise a ValueError in a function. However, ValueError exceptions are very common.
In that case you can use the optional as part of the pytest.raises context manager to get the exception info, then run the code you wish to test as before. Then, after the with statement, you can add an assert statement on info.value.
import pytest
from my_game import inventory_remove
def test_inventory_remove_with_invalid_argument():
with pytest.raises(ValueError) as info:
# code to test
inventory_remove(5)
# check the exception message
message = str(info.value)
assert "inventory_remove() expected an item key (str)." in message
Part 3.3: Message patterns#
Another way to test the error message is to use the match
keyword argument in
pytest.raises()
.
import pytest
from my_game import Player, Sword
def test_buy():
player = Player(gems=10)
sword = Sword(price=90)
with pytest.raises(ValueError, match="you are 80 gems short") as info:
# code to test
player.buy(sword)
The match argument can be a regular expression, so you could also do:
import pytest
from my_game import Player, Sword
def test_buy():
player = Player(gems=10)
sword = Sword(price=90)
with pytest.raises(ValueError, match=r"you are \d+ gems short") as info:
# code to test
player.buy(sword)
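Both versions assume Player and Sword classes along the lines of the sketch below (hypothetical, not the real my_game code), where buy() raises a ValueError whose message says how many gems the player is short:
class Sword:
    """Hypothetical item with a price in gems."""
    def __init__(self, price):
        self.price = price

class Player:
    """Hypothetical sketch of the class under test."""
    def __init__(self, gems=0):
        self.gems = gems

    def buy(self, item):
        # not enough gems: raise the error the tests above look for
        if item.price > self.gems:
            raise ValueError(f"you are {item.price - self.gems} gems short")
        self.gems -= item.price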
Part 4: Testing printed output#
Sometimes we need to test not what is returned, but what is printed to the screen. To do this we can use capsys, a special fixture (more on those later).
Part 4.1: Testing stdout#
When you call the print() function, the string that you pass is sent to a special file called stdout that your terminal knows to display to the end user.
When we are testing something that prints to the screen, add capsys to the function definition to let Pytest know that it should capture the system output and save it for us.
def test_write(capsys):
write("hello", lines_after=3)
output = capsys.readouterr().out
assert output.endswith("\n\n\n")
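The write() function under test isn’t defined in this document. A plausible sketch, assuming lines_after controls how many blank lines are printed after the message, would be:
def write(message, lines_after=0):
    """Hypothetical sketch: print a message followed by extra blank lines."""
    print(message + "\n" * lines_after)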
Note that every time you call readouterr() it sets the out attribute to all of the content that has been sent to stdout since the function started or the last time readouterr() was called.
1def test_stdout(capsys):
2 print("a")
3 print("b")
4 print("c")
5
6 output_1 = capsys.readouterr().out
7
8 print("x")
9 print("y")
10 print("z")
11
12 output_2 = capsys.readouterr().out
13
14 assert output_1 == "a\nb\nc\n"
15 assert output_2 == "x\ny\nz\n"
In this example:
- output_1 on line 6 will contain what was printed on lines 2-4
- output_2 on line 12 will contain what was printed on lines 8-10
Part 4.2: Testing stderr#
Many command line programs print errors to a separate special file called stderr. This allows end users and other programs to handle error messages differently from normal program output, for example silencing all errors or saving a file that contains just the errors.
In Python, to print to stderr you import the sys module, then add the keyword argument file=sys.stderr to your print statement.
Here’s an example error() function which adds a red Error string to the beginning of the message, then prints all arguments to stderr.
import sys
def error(*args):
print("\x1b[31mError\x1b[0m", *args, file=sys.stderr)
To write a test for this we would use capsys just like before, but look for err instead of out.
def test_error(capsys):
error("Please reconsider your life choices and try again.")
output = capsys.readouterr().err
assert "reconsider" in output
Part 4.3: Testing both#
If you need to check what was printed to both stdout and stderr, you need to make sure that readouterr() is only called once. You can accomplish this by assigning it to a variable.
Let’s say we’re testing a function that starts like this:
import sys
def do_thing(*args):
print("debug: Trying to do the thing:", *args)
if not args:
print("Which thing should I do?", file=sys.stderr)
return
...
In our test, we’ll save the result of readouterr() to the variable captured, then check captured.out and captured.err in our assert statements.
def test_do_thing(capsys):
do_thing()
captured = capsys.readouterr()
assert "Trying to do the thing" in captured.out
assert "Which thing" in captured.err
Part 5: Parametrization#
Parametrization is used to combine multiple tests that are almost exactly the same into one test with several test cases, stored and run as a list of arguments to a single test function.
Part 5.1: Parametrize#
Remember our can_drink() function? Let’s say we were very responsible and wrote a whole bunch of tests for it.
1from my_project.main import can_drink
2
3def test_can_drink_15():
4 is_allowed = can_drink(15)
5 assert not is_allowed
6
7def test_can_drink_0():
8 is_allowed = can_drink(0)
9 assert not is_allowed
10
11def test_can_drink_negative():
12 is_allowed = can_drink(-5)
13 assert not is_allowed
14
15def test_can_drink_float():
16 is_allowed = can_drink(17.5)
17 assert not is_allowed
18
19def test_can_drink_21():
20 is_allowed = can_drink(21)
21 assert is_allowed
22
23def test_can_drink_100():
24 is_allowed = can_drink(100)
25 assert is_allowed
Parametrization allows us to collapse that down into just one test with a few modifications.
A. Identify differences#
Identify what is different between the test functions.
1from my_project.main import can_drink
2
3def test_can_drink_15():
4 is_allowed = can_drink(15)
5 assert not is_allowed
6
7def test_can_drink_0():
8 is_allowed = can_drink(0)
9 assert not is_allowed
10
11def test_can_drink_negative():
12 is_allowed = can_drink(-5)
13 assert not is_allowed
14
15def test_can_drink_float():
16 is_allowed = can_drink(17.5)
17 assert not is_allowed
18
19def test_can_drink_21():
20 is_allowed = can_drink(21)
21 assert is_allowed
22
23def test_can_drink_100():
24 is_allowed = can_drink(100)
25 assert is_allowed
You can see that there are two things that change in these tests:
- The age that is passed to can_drink()
- Whether we expect can_drink() to return True or False
B. Add age variable#
We’ll parametrize the age passed to the can_drink() function.
- Replace the argument passed to can_drink() with a variable age.
- Add age as a parameter in the declaration of test_can_drink().
1from my_project.main import can_drink
2
3def test_can_drink(age):
4 is_allowed = can_drink(age)
5 assert not is_allowed
C. Add expected variable#
Next we’ll parametrize the expected return value.
- Add expected as a parameter in the declaration of test_can_drink().
- Change the assert statement condition to == expected (instead of is_allowed or not is_allowed).
1from my_project.main import can_drink
2
3def test_can_drink(age, expected):
4 is_allowed = can_drink(age)
5 assert is_allowed == expected
D. Add decorator#
Set up the test for parametrization.
- Import pytest.
- Call @pytest.mark.parametrize() immediately above the test function.
- The first argument should be a list containing the parameter names from the test function, in this case age and expected.
- The second will eventually be a list of tuples, but let’s start with an empty list.
1from my_project.main import can_drink
2
3import pytest
4
5@pytest.mark.parametrize(["age", "expected"], [
6])
7def test_can_drink(age, expected):
8 is_allowed = can_drink(age)
9 assert is_allowed == expected
E. Add values for one test#
Each tuple represents what would otherwise have been a separate test, called a test case. Each should contain the values for the variables in the same order they show up in the first argument and the function declaration, in this case the values for age and expected.
If we look at the first test above, those values are 15 for age and False for expected.
- Add a tuple to the empty list from above that contains the values 15 and False.
- Now you can run your tests and it should pass.
1from my_project.main import can_drink
2
3import pytest
4
5@pytest.mark.parametrize(["age", "expected"], [
6 (15, False),
7])
8def test_can_drink(age, expected):
9 is_allowed = can_drink(age)
10 assert is_allowed == expected
As it is written now, this is functionally the same as test_can_drink_15() from above. You should be able to run this test now.
F. Add assert message#
When using parametrization, it is helpful to have a different assert message for each test case. That way if it does fail you can tell which one is the problem.
- Add message as a parameter in the declaration of test_can_drink().
- Add "message" to the end of the list of variable names in the @pytest.mark.parametrize() call.
- In the test, add an assert message that contains the variable message.
- Add a string to the end of the test case tuple describing this specific case.
1from my_project.main import can_drink
2
3import pytest
4
5@pytest.mark.parametrize(["age", "expected", "message"], [
6 (15, False, "False when age is less than 21"),
7])
8def test_can_drink(age, expected, message):
9 is_allowed = can_drink(age)
10 assert is_allowed == expected, \
11 f"can_drink() should return {message}"
G. Add remaining test cases#
1from my_project.main import can_drink
2
3import pytest
4
5@pytest.mark.parametrize(["age", "expected", "message"], [
6 (15, False, "False when age is less than 21"),
7 (0, False, "False when age is zero"),
8 (-5, False, "False when age is a negative int"),
9 (17.5, False, "False when age is a float under 21"),
10 (21, True, "True when age is exactly 21"),
11 (100, True, "True when age is over 21"),
12])
13def test_can_drink(age, expected, message):
14 is_allowed = can_drink(age)
15 assert is_allowed == expected, \
16 f"can_drink() should return {message}"
H. Run the tests#
If you run your tests in verbose mode with the -v flag, you will see a line for each test case, with the values from each tuple inside brackets and separated by dashes. (I truncated the message here for formatting purposes, but your test output will show the whole thing.)
$ pytest -v test_my_project.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 6 items
test_my_project.py::test_can_drink[15-False-False...] PASSED [ 16%]
test_my_project.py::test_can_drink[0-False-False...] PASSED [ 33%]
test_my_project.py::test_can_drink[-5-False-False...] PASSED [ 50%]
test_my_project.py::test_can_drink[17.5-False-False...] PASSED [ 66%]
test_my_project.py::test_can_drink[21-True-True...] PASSED [ 83%]
test_my_project.py::test_can_drink[100-True-True...] PASSED [100%]
====================== 6 passed in 0.01s =======================
I. Failing test output#
Here’s an example of what it looks like when a parametrized test fails.
$ pytest -v test_my_project.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 6 items
test_my_project.py::test_can_drink[15-False-False...] PASSED [ 16%]
test_my_project.py::test_can_drink[0-False-False...] PASSED [ 33%]
test_my_project.py::test_can_drink[-5-False-False...] PASSED [ 50%]
=========================== FAILURES ===========================
___ test_can_drink[-5-True-False when age is a negative int] ___
age = -5, expected = True
message = 'False when age is a negative int'
@pytest.mark.parametrize(["age", "expected", "message"], [
(15, False, "False when age is less than 21"),
(0, False, "False when age is zero"),
(-5, True, "False when age is a negative int"),
(17.5, False, "False when age is a float under 21"),
(21, True, "True when age is exactly 21"),
(100, True, "True when age is over 21"),
])
def test_can_drink(age, expected, message):
is_allowed = can_drink(age)
> assert is_allowed == expected, \
f"can_drink() should return {message}"
E AssertionError: can_drink() should return False when age is a negative int
E assert == failed. [pytest-clarity diff shown]
E
E LHS vs RHS shown below
E
E False
E True
E
test_my_project.py:24: AssertionError
=================== short test summary info ====================
FAILED test_my_project.py::test_can_drink[-5-True-False when age is a negative int] - AssertionError: can_drink() should return False when age is...
!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!
================= 1 failed, 2 passed in 0.08s ==================
You can figure out which test is failing by looking at:
- The line starting with ___ test_can_drink, which contains the parameter values in brackets separated by dashes.
- The lines immediately after that, which contain the more easily readable parameter names and values.
- The line starting with FAILED in the test summary, which contains the parameter values in brackets separated by dashes.
Part 5.2: Skipping instances#
You may want to include test cases that you know will fail in order to, for example, document something that is broken or not supported.
To do this, write the test case as a call to pytest.param() (instead of a tuple) and include the keyword argument marks=pytest.mark.xfail to indicate that it is expected to fail.
1from my_project.main import can_drink
2
3import pytest
4
5@pytest.mark.parametrize(
6 ["age", "expected"], [
7 (15, False),
8 (0, False),
9 (-5, False),
10 (17.5, False),
11 (21, True),
12 (100, True),
13 pytest.param("100", None, marks=pytest.mark.xfail),
14
15])
16def test_can_drink(age, expected):
17 is_allowed = can_drink(age)
18 assert is_allowed == expected
In this example we add a failing test case for passing a string to can_drink() since it’s not supported. (expected is None, but it doesn’t matter because the exception will happen on line 17.)
When we run the tests in verbose mode, Pytest will indicate that the test was marked as XFAIL.
$ pytest -v test_my_project.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 7 items
test_my_project.py::test_can_drink[15-False] PASSED [ 14%]
test_my_project.py::test_can_drink[0-False] PASSED [ 28%]
test_my_project.py::test_can_drink[-5-False] PASSED [ 42%]
test_my_project.py::test_can_drink[17.5-False] PASSED [ 57%]
test_my_project.py::test_can_drink[21-True] PASSED [ 71%]
test_my_project.py::test_can_drink[100-True] PASSED [ 85%]
test_my_project.py::test_can_drink[100-None] XFAIL [100%]
================= 6 passed, 1 xfailed in 0.02s =================
Part 6: Setup and Teardown#
For tests that depend on information from the environment, it is important that each test start with a clean slate. This is generally accomplished with setup and teardown code–that is, code designated to run at the start and end of a particular process.
In this section we’ll talk about a couple of the different ways that you can do this in Pytest tests.
Part 6.1: Per-module setup/teardown functions#
Some setup or teardown steps only need to be done once per module (or file). For example, you may need to open and close a connection to a database, create and delete temporary directories, load the contents of a file, or initialize global variables.
In Pytest tests you can do this by simply defining a setup_module() function, which will be executed once per file before all tests, and/or a teardown_module(), which will be run once per file after all tests.
Here is an example that loads json files (downloaded from https://jsonplaceholder.typicode.com/) and stores the data in a global variable STATE which all test functions can access.
1from pathlib import Path
2import json
3
4def setup_module(module):
5 """Initialize STATE global variable and load from json testdata."""
6 global STATE
7 STATE = {}
8
9 for resource in ["users", "todos"]:
10 file = Path(__file__).parent / ".data" / f"{resource}.json"
11 with file.open() as fh:
12 STATE[resource] = json.load(fh)
13
14def test_user():
15 """Test that the first user was loaded from the users.json file."""
16 user = STATE["users"][0]
17
18 assert user["id"] == 1
19 assert user["name"] == "Leanne Graham"
20
21def test_todo():
22 """Test that the first todo was loaded from the todos.json file."""
23 todo = STATE["todos"][0]
24
25 assert todo["id"] == 1
26 assert todo["title"] == "delectus aut autem"
27 assert not todo["completed"]
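A teardown_module() function works the same way in reverse: it runs once per file after the last test. As a minimal sketch (in a real project this is where you might close a database connection or remove temporary files), it could just clear the global variable:
def teardown_module(module):
    """Run once per file, after all tests: clean up the STATE global."""
    global STATE
    STATE = {}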
Part 6.2: Per-test setup/teardown functions#
It is important to start each test with a clean slate to avoid test corruption.
A. Test corruption#
Test corruption is when changes made in one test unexpectedly break other tests.
28def test_modify_state():
29 """Change the STATE data"""
30 todo = STATE["todos"][0]
31 todo["completed"] = True
32
33 assert todo["completed"]
34
35def test_check_modified_state():
36 """Check the same data that was modified above."""
37 todo = STATE["todos"][0]
38
39 assert not todo["completed"]
When run, test_check_modified_state() will fail because todo["completed"] was changed in the previous test.
$ pytest -v test_setup_teardown.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 4 items
tests/test_setup_teardown.py ...F [100%]
=========================== FAILURES ===========================
__________________ test_check_modified_state ___________________
def test_check_modified_state():
"""Check the same data that was modified above."""
todo = STATE["todos"][0]
> assert not todo["completed"]
E assert not True
tests/test_setup_teardown.py:41: AssertionError
=================== short test summary info ====================
FAILED tests/test_setup_teardown.py::test_check_modified_state - assert not True
================= 1 failed, 3 passed in 0.05s ==================
B. Define setup_function()#
To avoid this we will define a setup_function() function, which will be run before each test.
This setup function will copy the data saved in the STATE variable to a new global variable DATA. The tests will then look at and make any changes to DATA and leave STATE alone.
1from pathlib import Path
2from copy import deepcopy
3import json
4
5def setup_module(module):
6 """Initialize STATE global variable and load from json testdata."""
7 global STATE
8 STATE = {}
9
10 for resource in ["users", "todos"]:
11 file = Path(__file__).parent / ".data" / f"{resource}.json"
12 with file.open() as fh:
13 STATE[resource] = json.load(fh)
14
15def setup_function(function):
16 """Revert data to its original state before each test."""
17 global DATA
18 DATA = deepcopy(STATE)
19
20def test_user():
21 """Test that the first user was loaded from the users.json file."""
22 user = DATA["users"][0]
23
24 assert user["id"] == 1
25 assert user["name"] == "Leanne Graham"
26
27def test_todo():
28 """Test that the first todo was loaded from the todos.json file."""
29 todo = DATA["todos"][0]
30
31 assert todo["id"] == 1
32 assert todo["title"] == "delectus aut autem"
33 assert not todo["completed"]
34
35def test_modify_state():
36 """Change the DATA data"""
37 todo = DATA["todos"][0]
38 todo["completed"] = True
39
40 assert todo["completed"]
41
42def test_check_modified_state():
43 """Check the same data that was modified above."""
44 todo = DATA["todos"][0]
45
46 assert not todo["completed"]
Now that each test starts with the same known set of data, all the tests pass.
$ pytest -v test_setup_teardown.py
===================== test session starts ======================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0
cachedir: .pytest_cache
rootdir: ~/python-class, configfile: pyproject.toml
plugins: pylama-8.3.7, typeguard-2.13.3
collected 4 items
tests/test_setup_teardown.py .... [100%]
====================== 4 passed in 0.01s =======================
Part 7: Fixtures#
We often need to do similar setup in many tests. As a project becomes larger, it becomes unwieldy to come up with sample data over and over again.
Fixtures are one way to approach this problem. In general terms, fixtures are the shared context of the test suite. In fact, the setup and teardown functions from Part 6 are one example of fixtures in the most general sense.
Different languages and testing frameworks have different systems for supporting fixtures, usually involving some form of setup/teardown. Traditionally though, the term fixture refers to a single record of test data–for example a single user dictionary from DATA.
Pytest has a unique and powerful approach to fixtures which can entirely replace setup and teardown functions. It might take a minute to wrap your head around it though.
Part 7.1: Basic Fixture#
In Pytest, fixtures are set up as functions decorated with the @pytest.fixture decorator, and the return value is the fixture data itself.
Tests request a fixture by declaring it as a parameter, which can then be used in the function like a variable.
Here’s an example of the “Hello World” of Pytest fixtures.
1import pytest
2
3@pytest.fixture
4def true():
5 return True
6
7def test_truth(true):
8 assert true == True
Let’s take a closer look.
1. First we import pytest.
1import pytest
2. The decorator @pytest.fixture tells Pytest to treat the next function as a fixture.
3@pytest.fixture
4def true():
5 return True
3. The return value from the fixture function is the fixture value.
In this case the name of the fixture is true and the value is True.
3@pytest.fixture
4def true():
5 return True
4. The test requests a fixture by declaring the fixture name as a parameter in the test function.
In this case the fixture named true is added as a parameter to the test_truth() test.
7def test_truth(true):
8 assert true == True
5. Finally, we use the fixture name in the test the same way we would any other variable.
7def test_truth(true):
8 assert true == True
Example#
Fixtures can be used to store reusable data of all kinds, from strings to dictionaries to pathlib.Path objects.
1from pathlib import Path
2
3import pytest
4
5@pytest.fixture
6def true():
7 return True
8
9@pytest.fixture
10def bret():
11 """A user dictionary"""
12 return {
13 "id": 1,
14 "name": "Leanne Graham",
15 "username": "Bret",
16 "email": "Sincere@april.biz",
17 }
18
19@pytest.fixture
20def fixturedir():
21 """Path object to fixture data files"""
22 return Path(__file__).parent / ".data"
23
24def test_truth(true):
25 assert true == True
26
27def test_something_with_a_user(bret):
28 user = User(bret)
29 assert user.id == 1
30
31def test_something_from_fixturedir(fixturedir):
32 user = User(filename=fixturedir/"users.json")
33 assert user.id == 1
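The User class in the last two tests is not part of Pytest and isn’t defined in this document; a hypothetical sketch that would satisfy both tests might look something like this:
import json

class User:
    """Hypothetical class under test: build a user from a dict or a JSON file."""
    def __init__(self, data=None, filename=None):
        if filename is not None:
            with open(filename) as fh:
                data = json.load(fh)[0]  # first user record in the file
        self.id = data["id"]
        self.name = data["name"]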
Part 7.2: Fixtures Requesting Fixtures#
The same way that a test can request a fixture, a fixture can request another fixture–by declaring it as a parameter to the fixture function.
1from pathlib import Path
2import json
3
4import pytest
5
6
7@pytest.fixture
8def fixturesdir():
9 """Return the Path object to the directory where fixture files are stored."""
10 return Path(__file__).parent / ".data"
11
12
13@pytest.fixture
14def users_file(fixturesdir):
15 return fixturesdir / "users.json"
16
17
18@pytest.fixture
19def users(users_file):
20     """Return the users loaded from the users.json fixture file."""
21 with users_file.open() as fh:
22 users = json.load(fh)
23
24 return users
25
26
27def test_user(users):
28 """Test that the first user was loaded from the users.json file."""
29 user = users[0]
30
31 assert user["id"] == 1
32 assert user["name"] == "Leanne Graham"
Part 7.3: Scope#
In Pytest you can choose what scope a fixture is in–that is, at what level tests should share the fixture before it is destroyed. To do this, pass scope=SCOPE to the @pytest.fixture() decorator.
1@pytest.fixture(scope="module")
2def my_fixture():
3 ...
Scope | Shared with | When Fixture is Destroyed |
---|---|---|
function | a single test function (default) | end of each test |
class | all test methods in a class | last test in the class |
module | all tests in a module (file) | last test in the module |
package | all tests in the package (directory) | last test in the package |
session | all tests to be run | last test to be run |
The STATE and DATA setup and teardown from Part 6 could be written using fixtures instead.
1from copy import deepcopy
2import json
3from pathlib import Path
4
5import pytest
6
7
8@pytest.fixture(scope="module")
9def state():
10 """Load state from json testdata."""
11 data = {}
12
13 for resource in ["users", "todos"]:
14 # file = Path(__file__).parent / ".data" / f"{resource}.json"
15 file = Path.cwd() / f"{resource}.json"
16 with file.open() as fh:
17 data[resource] = json.load(fh)
18
19 return data
20
21
22@pytest.fixture
23def users(state):
24 """Return a fresh set of data for each function."""
25 return deepcopy(state["users"])
26
27
28@pytest.fixture()
29def todos(state):
30 """Return a fresh set of data for each function."""
31 return deepcopy(state["todos"])
32
33
34def test_user(users):
35 """Test that the first user was loaded from the users.json file."""
36 user = users[0]
37
38 assert user["id"] == 1
39 assert user["name"] == "Leanne Graham"
40
41
42def test_todo(todos):
43 """Test that the first todo was loaded from the todos.json file."""
44 todo = todos[0]
45
46 assert todo["id"] == 1
47 assert todo["title"] == "delectus aut autem"
48 assert not todo["completed"]
49
50
51def test_modify_state(todos):
52 """Change the data"""
53 todo = todos[0]
54 todo["completed"] = True
55
56 assert todo["completed"]
57
58
59def test_check_modified_state(todos):
60 """Check the same data that was modified above."""
61 todo = todos[0]
62
63 assert not todo["completed"]
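Fixtures can also handle teardown. If a fixture function uses yield instead of return, the yielded value is what the tests receive, and any code after the yield runs when the fixture is destroyed at the end of its scope. Here is a minimal sketch; the fixture name and the temporary-directory example are just for illustration:
import shutil
import tempfile

import pytest

@pytest.fixture(scope="module")
def scratch_dir():
    """Create a temporary directory for the whole module, then remove it."""
    path = tempfile.mkdtemp()   # setup
    yield path                  # value handed to the tests
    shutil.rmtree(path)         # teardown: runs after the last test in the module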
Reference#
Glossary#
Testing#
- test suite#
A collection of tests.
- test case#
In testing a test case is the individual unit of testing that checks for a specific response to a particular set of inputs.
In Pytest parametrization each combination of test and data (or parameters) is a test case. Each set of parameters is stored in a list of tuples passed as the second argument to @pytest.mark.parametrize.
- fixture#
- fixtures#
In a test suite the fixtures provide a defined, reliable and consistent shared context for the tests. This includes any preparation that is required for one or more tests to run and may include things like setup and teardown such as creating a database or temporary directories; environment setup like configuration; or data to be used in individual test cases.
In Pytest fixtures are defined as functions marked with the @pytest.fixture decorator that may or may not return fixture data.
When programmers refer to “a fixture” it usually means a single record of test data–for example, a single user record.
- fixture scope#
In Pytest the scope of a fixture determines the lifespan of a fixture. That is, at what level a fixture should be shared between tests before it is destroyed, and thus a new instance created the next time it is requested.
The default is that a fixture should exist only for the scope of a single test function. A fixture scope may also be for the class, module, package or session.
- hello world#
- Hello World#
- Hello World!#
A small piece of code used to demonstrate the most basic syntax and setup of a particular language or tool, most often a program that prints “Hello World!”.