Pytest – why is it more popular than unittest?

Karol Wybraniec

In my professional career I have written tests with both frameworks: unittest and pytest. Both are great tools with certain pros and cons, but pytest is significantly more popular these days. In this short blog post I am going to share a couple of pytest’s features that, in my opinion, answer the question in the title.

Experience from unittest carries over to pytest – out of the box

Firstly, it’s worth saying that pytest has supported the unittest.TestCase class from the beginning. All habits learned while writing tests with unittest (e.g. writing assertions) remain the same, which makes the transition to the new framework extremely smooth. Additionally, most unittest features work too, so the pytest runner can execute old tests as well (subtests are not supported).

Moreover, some pytest features, such as marking tests, work in unittest.TestCase subclasses as well.

To run existing unittest-based tests, execute the following command:

$ pytest my_unittest_tests

Below is an example of unittest and pytest features working together:

import pytest
import unittest


class ClassTest(unittest.TestCase):

    @pytest.mark.xfail
    def test_feature_a(self):
        self.assertEqual(2, 3)

    def test_feature_b(self):
        self.assertTrue(True)

Assuming this code is saved in run_class.py, the command to execute it is the following:

$ pytest run_class.py

The output:

$ pytest run_class.py
============================= test session starts ==========================
platform win32 -- Python 3.6.3, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: path_to_root\examples, inifile:
collected 2 items
run_class.py x.                                                          [100%]
===================== 1 passed, 1 xfailed in 0.07 seconds =====================

Distributing tests to multiple CPUs with xdist

Pytest comes with the possibility to run tests in parallel across multiple processes using pytest-xdist. The more tests there are, the bigger the advantage (twice as fast or more).

Firstly, the module has to be installed, because it doesn’t come with pytest:

$ pip install pytest-xdist

To run the tests of a specific module with multiple processes, execute the command below (-n stands for the number of parallel processes):

$ pytest test_class.py -n 6

There’s one important notice about the elapsed time of parallel tests that needs to be highlighted. Each subprocess specified by the “-n” argument builds its own suite, which has its own session-isolated setup (parts of execution common to all tests, as well as conftest.py resolving). Therefore, if the suite performs some heavy loading at the beginning, the execution time of that step is going to be multiplied by the value of “-n”, which may in fact reduce the potential gains in execution time.
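As an illustrative sketch (the fixture and its data are made up, not from the original post), a session-scoped fixture like this in conftest.py runs once per xdist worker rather than once overall:

```python
import pytest


def load_heavy_data():
    # Placeholder for an expensive one-time preparation step,
    # e.g. loading a large dataset or starting a test service.
    return {"loaded": True}


@pytest.fixture(scope="session")
def heavy_setup():
    # With `pytest -n 4`, each of the 4 worker processes builds its
    # own session, so load_heavy_data() runs 4 times in total.
    return load_heavy_data()
```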

Flawless integration with the parameterized module

Pytest is appreciated for its great integration with various modules. As a matter of fact, pytest brings parametrized tests out of the box, but I decided to use a third-party module (due to the limitations of parametrized tests when subclassing unittest.TestCase).
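For comparison, here is a minimal sketch of pytest’s built-in parametrization, which works on plain test functions (the test name and values here are made up):

```python
import pytest


# Each tuple becomes a separate collected test; this decorator does
# not work on unittest.TestCase methods, which is why the
# parameterized module is used for the unittest-style example below.
@pytest.mark.parametrize("value, doubled", [(1, 2), (2, 4), (5, 10)])
def test_doubling(value, doubled):
    assert value * 2 == doubled
```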

Install the parameterized module with the following command:

$ pip install parameterized

This module allows you to run a certain test n times with different test data sets, by simply applying a decorator to the test method.

import requests
import unittest
from parameterized import parameterized


class TestRepositories(unittest.TestCase):

    @parameterized.expand(['autorepr', 'django-nose', 'chai-subset'])
    def test_repos_existence_in_wolever_profile(self, repo):
        api_url = 'https://api.github.com/users/wolever/repos'
        response = requests.get(api_url).json()
        self.assertIn(repo, [item['name'] for item in response])

Pytest automatically collects three test methods, suffixed with the “repo” value. No extra parameter is needed to run parameterized tests. A clear advantage is the re-use of existing code for numerous tests.

The output will be like:

test_repos.py::TestRepositories::test_repos_existence_in_wolever_profile_0_autorepr 
test_repos.py::TestRepositories::test_repos_existence_in_wolever_profile_1_django_nose 
test_repos.py::TestRepositories::test_repos_existence_in_wolever_profile_2_chai_subset

Marking tests as a way of organizing test suites

The main way of organizing your test suites is to keep them in modules, e.g. client_registration, shopping, transaction, etc., but what if there is a need to build a cross-section suite (picking up tests from several modules)? Pytest offers one solution which I found flexible and readable – marks.

Let’s assume that our application is used by several clients, and these clients have implemented different sets of payment methods. By using the mark feature we can easily define which tests are related to a certain client.

As an example, we have two Python files: test_auth.py and test_payments.py.

test_auth.py

import pytest

@pytest.mark.client_a
def test_authentication_of_client_a():
    pass

@pytest.mark.client_b
def test_authentication_of_client_b():
    pass

test_payments.py

import pytest


@pytest.mark.client_a
def test_paypal_payment():
    pass


@pytest.mark.client_a
@pytest.mark.client_b
def test_credit_card_payment():
    pass


@pytest.mark.client_b
def test_apple_pay_payment():
    pass

To run just client_a or just client_b tests, execute the following command:

$ pytest -m client_a

Pytest finds and selects the tests that match the specified mark, giving output like:

$ pytest -m client_a
============================= test session starts =============================
platform win32 -- Python 3.6.3, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: C:\Repozytorium, inifile:
plugins: xdist-1.25.0, forked-0.2
collected 5 items / 2 deselected
 
examples\test_auth.py .                                                  [ 33%]
examples\test_payments.py ..                                             [100%]
 
=================== 3 passed, 2 deselected in 0.20 seconds ====================
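Note that more recent pytest versions warn about unknown marks, so custom marks like these are typically registered in the configuration file (a sketch; the descriptions are made up):

```ini
[pytest]
markers =
    client_a: tests relevant to client A
    client_b: tests relevant to client B
```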

Handy extension – flake8

The next advantage of pytest is the numerous extensions that integrate commonly used tools with the pytest framework.

Install the extension by running the following command:

$ pip install pytest-flake8

Then you are able to run a flake8 check in the current location by running:

$ pytest --flake8

Notice that installing pytest-flake8 does not handle your version of flake8 dependencies. However, if there are any conflicts, you’re going to be notified, e.g.:

pluggy.manager.PluginValidationError: Plugin 'flake8' could not be loaded: (setuptools 28.8.0 (c:\users\username\appdata\local\programs\python\python36\lib\site-packages), Requirement.parse('setuptools>=30'), {'flake8'})!

The above snippet says that setuptools must be in version 30 or higher.

Since running a default flake8 check on your project will likely generate hundreds of warnings, there is a configuration option where you may adjust the flake8 checks. Importantly, it is not placed in .flake8 as usual for flake8, but in setup.cfg or tox.ini under the [pytest] section.

# content of setup.cfg
[pytest]
flake8-max-line-length = 120
flake8-ignore = E201 E231
flake8-ignore =
    *.py E201
    config/app_cfg.py ALL

After you figure out what’s wrong and adjust the settings of your flake8 check, it will end with success:

$ pytest example/config.py --flake8
============================= test session starts =============================
platform win32 -- Python 3.6.3, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: C:\Repozytorium, inifile:
plugins: xdist-1.25.0, forked-0.2, flake8-1.0.2
collected 1 item
 
example\config.py .                                                      [100%]
 
========================== 1 passed in 0.12 seconds ===========================

I must warn you that it may be very annoying to have this as a git commit hook. For example, when there’s a fire on production and your hotfix cannot be committed because you’re fighting with flake8. In other cases, it really helps to keep the same coding standards across a project.

Generating an HTML test report

Another great extension available for pytest, providing an HTML report, is pytest-html. Install it by running the following command:

$ pip install pytest-html

Then you’re able to run tests with:

$ pytest --html=report.html

Then all tests from the current directory are collected and executed, and the results are saved in an HTML report in the current directory, according to the command.


Conclusion

I personally prefer pytest over unittest. It’s fast and reliable. Despite the fact that it reduces boilerplate code to the minimum, it still remains readable. Although it is not found in the standard library (which may be a disadvantage for some), that may actually be a clear advantage, because new releases of pytest are not bound to official Python releases (which happen less frequently). Pytest helps me to organize my test suites and show test results to others. I highly recommend this tool.
