
Python Testing with pytest: Write Comprehensive Test Suites

Tags: python, pytest, testing, tdd, quality-assurance

Testing is essential for building reliable software. pytest is Python's most popular testing framework—it's simple to start with, yet powerful enough for complex applications. This guide covers everything from basic tests to advanced patterns like fixtures, parametrization, and mocking, giving you the skills to write comprehensive test suites.

What You'll Learn

✅ pytest fundamentals and installation
✅ Writing and organizing test cases
✅ Fixtures for test setup/teardown
✅ Parametrized tests for data-driven testing
✅ Mocking and patching with pytest-mock
✅ Test coverage with pytest-cov
✅ Testing async code
✅ Best practices for maintainable tests

Prerequisites

Before diving into pytest, you should be comfortable with core Python: functions, classes, modules, and installing packages with pip.


1. Getting Started with pytest

Installation

# Install pytest
pip install pytest
 
# Optional: useful plugins
pip install pytest-cov      # Coverage reporting
pip install pytest-mock     # Mocking support
pip install pytest-asyncio  # Async testing

Your First Test

Create a file named test_example.py:

# test_example.py
 
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b
 
def test_add():
    """Test the add function."""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
 
def test_add_floats():
    """Test add with floats."""
    result = add(1.5, 2.5)
    assert result == 4.0

Run tests:

# Run all tests
pytest
 
# Run with verbose output
pytest -v
 
# Run specific file
pytest test_example.py
 
# Run specific test
pytest test_example.py::test_add

Test Discovery

pytest automatically discovers tests by:

  • Looking for files named test_*.py or *_test.py
  • Finding functions starting with test_
  • Finding classes starting with Test

# All of these will be discovered:
 
def test_something():
    pass
 
class TestCalculator:
    def test_add(self):
        pass
 
    def test_subtract(self):
        pass

2. Assertions and Test Outcomes

Basic Assertions

pytest uses Python's built-in assert statement:

def test_assertions():
    # Equality
    assert 1 + 1 == 2
 
    # Truthiness
    assert [1, 2, 3]  # Non-empty list is truthy
    assert not []      # Empty list is falsy
 
    # Membership
    assert "hello" in "hello world"
    assert 3 in [1, 2, 3]
 
    # Type checking
    assert isinstance("hello", str)
 
    # Comparisons
    assert 5 > 3
    assert 10 <= 10

Testing Exceptions

import pytest
 
def divide(a: float, b: float) -> float:
    """Divide a by b."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
 
def test_divide_by_zero():
    """Test that dividing by zero raises ValueError."""
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
 
    # Check exception message
    assert "Cannot divide by zero" in str(exc_info.value)
 
def test_divide_by_zero_match():
    """Test exception with message matching."""
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)

Approximate Comparisons

import pytest

def test_floating_point():
    """Test floating point comparisons."""
    result = 0.1 + 0.2
 
    # Don't do this - might fail due to floating point precision
    # assert result == 0.3
 
    # Use pytest.approx for floating point comparisons
    assert result == pytest.approx(0.3)
 
    # With relative tolerance
    assert 100.01 == pytest.approx(100, rel=1e-3)
 
    # With absolute tolerance
    assert 0.1 + 0.2 == pytest.approx(0.3, abs=1e-10)
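If you ever need the same tolerant comparison outside a test, the standard library's `math.isclose` offers equivalent relative and absolute tolerances (a quick stdlib-only sketch):

```python
import math

result = 0.1 + 0.2            # 0.30000000000000004 in binary floating point

print(result == 0.3)          # False: exact equality fails
print(math.isclose(result, 0.3))                  # True (default rel_tol=1e-9)
print(math.isclose(100.01, 100, rel_tol=1e-3))    # True: within 0.1% relative tolerance
print(math.isclose(result, 0.3, abs_tol=1e-10))   # True: within absolute tolerance
```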

3. Test Organization

Project Structure

my_project/
├── src/
│   └── my_app/
│       ├── __init__.py
│       ├── calculator.py
│       └── utils.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Shared fixtures
│   ├── test_calculator.py
│   └── test_utils.py
├── pyproject.toml
└── pytest.ini               # pytest configuration

Configuration with pytest.ini

# pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = -v --tb=short
filterwarnings =
    ignore::DeprecationWarning

Or in pyproject.toml:

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-v --tb=short"

Grouping Tests with Classes

class TestCalculator:
    """Tests for the Calculator class."""
 
    def test_add(self):
        calc = Calculator()
        assert calc.add(2, 3) == 5
 
    def test_subtract(self):
        calc = Calculator()
        assert calc.subtract(5, 3) == 2
 
    def test_multiply(self):
        calc = Calculator()
        assert calc.multiply(4, 3) == 12

Markers for Test Categories

import pytest
import sys
 
@pytest.mark.slow
def test_large_dataset():
    """Test that takes a long time."""
    pass
 
@pytest.mark.integration
def test_database_connection():
    """Integration test requiring database."""
    pass
 
@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    pass
 
@pytest.mark.skipif(
    sys.platform == "win32",
    reason="Does not run on Windows"
)
def test_unix_only():
    pass
 
@pytest.mark.xfail(reason="Known bug #123")
def test_known_failure():
    assert False  # Expected to fail

Run specific markers:

# Run only slow tests
pytest -m slow
 
# Run all except slow tests
pytest -m "not slow"
 
# Run slow OR integration tests
pytest -m "slow or integration"
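Recent pytest versions warn about marks they don't recognize (and fail under `--strict-markers`), so custom markers like `slow` and `integration` should be registered in your configuration. A minimal sketch for pytest.ini:

```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
    integration: marks tests that require external services
```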

4. Fixtures: Setup and Teardown

Basic Fixtures

Fixtures provide test data and resources:

import pytest
 
@pytest.fixture
def sample_data():
    """Provide sample data for tests."""
    return {"name": "Alice", "age": 30, "email": "alice@example.com"}
 
def test_user_name(sample_data):
    """Test using the fixture."""
    assert sample_data["name"] == "Alice"
 
def test_user_age(sample_data):
    """Another test using the same fixture."""
    assert sample_data["age"] == 30

Fixture Scopes

import pytest
 
@pytest.fixture(scope="function")  # Default: new for each test
def function_fixture():
    print("\nSetting up function fixture")
    return "function data"
 
@pytest.fixture(scope="class")  # Shared across tests in a class
def class_fixture():
    print("\nSetting up class fixture")
    return "class data"
 
@pytest.fixture(scope="module")  # Shared across tests in a module
def module_fixture():
    print("\nSetting up module fixture")
    return "module data"
 
@pytest.fixture(scope="session")  # Shared across all tests
def session_fixture():
    print("\nSetting up session fixture")
    return "session data"

Fixtures with Teardown

import pytest
 
@pytest.fixture
def database_connection():
    """Setup and teardown database connection."""
    # Setup
    conn = create_database_connection()
    print("Database connected")
 
    yield conn  # Provide the fixture value
 
    # Teardown (runs after test completes)
    conn.close()
    print("Database disconnected")
 
def test_query(database_connection):
    """Test that uses database connection."""
    result = database_connection.execute("SELECT 1")
    assert result == 1

Fixture Factories

import pytest
 
@pytest.fixture
def make_user():
    """Factory fixture to create users."""
    created_users = []
 
    def _make_user(name: str, age: int = 25):
        user = {"name": name, "age": age}
        created_users.append(user)
        return user
 
    yield _make_user
 
    # Cleanup all created users
    for user in created_users:
        print(f"Cleaning up user: {user['name']}")
 
def test_multiple_users(make_user):
    """Test creating multiple users."""
    alice = make_user("Alice", 30)
    bob = make_user("Bob", 25)
 
    assert alice["name"] == "Alice"
    assert bob["age"] == 25

Shared Fixtures in conftest.py

# tests/conftest.py
import pytest
 
@pytest.fixture
def api_client():
    """Shared API client fixture available to all tests."""
    client = APIClient(base_url="https://api.example.com")
    client.authenticate()
    yield client
    client.logout()
 
@pytest.fixture
def temp_directory(tmp_path):
    """Create a temp directory with some files."""
    test_dir = tmp_path / "test_data"
    test_dir.mkdir()
    (test_dir / "file1.txt").write_text("Hello")
    (test_dir / "file2.txt").write_text("World")
    return test_dir

Built-in Fixtures

pytest provides useful built-in fixtures:

def test_tmp_path(tmp_path):
    """tmp_path provides a temporary directory unique to each test."""
    file = tmp_path / "test.txt"
    file.write_text("Hello, World!")
    assert file.read_text() == "Hello, World!"
 
def test_capsys(capsys):
    """capsys captures stdout and stderr."""
    print("Hello, stdout!")
    captured = capsys.readouterr()
    assert "Hello, stdout!" in captured.out
 
def test_monkeypatch(monkeypatch):
    """monkeypatch modifies objects, dicts, or environment variables."""
    monkeypatch.setenv("API_KEY", "test-key")
    import os
    assert os.environ["API_KEY"] == "test-key"

5. Parametrized Tests

Basic Parametrization

import pytest
 
@pytest.mark.parametrize("input,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (-2, 4),
    (0, 0),
])
def test_square(input, expected):
    """Test square function with multiple inputs."""
    assert input ** 2 == expected
 
@pytest.mark.parametrize("a,b,expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (100, 200, 300),
])
def test_add(a, b, expected):
    """Test add function with multiple inputs."""
    assert a + b == expected

Parametrize with IDs

import pytest
 
@pytest.mark.parametrize("email,is_valid", [
    pytest.param("user@example.com", True, id="valid_email"),
    pytest.param("invalid", False, id="no_at_symbol"),
    pytest.param("@example.com", False, id="no_username"),
    pytest.param("user@", False, id="no_domain"),
    pytest.param("", False, id="empty_string"),
])
def test_email_validation(email, is_valid):
    """Test email validation with descriptive IDs."""
    assert validate_email(email) == is_valid

Multiple Parametrize Decorators

import pytest
 
@pytest.mark.parametrize("x", [1, 2, 3])
@pytest.mark.parametrize("y", [10, 20])
def test_multiply(x, y):
    """Runs all six combinations: (1,10), (1,20), (2,10), (2,20), (3,10), (3,20)."""
    assert x * y == y * x

Parametrize Fixtures

import pytest
 
@pytest.fixture(params=["mysql", "postgresql", "sqlite"])
def database(request):
    """Fixture that runs tests against multiple databases."""
    db_type = request.param
    db = create_database(db_type)
    yield db
    db.close()
 
def test_insert(database):
    """This test runs 3 times - once for each database."""
    database.insert({"id": 1, "name": "Test"})
    result = database.get(1)
    assert result["name"] == "Test"

6. Mocking and Patching

Using pytest-mock

import pytest
 
def fetch_user(user_id: int) -> dict:
    """Fetch user from external API."""
    import requests
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()
 
def test_fetch_user(mocker):
    """Test fetch_user with mocked requests."""
    # Mock the requests.get function
    mock_response = mocker.Mock()
    mock_response.json.return_value = {"id": 1, "name": "Alice"}
 
    mocker.patch("requests.get", return_value=mock_response)
 
    # Test the function
    result = fetch_user(1)
 
    assert result == {"id": 1, "name": "Alice"}

Mocking Methods and Classes

def test_mock_method(mocker):
    """Mock a method on a class."""
    class UserService:
        def get_user(self, user_id: int) -> dict:
            # Complex database logic
            pass
 
    service = UserService()
    mocker.patch.object(
        service,
        "get_user",
        return_value={"id": 1, "name": "Alice"}
    )
 
    result = service.get_user(1)
    assert result["name"] == "Alice"
 
def test_mock_class(mocker):
    """Mock an entire class."""
    MockClient = mocker.patch("myapp.client.APIClient")
    mock_instance = MockClient.return_value
    mock_instance.fetch.return_value = {"data": "test"}
 
    # Now any code that imports APIClient will get the mock
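pytest-mock is a thin convenience layer over the standard library's `unittest.mock`, so the same patching works without the plugin. Here is a stdlib-only sketch using `patch.object` as a context manager (the `UserService` class is a stand-in for illustration):

```python
from unittest.mock import patch

class UserService:
    def get_user(self, user_id: int) -> dict:
        # Stand-in for real database logic
        raise RuntimeError("would hit the real database")

def test_get_user_mocked():
    service = UserService()
    # patch.object swaps the method out for the duration of the with-block only
    with patch.object(service, "get_user", return_value={"id": 1, "name": "Alice"}):
        assert service.get_user(1) == {"id": 1, "name": "Alice"}
    # Outside the block the real method is restored
```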

Mocking Environment Variables

import pytest

def test_config_from_env(monkeypatch):
    """Test configuration loading from environment."""
    monkeypatch.setenv("DATABASE_URL", "postgres://localhost/test")
    monkeypatch.setenv("DEBUG", "true")
 
    config = load_config()
 
    assert config.database_url == "postgres://localhost/test"
    assert config.debug is True
 
def test_missing_env_var(monkeypatch):
    """Test behavior when environment variable is missing."""
    monkeypatch.delenv("API_KEY", raising=False)
 
    with pytest.raises(ConfigError, match="API_KEY"):
        load_config()

Spy to Track Calls

def test_spy(mocker):
    """Spy on a function to track calls without replacing it."""
    class EmailService:
        def send(self, to: str, subject: str, body: str):
            # Actually sends email
            return True
 
    service = EmailService()
    spy = mocker.spy(service, "send")
 
    # Call the real method
    service.send("user@example.com", "Hello", "Body")
 
    # Verify it was called
    spy.assert_called_once_with("user@example.com", "Hello", "Body")

Mocking Time and Dates

from datetime import datetime
import pytest
 
def get_greeting() -> str:
    """Return greeting based on time of day."""
    hour = datetime.now().hour
    if hour < 12:
        return "Good morning"
    elif hour < 18:
        return "Good afternoon"
    else:
        return "Good evening"
 
def test_morning_greeting(mocker):
    """Test morning greeting."""
    # Patch datetime where it is looked up: in the module that defines get_greeting
    mock_datetime = mocker.patch("mymodule.datetime")
    mock_datetime.now.return_value = datetime(2024, 1, 15, 9, 0, 0)
 
    assert get_greeting() == "Good morning"
 
def test_evening_greeting(mocker):
    """Test evening greeting."""
    mock_datetime = mocker.patch("mymodule.datetime")
    mock_datetime.now.return_value = datetime(2024, 1, 15, 20, 0, 0)
 
    assert get_greeting() == "Good evening"

7. Testing Async Code

Setup pytest-asyncio

pip install pytest-asyncio

# pyproject.toml
[tool.pytest.ini_options]
asyncio_mode = "auto"

Basic Async Tests

import pytest
import asyncio
 
async def fetch_data(url: str) -> dict:
    """Async function to test."""
    await asyncio.sleep(0.1)  # Simulate network delay
    return {"url": url, "status": "ok"}
 
@pytest.mark.asyncio
async def test_fetch_data():
    """Test async function."""
    result = await fetch_data("https://api.example.com")
    assert result["status"] == "ok"
 
@pytest.mark.asyncio
async def test_multiple_async_calls():
    """Test multiple async operations."""
    results = await asyncio.gather(
        fetch_data("https://api1.example.com"),
        fetch_data("https://api2.example.com"),
    )
    assert len(results) == 2

Async Fixtures

import pytest
import asyncio
 
@pytest.fixture
async def async_client():
    """Async fixture for API client."""
    client = AsyncAPIClient()
    await client.connect()
    yield client
    await client.disconnect()
 
@pytest.mark.asyncio
async def test_with_async_fixture(async_client):
    """Test using async fixture."""
    result = await async_client.get("/users")
    assert result["status"] == 200

Testing Async Context Managers

import pytest
 
class AsyncDatabase:
    async def __aenter__(self):
        await self.connect()
        return self
 
    async def __aexit__(self, *args):
        await self.disconnect()
 
    async def connect(self):
        pass
 
    async def disconnect(self):
        pass
 
    async def query(self, sql: str) -> list:
        return [{"id": 1}]
 
@pytest.mark.asyncio
async def test_async_context_manager():
    """Test async context manager."""
    async with AsyncDatabase() as db:
        results = await db.query("SELECT * FROM users")
        assert len(results) == 1

8. Test Coverage

Setup pytest-cov

pip install pytest-cov

Running with Coverage

# Basic coverage report
pytest --cov=myapp
 
# With HTML report
pytest --cov=myapp --cov-report=html
 
# With terminal report showing missing lines
pytest --cov=myapp --cov-report=term-missing
 
# Fail if coverage is below threshold
pytest --cov=myapp --cov-fail-under=80

Coverage Configuration

# pyproject.toml
[tool.coverage.run]
source = ["src"]
branch = true
omit = [
    "*/tests/*",
    "*/__init__.py",
]
 
[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80

Interpreting Coverage Reports

Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
myapp/__init__.py           5      0   100%
myapp/calculator.py        20      2    90%   15-16
myapp/utils.py             30      5    83%   22-26
-----------------------------------------------------
TOTAL                      55      7    87%
  • Stmts: Total statements
  • Miss: Statements not covered by tests
  • Cover: Percentage covered
  • Missing: Line numbers not covered
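To make the "Missing" column concrete: suppose lines 15-16 of `calculator.py` (a hypothetical excerpt) are the error branch below. The report is saying no test ever exercises that branch, and one extra test closes the gap:

```python
# calculator.py (hypothetical excerpt); the raise branch is the uncovered one
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# This test covers the previously missing lines:
def test_divide_by_zero_is_reported():
    try:
        divide(1, 0)
        assert False, "expected ValueError"
    except ValueError as exc:
        assert "Cannot divide by zero" in str(exc)
```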

9. Real-World Example: Testing a User Service

# src/user_service.py
from dataclasses import dataclass
from typing import Optional
import hashlib
 
@dataclass
class User:
    id: int
    email: str
    password_hash: str
    is_active: bool = True
 
class UserService:
    def __init__(self, db):
        self.db = db
 
    def create_user(self, email: str, password: str) -> User:
        """Create a new user."""
        if not self._validate_email(email):
            raise ValueError("Invalid email format")
 
        if len(password) < 8:
            raise ValueError("Password must be at least 8 characters")
 
        password_hash = self._hash_password(password)
        user = User(
            id=self.db.next_id(),
            email=email,
            password_hash=password_hash
        )
        self.db.save(user)
        return user
 
    def authenticate(self, email: str, password: str) -> Optional[User]:
        """Authenticate a user."""
        user = self.db.find_by_email(email)
        if not user:
            return None
 
        if not user.is_active:
            return None
 
        if user.password_hash != self._hash_password(password):
            return None
 
        return user
 
    def _validate_email(self, email: str) -> bool:
        return "@" in email and "." in email.split("@")[-1]
 
    def _hash_password(self, password: str) -> str:
        return hashlib.sha256(password.encode()).hexdigest()

# tests/test_user_service.py
import pytest
from user_service import UserService, User
 
@pytest.fixture
def mock_db(mocker):
    """Mock database for testing."""
    db = mocker.Mock()
    db.next_id.return_value = 1
    return db
 
@pytest.fixture
def user_service(mock_db):
    """User service with mocked database."""
    return UserService(mock_db)
 
class TestCreateUser:
    """Tests for user creation."""
 
    def test_create_user_success(self, user_service, mock_db):
        """Test successful user creation."""
        user = user_service.create_user("alice@example.com", "securepass123")
 
        assert user.id == 1
        assert user.email == "alice@example.com"
        assert user.is_active is True
        mock_db.save.assert_called_once()
 
    @pytest.mark.parametrize("invalid_email", [
        "invalid",
        "no-at-symbol.com",
        "@nodomain",
        "missing@dotcom",
    ])
    def test_create_user_invalid_email(self, user_service, invalid_email):
        """Test user creation with invalid emails."""
        with pytest.raises(ValueError, match="Invalid email"):
            user_service.create_user(invalid_email, "securepass123")
 
    def test_create_user_short_password(self, user_service):
        """Test user creation with short password."""
        with pytest.raises(ValueError, match="at least 8 characters"):
            user_service.create_user("alice@example.com", "short")
 
class TestAuthenticate:
    """Tests for user authentication."""
 
    @pytest.fixture
    def existing_user(self, user_service):
        """Create an existing user for authentication tests."""
        return User(
            id=1,
            email="alice@example.com",
            password_hash=user_service._hash_password("correctpass"),
            is_active=True
        )
 
    def test_authenticate_success(self, user_service, mock_db, existing_user):
        """Test successful authentication."""
        mock_db.find_by_email.return_value = existing_user
 
        result = user_service.authenticate("alice@example.com", "correctpass")
 
        assert result == existing_user
 
    def test_authenticate_wrong_password(self, user_service, mock_db, existing_user):
        """Test authentication with wrong password."""
        mock_db.find_by_email.return_value = existing_user
 
        result = user_service.authenticate("alice@example.com", "wrongpass")
 
        assert result is None
 
    def test_authenticate_user_not_found(self, user_service, mock_db):
        """Test authentication when user doesn't exist."""
        mock_db.find_by_email.return_value = None
 
        result = user_service.authenticate("unknown@example.com", "anypass")
 
        assert result is None
 
    def test_authenticate_inactive_user(self, user_service, mock_db, existing_user):
        """Test authentication for inactive user."""
        existing_user.is_active = False
        mock_db.find_by_email.return_value = existing_user
 
        result = user_service.authenticate("alice@example.com", "correctpass")
 
        assert result is None

10. Best Practices

✅ Do's

# ✅ Use descriptive test names
def test_user_creation_fails_with_duplicate_email():
    pass
 
# ✅ One assertion per test (when practical)
def test_user_email():
    user = create_user()
    assert user.email == "expected@email.com"
 
def test_user_is_active_by_default():
    user = create_user()
    assert user.is_active is True
 
# ✅ Use fixtures for setup
@pytest.fixture
def user():
    return create_user("test@example.com")
 
# ✅ Use parametrize for multiple similar tests
@pytest.mark.parametrize("input,expected", [...])
def test_validation(input, expected):
    pass
 
# ✅ Keep tests isolated - no shared state
def test_one():
    data = create_data()  # Create fresh data
    # ...
 
def test_two():
    data = create_data()  # Create fresh data again
    # ...

❌ Don'ts

# ❌ Don't test implementation details
def test_internal_method():
    obj = MyClass()
    assert obj._private_method() == expected  # Testing private method
 
# ❌ Don't use magic numbers without context
def test_something():
    assert result == 42  # What is 42?
 
# ✅ Better
EXPECTED_USER_COUNT = 42
def test_user_count():
    assert result == EXPECTED_USER_COUNT
 
# ❌ Don't have tests depend on each other
def test_create_user():
    global user_id
    user_id = create_user()
 
def test_get_user():
    get_user(user_id)  # Depends on test_create_user running first!
 
# ❌ Don't ignore test failures
@pytest.mark.skip  # Why is this skipped? When will it be fixed?
def test_important_feature():
    pass

Test Naming Conventions

# Pattern: test_<what>_<condition>_<expected>
 
def test_login_with_valid_credentials_returns_token():
    pass
 
def test_login_with_invalid_password_raises_auth_error():
    pass
 
def test_create_user_with_duplicate_email_returns_conflict():
    pass

11. Common Pitfalls

Pitfall 1: Flaky Tests

# ❌ Flaky - depends on timing
def test_async_operation():
    start_async_operation()
    time.sleep(1)  # Hope it's done by now...
    assert get_result() is not None
 
# ✅ Better - wait for completion
def test_async_operation():
    result = start_async_operation()
    result.wait(timeout=5)  # Wait properly
    assert result.value is not None

Pitfall 2: Over-Mocking

# ❌ Testing the mock, not the code
def test_calculator(mocker):
    mock_add = mocker.patch.object(calculator, 'add', return_value=5)
    result = calculator.add(2, 3)
    assert result == 5  # This just tests the mock!
 
# ✅ Test actual behavior
def test_calculator():
    result = calculator.add(2, 3)
    assert result == 5  # Tests the real implementation

Pitfall 3: Not Cleaning Up

# ❌ Leaves files/resources behind
def test_file_processing():
    with open("test_output.txt", "w") as f:
        f.write("test")
    process_file("test_output.txt")
    # File is left behind!
 
# ✅ Use fixtures or tmp_path
def test_file_processing(tmp_path):
    test_file = tmp_path / "test_output.txt"
    test_file.write_text("test")
    process_file(test_file)
    # tmp_path is automatically cleaned up

Summary

In this guide, you learned:

✅ pytest fundamentals: installation, test discovery, assertions
✅ Test organization with classes, markers, and configuration
✅ Fixtures for setup/teardown with different scopes
✅ Parametrized tests for data-driven testing
✅ Mocking and patching with pytest-mock
✅ Testing async code with pytest-asyncio
✅ Test coverage with pytest-cov
✅ Best practices for maintainable tests

Testing is a skill that improves with practice. Start with simple unit tests, then gradually add more sophisticated patterns as your codebase grows.

Next Steps

Now that you understand testing, explore the related topics in the rest of the Python Learning Roadmap series.



Part of the Python Learning Roadmap series
