You know testing is important, but your tests feel brittle, hard to maintain, or don’t catch real bugs.
This guide provides a systematic workflow for writing unit tests that verify behavior, not implementation. By following these steps, you’ll write tests that catch bugs early and enable confident refactoring.
Before diving into the workflow, read Fundamentals of Software Testing to understand why testing matters and how it fits into the development process.
Prerequisites
Before writing effective unit tests, ensure you have:
- A testing framework installed - pytest (Python), Jest (JavaScript), JUnit (Java), or similar (install commands below)
- Basic programming knowledge - Understanding of functions, classes, and control flow
- Code to test - Either existing code or a clear understanding of what you’re building
- A test runner - Ability to execute tests and see results
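If you still need a framework, pytest and Jest install with their package managers' standard commands (JUnit is typically added as a Maven or Gradle dependency):
# Python
pip install pytest
# JavaScript
npm install --save-dev jest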
The Unit Testing Workflow
Follow this systematic process to write effective unit tests:
Step 1: Understand What You’re Testing
Before writing any test code, clarify what behavior you’re verifying.
Ask yourself:
- What should this function/method do?
- What are the valid inputs?
- What should happen with invalid inputs?
- What are the edge cases?
Example:
For a function calculate_shipping_cost(weight, distance):
- Valid inputs: Positive numbers for weight and distance
- Expected output: Cost in dollars (positive number)
- Edge cases: Zero weight, zero distance, negative numbers, very large numbers
- Error cases: Invalid input types (strings, None, etc.)
Why this matters: Understanding requirements prevents writing tests that verify the wrong behavior.
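One lightweight way to capture this analysis is as plain data you can turn into tests later. A minimal sketch (the expected values assume the 0.021 rate introduced in Step 4):
# Test plan as data: (weight, distance, expected result or exception)
HAPPY_PATH_CASES = [
    (5.0, 100.0, 10.50),
    (10000.0, 5000.0, 1050000.0),  # very large inputs
]
ERROR_CASES = [
    (0, 100, ValueError),        # zero weight
    (5, -10, ValueError),        # negative distance
    ("heavy", 100, TypeError),   # invalid input type
]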
Step 2: Write the Test Case
Write a test that describes the expected behavior using the Arrange-Act-Assert pattern.
The Arrange-Act-Assert (AAA) Pattern:
def test_calculate_shipping_cost_for_normal_inputs():
    # Arrange: Set up test data and conditions
    weight = 5.0
    distance = 100.0
    expected_cost = 10.50

    # Act: Execute the code being tested
    actual_cost = calculate_shipping_cost(weight, distance)

    # Assert: Verify the results
    assert actual_cost == expected_cost
Why this pattern works:
- Arrange - Makes test setup explicit and clear
- Act - Isolates the behavior being tested
- Assert - Shows exactly what you’re verifying
Step 3: Run the Test (Red Phase)
Run the test and watch it fail.
pytest test_shipping.py
Expected output:
FAILED test_shipping.py::test_calculate_shipping_cost_for_normal_inputs
Why this matters: Seeing the test fail confirms it’s actually testing something. If a test passes before you write the implementation, it’s not testing what you think it is.
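If the function doesn’t exist at all yet, the test will error with a NameError rather than fail cleanly. One common approach is to start from a stub so the red phase shows a genuine test failure; a minimal sketch:
def calculate_shipping_cost(weight, distance):
    """Stub: fails every test until Step 4 provides a real implementation."""
    raise NotImplementedError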
Step 4: Write Minimum Code to Pass (Green Phase)
Write the simplest code that makes the test pass.
def calculate_shipping_cost(weight, distance):
    """Calculate shipping cost based on weight and distance."""
    if weight <= 0:
        raise ValueError("Weight must be positive")
    if distance <= 0:
        raise ValueError("Distance must be positive")
    return weight * distance * 0.021
Run the test again:
pytest test_shipping.py
Expected output:
PASSED test_shipping.py::test_calculate_shipping_cost_for_normal_inputs
Step 5: Add Edge Case Tests
Now that the happy path works, test edge cases and error conditions.
import pytest

def test_zero_weight_raises_error():
    with pytest.raises(ValueError, match="Weight must be positive"):
        calculate_shipping_cost(weight=0, distance=100)

def test_negative_distance_raises_error():
    with pytest.raises(ValueError, match="Distance must be positive"):
        calculate_shipping_cost(weight=5, distance=-10)

def test_very_large_values():
    # Test that the function handles large numbers correctly
    weight = 10000.0
    distance = 5000.0
    result = calculate_shipping_cost(weight, distance)
    assert result == 1050000.0  # 10000 * 5000 * 0.021
Run all tests:
pytest test_shipping.py -v
Expected output:
PASSED test_shipping.py::test_calculate_shipping_cost_for_normal_inputs
PASSED test_shipping.py::test_zero_weight_raises_error
PASSED test_shipping.py::test_negative_distance_raises_error
PASSED test_shipping.py::test_very_large_values
Step 6: Refactor with Confidence
Once tests pass, improve the code while keeping tests green.
Before refactoring:
def calculate_shipping_cost(weight, distance):
    if weight <= 0:
        raise ValueError("Weight must be positive")
    if distance <= 0:
        raise ValueError("Distance must be positive")
    return weight * distance * 0.021
After refactoring (extracting validation):
def validate_positive(value, name):
    """Validate that a value is positive."""
    if value <= 0:
        raise ValueError(f"{name} must be positive")

def calculate_shipping_cost(weight, distance):
    """Calculate shipping cost based on weight and distance."""
    validate_positive(weight, "Weight")
    validate_positive(distance, "Distance")
    RATE_PER_UNIT = 0.021
    return weight * distance * RATE_PER_UNIT
Run tests to verify refactoring didn’t break anything:
pytest test_shipping.py
All tests should still pass. This confirms your refactoring preserved behavior.
Writing Tests for Different Scenarios
Testing Error Conditions
Always test that your code fails correctly.
def test_invalid_input_type_raises_error():
    with pytest.raises(TypeError):
        calculate_shipping_cost(weight="heavy", distance=100)

def test_none_input_raises_error():
    with pytest.raises(TypeError):
        calculate_shipping_cost(weight=None, distance=100)
Testing Boundary Conditions
Test at the edges of valid input ranges.
def test_minimum_valid_values():
    # Smallest positive values
    result = calculate_shipping_cost(weight=0.001, distance=0.001)
    assert result > 0
    assert result < 0.001  # Very small result

def test_maximum_reasonable_values():
    # Large but realistic values
    result = calculate_shipping_cost(weight=1000, distance=10000)
    assert result == 210000  # 1000 * 10000 * 0.021
Testing with Multiple Inputs
Use parameterized tests to test multiple scenarios efficiently.
Python (pytest):
import pytest

@pytest.mark.parametrize("weight,distance,expected", [
    (5, 100, 10.50),
    (10, 200, 42.00),
    (1, 1, 0.021),
    (100, 50, 105.00),
])
def test_calculate_shipping_cost_parameterized(weight, distance, expected):
    result = calculate_shipping_cost(weight, distance)
    assert result == expected
JavaScript (Jest):
describe('calculate_shipping_cost', () => {
  test.each([
    [5, 100, 10.50],
    [10, 200, 42.00],
    [1, 1, 0.021],
    [100, 50, 105.00],
  ])('calculates cost for weight=%i, distance=%i', (weight, distance, expected) => {
    expect(calculateShippingCost(weight, distance)).toBe(expected);
  });
});
Best Practices
1. Test Behavior, Not Implementation
Bad: Testing implementation details
def test_uses_specific_algorithm():
    calc = ShippingCalculator()
    # Don't test private methods or internal details
    assert calc._internal_rate == 0.021  # ❌ Implementation detail
Good: Testing behavior
def test_calculates_correct_cost():
    calc = ShippingCalculator()
    # Test the public interface and results
    assert calc.calculate(weight=5, distance=100) == 10.50  # ✓ Behavior
2. Use Descriptive Test Names
Bad:
def test_1():  # ❌ Unclear what this tests
    assert calculate_shipping_cost(5, 100) == 10.50
Good:
def test_calculate_shipping_cost_returns_correct_value_for_standard_inputs():  # ✓ Clear
    assert calculate_shipping_cost(5, 100) == 10.50
3. Keep Tests Independent
Bad: Tests depend on each other
# ❌ Test order matters - fragile
def test_create_user():
    global user
    user = create_user("test@example.com")

def test_user_email():
    assert user.email == "test@example.com"  # Depends on previous test
Good: Each test is self-contained
# ✓ Each test stands alone
def test_create_user_returns_user_object():
    user = create_user("test@example.com")
    assert user is not None

def test_created_user_has_correct_email():
    user = create_user("test@example.com")
    assert user.email == "test@example.com"
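When several tests need the same setup, a pytest fixture shares it without sharing state; a sketch using the same hypothetical create_user:
import pytest

@pytest.fixture
def user():
    # A fresh user is built for every test that requests this fixture
    return create_user("test@example.com")

def test_created_user_has_correct_email(user):
    assert user.email == "test@example.com"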
4. Keep Tests Fast
Slow tests that hit real databases:
# ❌ Slow - hits real database
def test_user_creation():
    db = connect_to_database()
    user = create_user("test@example.com", db)
    assert user.email == "test@example.com"
Fast tests with mocks:
# ✓ Fast - uses mock
from unittest.mock import Mock

def test_user_creation():
    mock_db = Mock()
    user = create_user("test@example.com", mock_db)
    assert user.email == "test@example.com"
    mock_db.save.assert_called_once()
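If the database connection is a module-level dependency rather than a parameter, pytest's built-in monkeypatch fixture can swap it out per test. A sketch assuming a hypothetical users module whose create_user obtains its connection from users.get_database():
from unittest.mock import Mock

import users  # hypothetical module under test

def test_user_creation_without_real_db(monkeypatch):
    mock_db = Mock()
    # Replace the connection accessor for the duration of this test only
    monkeypatch.setattr(users, "get_database", lambda: mock_db)
    user = users.create_user("test@example.com")
    assert user.email == "test@example.com"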
5. One Assertion Per Test (When Practical)
Acceptable: Multiple related assertions
def test_create_user_sets_all_properties():
    user = create_user(email="test@example.com", name="Test User")
    # Related assertions about the same object
    assert user.email == "test@example.com"
    assert user.name == "Test User"
    assert user.is_active == True
Better: Separate tests for different behaviors
def test_create_user_sets_email():
    user = create_user(email="test@example.com", name="Test User")
    assert user.email == "test@example.com"

def test_create_user_sets_name():
    user = create_user(email="test@example.com", name="Test User")
    assert user.name == "Test User"

def test_new_user_is_active_by_default():
    user = create_user(email="test@example.com", name="Test User")
    assert user.is_active == True
Common Pitfalls to Avoid
Pitfall 1: Testing Too Much in One Test
Problem:
def test_user_workflow():  # ❌ Tests too many things
    user = create_user("test@example.com")
    assert user.email == "test@example.com"
    user.update_name("New Name")
    assert user.name == "New Name"
    user.deactivate()
    assert user.is_active == False
    result = user.can_login()
    assert result == False
Solution: Split into focused tests for each behavior, as sketched below.
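For example, the workflow above might split into focused tests like these (same hypothetical user API):
def test_create_user_sets_email():
    user = create_user("test@example.com")
    assert user.email == "test@example.com"

def test_update_name_changes_name():
    user = create_user("test@example.com")
    user.update_name("New Name")
    assert user.name == "New Name"

def test_deactivated_user_cannot_login():
    user = create_user("test@example.com")
    user.deactivate()
    assert user.can_login() == False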
Pitfall 2: Not Testing Error Cases
Problem:
def test_division():  # ❌ Only tests happy path
    assert divide(10, 2) == 5
Solution: Test error conditions too.
def test_division_by_zero_raises_error():  # ✓ Tests error case
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)
Pitfall 3: Tests That Don’t Actually Test
Problem:
def test_process_data():  # ❌ Doesn't verify anything
    process_data([1, 2, 3])  # Just calls the function
Solution: Always assert expected outcomes.
def test_process_data_returns_correct_result():  # ✓ Verifies behavior
    result = process_data([1, 2, 3])
    assert result == [2, 4, 6]  # Doubled values
Running Your Tests
Run All Tests
# Python (pytest)
pytest
# JavaScript (Jest)
npm test
# Java (Maven)
mvn test
# Java (Gradle)
gradle test
Run Specific Tests
# Python - run specific file
pytest test_shipping.py
# Python - run specific test
pytest test_shipping.py::test_calculate_shipping_cost_for_normal_inputs
# JavaScript - run specific file
npm test -- shipping.test.js
# JavaScript - run specific test
npm test -- -t "calculate shipping cost"
Run Tests with Coverage
# Python
pytest --cov=src --cov-report=html
# JavaScript
npm test -- --coverage
# Java (Maven)
mvn test jacoco:report
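To make the build fail when coverage drops below a target, pytest-cov provides a --cov-fail-under flag (the 80 here is just an example threshold):
# Python - fail if total coverage is below 80%
pytest --cov=src --cov-fail-under=80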
Troubleshooting
Problem: Tests Are Flaky (Pass Sometimes, Fail Sometimes)
Causes:
- Tests depend on external systems (databases, APIs, time)
- Tests share state
- Tests depend on execution order
Solutions:
- Use mocks for external dependencies
- Reset state before each test (see the fixture sketch below)
- Ensure tests are independent
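In pytest, an autouse fixture is a common way to reset shared state around every test; a minimal sketch assuming a hypothetical reset_database() helper:
import pytest

@pytest.fixture(autouse=True)
def clean_state():
    # Runs before every test in this module without being requested explicitly
    reset_database()  # hypothetical helper that restores a known state
    yield
    # Per-test teardown could go here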
Problem: Tests Are Slow
Causes:
- Hitting real databases or external services
- Creating too much test data
- Running integration tests as unit tests
Solutions:
- Use mocks and stubs
- Minimize test data setup
- Separate unit tests from integration tests (see the marker sketch below)
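pytest markers are one way to draw that line: tag the slow tests, register the marker (for example in pytest.ini), and exclude them from the default run. A sketch:
import pytest

@pytest.mark.integration  # custom marker; register it in pytest.ini to silence warnings
def test_user_creation_against_real_db():
    ...

# Run only the fast unit tests:
#   pytest -m "not integration"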
Problem: Tests Don’t Catch Real Bugs
Causes:
- Testing implementation instead of behavior
- Not testing edge cases
- Not testing error conditions
Solutions:
- Focus on testing outcomes, not code structure
- Add tests for boundary conditions
- Test both success and failure paths
Next Steps
Now that you know how to write effective unit tests:
- Practice the workflow - Write tests for your next feature using the steps above
- Add tests to existing code - See How to Add Tests to an Existing Codebase
- Review your testing practices - Use the Reference: Testing Checklist
- Explore TDD - Try writing tests before code for your next feature
Quick Reference
The Unit Testing Workflow
- Understand what you’re testing (requirements, inputs, outputs, edge cases)
- Write the test case (Arrange-Act-Assert)
- Run the test (Red - watch it fail)
- Implement minimum code to pass (Green)
- Add edge case tests
- Refactor with confidence (tests stay green)
Arrange-Act-Assert Template
def test_descriptive_name():
    # Arrange: Set up test data
    input_data = ...
    expected_output = ...

    # Act: Execute code under test
    actual_output = function_under_test(input_data)

    # Assert: Verify results
    assert actual_output == expected_output
What to Test
- ✓ Valid inputs (happy path)
- ✓ Invalid inputs (error cases)
- ✓ Boundary conditions (min, max, zero)
- ✓ Edge cases (empty, null, very large)
- ✓ Error handling (exceptions, failures)
What NOT to Test
- ✗ Private implementation details
- ✗ External library code
- ✗ Trivial getters/setters
- ✗ Framework code
References
- Fundamentals of Software Testing - Core testing concepts and principles
- How to Add Tests to an Existing Codebase - Step-by-step guide for adding tests to legacy code
- Reference: Testing Checklist - Comprehensive checklist for effective testing
- pytest Documentation - Python testing framework
- Jest Documentation - JavaScript testing framework
- JUnit 5 Documentation - Java testing framework
- Martin Fowler’s Unit Test - Industry perspective on unit testing
This guide provides a systematic approach to writing unit tests. Practice these steps with your code, and testing will become a natural part of your development workflow.
