You know testing is important, but your tests feel brittle, are hard to maintain, or don't catch real bugs.

This guide provides a systematic workflow for writing unit tests that verify behavior, not implementation. By following these steps, you'll write tests that catch bugs early and enable confident refactoring.

Before diving into the workflow, read [Fundamentals of Software Testing]({{< ref "fundamentals-of-software-testing" >}}) to understand why testing matters and how it fits into the development process.

## Prerequisites

Before writing effective unit tests, ensure you have:

* **A testing framework installed** - pytest (Python), Jest (JavaScript), JUnit (Java), or similar
* **Basic programming knowledge** - Understanding of functions, classes, and control flow
* **Code to test** - Either existing code or a clear understanding of what you're building
* **A test runner** - Ability to execute tests and see results

## The Unit Testing Workflow

Follow this systematic process to write effective unit tests:

### Step 1: Understand What You're Testing

Before writing any test code, clarify what behavior you're verifying.

**Ask yourself:**

* What should this function/method do?
* What are the valid inputs?
* What should happen with invalid inputs?
* What are the edge cases?

**Example:**

For a function `calculate_shipping_cost(weight, distance)`:

* **Valid inputs:** Positive numbers for weight and distance
* **Expected output:** Cost in dollars (positive number)
* **Edge cases:** Zero weight, zero distance, negative numbers, very large numbers
* **Error cases:** Invalid input types (strings, None, etc.)

**Why this matters:** Understanding requirements prevents writing tests that verify the wrong behavior.
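
One lightweight way to capture this analysis before writing any assertions is to jot the planned cases down as empty test stubs. A sketch for the `calculate_shipping_cost` example above:

```python
# Planned tests for calculate_shipping_cost (stubs to fill in later)

def test_calculate_shipping_cost_for_normal_inputs(): ...

def test_zero_weight_raises_error(): ...

def test_negative_distance_raises_error(): ...

def test_very_large_values(): ...
```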

### Step 2: Write the Test Case

Write a test that describes the expected behavior using the Arrange-Act-Assert pattern.

**The Arrange-Act-Assert (AAA) Pattern:**

```python
def test_calculate_shipping_cost_for_normal_inputs():
    # Arrange: Set up test data and conditions
    weight = 5.0
    distance = 100.0
    expected_cost = 10.50

    # Act: Execute the code being tested
    actual_cost = calculate_shipping_cost(weight, distance)

    # Assert: Verify the results
    assert actual_cost == expected_cost
```

**Why this pattern works:**

* **Arrange** - Makes test setup explicit and clear
* **Act** - Isolates the behavior being tested
* **Assert** - Shows exactly what you're verifying
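
One caveat on the assert step: comparing floats with `==` can fail on tiny rounding differences even when the logic is correct. A minimal sketch using pytest's `pytest.approx`, which compares within a tolerance (relative `1e-6` by default):

```python
import pytest

def test_calculate_shipping_cost_with_tolerance():
    # approx() tolerates tiny floating-point rounding differences
    # that an exact == comparison would report as failures
    assert calculate_shipping_cost(5.0, 100.0) == pytest.approx(10.50)
```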

### Step 3: Run the Test (Red Phase)

Run the test and watch it fail.

```bash
pytest test_shipping.py
```

**Expected output:**

```text
FAILED test_shipping.py::test_calculate_shipping_cost_for_normal_inputs
```

**Why this matters:** Seeing the test fail confirms it's actually testing something. If a test passes before you write the implementation, it's not testing what you think it is.

### Step 4: Write Minimum Code to Pass (Green Phase)

Write the simplest code that makes the test pass. (This version also validates its inputs up front, because the edge-case tests in Step 5 will expect a `ValueError` for non-positive values.)

```python
def calculate_shipping_cost(weight, distance):
    """Calculate shipping cost based on weight and distance."""
    if weight <= 0:
        raise ValueError("Weight must be positive")
    if distance <= 0:
        raise ValueError("Distance must be positive")
    return weight * distance * 0.021
```

Run the test again:

```bash
pytest test_shipping.py -v
```

**Expected output:**

```text
test_shipping.py::test_calculate_shipping_cost_for_normal_inputs PASSED
```

### Step 5: Add Edge Case Tests

Now that the happy path works, test edge cases and error conditions.

```python
import pytest

def test_zero_weight_raises_error():
    with pytest.raises(ValueError, match="Weight must be positive"):
        calculate_shipping_cost(weight=0, distance=100)

def test_negative_distance_raises_error():
    with pytest.raises(ValueError, match="Distance must be positive"):
        calculate_shipping_cost(weight=5, distance=-10)

def test_very_large_values():
    # Test that function handles large numbers correctly
    weight = 10000.0
    distance = 5000.0
    result = calculate_shipping_cost(weight, distance)
    assert result == 1050000.0  # 10000 * 5000 * 0.021
```

**Run all tests:**

```bash
pytest test_shipping.py -v
```

**Expected output:**

```text
test_shipping.py::test_calculate_shipping_cost_for_normal_inputs PASSED
test_shipping.py::test_zero_weight_raises_error PASSED
test_shipping.py::test_negative_distance_raises_error PASSED
test_shipping.py::test_very_large_values PASSED
```

### Step 6: Refactor with Confidence

Once tests pass, improve the code while keeping tests green.

**Before refactoring:**

```python
def calculate_shipping_cost(weight, distance):
    if weight <= 0:
        raise ValueError("Weight must be positive")
    if distance <= 0:
        raise ValueError("Distance must be positive")
    return weight * distance * 0.021
```

**After refactoring** (extracting validation):

```python
def validate_positive(value, name):
    """Validate that a value is positive."""
    if value <= 0:
        raise ValueError(f"{name} must be positive")

def calculate_shipping_cost(weight, distance):
    """Calculate shipping cost based on weight and distance."""
    validate_positive(weight, "Weight")
    validate_positive(distance, "Distance")

    RATE_PER_UNIT = 0.021
    return weight * distance * RATE_PER_UNIT
```

**Run tests to verify refactoring didn't break anything:**

```bash
pytest test_shipping.py
```

All tests should still pass. This confirms your refactoring preserved behavior.

## Writing Tests for Different Scenarios

### Testing Error Conditions

Always test that your code fails correctly.

```python
def test_invalid_input_type_raises_error():
    with pytest.raises(TypeError):
        calculate_shipping_cost(weight="heavy", distance=100)

def test_none_input_raises_error():
    with pytest.raises(TypeError):
        calculate_shipping_cost(weight=None, distance=100)
```
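
With the Step 4 implementation, these `TypeError`s are raised incidentally by the `<=` comparison rather than by deliberate validation. If you want the error to be part of the function's contract, here is one sketch of explicit type checking:

```python
from numbers import Real

def calculate_shipping_cost(weight, distance):
    """Calculate shipping cost based on weight and distance."""
    for name, value in (("Weight", weight), ("Distance", distance)):
        # Reject non-numeric inputs explicitly instead of relying on
        # the TypeError that `value <= 0` raises as a side effect
        if not isinstance(value, Real):
            raise TypeError(f"{name} must be a number")
        if value <= 0:
            raise ValueError(f"{name} must be positive")
    return weight * distance * 0.021
```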

### Testing Boundary Conditions

Test at the edges of valid input ranges.

```python
def test_minimum_valid_values():
    # Smallest positive values
    result = calculate_shipping_cost(weight=0.001, distance=0.001)
    assert result > 0
    assert result < 0.001  # Very small result

def test_maximum_reasonable_values():
    # Large but realistic values
    result = calculate_shipping_cost(weight=1000, distance=10000)
    assert result == 210000  # 1000 * 10000 * 0.021
```

### Testing with Multiple Inputs

Use parameterized tests to test multiple scenarios efficiently.

**Python (pytest):**

```python
import pytest

@pytest.mark.parametrize("weight,distance,expected", [
    (5, 100, 10.50),
    (10, 200, 42.00),
    (1, 1, 0.021),
    (100, 50, 105.00),
])
def test_calculate_shipping_cost_parameterized(weight, distance, expected):
    result = calculate_shipping_cost(weight, distance)
    assert result == expected
```

**JavaScript (Jest):**

```javascript
describe('calculate_shipping_cost', () => {
  test.each([
    [5, 100, 10.50],
    [10, 200, 42.00],
    [1, 1, 0.021],
    [100, 50, 105.00],
  ])('calculates cost for weight=%i, distance=%i', (weight, distance, expected) => {
    expect(calculateShippingCost(weight, distance)).toBe(expected);
  });
});
```
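
Parameterization works for error cases too. A sketch that combines `pytest.mark.parametrize` with `pytest.raises` to cover every invalid-input combination in one test:

```python
import pytest

@pytest.mark.parametrize("weight,distance", [
    (0, 100),    # zero weight
    (-5, 100),   # negative weight
    (5, 0),      # zero distance
    (5, -10),    # negative distance
])
def test_invalid_inputs_raise_value_error(weight, distance):
    with pytest.raises(ValueError):
        calculate_shipping_cost(weight, distance)
```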

## Best Practices

### 1. Test Behavior, Not Implementation

**Bad: Testing implementation details**

```python
def test_uses_specific_algorithm():
    calc = ShippingCalculator()
    # Don't test private methods or internal details
    assert calc._internal_rate == 0.021  # ❌ Implementation detail
```

**Good: Testing behavior**

```python
def test_calculates_correct_cost():
    calc = ShippingCalculator()
    # Test the public interface and results
    assert calc.calculate(weight=5, distance=100) == 10.50  # ✓ Behavior
```

### 2. Use Descriptive Test Names

**Bad:**

```python
def test_1():  # ❌ Unclear what this tests
    assert calculate_shipping_cost(5, 100) == 10.50
```

**Good:**

```python
def test_calculate_shipping_cost_returns_correct_value_for_standard_inputs():  # ✓ Clear
    assert calculate_shipping_cost(5, 100) == 10.50
```

### 3. Keep Tests Independent

**Bad: Tests depend on each other**

```python
# ❌ Test order matters - fragile
def test_create_user():
    global user
    user = create_user("test@example.com")

def test_user_email():
    assert user.email == "test@example.com"  # Depends on previous test
```

**Good: Each test is self-contained**

```python
# ✓ Each test stands alone
def test_create_user_returns_user_object():
    user = create_user("test@example.com")
    assert user is not None

def test_created_user_has_correct_email():
    user = create_user("test@example.com")
    assert user.email == "test@example.com"
```
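
If the repeated setup bothers you, a pytest fixture lets you share it without coupling the tests: the fixture runs fresh for every test, so independence is preserved. A sketch:

```python
import pytest

@pytest.fixture
def user():
    # Runs once per test, so each test gets its own fresh user
    return create_user("test@example.com")

def test_create_user_returns_user_object(user):
    assert user is not None

def test_created_user_has_correct_email(user):
    assert user.email == "test@example.com"
```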

### 4. Keep Tests Fast

**Slow tests that hit real databases:**

```python
# ❌ Slow - hits real database
def test_user_creation():
    db = connect_to_database()
    user = create_user("test@example.com", db)
    assert user.email == "test@example.com"
```

**Fast tests with mocks:**

```python
# ✓ Fast - uses mock
from unittest.mock import Mock

def test_user_creation():
    mock_db = Mock()
    user = create_user("test@example.com", mock_db)
    assert user.email == "test@example.com"
    mock_db.save.assert_called_once()
```

### 5. One Assertion Per Test (When Practical)

**Acceptable: Multiple related assertions**

```python
def test_create_user_sets_all_properties():
    user = create_user(email="test@example.com", name="Test User")
    # Related assertions about the same object
    assert user.email == "test@example.com"
    assert user.name == "Test User"
    assert user.is_active
```

**Better: Separate tests for different behaviors**

```python
def test_create_user_sets_email():
    user = create_user(email="test@example.com", name="Test User")
    assert user.email == "test@example.com"

def test_create_user_sets_name():
    user = create_user(email="test@example.com", name="Test User")
    assert user.name == "Test User"

def test_new_user_is_active_by_default():
    user = create_user(email="test@example.com", name="Test User")
    assert user.is_active
```

## Common Pitfalls to Avoid

### Pitfall 1: Testing Too Much in One Test

**Problem:**

```python
def test_user_workflow():  # ❌ Tests too many things
    user = create_user("test@example.com")
    assert user.email == "test@example.com"

    user.update_name("New Name")
    assert user.name == "New Name"

    user.deactivate()
    assert user.is_active == False

    result = user.can_login()
    assert result == False
```

**Solution:** Split into focused tests for each behavior.

### Pitfall 2: Not Testing Error Cases

**Problem:**

```python
def test_division():  # ❌ Only tests happy path
    assert divide(10, 2) == 5
```

**Solution:** Test error conditions too.

```python
def test_division_by_zero_raises_error():  # ✓ Tests error case
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)
```

### Pitfall 3: Tests That Don't Actually Test

**Problem:**

```python
def test_process_data():  # ❌ Doesn't verify anything
    process_data([1, 2, 3])  # Just calls function
```

**Solution:** Always assert expected outcomes.

```python
def test_process_data_returns_correct_result():  # ✓ Verifies behavior
    result = process_data([1, 2, 3])
    assert result == [2, 4, 6]  # Doubled values
```

## Running Your Tests

### Run All Tests

```bash
# Python (pytest)
pytest

# JavaScript (Jest)
npm test

# Java (Maven)
mvn test

# Java (Gradle)
gradle test
```

### Run Specific Tests

```bash
# Python - run specific file
pytest test_shipping.py

# Python - run specific test
pytest test_shipping.py::test_calculate_shipping_cost_for_normal_inputs

# JavaScript - run specific file
npm test -- shipping.test.js

# JavaScript - run specific test
npm test -- -t "calculate shipping cost"
```

### Run Tests with Coverage

```bash
# Python (requires the pytest-cov plugin)
pytest --cov=src --cov-report=html

# JavaScript
npm test -- --coverage

# Java (Maven, with the JaCoCo plugin configured)
mvn test jacoco:report
```

## Troubleshooting

### Problem: Tests Are Flaky (Pass Sometimes, Fail Sometimes)

**Causes:**

* Tests depend on external systems (databases, APIs, time)
* Tests share state
* Tests depend on execution order

**Solutions:**

* Use mocks for external dependencies such as time, networks, and databases (see the sketch after this list)
* Reset state before each test
* Ensure tests are independent
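
Time is a common source of flakiness. A minimal sketch that pins the clock with `unittest.mock.patch` (`make_receipt` is a hypothetical function, included for illustration):

```python
import time
from unittest.mock import patch

def make_receipt(total):
    # Hypothetical code under test that reads the current time
    return {"total": total, "created_at": time.time()}

def test_receipt_timestamp_is_deterministic():
    # Pin time.time so the result never depends on when the test runs
    with patch("time.time", return_value=1_700_000_000.0):
        receipt = make_receipt(9.99)
    assert receipt["created_at"] == 1_700_000_000.0
```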

### Problem: Tests Are Slow

**Causes:**

* Hitting real databases or external services
* Creating too much test data
* Running integration tests as unit tests

**Solutions:**

* Use mocks and stubs
* Minimize test data setup
* Separate unit tests from integration tests (see the marker sketch after this list)
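
One way to keep the unit suite fast is to tag slow tests with a custom pytest marker and exclude them by default. A sketch (the `integration` marker name is an assumption; custom markers should be registered in `pytest.ini` to avoid warnings):

```python
import pytest

@pytest.mark.integration  # custom marker, registered in pytest.ini
def test_full_checkout_against_real_database():
    ...
```

Running `pytest -m "not integration"` then executes only the fast unit tests.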

### Problem: Tests Don't Catch Real Bugs

**Causes:**

* Testing implementation instead of behavior
* Not testing edge cases
* Not testing error conditions

**Solutions:**

* Focus on testing outcomes, not code structure
* Add tests for boundary conditions
* Test both success and failure paths

## Next Steps

Now that you know how to write effective unit tests:

1. **Practice the workflow** - Write tests for your next feature using the steps above
2. **Add tests to existing code** - See [How to Add Tests to an Existing Codebase]({{< ref "how-to-add-tests-to-existing-codebase" >}})
3. **Review your testing practices** - Use the [Reference: Testing Checklist]({{< ref "reference-testing-checklist" >}})
4. **Explore TDD** - Try writing tests before code for your next feature

## Quick Reference

### The Unit Testing Workflow

1. **Understand** what you're testing (requirements, inputs, outputs, edge cases)
2. **Write** the test case (Arrange-Act-Assert)
3. **Run** the test (Red - watch it fail)
4. **Implement** minimum code to pass (Green)
5. **Add** edge case tests
6. **Refactor** with confidence (tests stay green)

### Arrange-Act-Assert Template

```python
def test_descriptive_name():
    # Arrange: Set up test data
    input_data = ...
    expected_output = ...

    # Act: Execute code under test
    actual_output = function_under_test(input_data)

    # Assert: Verify results
    assert actual_output == expected_output
```

### What to Test

* ✓ Valid inputs (happy path)
* ✓ Invalid inputs (error cases)
* ✓ Boundary conditions (min, max, zero)
* ✓ Edge cases (empty, null, very large)
* ✓ Error handling (exceptions, failures)

### What NOT to Test

* ✗ Private implementation details
* ✗ External library code
* ✗ Trivial getters/setters
* ✗ Framework code

## References

* [Fundamentals of Software Testing]({{< ref "fundamentals-of-software-testing" >}}) - Core testing concepts and principles
* [How to Add Tests to an Existing Codebase]({{< ref "how-to-add-tests-to-existing-codebase" >}}) - Step-by-step guide for adding tests to legacy code
* [Reference: Testing Checklist]({{< ref "reference-testing-checklist" >}}) - Comprehensive checklist for effective testing
* [pytest Documentation](https://docs.pytest.org/) - Python testing framework
* [Jest Documentation](https://jestjs.io/) - JavaScript testing framework
* [JUnit 5 Documentation](https://junit.org/junit5/) - Java testing framework
* Martin Fowler's [Unit Test](https://martinfowler.com/bliki/UnitTest.html) - Industry perspective on unit testing

Practice these steps on your own code, and testing will become a natural part of your development workflow.