Use this checklist to verify your tests follow best practices and effectively catch bugs.

This reference provides comprehensive checklists for different testing scenarios. For conceptual understanding, see [Fundamentals of Software Testing]({{< ref "fundamentals-of-software-testing" >}}). For step-by-step instructions, see [How to Write Effective Unit Tests]({{< ref "how-to-write-effective-unit-tests" >}}).

## Quick Reference

Use these checklists during code reviews, before merging pull requests, or when evaluating test quality.

## Unit Test Checklist

### Test Design

- [ ] **Tests verify behavior, not implementation**
  - Tests check what the code does (inputs/outputs), not how it's structured internally
  - Tests remain valid when refactoring internal code structure
  - Tests don't access private methods or internal state

- [ ] **Each test verifies one specific behavior**
  - Test name clearly describes the scenario being tested
  - Failure points to exactly what went wrong
  - Test can be understood in isolation

- [ ] **Tests use descriptive names**
  - Name follows pattern: `test_<function>_<scenario>_<expected_result>`
  - Name reads like documentation
  - Example: `test_calculate_discount_with_premium_user_returns_10_percent_off` (shown in full after this list)

- [ ] **Tests follow Arrange-Act-Assert pattern**
  - Setup is clear and minimal (Arrange)
  - Action under test is explicit (Act)
  - Verification is obvious (Assert)

- [ ] **Tests are independent**
  - Tests don't depend on execution order
  - Each test sets up its own data
  - Tests don't share mutable state

- [ ] **Tests are fast**
  - Tests run in milliseconds, not seconds
  - External dependencies are mocked
  - No network calls or file I/O in unit tests
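
A minimal pytest-style sketch of these points: a descriptive name, an explicit Arrange-Act-Assert structure, and no external dependencies. The `calculate_discount` function is defined inline purely so the example is self-contained; the names and values are illustrative, not from a real codebase.

```python
from decimal import Decimal


def calculate_discount(price: Decimal, user_tier: str) -> Decimal:
    # Illustrative implementation, inlined so the sketch is self-contained:
    # premium users get 10% off.
    if user_tier == "premium":
        return price * Decimal("0.90")
    return price


def test_calculate_discount_with_premium_user_returns_10_percent_off():
    # Arrange: minimal, explicit setup
    price = Decimal("100.00")

    # Act: one clearly identified action under test
    discounted = calculate_discount(price, user_tier="premium")

    # Assert: verify observable behavior (the returned value), not internals
    assert discounted == Decimal("90.00")
```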

### Test Coverage

- [ ] **Happy path is tested**
  - Valid inputs produce expected outputs
  - Core functionality works correctly

- [ ] **Edge cases are tested** (see the example after this list)
  - Boundary values (min, max, zero)
  - Empty inputs (empty arrays, empty strings, null)
  - Very large inputs
  - Special characters and Unicode

- [ ] **Error cases are tested**
  - Invalid inputs raise appropriate errors
  - Error messages are clear and actionable
  - Exceptions are the correct type

- [ ] **State changes are verified**
  - Side effects are tested (database updates, file writes, etc.)
  - State before and after is verified
  - Cleanup happens correctly
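
Parameterized tests cover the happy path and boundary inputs compactly, and `pytest.raises` verifies error cases. The sketch below defines an illustrative `normalize_username` function inline so it is self-contained; the behavior and the error message are invented for the example.

```python
import pytest


def normalize_username(raw: str) -> str:
    # Illustrative function under test, inlined to keep the sketch self-contained
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("username must not be empty")
    return cleaned


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Alice", "alice"),        # happy path
        ("  alice  ", "alice"),    # edge case: surrounding whitespace
        ("ALICE", "alice"),        # edge case: all upper case
        ("a", "a"),                # boundary: minimum length
    ],
)
def test_normalize_username_returns_trimmed_lowercase_name(raw, expected):
    assert normalize_username(raw) == expected


def test_normalize_username_with_blank_input_raises_value_error():
    # Error case: the exception type and message are part of the contract
    with pytest.raises(ValueError, match="must not be empty"):
        normalize_username("   ")
```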

### Code Quality

- [ ] **Tests are easy to read**
  - No complex logic in tests
  - Test data is clear and minimal
  - Magic numbers are replaced with named constants

- [ ] **Tests avoid duplication**
  - Common setup extracted to fixtures/helpers (see the fixture example after this list)
  - Parameterized tests used for similar scenarios
  - Test utilities are reusable

- [ ] **Tests are maintainable**
  - No brittle assertions tied to implementation details
  - Changes to code don't require updating many tests
  - Test failures are easy to diagnose
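
Common setup can be extracted into a fixture so individual tests stay short and free of duplication. A sketch with an illustrative `Cart` class defined inline; in a real suite the fixture would typically live in `conftest.py` and the class would be imported.

```python
import pytest


class Cart:
    # Illustrative class under test, inlined to keep the sketch self-contained
    def __init__(self):
        self.items = {}

    def add_item(self, name, price):
        self.items[name] = price

    def remove_item(self, name):
        del self.items[name]

    def total(self):
        return sum(self.items.values())


@pytest.fixture
def cart_with_two_items():
    # Shared Arrange step extracted into a fixture so each test stays minimal
    cart = Cart()
    cart.add_item("book", price=10)
    cart.add_item("pen", price=2)
    return cart


def test_total_sums_item_prices(cart_with_two_items):
    assert cart_with_two_items.total() == 12


def test_removing_an_item_updates_total(cart_with_two_items):
    cart_with_two_items.remove_item("pen")
    assert cart_with_two_items.total() == 10
```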

## Integration Test Checklist

### Test Scope

- [ ] **Tests verify component interactions**
  - Multiple components work together correctly
  - Data flows between components as expected
  - Integration points are exercised

- [ ] **Tests use realistic data**
  - Test data resembles production data
  - Data relationships are maintained
  - Edge cases in data are included

- [ ] **External dependencies are controlled**
  - Test databases are isolated (see the database fixture sketch after this list)
  - External services are stubbed or use test instances
  - Tests don't affect production systems
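
One way to keep external dependencies controlled is a fixture that gives each test its own throwaway database. The sketch below uses an in-memory SQLite database purely for illustration; projects tied to a specific database engine would point the same kind of fixture at an isolated test instance instead.

```python
import sqlite3

import pytest


@pytest.fixture
def db():
    # Each test gets its own in-memory database: isolated, fast, nothing to clean up
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL)")
    yield conn
    conn.close()


def test_inserted_order_is_queryable(db):
    db.execute("INSERT INTO orders (total) VALUES (?)", (49.99,))
    row = db.execute("SELECT total FROM orders").fetchone()
    assert row == (49.99,)
```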

### Test Reliability

- [ ] **Tests are repeatable**
  - Tests produce same results on every run
  - Tests clean up after themselves
  - No dependency on external state

- [ ] **Tests are isolated**
  - Each test creates its own test data
  - Tests don't interfere with each other
  - Shared resources are properly managed

- [ ] **Tests handle timing issues** (see the polling sketch after this list)
  - Asynchronous operations complete before assertions
  - Appropriate timeouts are set
  - Race conditions are avoided
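
For timing-sensitive checks, polling a condition with a deadline is more reliable than a fixed sleep. The helper below is a generic sketch (the names are invented); many test frameworks and libraries provide equivalent built-in waits.

```python
import threading
import time


def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False


def test_background_work_completes_without_a_fixed_sleep():
    done = threading.Event()
    # Stand-in for an asynchronous side effect (a queue consumer, a webhook, ...)
    threading.Timer(0.2, done.set).start()

    # Wait for the condition instead of sleeping for an arbitrary amount of time
    assert wait_until(done.is_set, timeout=2.0)
```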

### Test Maintenance

- [ ] **Setup and teardown are efficient**
  - Database is seeded efficiently
  - Cleanup is automatic
  - Test data is minimal but sufficient

- [ ] **Tests run in reasonable time**
  - Integration tests complete within minutes
  - Slow tests are marked and run separately (see the marker sketch after this list)
  - Parallel execution is possible
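
Marking slow tests keeps the default run fast while the slower suite still runs on a schedule. A minimal pytest sketch; the `slow` marker name is a convention you would register in your own pytest configuration.

```python
import pytest


@pytest.mark.slow  # register the "slow" marker in pytest.ini/pyproject.toml to silence warnings
def test_nightly_full_reconciliation_report():
    # A deliberately slow integration test, excluded from the fast feedback loop
    ...


# Typical invocations (shell):
#   pytest -m "not slow"   # fast suite, on every commit
#   pytest -m slow         # slow suite, on a schedule
```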

## End-to-End (E2E) Test Checklist

### Test Coverage

- [ ] **Critical user journeys are tested**
  - Most important features have E2E tests
  - User registration, login, core workflows
  - Payment and sensitive operations

- [ ] **Tests represent real user behavior**
  - Tests follow actual user paths
  - Tests use realistic data and timing
  - Tests verify user-visible results

### Test Reliability

- [ ] **Tests are stable**
  - Tests don't fail randomly (no flakiness)
  - Waits are explicit, not arbitrary sleeps (see the Selenium sketch after this list)
  - Page elements are reliably located

- [ ] **Tests handle asynchrony correctly**
  - Tests wait for elements to be ready
  - Network requests complete before assertions
  - Animations and transitions are handled

- [ ] **Tests are debuggable**
  - Screenshots on failure
  - Clear error messages
  - Test data is logged
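
Explicit waits tie the test to a condition and a timeout rather than an arbitrary sleep. The sketch below uses Selenium's `WebDriverWait` as one example; the URL and element IDs are hypothetical, and tools such as Playwright offer equivalent built-in waiting.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def test_checkout_shows_confirmation_banner():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/checkout")  # hypothetical test URL
        # Wait for a condition, not a fixed amount of time
        WebDriverWait(driver, timeout=10).until(
            EC.element_to_be_clickable((By.ID, "place-order"))
        ).click()
        banner = WebDriverWait(driver, timeout=10).until(
            EC.visibility_of_element_located((By.ID, "confirmation-banner"))
        )
        assert "Order confirmed" in banner.text
    finally:
        driver.quit()
```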

### Test Strategy

- [ ] **E2E tests are minimal**
  - Only critical paths use E2E tests
  - Most testing happens at lower levels
  - E2E tests complement, not replace, unit tests

- [ ] **Tests are maintained**
  - Broken tests are fixed immediately
  - Tests are updated with product changes
  - Obsolete tests are removed

## Test-Driven Development (TDD) Checklist

### Red Phase

- [ ] **Test is written first**
  - Test written before implementation (see the sketch after this list)
  - Test describes desired behavior
  - Test fails for the right reason

- [ ] **Test failure is verified**
  - Test actually runs and fails
  - Failure message is clear
  - Failure indicates missing functionality
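
In the red phase the test exists before the code it describes, so the first run fails for the right reason. A sketch for a hypothetical `slugify` function that has not been written yet:

```python
# Written before the implementation exists; the first run should fail for the
# right reason (ImportError / missing behavior), not because the test is broken.
from myapp.text import slugify  # hypothetical module, not implemented yet


def test_slugify_lowercases_and_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"
```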

### Green Phase

- [ ] **Minimum code to pass is written**
  - Simplest implementation that works (continued sketch after this list)
  - No premature optimization
  - No extra features

- [ ] **Test passes**
  - All assertions pass
  - No test errors or warnings
  - Test runs reliably
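
The green phase adds only enough code to make that failing test pass; generalization waits for later tests and the refactor step. Continuing the hypothetical `slugify` example:

```python
# myapp/text.py -- the simplest implementation that makes the current test pass;
# handling punctuation, Unicode, or repeated spaces waits until a test demands it.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")
```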

### Refactor Phase

- [ ] **Code is improved**
  - Duplication is removed
  - Names are clarified
  - Structure is simplified

- [ ] **Tests still pass**
  - Refactoring doesn't break tests
  - All tests remain green
  - No new failing tests

## Test Code Review Checklist

### Before Requesting Review

- [ ] **All tests pass locally**
  - Unit tests pass
  - Integration tests pass
  - No failing or skipped tests

- [ ] **Tests follow project conventions**
  - Naming follows team standards
  - File structure matches project layout
  - Test framework is used correctly

- [ ] **Code coverage is adequate**
  - New code has tests
  - Critical paths are covered
  - Coverage didn't decrease

### During Code Review

- [ ] **Tests verify correct behavior**
  - Tests check business requirements
  - Tests validate user expectations
  - Tests catch realistic bugs

- [ ] **Tests are well-designed**
  - Tests are easy to understand
  - Test data is clear and minimal
  - Tests don't test implementation details

- [ ] **Tests are complete**
  - Happy path is tested
  - Edge cases are covered
  - Error conditions are verified

- [ ] **Tests are maintainable**
  - Tests will be easy to update
  - Tests don't couple to implementation
  - Tests are clearly documented when needed

## Continuous Integration (CI) Checklist

### Test Execution

- [ ] **Tests run on every commit**
  - CI pipeline runs all tests
  - Tests run before merge
  - Failed tests block deployment

- [ ] **Tests run in isolation**
  - Tests don't depend on local environment
  - Tests use fresh test database
  - Tests clean up after themselves

- [ ] **Test failures are visible**
  - Failed tests are reported clearly
  - Developers are notified of failures
  - Failure logs are accessible

### Performance

- [ ] **Tests run in reasonable time**
  - Fast tests run on every commit (< 5 minutes)
  - Slower tests run less frequently
  - Tests can run in parallel

- [ ] **Test infrastructure is reliable**
  - Test environment is stable
  - CI doesn't fail for infrastructure reasons
  - Test data is properly seeded

## Test Maintenance Checklist

### Regular Maintenance

- [ ] **Failing tests are fixed immediately**
  - No ignored failing tests
  - Root cause is identified and fixed
  - Tests aren't disabled without good reason

- [ ] **Obsolete tests are removed**
  - Tests for deleted features are removed
  - Redundant tests are eliminated
  - Test count stays manageable

- [ ] **Tests are refactored**
  - Test code follows same quality standards as production code
  - Duplication in tests is removed
  - Test helpers are created for common patterns

### Code Changes

- [ ] **Tests are updated with code changes**
  - Feature changes include test updates
  - Bug fixes include regression tests
  - Refactoring keeps tests green

- [ ] **New tests are added for new features**
  - Every feature has tests
  - Tests written before or with code
  - Tests cover happy path and edge cases

## Testing Anti-Patterns to Avoid

### What NOT to Do

- [ ] **Avoid testing implementation details**
  - ❌ Don't test private methods
  - ❌ Don't verify internal state
  - ❌ Don't assert on method call order (unless the order itself is the behavior under test)

- [ ] **Avoid brittle tests**
  - ❌ Don't hard-code dates or times (see the clock-injection sketch after this list)
  - ❌ Don't depend on test execution order
  - ❌ Don't use sleeps instead of explicit waits

- [ ] **Avoid slow tests**
  - ❌ Don't hit real databases in unit tests
  - ❌ Don't make network calls in unit tests
  - ❌ Don't create excessive test data

- [ ] **Avoid incomplete tests**
  - ❌ Don't only test happy path
  - ❌ Don't ignore error cases
  - ❌ Don't skip edge cases

- [ ] **Avoid unclear tests**
  - ❌ Don't use vague test names
  - ❌ Don't write complex test logic
  - ❌ Don't use magic numbers without explanation
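
Two of the patterns above have simple remedies: pass the current time in rather than reading the clock deep inside the code, and mock network calls in unit tests. A sketch with illustrative functions defined inline, assuming the `requests` library for the HTTP example:

```python
from datetime import datetime, timezone
from unittest.mock import Mock, patch

import requests  # assumed available; any HTTP client illustrates the same point


def is_expired(expires_at: datetime, now: datetime) -> bool:
    # Illustrative function: accepts "now" instead of reading the clock itself
    return now >= expires_at


def fetch_status(url: str) -> str:
    # Illustrative function with a network dependency
    return requests.get(url, timeout=5).json()["state"]


def test_is_expired_the_day_after_expiry_returns_true():
    # Inject a fixed "now" rather than hard-coding assumptions about today's date
    expires = datetime(2024, 1, 1, tzinfo=timezone.utc)
    fixed_now = datetime(2024, 1, 2, tzinfo=timezone.utc)
    assert is_expired(expires, now=fixed_now)


def test_fetch_status_parses_state_without_a_real_network_call():
    fake_response = Mock()
    fake_response.json.return_value = {"state": "ok"}
    with patch("requests.get", return_value=fake_response) as fake_get:
        assert fetch_status("https://example.com/health") == "ok"
        fake_get.assert_called_once_with("https://example.com/health", timeout=5)
```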

## Quality Gates Checklist

### Pre-Merge Requirements

- [ ] **All tests pass**
  - Unit tests: 100% passing
  - Integration tests: 100% passing
  - E2E tests (if applicable): 100% passing

- [ ] **Code coverage meets threshold**
  - Coverage doesn't decrease
  - New code has adequate coverage
  - Critical paths are covered

- [ ] **No test warnings or errors**
  - No deprecation warnings
  - No skipped tests without justification
  - No flaky tests

### Pre-Deployment Requirements

- [ ] **All CI checks pass**
  - Tests pass in CI environment
  - Code quality checks pass
  - Security scans pass

- [ ] **Performance tests pass**
  - Response times within acceptable range
  - Resource usage is reasonable
  - Load tests pass (if applicable)

- [ ] **Regression tests pass**
  - Existing functionality still works
  - No unexpected behavior changes
  - User workflows still complete

## Test Metrics Checklist

### What to Measure

- [ ] **Code coverage**
  - Track line coverage
  - Track branch coverage
  - Identify untested code paths

- [ ] **Test execution time**
  - Monitor test suite duration
  - Identify slow tests
  - Track trends over time

- [ ] **Test reliability**
  - Track flaky tests
  - Monitor failure rates
  - Measure time to fix failures

### What Metrics Mean

- [ ] **Coverage is a guide, not a goal**
  - High coverage doesn't mean high quality
  - Focus on testing important code
  - 100% coverage is not always necessary

- [ ] **Speed affects developer productivity**
  - Fast tests run more often
  - Slow tests get skipped
  - Balance speed with thoroughness

- [ ] **Reliability builds confidence**
  - Flaky tests destroy trust
  - Reliable tests enable fast delivery
  - Broken tests must be fixed immediately

## Testing Documentation Checklist

### Test Documentation

- [ ] **Test purpose is clear**
  - Why this test exists
  - What behavior it verifies
  - When it should run

- [ ] **Test data is explained**
  - Why specific test values were chosen
  - What edge cases are being tested
  - What realistic scenarios are covered

- [ ] **Test failures are debuggable**
  - Error messages are clear
  - Failure output shows relevant data
  - Steps to reproduce are obvious

### Project Documentation

- [ ] **Testing strategy is documented**
  - What types of tests exist
  - When to write each type
  - How to run tests locally

- [ ] **Test conventions are clear**
  - Naming conventions
  - File organization
  - Framework usage patterns

- [ ] **CI/CD process is documented**
  - When tests run
  - What failures block
  - How to debug CI failures

## Using This Checklist

### During Development

1. **Before writing code** - Review relevant sections to plan your tests
2. **While writing tests** - Check items as you complete them
3. **Before committing** - Verify all applicable items are checked

### During Code Review

1. **For reviewers** - Use checklist to evaluate test quality
2. **For authors** - Use checklist to self-review before requesting review
3. **For both** - Discuss any unchecked items

### Regular Review

1. **Weekly** - Review test metrics and health
2. **Monthly** - Review test maintenance and coverage
3. **Quarterly** - Review testing strategy and practices

## Related Resources

* [Fundamentals of Software Testing]({{< ref "fundamentals-of-software-testing" >}}) - Core testing concepts and principles
* [How to Write Effective Unit Tests]({{< ref "how-to-write-effective-unit-tests" >}}) - Step-by-step guide to writing unit tests
* [How to Add Tests to an Existing Codebase]({{< ref "how-to-add-tests-to-existing-codebase" >}}) - Adding tests to legacy code

## Customizing This Checklist

Adapt this checklist to your team's needs:

* **Add project-specific requirements** - Include team conventions and standards
* **Remove irrelevant items** - Not all items apply to all projects
* **Adjust thresholds** - Set coverage targets and performance limits based on your context
* **Create focused checklists** - Extract relevant sections for specific tasks

This checklist is a living document. Update it as your team's practices evolve and you learn what works best for your project.