Introduction

Software can behave correctly in staging and still fail in production. The environments differ, the load differs, and the edge cases show up at the worst time.

Software quality assurance exists because “it worked last time” is not a quality strategy. Teams that treat quality as a final phase tend to discover problems only after they ship.

Quality assurance isn’t a job title, and it isn’t just testing. It is the practice of building systems that increase the chance of good outcomes.

This article explains why quality assurance exists, what it changes (and cannot change), and its relationship to testing, reviews, metrics, and incident fundamentals.

Cover: conceptual feedback loop for software quality assurance

Type: Explanation (understanding-oriented).
Primary audience: beginner developers and leaders who want a practical mental model for QA (quality assurance), not a checklist.

What you’ll learn

By the end, you’ll be able to:

  • Explain why quality assurance exists and how it differs from testing.
  • Describe the difference between quality assurance and quality control, and why both are necessary.
  • Use a feedback-loop model to evaluate quality, speed, and risk.
  • Identify quality misconceptions that lead to “busy work” and production surprises.

Scope and audience

Scope: quality assurance in software teams, meaning the practices and feedback loops that prevent defects and reduce risk over time.

Not a how-to: I mention standard practices like tests, reviews, and continuous integration, but avoid step-by-step recipes.

Prerequisites: Basic familiarity with shipping software, even for small projects.

TL;DR: quality assurance in one pass

Quality assurance works best when you treat it as a learning loop:

  • Define what “good” means for this system (correctness, reliability, usability, security, performance).
  • Design the work so defects are harder to create: reviews, small changes, clear definitions of done.
  • Detect problems early through automated tests, continuous integration, static analysis, and monitoring signals.
  • Learn from failures (bug reviews, incident postmortems, trend analysis).
  • Change the work system to reduce the same failure type next time.

If you do only the detection step, you stay busy and still get surprised in production.

A mental model: quality is an output of a system

When a team asks for “more quality assurance,” it is usually because production keeps surprising them.

A manufacturing analogy clarifies this. Adding more inspectors at the end of a factory producing inconsistent parts doesn’t fix the root issue; it catches more defects but still incurs rework and scrap costs. Quality assurance aims to change the factory to prevent defects.

That surprise comes from a mismatch:

  • The team’s process produces changes faster than it can validate them.
  • The system’s behavior is more complex than the team’s mental model.
  • The feedback loop is slower than the rate of change.

Quality assurance is the work of fixing that mismatch. I think of it as designing a feedback system.

```mermaid
stateDiagram-v2
    [*] --> DefineGood
    DefineGood --> Prevent
    Prevent --> Detect
    Detect --> Learn
    Learn --> ChangeSystem
    ChangeSystem --> Prevent
    DefineGood: Define what “good” means
    Prevent: Prevent defects
    Detect: Detect problems early
    Learn: Learn from failures
    ChangeSystem: Change the work system
```

Fast, honest loops improve quality; slow or reactive ones increase cost.

Software quality vs quality assurance vs quality control

These terms get mixed up, which matters because each suggests a different lever.

Software quality

Software quality is the degree to which software stays fit for use over time, under real-world conditions, with real people.

Quality is multi-dimensional. A system that is “correct” but unusable is still low quality in its users’ eyes.

Quality assurance (QA)

Quality assurance shapes the software development process, focusing on prevention.

Examples include:

  • Choosing definitions of done that include tests and documentation.
  • Making small changes for reviewability.
  • Using continuous integration for fast feedback.
  • Treating incidents as learning opportunities, not failures.

Quality control (QC)

Quality control involves evaluating products for defects and gaps, with a focus on detection.

Examples include:

  • Running automated test suites.
  • Exploratory testing.
  • Release checks and acceptance criteria.

Software testing is a standard quality control method. It works better when quality assurance keeps changes small, feedback fast, and expectations clear. For a deeper treatment, see Fundamentals of software testing.

Quality assurance and quality control rely on each other. Prevention alone is never complete; detection alone is a costly treadmill.

Why quality assurance exists (the root problem)

The root problem isn’t careless developers; it’s that software systems change faster than humans can reliably reason about them.

As systems grow:

  • The number of possible states grows.
  • The number of interactions grows.
  • The blast radius of small changes grows.

Quality assurance exists to make change safe, so teams can move fast without gambling user trust.

What “quality” means in practice (attributes and tensions)

People often see quality as a single thing, but it comprises conflicting properties.

I like ISO/IEC 25002:2024 because it reminds me that “quality” includes more than correctness.

This standard exists to:

  • create a shared mental model of “quality” across stakeholders,
  • ensure non-functional qualities are explicit, structured, and measurable,
  • align requirements, measurement, and evaluation around a single quality language,
  • make quality models usable throughout the entire lifecycle, not just during testing.

Here are a few attributes from the model that show up in real teams:

  • Functional suitability: Does it provide the functions stakeholders need, correctly and completely?
  • Performance efficiency: Does it meet latency, throughput, and resource expectations under real load?
  • Compatibility: Does it coexist and interoperate cleanly with other systems in its environment?
  • Usability: Can users achieve goals effectively and without unnecessary friction?
  • Reliability: Does it behave consistently over time, including when things go wrong?
  • Security: Does it protect confidentiality, integrity, and access appropriately?
  • Maintainability: Can the team understand, modify, and validate changes without fear?
  • Portability: Can it be deployed or adapted across environments without rework?

The tension is that improving one attribute can hurt another. Moving faster can reduce maintainability. Locking down security can hurt usability. Adding checks can slow delivery.

Quality assurance is the deliberate management of these trade-offs.

The mechanisms that make QA work

We can strip QA down to three mechanisms.

1. Short feedback loops

Fast feedback reduces the cost of mistakes; a failing unit test in a pull request is cheaper than a production incident.

This is why continuous integration matters: it turns “find out later” into “find out now.”
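
To make “find out now” concrete, here is a minimal Python sketch. The `apply_discount` function and its rule are hypothetical, invented for illustration: the point is that a test like this runs on every change, so a refactor that breaks the rule fails in the pull request rather than in production.

```python
# Hypothetical pricing rule: a percentage discount on a price in cents,
# rounded down, never going below zero.
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, clamped at zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    discounted = price_cents - (price_cents * percent) // 100
    return max(discounted, 0)

# A check the CI pipeline runs on every change. If a later refactor breaks
# the rounding or clamping behavior, this fails minutes after the change.
def test_apply_discount():
    assert apply_discount(1000, 10) == 900
    assert apply_discount(999, 50) == 500   # 499 cents off (integer floor)
    assert apply_discount(1000, 100) == 0

test_apply_discount()
```

The feedback arrives while the change is still fresh in the author’s head, which is exactly what makes it cheap.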

2. Constraints that prevent easy mistakes

Teams improve quality by removing choices that routinely cause defects.

Examples:

  • Linters and formatters eliminate style debates and catch low-value mistakes automatically.
  • Type systems catch whole classes of runtime surprises before the code runs.
  • Guardrails in deployment prevent accidental changes.

The goal is not bureaucracy; it is to make the safe path the easiest path.

3. Learning from reality, not stories

Quality improves when the team believes production signals.

If monitoring says “users are failing checkout,” and the team argues that “it should work,” quality assurance is already broken. Reality wins.
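
A minimal sketch of “reality wins,” with invented numbers and names: an alert fires when the observed failure rate over a window crosses a threshold, regardless of what anyone believes the code should do.

```python
# Hypothetical monitoring check based on the observed checkout failure rate.
def failure_rate(events: list[bool]) -> float:
    """events: True means a failed checkout, False means a successful one."""
    return sum(events) / len(events) if events else 0.0

def should_alert(events: list[bool], threshold: float = 0.05) -> bool:
    # The signal is the data, not the team's belief that "it should work".
    return failure_rate(events) > threshold

window = [True] * 10 + [False] * 90   # 10% of recent checkouts failed
print(should_alert(window))           # True: reality wins
```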

This is why monitoring signals and incident postmortems belong inside quality assurance, not alongside it.

Trade-offs and failure modes

Quality assurance fails in predictable ways, and most of those failures are mishandled trade-offs.

Trade-off: speed now vs speed later

Skipping checks feels fast this week. It creates rework and fear next month.

To see this trade-off, measure lead time over months, not weeks. The teams that stay fast over time are the ones that invested in QA early.

Trade-off: more checks vs more noise

Adding checks can reduce defects, but may create noise that people ignore.

If your test suite is flaky, developers stop trusting it. That is not a testing problem; it is a quality assurance problem.

Trade-off: metrics vs gaming

Metrics help, but teams can optimize quantity over outcome.

Rewarding “test coverage” alone leads to coverage theater: tests optimized for the metric rather than for confidence. Coverage is useful, but it is not proof of quality.
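
Coverage theater is easy to demonstrate with a sketch (the `median` function is a made-up example): the test below executes every line, so a coverage tool would report 100%, yet it asserts nothing, and the bug for even-length input never fails the build.

```python
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # bug: wrong for even-length lists

def test_median_theater():
    # Both calls execute every line of median(), so line coverage is 100%...
    median([3, 1, 2])
    median([4, 1, 3, 2])   # ...but with no assertions, the bug slips through

test_median_theater()  # passes, proving nothing about correctness
```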

Common misconceptions I see a lot

These show up everywhere, especially in early-career teams.

  • “QA is a person or team that tests at the end.” QA is a system involving the entire team.
  • “More testing means higher quality.” Tests boost confidence but don’t fix a broken process.
  • “If it passed continuous integration, it is good.” Continuous integration catches some failures, not all.
  • “Quality means no bugs.” Quality means the software works well enough for its purpose in the real world at an acceptable risk level.

A concrete example: a checkout bug, two different outcomes

Imagine a checkout change causing a subtle pricing error for some discount combos.

In a weak QA system:

  • The change is large, the review is superficial, and the tests are thin.
  • The bug ships.
  • A support ticket arrives, an incident occurs, then someone applies a hotfix.
  • The team moves on, and a similar bug returns later.

In a stronger QA system:

  • The change is small and reviewable.
  • A unit test encodes the tricky discount rule.
  • Continuous integration catches regressions on the next change.
  • If it still escapes, monitoring detects abnormal refund rates, and the postmortem produces a process change, not just a patch.
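
A sketch of what “a unit test encodes the tricky discount rule” might look like (the rule and names are invented for illustration): percentage discounts stack multiplicatively, not additively, and the test pins that down.

```python
# Hypothetical checkout rule: percentage discounts stack multiplicatively,
# each applying to the running total; all amounts are integer cents.
def checkout_total(price_cents: int, discounts: list[int]) -> int:
    total = price_cents
    for percent in discounts:
        total -= (total * percent) // 100
    return max(total, 0)

def test_discounts_stack_multiplicatively():
    # The subtle rule: 20% then 10% is 28% off in total, not 30%.
    assert checkout_total(10_000, [20, 10]) == 7_200
    assert checkout_total(10_000, [30]) == 7_000

test_discounts_stack_multiplicatively()
```

Once this test exists, the next person to touch discount logic inherits the team’s hard-won knowledge for free.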

The difference is not talent. The difference is the quality system.

Synthesis: what to remember

Quality assurance is the design of work so that quality is intentional, not accidental.

If you want one sentence to keep, use this:

Quality assurance is a feedback system that prevents defects, detects the ones that slip through, and learns fast enough to reduce recurrence.

Key takeaways

  • Quality assurance is about prevention and learning, not just “testing at the end.”
  • Quality control is detection; quality assurance keeps detection from becoming a treadmill.
  • Short feedback loops create leverage.
  • Constraints and guardrails remove common failure paths.
  • Metrics help when they reflect reality, and hurt when they reward theater.

Next steps

If you want to go deeper on adjacent fundamentals, start with Fundamentals of software testing, referenced above, and the incident postmortem practices mentioned throughout.

Glossary

Quality assurance (QA): Practices that shape software production to prevent defects and reduce risk.

Quality control (QC): Practices that evaluate software for defects and gaps.

Continuous integration (CI): A practice of frequent code integration and automated validation.

Defect: A software problem that can cause incorrect behavior, user harm, or operational pain.

Regression: A defect where something that used to work stops working after a change.
