Introduction

Why do teams invest in failing projects and choose poor options when better ones exist? Why do they trust the loudest voice over evidence?

Software development decisions happen under uncertainty, incomplete information, and time pressure. The brain relies on shortcuts to decide quickly, and those shortcuts cause errors that compound in team discussions and architecture choices, leading to avoidable failures.

Software development amplifies these errors: decisions have long-term consequences, many stakeholders are involved, and cause and effect are hard to trace in complex systems. As systems and teams have grown more complex and distributed, errors that a small team would once have caught now cause bigger problems.

A fallacy is a flaw in reasoning that can seem persuasive while undermining the argument it supports. In software development, fallacies show up in architecture decisions, technology choices, planning, and everyday discussions. They feel natural because they follow the brain's habitual patterns, which is exactly why they can mislead.

What this is (and isn’t): This article explains common logical fallacies in software development, why they matter, and how to recognize them. It does not cover formal logic theory or full decision-making frameworks.

Why logical fallacy fundamentals matter:

  • Better decisions - Recognizing flawed reasoning helps you choose better options.
  • Faster course correction - You can spot when a project is heading wrong and pivot sooner.
  • Clearer communication - You can identify when arguments are persuasive but unsound.
  • Reduced waste - Avoiding fallacy-driven choices saves time and money.

This article outlines a mental framework for recognizing fallacies:

  1. Identify the reasoning pattern - Spot the type of fallacy in play.
  2. Question the assumptions - Challenge what’s being taken for granted.
  3. Seek evidence - Look for data that supports or contradicts the claim.
  4. Consider alternatives - Explore options beyond the presented choice.

Cover: Common logical fallacies as reasoning traps in software development

Type: Explanation (understanding-oriented).
Primary audience: beginner to intermediate developers, engineers, and project managers.

Prerequisites & Audience

Prerequisites: Basic experience with software projects, team discussions, and technical decision-making.

Primary audience: Developers, engineers, technical leads, product managers, and project managers who want to improve their reasoning and decision-making in software projects.

Jump to: Section 1: Sunk Cost Fallacy | Section 2: False Dichotomy | Section 3: Appeal to Authority | Section 4: Post Hoc Ergo Propter Hoc | Section 5: Confirmation Bias | Section 6: Strawman Fallacy | Section 7: Planning Fallacy | Section 8: Common Misconceptions | Section 9: When NOT to Worry About Fallacies | Future Trends | Limitations & Specialists | Glossary

TL;DR - Logical Fallacies in One Pass

If you only remember one framework, make it this:

  • Question sunk costs so past investment doesn’t trap future decisions.
  • Reject false choices so you can explore better alternatives.
  • Verify authority claims so expertise doesn’t replace evidence.
  • Separate correlation from causation so you fix the right problems.
  • Engage with actual arguments so you address what was said, not distorted versions.

The Fallacy Recognition Workflow:

When you spot a reasoning pattern that might be a fallacy:

  1. Identify the pattern type - What fallacy might this be?
  2. Question the assumptions - What’s being taken for granted?
  3. Seek evidence - What data supports or contradicts the claim?
  4. Consider alternatives - What options exist beyond the presented choice?
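
To make the workflow concrete, here is a minimal sketch, in Python, of the four steps as a pre-decision checklist. The questions come from the workflow above; the function and data structure around them are illustrative, not a prescribed tool.

```python
# A minimal sketch: the four-step workflow as a pre-decision checklist.
# The questions come from the article; everything else is illustrative.
WORKFLOW = [
    ("Identify the pattern",     "What fallacy might this be?"),
    ("Question the assumptions", "What's being taken for granted?"),
    ("Seek evidence",            "What data supports or contradicts the claim?"),
    ("Consider alternatives",    "What options exist beyond the presented choice?"),
]

def open_steps(notes: dict) -> list:
    """Return the workflow steps that still have no answer recorded."""
    return [step for step, _question in WORKFLOW if not notes.get(step)]

# Example: a decision reviewed halfway through the workflow.
decision_notes = {
    "Identify the pattern": "Looks like a false dichotomy (A or B only).",
    "Question the assumptions": "Assumes a full rewrite is the only alternative.",
}
print("Still open:", open_steps(decision_notes))  # evidence and alternatives remain
```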

Learning Outcomes

By the end of this article, you will be able to:

  • Explain why the sunk cost fallacy traps teams in failing projects and how to recognize when to cut losses.
  • Explain why false dichotomies limit options and how to spot better alternatives.
  • Explain why appeals to authority can mislead and when expert opinion needs verification.
  • Explain how correlation and causation differ and why confusing them leads to wrong fixes.
  • Explain how confirmation bias influences technical decisions and ways to counteract it.
  • Explain how the strawman fallacy derails discussions and how to engage with actual arguments.
  • Explain how the planning fallacy influences estimates and what realistic planning entails.

Section 1: Sunk Cost Fallacy

The sunk cost fallacy is investing more because of previous investments, even when stopping would be better.

Imagine this: you’ve spent three hours debugging a failing test that tests the wrong thing. Rewriting it would take thirty minutes, but you keep debugging because you’ve already invested three hours. Those hours are gone regardless. The key question is: what’s the best way forward?

Understanding Sunk Costs

Sunk costs are past investments like time, money, or effort spent and unrecoverable.

The fallacy is making decisions based on past costs instead of future benefits.

The trap: “We’ve already spent X, so we should keep going” ignores that X is gone.

Why the Sunk Cost Fallacy Works

The sunk cost fallacy feels natural because loss aversion makes abandoning past investments seem like losing, even though they’re gone. The brain treats sunk costs as recoverable, but past costs are irrelevant to future decisions. What matters is whether continuing creates more value than stopping.

In software development, sunk costs show up everywhere:

  • Failing projects: “We’ve spent six months on this, we can’t abandon it now.”
  • Bad technology choices: “We’ve built so much on this framework, we have to stick with it.”
  • Broken architectures: “We’ve invested too much to refactor this.”

These ignore that past investment is lost; the question is whether future investment will pay off.
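
One way to see the fallacy clearly is to write the decision down as arithmetic that only admits future costs and benefits. The following is a minimal sketch with made-up numbers; the option names and figures are illustrative, loosely based on the framework-migration example later in this section:

```python
# A minimal sketch with made-up numbers: the decision compares only *future*
# costs and benefits. The time already spent appears nowhere in the inputs.
options = {
    # option: (future_cost_in_months, expected_future_benefit_in_months_of_value)
    "keep building on the current framework": (9, 10),   # slow progress on a shaky base
    "spend three months migrating":           (3, 12),   # migration cost, then faster delivery
}

for name, (future_cost, future_benefit) in options.items():
    print(f"{name}: net future value = {future_benefit - future_cost}")

# Note what is deliberately *not* an input: the year already invested. It is the
# same no matter which option is chosen, so it cannot change which option is best.
```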

Examples of Sunk Cost Fallacy

Example 1: The Chat Application

Early in my career, I saw a colleague build a chat app from scratch. After months, it was slow, couldn’t handle real-time chats, and business needs changed. The product was scrapped, wasting developer hours.

The developer knew it wasn’t working after the first month, but continued because of prior investment. This lost time led to burnout, missed opportunities, and wasted engineering capacity over the course of six months.

Example 2: The Framework Migration

A team selected a promising framework, but after a year, it faced performance issues, poor documentation, and a declining community. A three-month migration was needed, yet they continued with the flawed choice, citing “we’ve invested too much to switch.”

The year already invested was gone either way. The real question was whether three months of migration would add more value than continuing to build on a flawed foundation. In this case, yes. However, the team couldn’t see past the sunk cost.

Trade-offs and Limitations of Sunk Cost Recognition

Sometimes continuing is the right choice. If you’re close to a breakthrough or stopping would cause more problems, keep going. Make decisions based on future value, not past cost.

The sunk cost fallacy is not about never abandoning projects but about making abandonment decisions for the right reasons.

When Recognizing Sunk Costs Isn’t Enough

Recognizing sunk costs is only the first step. Making the right decision requires estimating future costs, honestly assessing value, and fairly comparing options. The sunk cost fallacy often combines with confirmation bias (seeing only evidence that continuing will work) and false dichotomy (believing the only options are to continue or abandon, when pivoting might be possible).

Quick Check: Sunk Cost Fallacy

Before moving on, test your understanding:

  • Can I identify when a past investment is influencing a current decision?
  • Can I separate “we’ve already spent X” from “continuing is the best path forward”?
  • Can I evaluate future value independently of past costs?

You’re on track if you recognize when past investment influences decisions and assess if continuing adds more value than alternatives.

If the answer is unclear, practice by asking: “If I hadn’t invested anything yet, would I start this project today?”

Section 2: False Dichotomy

A false dichotomy presents two options as the only choices when others exist. It forces a binary decision in a nuanced reality.

Imagine a restaurant menu that only lists “steak or chicken” when there are ten more dishes. You might pick steak without seeing all options.

Understanding False Dichotomies

A false dichotomy asserts “We must choose A or B” while ignoring C, D, and E.

The trap: limiting options simplifies decisions but hides better solutions.

Common patterns include “Build vs buy,” “rewrite vs refactor,” and “microservices vs monolith.”

Why False Dichotomies Work

False dichotomies seem natural because binary choices reduce mental effort. The brain prefers simple yes/no decisions over exploring multiple options, which are more demanding. However, software development rarely involves just two options. Most problems have several solutions, and the best isn’t always among the given choices.

In software development, false dichotomies show up in:

  • Architecture decisions: “We need microservices, or we’ll fail to scale.”
  • Technology choices: “We must use React or Vue, nothing else works.”
  • Process decisions: “We need Agile or Waterfall, there’s no middle ground.”

Each ignores that hybrid, alternative, and middle-ground approaches often work better.

Examples of False Dichotomies

Example 1: The Monolith vs Microservices Debate

A team faced a difficult-to-deploy monolith. The debate was whether to break it into microservices or keep the monolith. Microservices would take months and increase complexity, while the monolith was already problematic.

The false dichotomy obscured better options: modularize the monolith, extract key services, improve deployment tooling, or adopt a hybrid approach. After two weeks of debate, the team opted for a hybrid, keeping most of the monolith while extracting two services with different scaling needs. This was faster than full microservices and addressed immediate issues, but the debate delayed other work.

Example 2: The Framework Choice

A team debated choosing between Framework A or B, overlooking other options like a different framework, building custom components, or using a library for the specific problem without a full framework.

The team chose Framework A due to a false dichotomy, but later found issues. Considering all options might have led them to a simpler, better-fitting library.

Trade-offs and Limitations of False Dichotomy Recognition

Sometimes there are only two options, like choosing between two vendors for a critical dependency. The key is recognizing when more options exist and exploring them.

False dichotomies aren’t always malicious. Sometimes, people offer two options because they haven’t considered others or want to simplify a complex decision. The solution: ask “what other options exist?”

When Recognizing False Dichotomies Isn’t Enough

Recognizing false dichotomies opens possibilities, but you must generate alternatives, fairly evaluate options, and consider hybrid approaches. They often appear with appeals to authority (expert says “you must choose A or B”) or confirmation bias (seeing evidence only for the two options).

Quick Check: False Dichotomy

Before moving on, test your understanding:

  • Can I tell when a decision is framed as “A or B” without considering other options?
  • Can I generate alternative options beyond the choices?
  • Can I tell when a binary choice is genuine or artificially limited?

You’re on track if you identify when options are limited and explore alternatives before deciding.

If the answer is unclear, ask: “What other ways could we solve this problem?”

Section 3: Appeal to Authority

An appeal to authority relies on an expert’s opinion as proof without verifying their correctness or relevance to the situation.

Think of trusting a famous chef’s opinion on car repair. The chef is an expert, but not in car repair. Even if they’re right, it’s not because of their expertise.

Understanding Appeals to Authority

An appeal to authority claims “Expert X says Y, so Y must be true” as proof.

The trap: expertise doesn’t guarantee correctness, as experts can be wrong or have a conflict of interest.

It’s valid when the expert’s domain matches the question, their reasoning is sound, and their claims are verifiable.

Why Appeals to Authority Work

Appeals to authority seem natural since people trust experts for complex topics they don’t understand. While this can be efficient, in software development, such appeals often avoid verification. Teams may adopt a technology based on a developer’s recommendation without assessing its fit or understanding why it was suggested.

In software development, appeals to authority appear in:

  • Technology adoption: “Company X uses this, so we should too.”
  • Architecture patterns: “Expert Y recommends microservices, so we need microservices.”
  • Process decisions: “Thought leader Z says we need this process, so we’ll adopt it.”

Each of these might be right, but they’re not because an authority said so. They’re right if reasoning and evidence support them.

Examples of Appeals to Authority

Example 1: The Microservices Bandwagon

A team adopted microservices because “Netflix uses them.” Netflix’s architecture suits its scale, traffic, and team. The adopting team was smaller with different traffic patterns, so microservices added complexity without benefits.

The appeal to authority skips the reasoning: why does Netflix use microservices? Does that reasoning apply here? The team spent six months building a complex microservices architecture that was hard to deploy, debug, and develop. They eventually simplified it and improved results but wasted months of engineering time.

Example 2: The Framework Recommendation

A team chose a framework based on a renowned developer’s blog post, which focused on high-performance real-time applications. However, the team was building a content management system with different needs.

The framework was too complex for their needs, and a simpler one would have sufficed. However, reliance on authority led them astray.

Trade-offs and Limitations of Authority Appeals

Expert opinion is valuable when the domain matches, reasoning is sound, and claims are verifiable. The issue isn’t listening to experts but treating their opinions as proof without verification.

Appeals to authority aren’t always wrong. Experts can be correct, and their knowledge saves time. The key is verifying the expertise applies and the reasoning is sound.

When Recognizing Appeals to Authority Isn’t Enough

Recognizing appeals to authority is important, but verification matters more. Verify the expert’s domain aligns with your question, check their reasoning, and seek independent evidence. Appeals to authority can combine with confirmation bias (only noticing opinions that support existing beliefs) or with the false dichotomy (experts suggesting “A or B” when other options exist).

Quick Check: Appeal to Authority

Before moving on, test your understanding:

  • Can I tell when an argument relies on authority rather than evidence?
  • Can I verify if an expert’s domain matches the question?
  • Can I distinguish valid expert guidance from blind authority appeals?

You’re on track if you recognize when authority is used as proof, verify expertise, and ensure reasoning is sound.

If the answer is unclear, ask: “Why does the expert say this, and does that reasoning apply to our situation?”

Section 4: Post Hoc Ergo Propter Hoc

Post hoc ergo propter hoc means “after this, therefore because of this.” It assumes A caused B because B happened after A. Correlation doesn’t imply causation.

Think of deploying a new feature that coincides with increased server errors. You might assume the feature caused the errors, but other factors like database migrations, traffic spikes, or coincidence could be involved. Evidence is needed to determine the true cause.

Understanding Post Hoc Reasoning

The post hoc fallacy assumes “A happened, then B happened, so A caused B.”

The trap: temporal sequence doesn’t prove causation. Many events occur simultaneously, and correlation often exists without causation.

Common patterns include “We changed X and Y got worse, so X caused Y,” or “We did Z and performance improved, so Z fixed performance.”

Why Post Hoc Reasoning Works

Post hoc reasoning is natural because humans seek patterns and causality. When two events occur together, we assume causation. This helps us learn but can be misleading in complex systems where many factors change at once, and correlation does not imply causation.

In software development, post hoc fallacies occur in:

  • Performance debugging: “We added caching and performance improved, so caching fixed it.”
  • Bug attribution: “We deployed this change and then bugs appeared, so the change caused the bugs.”
  • Process changes: “We adopted this process and velocity increased, so the process improved velocity.”

Each might be true, but evidence is needed to confirm. Performance could have improved due to less traffic, bugs may have appeared from a database issue, or velocity might have risen as the team gained experience.

Examples of Post Hoc Reasoning

Example 1: The Caching Fix

A team added caching to improve performance. Performance improved, so they assumed caching fixed it. However, investigation showed the improvement was due to decreased traffic during a holiday week. Caching helped but wasn’t the main cause.

Post hoc reasoning caused over-investment in caching, missing the real issue: performance degraded under high load, which caching alone couldn’t fix.
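
One practical way to test a claim like “caching fixed performance” is to compare the metric at comparable load, so a traffic drop cannot masquerade as an improvement. Below is a minimal sketch with hypothetical numbers and field names; it illustrates controlling for a confounder, not a real monitoring setup:

```python
# A minimal sketch (hypothetical data): compare latency before and after a change
# at *comparable* traffic levels, instead of comparing raw averages across periods
# with very different load.
from statistics import mean

# Each sample: (requests_per_minute, avg_latency_ms) for one time bucket.
before = [(900, 410), (950, 430), (400, 210), (450, 220)]   # before caching
after  = [(880, 390), (300, 180), (420, 200), (430, 205)]   # after caching, holiday week

def latencies_in_band(samples, low, high):
    """Keep only the latency readings whose traffic falls inside [low, high)."""
    return [latency for rpm, latency in samples if low <= rpm < high]

for low, high in [(0, 500), (500, 1000)]:
    b = latencies_in_band(before, low, high)
    a = latencies_in_band(after, low, high)
    if b and a:
        print(f"traffic {low}-{high} rpm: before {mean(b):.0f} ms, after {mean(a):.0f} ms")

# Within each traffic band the improvement is modest; the dramatic drop in the raw
# average comes mostly from the holiday shift toward the low-traffic band.
```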

Example 2: The Deployment Blame

A team deployed a new feature, then errors spiked. They thought the feature caused the errors and rolled it back, but errors persisted. Investigation revealed a database migration at the same time was the cause.

Post hoc reasoning delayed fixing the real cause and wasted time rolling back the feature.
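
A related habit is to list every change that landed near the incident window before blaming the most recent deploy. The sketch below uses hypothetical events and timestamps purely to illustrate the idea:

```python
# A minimal sketch (hypothetical events): enumerate *all* changes near the error
# spike, not just the one we happen to remember deploying.
from datetime import datetime, timedelta

changes = [
    ("feature deploy",     datetime(2024, 3, 1, 14, 0)),
    ("database migration", datetime(2024, 3, 1, 14, 5)),
    ("config update",      datetime(2024, 2, 28, 9, 0)),
]
spike_start = datetime(2024, 3, 1, 14, 10)
spike_end   = datetime(2024, 3, 1, 18, 0)

# Anything shortly before or during the spike is a candidate cause to investigate.
lookback = timedelta(hours=1)
candidates = [name for name, ts in changes if spike_start - lookback <= ts <= spike_end]
print("Candidate causes:", candidates)   # both the deploy *and* the migration show up
```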

Trade-offs and Limitations of Post Hoc Recognition

Sometimes A causes B. If a bug appears after changing code, it’s often causation. Evidence, not just sequence, is key.

Post hoc reasoning isn’t always wrong; it’s a starting point. The issue is treating correlation as proof without verification.

When Recognizing Post Hoc Reasoning Isn’t Enough

Recognizing post hoc reasoning is a start, but you need evidence to identify causes. Gather evidence about what changed, look for a plausible mechanism, control for other variables, and consider alternative explanations. Post hoc reasoning often pairs with confirmation bias, where you assume your change caused the improvement because you want it to be true.

Quick Check: Post Hoc Ergo Propter Hoc

Before moving on, test your understanding:

  • Can I recognize that a temporal sequence alone doesn’t prove causation?
  • Can I distinguish correlation from causation in technical decisions?
  • Can I tell when I need more than “A happened, then B happened” evidence?

You’re on track if you recognize when correlation is mistaken for causation and look for evidence to verify it.

If unsure, ask: “What evidence shows A caused B, aside from B happening after A?”

Section 5: Confirmation Bias

Confirmation bias involves seeking and interpreting information that supports existing beliefs while ignoring evidence that contradicts them.

Think of it like this: you see a framework as slow, notice performance issues, and blame it, ignoring when it does well. This confirms your belief, but it’s only part of the picture.

Understanding Confirmation Bias

Confirmation bias is favoring supporting information and dismissing opposing evidence.

The trap: you think you’re being objective, but you’re selectively gathering and interpreting evidence.

Common patterns include “This technology is bad” (only problems) or “This approach works” (ignoring failures).

Why Confirmation Bias Works

Confirmation bias seems natural as it requires less effort to hold existing beliefs than to re-evaluate them. The brain favors information that confirms what you believe, lowering cognitive strain but causing poor decisions by obscuring the full picture.

In software development, confirmation bias shows up in:

  • Technology choices: “I know this framework is bad” (ignoring evidence it works well in some cases).
  • Architecture decisions: “Microservices are the right choice” (discounting evidence they’re causing problems here).
  • Process adoption: “This process works” (ignoring when it fails and why).

None of these let you see when your approach fails or alternatives are better.

Examples of Confirmation Bias

Example 1: The Framework Grudge

A developer had a bad experience with a framework years ago and believed it was flawed. When the team considered using it, the developer pointed out every potential problem and ignored evidence of its improvements and success in similar projects.

The confirmation bias led the team to an uninformed decision: the developer’s grudge hid evidence that the framework had improved, so the team never saw the full picture of its options.

Example 2: The Architecture Preference

A team favored microservices, viewing every issue as a need for more services. When performance problems arose, they believed splitting services further was the solution, despite evidence that the services were too small and that communication overhead was the real issue.

Confirmation bias worsened the problem by splitting services more, when consolidating could have helped.

Trade-offs and Limitations of Confirmation Bias Recognition

Having preferences and beliefs based on experience isn’t wrong, but they become problematic when they prevent seeing evidence or considering alternatives.

Confirmation bias isn’t always harmful; sometimes beliefs are correct and supported by evidence. The key is being open to evidence that contradicts them.

When Recognizing Confirmation Bias Isn’t Enough

Recognizing confirmation bias is hard because it involves noticing your own blind spots. You must actively seek disconfirming evidence, consider alternatives, and test beliefs that could prove you wrong. It often combines with other fallacies, such as seeing evidence for a false dichotomy or using post hoc reasoning to confirm existing beliefs.

Quick Check: Confirmation Bias

Before moving on, test your understanding:

  • Can I tell if I’m favoring info that confirms my beliefs?
  • Can I seek evidence contradicting my assumptions?
  • Can I recognize when my beliefs block seeing alternatives?

You’re on track if you identify when you’re gathering evidence selectively and seek contradictory info.

If the answer is unclear, ask: “What evidence would prove me wrong, and have I looked for it?”

Section 6: Strawman Fallacy

The strawman fallacy involves misrepresenting an argument to make it easier to attack, then attacking the misrepresentation, not the actual argument.

Think of it like this: someone suggests “consider using a different framework.” You respond, “Rewriting from scratch would take months and break everything.” But they only suggested considering it. You’ve created a strawman: an exaggerated version of their argument that’s easy to dismiss.

Understanding the Strawman Fallacy

The strawman fallacy is replacing someone’s real argument with a weaker, distorted one and attacking that.

The trap: thinking you’re winning the debate but ignoring what was actually said.

Common patterns include exaggerating the scope (e.g., ‘you want to rewrite everything’ for small changes), oversimplifying (e.g., ‘you’re saying we should never use this technology’ when caution was suggested), or attributing extreme views (e.g., ‘you want us to abandon all best practices’ when only one was questioned).

Why the Strawman Fallacy Works

The strawman fallacy seems natural because attacking weak arguments is easier than engaging strong ones. It’s easier to knock down exaggerated positions than to address nuanced reasoning. However, it derails discussions and hinders good decisions.

In software development, strawman fallacies show up in:

  • Architecture discussions: “You want us to build everything from scratch” (when someone suggested refactoring one component).
  • Technology debates: “You’re saying we should never use modern tools” (when someone questioned one specific tool choice).
  • Process discussions: “You want us to abandon all planning” (when someone suggested a different planning approach).

Each misrepresents the actual position, hindering productive discussion.

Examples of the Strawman Fallacy

Example 1: The Framework Discussion

A developer suggested migrating to a newer, better-maintained framework. Another responded: “You want us to rewrite our entire application, which would take six months and break everything for our users. That’s completely unrealistic.”

The developer suggested a gradual migration over months, starting with new features. The strawman fallacy turned this into an extreme, easily dismissible position. The team argued about a full rewrite, not the actual strategy, delaying the decision and causing unnecessary conflict.

Example 2: The Platform Dependency Debate

A team member warned against relying heavily on a third-party platform, noting policy changes or shutdowns could erase features and community ties. They cited how political capital on external platforms can vanish when those platforms change or disappear, and suggested building more independent capabilities.

Another team member said, “You’re saying we should never use third-party services and build everything ourselves. That’s impossible. We’d need to rebuild databases, hosting, analytics, everything. We’d never ship anything.”

The concern was over-dependency on one platform, not avoiding all third-party services. The strawman fallacy turned the nuanced risk management discussion into an all-or-nothing choice. The team dismissed valid concern about dependency and continued building on a platform that later changed terms, causing disruption.

Trade-offs and Limitations of Strawman Recognition

Sometimes people hold extreme positions, and calling out those isn’t a strawman. The key is to accurately represent what someone said, not what you think they meant or their position might imply.

The strawman fallacy involves challenging the distorted version of an argument, not the actual one.

When Recognizing Strawman Fallacies Isn’t Enough

Recognizing strawman fallacies helps, but you still must accurately represent positions, engage with the strongest arguments, and clarify misunderstandings. Strawman fallacies often combine with false dichotomy (turning nuanced positions into binary choices) or confirmation bias (distorting arguments to match preexisting beliefs).

Quick Check: Strawman Fallacy

Before moving on, test your understanding:

  • Can I accurately represent someone’s argument before responding?
  • Can I tell if I’m attacking a distorted argument?
  • Can I engage with the strongest positions, not the weakest?

You’re on track if you recognize misrepresented arguments and engage with the actual words.

If the answer is unclear, practice asking: “What did they actually say, and am I responding to that or to a distorted version?”

Section 7: Planning Fallacy

The planning fallacy is underestimating task durations despite past evidence that similar ones took longer.

Think of it like this: you estimate a feature at two weeks, but similar ones took three. You’re confident this one will be different, yet it will still take three. Surprised? You shouldn’t be.

Understanding the Planning Fallacy

The planning fallacy is the tendency to underestimate time, cost, and risk despite historical data showing longer timelines.

The trap: assuming this time will be different, avoiding past problems, expecting everything to go smoothly.

Common patterns include “This should be quick” (ignoring that similar tasks weren’t quick) or “We can do this in a sprint” (when similar work took two sprints).

Why the Planning Fallacy Works

The planning fallacy is natural because optimism motivates and people focus on best-case scenarios. Believing work will be quick makes starting easier, and people assume they’ll avoid past issues. But this leads to missed deadlines, overcommitment, and stress.

In software development, the planning fallacy shows up in:

  • Feature estimates: “This feature should take a week” (similar features took three weeks).
  • Refactoring plans: “We can refactor this in a day” (similar refactors took a week).
  • Bug fixes: “This bug looks easy” (similar bugs were complex).

Each ignores historical data, assuming this time will be different.

Examples of the Planning Fallacy

Example 1: The “Simple” Feature

A team estimated a feature would take one week, believing it was simpler than past features that took two to three weeks. It took three weeks, causing frustration, but historical data predicted this.

The planning fallacy caused missed deadlines and overcommitment. The team promised a feature in two weeks, creating pressure and delaying two other features due to resource issues. Using historical data, they could have estimated two to three weeks, preventing delays.

Example 2: The Quick Refactor

A developer estimated a refactor would take one day, confident it would be straightforward. Similar refactors took three to five days, but this one took four days, delaying other work.

The planning fallacy caused delays. Using historical estimates, the developer could have planned longer and avoided the cascade.

Trade-offs and Limitations of Planning Fallacy Recognition

Sometimes tasks are quicker than past work due to experience and better tools. The key is having evidence for why this time is different, not just optimism.

The planning fallacy isn’t about never being optimistic. It’s about basing estimates on data and realistically considering uncertainty.

When Recognizing the Planning Fallacy Isn’t Enough

Recognizing the planning fallacy helps, but you still need to use historical data, account for uncertainty, and build in buffers. The fallacy often combines with confirmation bias, the tendency to remember only accurate estimates, and the sunk cost fallacy, sticking to a timeline despite evidence that it’s unrealistic.
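
A simple way to use historical data is reference-class forecasting: estimate from how long similar tasks actually took, not from the best-case guess. Here is a minimal sketch with hypothetical durations; the numbers and the 80th-percentile buffer are illustrative choices, not a standard:

```python
# A minimal sketch (hypothetical history): ground a new estimate in how long
# similar tasks actually took, rather than in the gut-feel best case.
from statistics import median, quantiles

similar_task_days = [6, 9, 7, 12, 8, 10, 7]   # past "about a week" features
gut_estimate_days = 5

p50 = median(similar_task_days)                # typical outcome
p80 = quantiles(similar_task_days, n=10)[7]    # roughly the 80th percentile

print(f"gut estimate: {gut_estimate_days} days")
print(f"history says: typically ~{p50:.0f} days; plan for ~{p80:.0f} days if the date matters")
```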

Quick Check: Planning Fallacy

Before moving on, test your understanding:

  • Can I tell if I’m underestimating time despite historical evidence?
  • Can I use past data for current estimates?
  • Can I recognize when optimism replaces realistic planning?

You’re on track if you recognize when you’re overly optimistic and base estimates on historical data.

If the answer is unclear, practice by asking: “How long did similar tasks take, and why would this one be different?”

Section 8: Common Misconceptions

Common misconceptions about logical fallacies include:

  • “Fallacies are always wrong.” Fallacies are reasoning errors, but the conclusion might still be correct by accident. A bad argument can lead to a good decision if other factors support it. The problem is relying on flawed reasoning.

  • “I can avoid all fallacies if I’m careful.” Fallacies are natural cognitive patterns. While you can’t eliminate them, recognizing and compensating for them improves reasoning. The goal is better, not perfect, reasoning.

  • “Pointing out fallacies is pedantic.” Recognizing fallacies helps improve decisions by correcting flawed reasoning, aiding the team in making better choices. It’s not pedantic if it prevents bad decisions.

  • “Expert opinion is always an appeal to authority.” Expert opinion is valuable when the expert’s domain matches and their reasoning is sound. The problems are treating authority as proof without verification, and dismissing expert input entirely.

  • “Correlation never implies causation.” Correlation can indicate potential causation worth exploring, but the issue is assuming causation from correlation without investigation.

Section 9: When NOT to Worry About Fallacies

Logical fallacies aren’t always an issue; knowing when to worry helps focus on what matters.

Low-stakes decisions - If a decision is easily reversible and the cost of error is low, analyzing fallacies might be unnecessary. Choose a direction and adjust as needed.

Clear evidence - If you have strong evidence supporting a decision, the reasoning pattern matters less. Evidence trumps reasoning flaws when the evidence is solid.

Time pressure - When deciding quickly, perfect reasoning isn’t always possible. Make the best choice with available info and adjust later.

Consensus without controversy - If the team agrees and the decision feels right, you might not need to analyze reasoning patterns. Save fallacy analysis for controversial or high-stakes decisions.

Even without detailed fallacy analysis, quick checks like “am I ignoring evidence?” or “are there other options?” can catch major issues.

Building Better Reasoning Habits

Recognizing fallacies is the first step. Building habits to prevent them follows.

Summary

  • Question sunk costs - Past investment is gone. Focus on future value.
  • Explore alternatives - Avoid false dichotomies; explore options beyond the presented choices.
  • Verify authority claims - Expert opinion is valuable, but verify that expertise applies, and reasoning is sound.
  • Separate correlation from causation - Temporal sequence doesn’t prove causation. Gather evidence.
  • Seek disconfirming evidence - Actively look for information that contradicts your beliefs.
  • Engage with actual arguments - Don’t attack distorted versions of positions. Address what was actually said.
  • Use historical data - Ground estimates in past experience, not optimism.

How These Concepts Connect

These fallacies often co-occur and compound. A team may fall into the sunk cost fallacy reinforced by confirmation bias (seeing only evidence that supports continuing) and a false dichotomy (believing the only options are to continue or abandon, ignoring the possibility of pivoting). This creates a trap that is harder to escape than either fallacy alone.

Here’s a concrete example: a team spent six months on a failing project, trapped by the sunk cost fallacy, confirmation bias, and a false dichotomy (believing they had to either continue or abandon, with no pivot option). An appeal to authority started the project, and post hoc reasoning kept it alive. The result was three more months of wasted effort before the team finally pivoted.

Recognizing a fallacy often reveals others. Spot a false dichotomy? Check for authority appeals. See confirmation bias? Look for post hoc reasoning supporting it.

Getting Started with Fallacy Recognition

If you’re new to recognizing fallacies, start with a simple habit:

  1. Before major decisions, ask: “What reasoning patterns am I using?”
  2. When you hear “we must choose A or B”, ask: “What other options exist?”
  3. When someone says “expert X recommends Y”, ask: “Why, and does that apply here?”
  4. When you see “A happened, then B happened”, ask: “What evidence shows A caused B?”
  5. When you’re confident about something, ask: “What evidence would prove me wrong?”
  6. When responding to an argument, ask: “Am I addressing what they actually said, or a distorted version?”

Once routines feel established, apply the same questions to team discussions and architecture decisions.

Next Steps

Immediate actions:

  • Review a recent technical decision to identify any fallacious reasoning.
  • Practice spotting false dichotomies in team discussions.
  • Compare your next estimate with historical data for similar work.

Learning path:

  • Read about cognitive biases in decision-making.
  • Study formal logic basics to grasp reasoning patterns.
  • Practice spotting fallacies in technical discussions and articles.

Practice exercises:

  • Make a controversial technical decision and analyze both sides’ reasoning.
  • Review project post-mortems to identify fallacies that caused issues.
  • Practice rewriting arguments to avoid fallacies while keeping the core point.

Questions for reflection:

  • Which fallacies do I fall into most often?
  • How can I create habits that prevent my common fallacies?
  • When have fallacies in team discussions led to bad decisions?

The Fallacy Recognition Workflow: A Quick Reminder

The core workflow again:

```mermaid
flowchart TB
    A[Identify pattern] --> B[Question assumptions]
    B --> C[Seek evidence]
    C --> D[Consider alternatives]
```

When you see a reasoning pattern that might be a fallacy, question its assumptions, seek supporting or contradicting evidence, and consider alternatives beyond the presented option.

Final Quick Check

See if you can answer these out loud:

  1. How do I recognize when past investment influences a decision?
  2. What questions help identify false dichotomies?
  3. How can I verify if expert opinion applies to my situation?
  4. What’s the difference between correlation and causation?
  5. How can I actively seek evidence contradicting my beliefs?
  6. How can I ensure I’m addressing what someone actually said, not a distorted version?

If any answer feels fuzzy, revisit the matching section and skim the examples again.

Self-Assessment – Can You Explain These in Your Own Words?

See if you can explain these concepts in your own words:

  • Why sunk costs shouldn’t affect future decisions.
  • How false dichotomies limit options and solutions.
  • When expert opinion is valuable versus when it’s an appeal to authority.

If you explain these clearly, you’ve internalized the fundamentals.

Future Trends

Research on reasoning errors and debiasing is still evolving. A 2025 meta-analysis in Nature Human Behaviour covering 54 trials of educational debiasing found small but significant improvements, though some biases remain hard to overcome and real-world transfer is still being studied. Other research continues to identify biases and test countermeasures (see Thinking, Fast and Slow and the list of cognitive biases).

Better Decision-Making Tools

Organizations develop tools and processes to help teams recognize and avoid fallacies, including structured decision-making frameworks, pre-mortems, and red team exercises.

What this means: Teams have support for avoiding errors, but they must use these tools.

How to prepare: Learn structured decision-making and practice with these tools in low-stakes situations before applying them to high-stakes decisions.

Increased Awareness of Cognitive Biases

As awareness of cognitive biases grows, teams recognize them better, but awareness alone isn’t enough. Habits and processes are needed to counteract biases.

What this means: More people recognize fallacies, but knowing them isn’t the same as avoiding them.

How to prepare: Build habits to prevent your common fallacies, not just be aware of them.

Integration with Technical Practices

Some teams integrate fallacy recognition into technical practices like code reviews, architecture discussions, and project planning, making recognition more routine and less ad-hoc.

What this means: Fallacy recognition becomes embedded in team workflows.

How to prepare: Find ways to incorporate reasoning checks into your processes.

Limitations & When to Involve Specialists

Fundamental logical fallacy principles offer a strong foundation, but some cases need deeper expertise.

When Fundamentals Aren’t Enough

Some reasoning challenges go beyond the fundamentals covered in this article.

Complex multi-stakeholder decisions: When multiple parties with conflicting interests are involved, reasoning becomes more complex, making facilitation and negotiation skills crucial.

Statistical reasoning: When decisions rely on data analysis, statistical fallacies and misinterpretations are relevant. Expertise helps avoid these errors.

Organizational psychology: When reasoning errors are systemic, individual recognition isn’t enough; organizational change and culture work are needed.

When Not to DIY Fallacy Analysis

There are situations where fundamentals alone aren’t enough:

  • High-stakes decisions with legal or regulatory implications - Get expert advice.
  • Decisions involving complex statistical analysis - Consult a statistician or data scientist.
  • Systemic organizational reasoning problems - Consider organizational psychology or change management expertise.

When to Involve Specialists

Consider involving specialists when:

  • Decisions have major financial, legal, or strategic implications.
  • Reasoning errors are systemic and affect multiple projects.
  • You need statistical analysis to evaluate evidence.
  • Organizational dynamics hinder good reasoning.

How to find specialists: Seek consultants, coaches, or internal experts in decision-making, behavioral economics, or organizational psychology.

Working with Specialists

When working with specialists:

  • Be clear about your reasoning patterns and decisions.
  • Share your team’s, organization’s, and challenge details.
  • Ask for frameworks and tools you can use independently, not just for one-time analysis.

Glossary

Appeal to authority: Using an expert’s opinion as proof without verifying their accuracy or relevance.

Confirmation bias: Seeking and interpreting information that confirms existing beliefs while ignoring contradicting evidence.

False dichotomy: Presenting two options as the only choices when others exist.

Logical fallacy: A flaw in reasoning that invalidates an argument, even if persuasive.

Planning fallacy: Underestimating how long tasks will take despite evidence of previous delays.

Post hoc ergo propter hoc: Assuming A caused B simply because B followed A; correlation doesn’t imply causation.

Strawman fallacy: Misrepresenting someone’s argument to make it easier to attack, then attacking the misrepresentation instead of the actual argument.

Sunk cost: A past, unrecoverable investment of time, money, or effort.

Sunk cost fallacy: Continuing to invest in something due to prior investment, even when stopping would be better.

Note on Verification

Logical fallacies and cognitive biases are well-studied in psychology and behavioral economics. While this article relies on established research, applications to software development reflect my experience. Verify reasoning patterns and adapt approaches for your team.