Introduction

This page is a reference catalog of algorithmic patterns and pattern families.

Use this page when you need to quickly map a problem statement to a known approach, then recall the core constraints and a canonical implementation shape.

This page is not a deep “why it works” explanation, nor a full tutorial on each algorithm.

Why patterns matter in day-to-day work:

  • Faster problem solving: You can pick a known template instead of starting from scratch.
  • Cleaner implementations: Patterns reduce accidental complexity and edge-case bugs.
  • Shared vocabulary: You can name the approach quickly in reviews and design discussions.

Type: Reference (pattern catalog with recognition guidance). Primary audience: intermediate developers learning to recognize and apply algorithmic patterns in real problems.

Examples are in Python, but the recognition cues are language-agnostic.

If you want the study-oriented explanation of why patterns work and how to build intuition, read How Algorithmic Patterns Work.

Prerequisites: What You Need to Know First

Before exploring algorithmic patterns, you should be comfortable with these basics:

Basic Programming Skills:

  • Code familiarity - Ability to read and write code in at least one programming language (Python, JavaScript, Java, or similar).
  • Data structures - Familiar with basic structures like arrays, lists, hash maps, and trees.
  • Control flow - Comfort with loops, conditionals, and functions.

Algorithm Fundamentals:

  • Big O notation - Basic understanding of time and space complexity.
  • Algorithm families - Familiarity with searching, sorting, and traversal concepts.

Software Development Experience:

  • Some coding experience - You’ve written programs that process data or solve problems.
  • Problem-solving practice - Experience breaking down problems into smaller pieces.

You don’t need to memorize every algorithm. This article focuses on pattern recognition and application over implementation details.

See Fundamentals of Algorithms for core algorithmic concepts and Fundamentals of Data Structures for data structure fundamentals.

If you are rusty on the basics, skim those first.

How to Use This Page

Use this as a lookup:

  • Start at Pattern Selection Guide to narrow down the family.
  • Jump via Pattern Index (Quick Jump) to the specific entry.
  • Use each entry’s Constraints / gotchas to sanity check before you implement.

Pattern Selection Guide

If you are deciding which pattern fits, scan this index first.

Use this section when you are still diagnosing the problem from its wording and constraints.

  • Two pointers: Sorted sequences, pairs/triplets, pointers converging from both ends.
  • Sliding window: Contiguous subarray/substring, longest/shortest, “at most k”.
  • Prefix sum: Range sums, “sum equals k”, subarray counts.
  • Monotonic stack and queue: “Next greater/smaller”, histogram, temperatures, spans.
  • Merge intervals: Overlapping ranges, scheduling, consolidation.
  • Binary search: Sorted input, “min feasible”, “max feasible”, log-time hints.
  • Graph traversals (BFS/DFS): Connected components, reachability, shortest unweighted paths.
  • Dynamic programming: “Optimal”, overlapping subproblems, exponential brute force smell.

Pattern Index (Quick Jump)

Use this section when you already know the family and want the specific entry.

Pointer-Based Patterns

Two Pointers

When to use: Problems involving sorted arrays or sequences where you need to find pairs, triplets, or validate conditions.

Contract: A sequence (often sorted) plus a target or predicate, return indices, values, or a boolean.

Typical complexity: O(n) time, O(1) extra space, plus O(n log n) if you must sort first.

Key characteristics:

  • Array or sequence is sorted (or can be sorted).
  • Looking for pairs, triplets, or combinations.
  • Need to compare elements from different positions.
  • Linear or near-linear time complexity desired.

Common problems:

  • Find two numbers that sum to a target.
  • Remove duplicates from sorted array.
  • Validate palindrome.
  • Find three numbers that sum to zero.

How to spot it:

  • Sorted array with pair-finding requirements.
  • Need to compare elements at different indices.
  • Can eliminate possibilities by moving pointers.

Canonical shape:

def two_pointers(arr, target):
    left = 0
    right = len(arr) - 1
    
    while left < right:
        current_sum = arr[left] + arr[right]
        if current_sum == target:
            return [left, right]
        elif current_sum < target:
            left += 1
        else:
            right -= 1
    return []

Fast and Slow Pointers

When to use: Problems involving cycles, middle elements, or linked list operations.

Contract: A linked list or pointer chain, return cycle existence, cycle entry, or a position like the middle.

Typical complexity: O(n) time, O(1) extra space.

Key characteristics:

  • Need to detect cycles or loops.
  • Finding middle element without knowing length.
  • Need to traverse at different speeds.
  • Linked list or sequence with potential cycles.

Common problems:

  • Detect cycle in linked list.
  • Find middle of linked list.
  • Find kth element from end.
  • Detect palindrome in linked list.

How to spot it:

  • Linked list problems.
  • Cycle detection requirements.
  • Need to find position without full traversal.
  • “Floyd’s cycle detection” or “tortoise and hare” mentioned.

Canonical shape:

def has_cycle(head):
    slow = fast = head
    
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False
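
The same two-speed idea finds the middle of a list without knowing its length. A minimal sketch, assuming a simple node shape with a next pointer (the ListNode class here is illustrative):

```python
class ListNode:
    # minimal node shape assumed for this sketch
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def find_middle(head):
    # fast moves two steps per slow step, so slow lands on the middle
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow  # for even-length lists, this is the second middle node
```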

Window and Range Patterns

Sliding Window

When to use: Problems involving subarrays, substrings, or contiguous sequences with constraints.

Contract: A sequence plus a window constraint, return a best window metric (max, min, count) or a window itself.

Decision rules:

  • Fixed-size window: Keep a running sum or counts, slide one step each time.
  • Variable-size window: Expand to include new items, shrink while the constraint is violated.
  • “At most k”: Usually sliding window with shrink-on-violation.
  • “Exactly k”: Often solved as the difference of two “at most k” answers, or with prefix sum counting, depending on the constraint.

Typical complexity: Usually O(n) time, O(1) to O(σ) extra space for counts, where σ is the alphabet size or number of distinct keys.

Key characteristics:

  • Need to find subarray or substring.
  • Window size is fixed or variable.
  • Optimizing for maximum, minimum, or count.
  • Linear time complexity possible.

Common problems:

  • Maximum sum subarray of size k.
  • Longest substring with k distinct characters.
  • Minimum window substring.
  • Find all anagrams in a string.

How to spot it:

  • “Subarray” or “substring” in problem description.
  • Need to maintain a window of elements.
  • Constraints on window contents (unique characters, sum limits).
  • Can expand and contract window based on conditions.

Canonical shape:

def sliding_window(arr, k):
    window_start = 0
    max_sum = float('-inf')  # 0 would be wrong for all-negative arrays
    window_sum = 0
    
    for window_end in range(len(arr)):
        window_sum += arr[window_end]
        
        if window_end >= k - 1:
            max_sum = max(max_sum, window_sum)
            window_sum -= arr[window_start]
            window_start += 1
    
    return max_sum
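
The shape above is the fixed-size form. For the variable-size rules (expand, then shrink on violation), a minimal sketch of “longest window with at most k distinct values” might look like this (the function name is illustrative):

```python
from collections import defaultdict

def longest_at_most_k_distinct(arr, k):
    counts = defaultdict(int)
    start = 0
    best = 0
    for end, value in enumerate(arr):
        counts[value] += 1                # expand to include the new item
        while len(counts) > k:            # shrink while the constraint is violated
            counts[arr[start]] -= 1
            if counts[arr[start]] == 0:
                del counts[arr[start]]
            start += 1
        best = max(best, end - start + 1)
    return best
```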

Prefix Sum

When to use: Problems requiring frequent range sum queries or cumulative calculations.

Contract: A numeric sequence, return fast range aggregates, or count subarrays that match a condition.

Decision rules:

  • Repeated range sum queries: Build a prefix array, query each range in O(1).
  • Count subarrays with sum equals k: Track prefix sum frequencies in a hashmap while scanning.

Typical complexity: O(n) build time, O(1) per range query, O(n) time and O(n) space for counting problems.

Key characteristics:

  • Multiple range sum queries.
  • Need cumulative information.
  • Preprocessing allowed for faster queries.
  • Range updates or queries.

Common problems:

  • Range sum queries.
  • Subarray sum equals k.
  • Equilibrium index.
  • Maximum size subarray sum.

How to spot it:

  • “Range sum” or “subarray sum” queries.
  • Need to answer multiple queries efficiently.
  • Can precompute cumulative values.
  • O(1) range queries after O(n) preprocessing.

Constraints / gotchas:

  • Prefix sums use O(n) memory; they trade space for faster repeated queries.
  • For “subarray sum equals k” counting, you usually need a hashmap of prefix sum frequencies, not just the prefix array.

Canonical shape:

def prefix_sum(arr):
    prefix = [0]
    for num in arr:
        prefix.append(prefix[-1] + num)
    return prefix

def range_sum(prefix, i, j):
    return prefix[j + 1] - prefix[i]
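
For the counting rule above (“subarray sum equals k”), the prefix array alone is not enough; a hashmap of prefix sum frequencies does the counting in one scan. A minimal sketch:

```python
from collections import defaultdict

def count_subarrays_with_sum(arr, k):
    freq = defaultdict(int)
    freq[0] = 1        # the empty prefix
    running = 0
    count = 0
    for num in arr:
        running += num
        # each earlier prefix with sum running - k ends a valid subarray here
        count += freq[running - k]
        freq[running] += 1
    return count
```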

Stack and Queue Patterns

Monotonic Stack and Queue

When to use: Problems requiring next/previous greater or smaller elements, or maintaining monotonic order.

Contract: A sequence, return “next greater/smaller” answers, spans, or range extents for each position.

Typical complexity: O(n) time, O(n) extra space for the stack or deque.

Key characteristics:

  • Need next greater/smaller element.
  • Maintaining increasing or decreasing order.
  • Problems involving temperature, stock prices, or heights.
  • Stack or queue maintains monotonic property.

Common problems:

  • Next greater element.
  • Largest rectangle in histogram.
  • Daily temperatures.
  • Trapping rain water.

How to spot it:

  • “Next greater” or “next smaller” in problem.
  • Need to compare with previous elements.
  • Maintaining sorted order as you process.
  • Stack-based solution with monotonic property.

Canonical shape:

def next_greater_element(nums):
    stack = []
    result = [-1] * len(nums)
    
    for i in range(len(nums)):
        while stack and nums[stack[-1]] < nums[i]:
            result[stack.pop()] = nums[i]
        stack.append(i)
    
    return result

Expression Evaluation, Stacks and Queues

When to use: Problems involving parsing, evaluating expressions, or handling nested structures.

Contract: A token stream or string expression, return a computed value, a decoded string, or a validity boolean.

Typical complexity: Usually O(n) time, O(n) extra space for the stack.

Key characteristics:

  • Expression evaluation (infix, postfix, prefix).
  • Parentheses matching.
  • Nested structure processing.
  • Operator precedence handling.

Common problems:

  • Evaluate arithmetic expression.
  • Valid parentheses.
  • Basic calculator.
  • Decode string.

How to spot it:

  • Expression parsing or evaluation.
  • Nested parentheses or brackets.
  • Operator precedence matters.
  • Need to process in reverse order (stack).

Canonical shape:

Constraints: This example is intentionally simplified. It assumes input contains digits, +, -, (, and ), and does not handle spaces or other operators.

def evaluate_expression(s):
    stack = []
    num = 0
    sign = 1
    result = 0
    
    for char in s:
        if char.isdigit():
            num = num * 10 + int(char)
        elif char == '+':
            result += sign * num
            num = 0
            sign = 1
        elif char == '-':
            result += sign * num
            num = 0
            sign = -1
        elif char == '(':
            stack.append(result)
            stack.append(sign)
            result = 0
            sign = 1
        elif char == ')':
            result += sign * num
            num = 0
            result *= stack.pop()
            result += stack.pop()
    
    return result + sign * num

Interval and Sorting Patterns

Merge Intervals

When to use: Problems involving overlapping intervals, scheduling, or time ranges.

Contract: A list of intervals, return a merged list, or a decision like “can schedule without conflicts”.

Typical complexity: Usually O(n log n) time for sorting, O(n) extra space for the output.

Key characteristics:

  • Collection of intervals.
  • Need to merge overlapping intervals.
  • Scheduling or time-based problems.
  • Range consolidation required.

Common problems:

  • Merge intervals.
  • Insert interval.
  • Meeting rooms.
  • Non-overlapping intervals.

How to spot it:

  • “Intervals” or “ranges” in problem.
  • Overlapping conditions.
  • Need to consolidate or merge.
  • Scheduling or time-based context.

Canonical shape:

def merge_intervals(intervals):
    if not intervals:
        return []
    
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    
    for current in intervals[1:]:
        last = merged[-1]
        if current[0] <= last[1]:
            merged[-1] = [last[0], max(last[1], current[1])]
        else:
            merged.append(current)
    
    return merged

Overlapping Intervals

When to use: Problems requiring detection, counting, or scheduling under overlapping time ranges.

Contract: A list of intervals, return overlap counts, conflict detection, or maximum concurrency.

Decision rules:

  • Half-open intervals [start, end): Process end events before start events at the same time.
  • Closed intervals [start, end]: Process start events before end events at the same time.

Typical complexity: Usually O(n log n) time for event sorting, O(n) extra space for events.

Key characteristics:

  • Multiple intervals.
  • Need to find overlaps or conflicts.
  • Counting maximum overlaps.
  • Scheduling conflicts.

Common problems:

  • Meeting rooms II.
  • Car pooling.
  • Employee free time.
  • Maximum overlapping intervals.

How to spot it:

  • Interval overlap detection.
  • Maximum concurrent events.
  • Resource allocation with conflicts.
  • Sweep line or event-based approach.

Constraints / gotchas:

  • Endpoints matter: decide whether intervals are closed [start, end] or half-open [start, end).
  • If you are counting concurrent intervals, tie-breaking at equal timestamps changes results.

Canonical shape (sweep line, max concurrency):

def max_overlaps(intervals):
    events = []
    for start, end in intervals:
        events.append((start, 1))   # start adds one active interval
        events.append((end, -1))    # end removes one active interval

    # If you treat intervals as [start, end), sorting (time, delta) works.
    events.sort()

    active = 0
    best = 0
    for _, delta in events:
        active += delta
        best = max(best, active)
    return best

Sorting-Based Patterns

When to use: Problems where sorting enables efficient solutions or reveals structure.

Contract: Unordered items, return an ordered scan result like grouping, boundary detection, or schedule feasibility.

Typical complexity: Usually O(n log n) time, O(n) extra space depending on the language and output needs.

Key characteristics:

  • Sorting reveals solution structure.
  • Need ordered processing.
  • Comparison-based problems.
  • Can sort to simplify logic.

Common problems:

  • K closest points to origin.
  • Largest number.
  • Meeting rooms.
  • Non-overlapping intervals.

How to spot it:

  • Problem becomes easier with sorted data.
  • Order matters for solution.
  • Can preprocess with sort.
  • Comparison or ordering requirements.

Constraints / gotchas:

  • Sorting cost is often the bottleneck; expect O(n log n) time.
  • Sorting can change the meaning of indices, keep original positions if you need them.
  • Comparator and key functions can dominate runtime if they are expensive.
  • Stability matters when you rely on secondary ordering (Python’s sort is stable).

Canonical shape (sort, then scan):

def sort_then_scan(items, key_fn):
    items = sorted(items, key=key_fn)

    groups = []
    i = 0
    while i < len(items):
        j = i
        while j < len(items) and key_fn(items[j]) == key_fn(items[i]):
            j += 1

        groups.append(items[i:j])  # process each run of equal keys here
        i = j

    return groups

Cyclic Sort

When to use: Problems involving arrays with numbers in a specific range (1 to n or 0 to n-1).

Contract: An array with values that map to indices, return the array rearranged in-place, or a derived value like “missing number”.

Typical complexity: Usually O(n) time, O(1) extra space.

Key characteristics:

  • Array contains numbers in range 1 to n or 0 to n-1.
  • Need to place each number at its correct index.
  • In-place sorting without extra space.
  • O(n) time complexity possible.

Common problems:

  • Find missing number.
  • Find all duplicates.
  • Find first missing positive.
  • Cyclic sort array.

How to spot it:

  • Array with numbers in range 1 to n or 0 to n-1.
  • Need to find missing or duplicate numbers.
  • Can use indices as hash map.
  • O(n) time, O(1) space possible.

Canonical shape:

def cyclic_sort(nums):
    i = 0
    while i < len(nums):
        correct_index = nums[i] - 1
        if nums[i] != nums[correct_index]:
            nums[i], nums[correct_index] = nums[correct_index], nums[i]
        else:
            i += 1
    return nums
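
Applying the same swap loop to a common problem from the list above, “find missing number” (values 0 to n with one value missing), a minimal sketch:

```python
def find_missing_number(nums):
    # place each value v at index v, skipping the out-of-range value n
    i = 0
    n = len(nums)
    while i < n:
        v = nums[i]
        if v < n and nums[v] != v:
            nums[i], nums[v] = nums[v], nums[i]
        else:
            i += 1
    # the first index that does not hold its own value is the missing number
    for i, v in enumerate(nums):
        if v != i:
            return i
    return n
```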

Search Patterns

Binary Search and Variants

When to use: Problems with sorted data, search space reduction, or peak finding.

Contract: A sorted array, or a monotonic predicate over a search space, return an index, a boundary, or an extremal feasible value.

Decision rules:

  • Exact match in sorted array: Standard binary search.
  • First or last occurrence: Binary search on boundaries, not equality, return the leftmost or rightmost index.
  • “Minimum feasible” or “maximum feasible”: Binary search on the answer, use a monotonic feasibility function.
  • Rotated array: Decide which side is sorted each step, then discard half.

Typical complexity: O(log n) time, O(1) extra space.

Key characteristics:

  • Sorted array or search space.
  • Can eliminate half the search space.
  • Logarithmic time complexity.
  • Search space reduction.

Common problems:

  • Binary search in sorted array.
  • Search in rotated sorted array.
  • Find peak element.
  • Search for range.

How to spot it:

  • Sorted data available.
  • “Search” or “find” in problem.
  • O(log n) time complexity hint.
  • Can eliminate half the possibilities.

Canonical shape:

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    
    return -1
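
The boundary and “minimum feasible” variants in the decision rules share one shape: binary search over a monotonic predicate. A minimal sketch (helper names are illustrative):

```python
def first_true(lo, hi, feasible):
    # smallest value in [lo, hi] where feasible flips to True; hi + 1 if none
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid - 1
        else:
            lo = mid + 1
    return lo

def first_occurrence(arr, target):
    # first occurrence in a sorted array, via the same predicate shape
    idx = first_true(0, len(arr) - 1, lambda i: arr[i] >= target)
    return idx if idx < len(arr) and arr[idx] == target else -1
```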

See Fundamentals of Algorithms for more on binary search and search algorithms.

Divide and Conquer Patterns

Divide and Conquer

When to use: Problems that can be broken into independent subproblems.

Contract: A problem that decomposes into subproblems, return a combined result from recursively solved parts.

Typical complexity: Often O(n log n) time with O(log n) recursion stack, but it depends on the split and combine cost.

Key characteristics:

  • Problem can be divided into smaller subproblems.
  • Subproblems are independent.
  • Solutions can be combined.
  • Recursive structure.

Common problems:

  • Merge sort.
  • Quick sort.
  • Maximum subarray.
  • Count inversions.

How to spot it:

  • Problem breaks into smaller versions.
  • Can solve subproblems independently.
  • Combine solutions for final answer.
  • Recursive thinking applies.

Canonical shape:

def divide_and_conquer(arr, left, right, combine):
    if left == right:
        return arr[left]
    
    mid = (left + right) // 2
    left_result = divide_and_conquer(arr, left, mid, combine)
    right_result = divide_and_conquer(arr, mid + 1, right, combine)
    
    return combine(left_result, right_result)  # combine is problem-specific (merge, max, count, ...)

See Fundamentals of Algorithms for more on divide and conquer.

Greedy and Optimization Patterns

Greedy Algorithms

When to use: Problems where locally optimal choices lead to a globally optimal solution.

Contract: A space of choices where a safe local decision exists, return an optimal solution without backtracking.

Typical complexity: Varies by problem; many greedy solutions are O(n) or O(n log n) due to sorting or a heap.

Key characteristics:

  • Make best local choice at each step.
  • No need to reconsider previous choices.
  • Optimal substructure property.
  • Greedy choice property.

Common problems:

  • Activity selection.
  • Fractional knapsack.
  • Minimum spanning tree.
  • Huffman coding.

How to spot it:

  • Can make optimal choice without looking ahead.
  • Local optimization leads to global optimum.
  • No need to backtrack.
  • “Greedy” or “optimal at each step” thinking.

Constraints / gotchas:

  • Greedy works only if the greedy-choice property holds.
  • Many problems that feel greedy actually require dynamic programming.
  • Validate with a counterexample before you commit to greedy.
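
Canonical shape (activity selection, a minimal sketch): sort by finish time, then greedily take each interval compatible with the last one chosen. Half-open intervals are assumed.

```python
def max_non_overlapping(intervals):
    # greedy-choice property: taking the earliest-finishing interval
    # never makes the remaining problem worse
    intervals = sorted(intervals, key=lambda iv: iv[1])
    count = 0
    last_end = float('-inf')
    for start, end in intervals:
        if start >= last_end:   # compatible with everything chosen so far
            count += 1
            last_end = end
    return count
```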

See Fundamentals of Algorithms for more on greedy algorithms.

Dynamic Programming Patterns

Dynamic Programming, 1D and 2D, Knapsack, Range DP

When to use: Problems with overlapping subproblems and optimal substructure.

Contract: A problem that can be expressed as a state machine with a recurrence, return an optimal value, count, or reconstruction.

Decision rules:

  • Memoization: Start with recursion when the state is hard to enumerate, cache results as you go.
  • Tabulation: Prefer bottom-up when you can order states cleanly and want predictable memory.
  • State first: Define the state and the transition before you touch code; most bugs come from a wrong state.

Typical complexity: Often O(states × transitions) time and O(states) space.

Key characteristics:

  • Overlapping subproblems.
  • Optimal substructure.
  • Can memoize or tabulate results.
  • Avoid recomputing same subproblems.

Common problems:

  • Fibonacci numbers.
  • Longest common subsequence.
  • Knapsack problem.
  • Edit distance.

How to spot it:

  • “Optimal” or “maximum/minimum” in problem.
  • Can break into smaller subproblems.
  • Subproblems overlap.
  • Brute force would be exponential.

Constraints / gotchas:

  • Dynamic programming correctness depends on defining the state and transition clearly; an incorrect state choice is the most common bug.

Canonical shape:

def dp_1d(n):
    dp = [0] * (n + 1)
    dp[0] = base_case  # problem-specific
    
    for i in range(1, n + 1):
        dp[i] = recurrence_relation(dp, i)  # problem-specific transition
    
    return dp[n]
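
A concrete instance of the template, climbing stairs (count the ways to reach step n taking 1 or 2 steps at a time), as a minimal sketch:

```python
def climb_stairs(n):
    # state: dp[i] = number of ways to reach step i
    # transition: arrive from step i - 1 or step i - 2
    dp = [0] * (n + 1)
    dp[0] = 1  # base case: one way to stand at the start
    for i in range(1, n + 1):
        dp[i] = dp[i - 1] + (dp[i - 2] if i >= 2 else 0)
    return dp[n]
```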

See Fundamentals of Algorithms for more on dynamic programming.

Backtracking Patterns

When to use: Problems requiring exploring all possibilities or finding all solutions.

Contract: A search space of partial choices, return all valid solutions, one valid solution, or a count.

Typical complexity: Often exponential time in the worst case; pruning and ordering determine whether it is usable.

Key characteristics:

  • Need to explore all possibilities.
  • Can prune invalid paths early.
  • Recursive exploration.
  • Undo choices (backtrack).

Common problems:

  • N-Queens problem.
  • Generate parentheses.
  • Subset generation.
  • Sudoku solver.

How to spot it:

  • “Find all” or “generate all” in problem.
  • Need to explore possibilities.
  • Can eliminate invalid paths.
  • Recursive structure with undo.

Constraints / gotchas:

  • Deep recursion can hit Python’s recursion limit; prefer an explicit stack for very deep searches.
  • Time can blow up exponentially; pruning is the difference between usable and unusable.

Canonical shape:

def backtrack(path, choices):
    # is_solution, is_valid, result, and updated_choices are problem-specific
    if is_solution(path):
        result.append(path[:])  # copy, because path is mutated as we backtrack
        return
    
    for choice in choices:
        if is_valid(choice, path):
            path.append(choice)
            backtrack(path, updated_choices)  # choices updated to reflect this pick
            path.pop()  # undo the choice (backtrack)
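
A concrete, runnable instance of the template, generating all subsets, as a minimal sketch:

```python
def subsets(nums):
    result = []

    def backtrack(start, path):
        result.append(path[:])        # every partial path is a valid subset
        for i in range(start, len(nums)):
            path.append(nums[i])      # choose
            backtrack(i + 1, path)    # explore
            path.pop()                # un-choose (backtrack)

    backtrack(0, [])
    return result
```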

Graph and Tree Patterns

Graph Traversals, BFS and DFS

When to use: Problems involving graphs, trees, or relationships between entities.

Contract: A graph plus a start node or a set of nodes, return visited nodes, components, levels, or a derived property.

Decision rules:

  • Breadth-first search (BFS): Shortest path in an unweighted graph, level-by-level traversal.
  • Depth-first search (DFS): Connected components, cycle detection, topological sort variants, exploring a space deeply.

Typical complexity: O(V + E) time, O(V) space for the visited set, plus the queue or stack.

Key characteristics:

  • Graph or tree structure.
  • Need to visit all nodes.
  • Path finding or connectivity.
  • Level-order or depth-first exploration.

Common problems:

  • Level-order traversal.
  • Shortest path.
  • Connected components.
  • Clone graph.

How to spot it:

  • Graph or tree in problem.
  • “Traverse” or “visit all nodes.”
  • Path finding requirements.
  • BFS for shortest path, DFS for exploration.

Constraints / gotchas:

  • For DFS in Python, deep recursion can hit the recursion limit; iterative DFS avoids that.
  • Always define what “visited” means (node-level, edge-level, state-level) to avoid missing valid paths.
  • For shortest paths, BFS is correct only for unweighted graphs (or equal weights).

Canonical shape:

from collections import deque

def bfs(graph, start):
    q = deque([start])
    visited = {start}

    while q:
        node = q.popleft()
        process(node)  # replace with your operation

        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                q.append(neighbor)
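
As noted in the gotchas, iterative DFS with an explicit stack avoids Python’s recursion limit. A minimal sketch:

```python
def dfs_iterative(graph, start):
    stack = [start]
    visited = set()
    order = []

    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)  # replace with your operation

        # an explicit stack replaces the recursion stack
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append(neighbor)
    return order
```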

See Fundamentals of Algorithms for more on graph traversals.

Topological Sort

When to use: Problems involving dependencies, ordering, or prerequisite relationships in a DAG (directed acyclic graph).

Contract: A directed acyclic graph (DAG), return a valid ordering of nodes that respects edges.

Typical complexity: O(V + E) time, O(V + E) space for the graph and indegree counts.

Key characteristics:

  • Directed acyclic graph (DAG).
  • Dependency relationships.
  • Need valid ordering.
  • Prerequisites before dependents.

Common problems:

  • Course schedule.
  • Build order.
  • Alien dictionary.
  • Task scheduling.

How to spot it:

  • “Dependencies” or “prerequisites” in problem.
  • Directed graph with no cycles.
  • Need valid ordering.
  • Kahn’s algorithm or DFS-based approach.

Constraints / gotchas:

  • If the graph has a cycle, there is no valid ordering.
  • Detect cycles by checking whether you process all nodes (Kahn), or by tracking a recursion stack (DFS).

Canonical shape (Kahn’s algorithm):

from collections import deque

def topo_sort(nodes, edges):
    # nodes: iterable of node ids
    # edges: iterable of (u, v) meaning u -> v
    graph = {n: [] for n in nodes}
    indegree = {n: 0 for n in nodes}

    for u, v in edges:
        graph[u].append(v)
        indegree[v] += 1

    q = deque([n for n in nodes if indegree[n] == 0])
    order = []

    while q:
        u = q.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                q.append(v)

    if len(order) != len(indegree):
        return []  # cycle detected
    return order

Binary Tree Traversals, Preorder, Inorder, Postorder, Level Order

When to use: Problems requiring tree node processing in specific orders, or when an order constraint is part of correctness.

Contract: A binary tree, return values in an order, or compute an aggregate while visiting nodes.

Typical complexity: O(n) time, O(h) space, where h is the tree height.

Key characteristics:

  • Binary tree structure.
  • Need specific traversal order.
  • Recursive or iterative approach.
  • Order matters for problem.

Common problems:

  • Serialize binary tree.
  • Construct tree from traversal.
  • Validate BST.
  • Tree to linked list.

How to spot it:

  • Binary tree in problem.
  • Specific order mentioned.
  • Need to process nodes in order.
  • Recursive or iterative traversal.

Constraints / gotchas:

  • Recursive traversals can hit recursion limits on deep trees.
  • Inorder traversal only implies sorted output if the tree is a valid BST.

Canonical shape (iterative inorder):

def inorder(root):
    stack = []
    node = root
    out = []

    while node or stack:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.val)
        node = node.right

    return out

Path Sum and Root to Leaf Techniques

When to use: Problems involving paths from root to leaves or path-based conditions.

Contract: A tree and a path condition, return a boolean, a count, or the set of matching paths.

Typical complexity: Often O(n) time, O(h) space, plus output size if you return all paths.

Key characteristics:

  • Tree path problems.
  • Root to leaf paths.
  • Path sum or path validation.
  • Accumulate values along path.

Common problems:

  • Path sum.
  • All paths from root to leaf.
  • Path with given sequence.
  • Maximum path sum.

How to spot it:

  • “Path” in problem description.
  • Root to leaf requirements.
  • Sum or validation along path.
  • Tree traversal with path tracking.

Constraints / gotchas:

  • Be explicit about whether the path can stop early, or must end at a leaf.
  • For “any path” (not just root-to-leaf), you usually need prefix sums, not just DFS accumulation.

Canonical shape (root-to-leaf sum exists):

def has_root_to_leaf_sum(root, target):
    if not root:
        return False
    target -= root.val
    if not root.left and not root.right:
        return target == 0
    return has_root_to_leaf_sum(root.left, target) or has_root_to_leaf_sum(root.right, target)

Union Find, Disjoint Set

When to use: Problems involving connectivity, grouping, or cycle detection in undirected graphs.

Contract: A set of elements with union operations and connectivity queries, return component membership or connectivity.

Typical complexity: Amortized near-constant time per operation with path compression and union by rank, O(n) space.

Key characteristics:

  • Need to track connected components.
  • Union and find operations.
  • Cycle detection in undirected graphs.
  • Dynamic connectivity queries.

Common problems:

  • Number of connected components.
  • Detect cycle in undirected graph.
  • Friend circles.
  • Redundant connection.
  • Accounts merge.

How to spot it:

  • “Connected” or “group” in problem.
  • Need to merge sets dynamically.
  • Cycle detection in undirected graphs.
  • Union and find operations required.

Canonical shape:

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
    
    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]
    
    def union(self, x, y):
        root_x = self.find(x)
        root_y = self.find(y)
        
        if root_x == root_y:
            return False
        
        if self.rank[root_x] < self.rank[root_y]:
            self.parent[root_x] = root_y
        elif self.rank[root_x] > self.rank[root_y]:
            self.parent[root_y] = root_x
        else:
            self.parent[root_y] = root_x
            self.rank[root_x] += 1
        
        return True
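
A common application, counting connected components among n nodes given an edge list. This minimal sketch inlines a simplified find/union (path compression only; the function name is illustrative):

```python
def count_components(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    count = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            count -= 1  # each successful union merges two components
    return count
```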

Shortest Path Pattern Family

When to use: Problems requiring finding shortest paths in weighted graphs.

Contract: A graph or grid plus a source (and sometimes a target), return shortest distances, and sometimes the path.

Choose the algorithm:

  • Unweighted graph: Breadth-first search (BFS).
  • 0 or 1 edge weights: 0-1 BFS with a deque.
  • Non-negative weights: Dijkstra’s algorithm.
  • Negative weights (no negative cycles): Bellman-Ford.
  • Directed acyclic graph (DAG): Topological order dynamic programming.
  • All-pairs, small n: Floyd-Warshall.

Typical complexity: Varies by algorithm; Dijkstra is often O(E log V) with a heap, Bellman-Ford is O(VE), Floyd-Warshall is O(V³).

Key characteristics:

  • Weighted graph structure.
  • Need shortest path from source to target.
  • Different algorithms for different constraints.
  • Single-source or all-pairs shortest path.

Common problems:

  • Network delay time.
  • Cheapest flights within k stops.
  • Path with minimum effort.
  • Shortest path in binary matrix.

How to spot it:

  • “Shortest path” or “minimum cost” in problem.
  • Weighted graph or grid.
  • Path optimization required.
  • Choose algorithm based on constraints (non-negative weights, cycles, etc.).

Reference algorithms:

  • Dijkstra’s algorithm - Non-negative weights, single-source shortest path.
  • Bellman-Ford - Handles negative weights, detects negative cycles.
  • Floyd-Warshall - All-pairs shortest path, handles negative weights (no negative cycles).

Canonical shape (Dijkstra’s):

import heapq

def dijkstra(graph, start):
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    pq = [(0, start)]
    visited = set()
    
    while pq:
        dist, node = heapq.heappop(pq)
        if node in visited:
            continue  # stale heap entry, node already finalized
        visited.add(node)
        
        for neighbor, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbor]:
                distances[neighbor] = new_dist
                heapq.heappush(pq, (new_dist, neighbor))
    
    return distances
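For contrast, Bellman-Ford relaxes every edge up to V - 1 times and can report negative cycles. This sketch assumes an edge-list input with nodes numbered 0 to n - 1; returning None on a negative cycle is a design choice for illustration:

```python
def bellman_ford(n, edges, start):
    # edges: list of (u, v, weight); nodes are 0..n-1
    dist = [float('inf')] * n
    dist[start] = 0

    for _ in range(n - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            break  # no change in a full pass: distances are final

    # One extra pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist
```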

Minimum Spanning Tree Pattern Family

When to use: Problems requiring connecting all nodes with minimum total cost.

Contract: A weighted graph, return a minimum spanning tree or the minimum total cost to connect all nodes.

Choose the algorithm:

  • Kruskal’s algorithm: Sort edges, best fit when you already have an edge list and the graph is sparse.
  • Prim’s algorithm: Grow from a start node, often simpler with adjacency lists and a heap.

Typical complexity: Kruskal is (O(E \log E)), Prim is often (O(E \log V)) with a heap.

Key characteristics:

  • Need to connect all nodes.
  • Minimize total edge weight.
  • No cycles in final tree.
  • All nodes must be reachable.

Common problems:

  • Connect all cities with minimum cost.
  • Minimum cost to connect points.
  • Network connections.
  • Building connections.

How to spot it:

  • “Connect all” or “minimum cost” in problem.
  • Need spanning tree (all nodes, no cycles).
  • Weighted graph.
  • Kruskal’s or Prim’s algorithm applicable.

Reference algorithms:

  • Kruskal’s algorithm - Sort edges, use Union-Find to avoid cycles.
  • Prim’s algorithm - Start from node, grow tree greedily.

Canonical shape (Kruskal’s):

def kruskal(n, edges):
    edges.sort(key=lambda x: x[2])  # Sort by weight
    uf = UnionFind(n)
    mst = []
    total_cost = 0
    
    for u, v, weight in edges:
        if uf.union(u, v):
            mst.append((u, v))
            total_cost += weight
            if len(mst) == n - 1:
                break
    
    return mst, total_cost
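Prim's counterpart grows the tree outward from a start node using a heap over frontier edges. This is a sketch assuming an adjacency-list input; it returns the total cost, or None if the graph is disconnected:

```python
import heapq

def prim(graph, start):
    # graph: node -> list of (neighbor, weight), undirected
    visited = {start}
    total = 0
    heap = [(w, nb) for nb, w in graph[start]]
    heapq.heapify(heap)

    while heap and len(visited) < len(graph):
        w, node = heapq.heappop(heap)
        if node in visited:
            continue  # stale entry, node reached more cheaply already
        visited.add(node)
        total += w
        for nb, wt in graph[node]:
            if nb not in visited:
                heapq.heappush(heap, (wt, nb))

    return total if len(visited) == len(graph) else None
```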

Matrix and Grid Patterns

Matrix Traversal

When to use: Problems involving 2D grids, matrices, or spatial relationships.

Contract: A grid plus movement rules, return visited cells, components, or a computed value like shortest steps.

Typical complexity: Usually (O(R \times C)) time and space, where (R) is rows and (C) is columns.

Key characteristics:

  • 2D grid or matrix.
  • Need to traverse or process cells.
  • Adjacent cell relationships.
  • Direction-based movement.

Common problems:

  • Number of islands.
  • Word search.
  • Rotate image.
  • Spiral matrix.

How to spot it:

  • 2D grid or matrix.
  • “Traverse” or “visit all cells.”
  • Adjacent cell operations.
  • Direction-based logic (up, down, left, right).

Canonical shape:

def matrix_traversal(matrix):
    if not matrix or not matrix[0]:
        return set()
    rows, cols = len(matrix), len(matrix[0])
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    visited = set()

    def in_bounds(r, c):
        return 0 <= r < rows and 0 <= c < cols

    def dfs(r, c):
        if not in_bounds(r, c) or (r, c) in visited:
            return
        visited.add((r, c))

        # process: replace this with your operation
        _ = matrix[r][c]

        for dr, dc in directions:
            dfs(r + dr, c + dc)

    for r in range(rows):
        for c in range(cols):
            if (r, c) not in visited:
                dfs(r, c)

    return visited
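Applied to the "number of islands" problem listed above, the same traversal counts how many times a fresh DFS has to start. A minimal sketch, assuming a grid of integer 0s and 1s with 4-directional adjacency:

```python
def count_islands(grid):
    rows, cols = len(grid), len(grid[0])
    visited = set()

    def dfs(r, c):
        if not (0 <= r < rows and 0 <= c < cols):
            return
        if (r, c) in visited or grid[r][c] == 0:
            return
        visited.add((r, c))
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            dfs(r + dr, c + dc)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in visited:
                count += 1  # each fresh DFS start is a new island
                dfs(r, c)
    return count
```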

Linked List Patterns

Linked List Techniques, Dummy Node, In-Place Reversal

When to use: Problems involving linked list manipulation or transformation.

Contract: A linked list, return a modified list, a removed node, or a derived property like palindrome detection.

Typical complexity: Often (O(n)) time, (O(1)) extra space for in-place operations.

Key characteristics:

  • Linked list operations.
  • Need to handle edge cases.
  • In-place modifications.
  • Pointer manipulation.

Common problems:

  • Reverse linked list.
  • Remove nth node from end.
  • Merge two sorted lists.
  • Detect cycle.

How to spot it:

  • Linked list in problem.
  • Need to modify structure.
  • Pointer manipulation required.
  • Dummy node simplifies edge cases.

Canonical shape:

def reverse_linked_list(head):
    prev = None
    current = head
    
    while current:
        next_node = current.next
        current.next = prev
        prev = current
        current = next_node
    
    return prev
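The dummy-node technique named in this section's title removes the "empty result head" special case; merging two sorted lists shows it well. A minimal sketch, where the ListNode class and the list-building helpers are illustrative assumptions:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def from_list(values):
    # Build a linked list from a Python list (helper for illustration).
    head = None
    for v in reversed(values):
        head = ListNode(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def merge_sorted(l1, l2):
    dummy = ListNode()  # placeholder so the head needs no special case
    tail = dummy
    while l1 and l2:
        if l1.val <= l2.val:
            tail.next, l1 = l1, l1.next
        else:
            tail.next, l2 = l2, l2.next
        tail = tail.next
    tail.next = l1 or l2  # attach whichever list has leftovers
    return dummy.next
```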

Heap and Selection Patterns

Top K Elements, Heap and QuickSelect

When to use: Problems requiring finding k largest, smallest, or most frequent elements.

Contract: A collection plus k, return the k best items, or one boundary item like the kth largest.

Decision rules:

  • Heap: Great when k is small relative to n, and you want streaming-friendly behavior.
  • QuickSelect: Great when you only need the kth boundary, not the whole top-k set, average linear time.

Typical complexity: Heap is (O(n \log k)) time, QuickSelect is (O(n)) average time, both use (O(k)) to (O(n)) space depending on implementation.

Key characteristics:

  • Need k elements with specific property.
  • Don’t need full sort.
  • Heap or selection algorithm.
  • Efficient for large datasets.

Common problems:

  • K largest elements.
  • Top k frequent elements.
  • Find median.
  • K closest points.

How to spot it:

  • “Top k” or “k largest/smallest” in problem.
  • Don’t need full ordering.
  • Heap data structure applicable.
  • O(n log k) or O(n) time complexity.

Canonical shape:

import heapq

def top_k_elements(arr, k):
    heap = []  # min-heap holding the k largest values seen so far
    
    for num in arr:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)  # drop the smallest, keeping only k
    
    return heap  # the k largest elements, in heap order (not sorted)
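For the "top k frequent elements" problem listed above, the same size-k idea combines with a frequency map; Python's heapq.nlargest handles the heap bookkeeping. A sketch of one approach, not the only one:

```python
import heapq
from collections import Counter

def top_k_frequent(nums, k):
    counts = Counter(nums)
    # nlargest keeps a heap of size k internally: O(n log k) overall,
    # and returns results in descending order of frequency.
    return heapq.nlargest(k, counts, key=counts.get)
```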

Kth Largest and Smallest Elements

When to use: Problems requiring finding specific ranked element.

Contract: A collection plus a rank k, return the kth smallest, kth largest, or a partition around that boundary.

Typical complexity: QuickSelect is (O(n)) average time, (O(n^2)) worst-case unless you randomize, (O(1)) extra space for in-place variants.

Key characteristics:

  • Need kth ranked element.
  • QuickSelect algorithm.
  • Partition-based approach.
  • O(n) average time complexity.

Common problems:

  • Kth largest element.
  • Kth smallest element.
  • Find median.
  • QuickSelect variations.

How to spot it:

  • “Kth” in problem description.
  • Don’t need full sort.
  • Partition-based thinking.
  • Average O(n) time complexity.

Constraints / gotchas:

  • QuickSelect is average O(n), but worst-case O(n^2) unless you randomize the pivot.
  • Be careful about “kth largest” vs “kth smallest”, and whether k is 0-based or 1-based.
  • This version favors clarity over in-place partitioning, in-place variants reduce memory usage.

Canonical shape (QuickSelect, kth smallest, 0-based k):

import random

def quickselect(nums, k):
    if len(nums) == 1:
        return nums[0]

    pivot = random.choice(nums)
    lows = [x for x in nums if x < pivot]
    highs = [x for x in nums if x > pivot]
    pivots = [x for x in nums if x == pivot]

    if k < len(lows):
        return quickselect(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    return quickselect(highs, k - len(lows) - len(pivots))

Two Heaps Pattern

When to use: Problems requiring maintaining a balanced partition or finding medians dynamically.

Contract: A stream of numbers, return running medians, or maintain a split between “small half” and “large half”.

Typical complexity: (O(\log n)) time per insert, (O(1)) time to read the median, (O(n)) space.

Key characteristics:

  • Need to maintain two halves of data.
  • Find median or balanced partition.
  • Min heap and max heap together.
  • Dynamic data stream.

Common problems:

  • Find median from data stream.
  • Sliding window median.
  • Initial Public Offering (IPO): maximize capital.

How to spot it:

  • “Median” or “balanced” in problem.
  • Need to maintain two halves.
  • Dynamic data (streaming or sliding window).
  • Min heap for larger half, max heap for smaller half.

Canonical shape:

import heapq

class MedianFinder:
    def __init__(self):
        self.max_heap = []  # Smaller half (negated for max heap)
        self.min_heap = []  # Larger half
    
    def add_num(self, num):
        if not self.max_heap or num <= -self.max_heap[0]:
            heapq.heappush(self.max_heap, -num)
        else:
            heapq.heappush(self.min_heap, num)
        
        # Balance heaps
        if len(self.max_heap) > len(self.min_heap) + 1:
            heapq.heappush(self.min_heap, -heapq.heappop(self.max_heap))
        elif len(self.min_heap) > len(self.max_heap) + 1:
            heapq.heappush(self.max_heap, -heapq.heappop(self.min_heap))
    
    def find_median(self):
        if len(self.max_heap) == len(self.min_heap):
            return (-self.max_heap[0] + self.min_heap[0]) / 2
        return -self.max_heap[0] if len(self.max_heap) > len(self.min_heap) else self.min_heap[0]
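One detail worth isolating: Python's heapq is min-heap only, so the "smaller half" max-heap above works by storing negated values. In isolation, the trick looks like this:

```python
import heapq

nums = [5, 1, 3]
max_heap = [-n for n in nums]  # negate so the largest value surfaces first
heapq.heapify(max_heap)

largest = -heapq.heappop(max_heap)  # pops -5, negate back to get 5
```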

Hash-Based Patterns

Hashmaps and Frequency Counting

When to use: Problems requiring counting, grouping, or fast lookups.

Contract: Items plus a keying rule, return counts, groups, or fast membership tests.

Typical complexity: Usually (O(n)) time, (O(n)) space, assuming near-constant average hash operations.

Key characteristics:

  • Need to count occurrences.
  • Group elements by property.
  • Fast lookup required.
  • Frequency-based problems.

Common problems:

  • Two sum.
  • Group anagrams.
  • Longest substring without repeating characters.
  • First unique character.

How to spot it:

  • “Count” or “frequency” in problem.
  • Need fast lookups.
  • Grouping or categorization.
  • O(1) average lookup time.

Constraints / gotchas:

  • Hashmaps trade memory for speed, worst-case performance degrades with heavy collisions.
  • Be explicit about what makes a key “the same” (case sensitivity, normalization, hashing of composite keys).

Canonical shape:

def frequency_count(arr):
    freq = {}
    
    for item in arr:
        freq[item] = freq.get(item, 0) + 1
    
    return freq
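In Python specifically, collections.Counter implements this shape directly and adds conveniences like most_common:

```python
from collections import Counter

freq = Counter(["a", "b", "a"])  # counts: a -> 2, b -> 1
top = freq.most_common(1)        # list of (item, count), highest first
```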

Trie and String Patterns

Trie and Prefix Tree

When to use: Problems involving prefix matching, word search, or string autocomplete.

Contract: A set of strings, support fast prefix queries like “does any word start with prefix p”.

Typical complexity: Insert and lookup are (O(L)) where (L) is word length, space is (O(\text{total characters})).

Key characteristics:

  • Prefix-based operations.
  • String search and matching.
  • Autocomplete functionality.
  • Efficient prefix queries.

Common problems:

  • Implement Trie.
  • Word search II.
  • Longest common prefix.
  • Add and search words.

How to spot it:

  • “Prefix” in problem description.
  • String matching or search.
  • Autocomplete or dictionary.
  • Tree structure for strings.

Constraints / gotchas:

  • Tries use more memory than hash-based lookups, but they make prefix queries fast and predictable.
  • If you only need exact membership checks (not prefixes), a hashmap or hash set is usually simpler and smaller.
  • Define the alphabet and normalization upfront (case-folding, Unicode normalization) or your trie will behave inconsistently.

Canonical shape:

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()
    
    def insert(self, word):
        node = self.root
        for char in word:
            if char not in node.children:
                node.children[char] = TrieNode()
            node = node.children[char]
        node.is_end = True
    
    def starts_with(self, prefix):
        # Walk the prefix; any dead end means no stored word matches.
        node = self.root
        for char in prefix:
            if char not in node.children:
                return False
            node = node.children[char]
        return True

Design Pattern Problems

Design Problems, Least Recently Used Cache, Twitter, Rate Limiter

When to use: Problems requiring designing data structures or systems with specific behaviors.

Contract: A required API plus performance constraints, return a design that meets the complexity requirements.

Typical complexity: Depends on the chosen data structures, most interview-style designs target (O(1)) average time for the core operations.

Key characteristics:

  • Design a data structure.
  • Specific operations and constraints.
  • Performance requirements.
  • Real-world system design.

Common problems:

  • Least Recently Used (LRU) cache.
  • Design Twitter.
  • Rate limiter.
  • Design parking lot.

How to spot it:

  • “Design” in problem description.
  • Specific operations required.
  • Performance constraints.
  • System design aspects.

Canonical shape:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity
    
    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)
        return self.cache[key]
    
    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
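The two OrderedDict operations the cache leans on are worth seeing in isolation: move_to_end marks a key as most recently used, and popitem(last=False) evicts the oldest entry:

```python
from collections import OrderedDict

od = OrderedDict([("a", 1), ("b", 2), ("c", 3)])
od.move_to_end("a")                # order is now b, c, a
evicted = od.popitem(last=False)   # removes ("b", 2), the oldest entry
```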

Bit Manipulation Patterns

Bit Manipulation

When to use: Problems involving optimization, subset generation, or bit-level operations.

Contract: Integers or bitmasks, return a derived value, a transformed representation, or a generated set of subsets.

Typical complexity: Often (O(n)) over input length, or (O(\text{number of bits})) per number, subset generation is (O(2^n)).

Key characteristics:

  • Need to track multiple states or flags.
  • Subset generation without recursion.
  • XOR tricks for finding unique elements.
  • Bit masks for state representation.

Common problems:

  • Single number (find unique element).
  • Subsets generation.
  • Power of two.
  • Number of 1 bits.
  • Reverse bits.

How to spot it:

  • “Unique” or “single” element in array.
  • Need to generate all subsets.
  • Power of two checks.
  • Bit counting or manipulation.
  • Optimization problems with state tracking.

Common bit tricks:

  • Exclusive or (XOR) properties - a ^ a = 0, a ^ 0 = a, a ^ b ^ a = b.
  • Power of two - n > 0 and n & (n - 1) == 0.
  • Set bit - n | (1 << i).
  • Clear bit - n & ~(1 << i).
  • Toggle bit - n ^ (1 << i).

Canonical shape:

def single_number(nums):
    result = 0
    for num in nums:
        result ^= num
    return result

def generate_subsets(nums):
    n = len(nums)
    subsets = []
    for i in range(1 << n):  # 2^n subsets
        subset = []
        for j in range(n):
            if i & (1 << j):
                subset.append(nums[j])
        subsets.append(subset)
    return subsets

def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

Troubleshooting: What Could Go Wrong

Here are common pattern application problems and solutions:

When something breaks, go back to the pattern’s recognition cues first, then re-check the decision rules and constraints.

Problem: Wrong Pattern Selection

Symptoms:

  • Solution feels overly complex.
  • Performance doesn’t match expectations.
  • Difficult to implement or maintain.

Solutions:

  • Re-read problem description for key characteristics.
  • List problem requirements and constraints.
  • Match characteristics to pattern features.
  • Consider simpler patterns first.

Problem: Pattern Misapplication

Symptoms:

  • Pattern seems right but solution doesn’t work.
  • Edge cases break the solution.
  • Performance is worse than expected.

Solutions:

  • Verify pattern assumptions match problem constraints.
  • Check edge cases (empty input, single element, etc.).
  • Review pattern prerequisites (sorted data, specific structure).
  • Test with small examples before full implementation.

Problem: Overlooking Pattern Combinations

Symptoms:

  • Solution works but feels incomplete.
  • Missing optimization opportunities.
  • Complex nested logic.

Solutions:

  • Look for secondary patterns that can simplify code.
  • Break problem into subproblems with distinct patterns.
  • Consider if combining patterns improves clarity or performance.

Reference: Pattern Quick Reference

Use this section when you already recognize the data shape and want a fast mapping from data shape to pattern.

Pattern Selection Cheatsheet

Sorted array problems:

  • Two pointers.
  • Binary search.

Subarray/substring problems:

  • Sliding window.
  • Prefix sum.

Graph/tree problems:

  • Breadth-first search (BFS) and depth-first search (DFS).
  • Topological sort.
  • Tree traversals.
  • Union-Find.
  • Shortest path algorithms.
  • Minimum spanning tree.

Optimization problems:

  • Dynamic programming.
  • Greedy algorithms.

Search problems:

  • Binary search.
  • Hash maps.

Interval problems:

  • Merge intervals.
  • Overlapping intervals.

String problems:

  • Trie.
  • Sliding window.
  • Two pointers.

Design problems:

  • Hash maps.
  • Heaps.
  • Custom data structures.

Bit manipulation problems:

  • XOR tricks.
  • Bit masks.
  • Subset generation.
  • Power of two checks.

Related fundamentals articles:

Algorithm Fundamentals: Fundamentals of Algorithms explains core algorithmic principles and how algorithms work in production systems. Fundamentals of Data Structures teaches you how data structure choices affect algorithm performance.

Software Engineering: Fundamentals of Software Development shows how algorithmic patterns fit into the broader software development process. Fundamentals of Software Design helps you understand how design principles inform pattern selection.

Glossary

Backtracking: A recursive search strategy that builds candidates incrementally, abandoning (backtracking) a path as soon as it violates constraints. Used for N-Queens, sudoku, and subset generation.

Binary search: A search technique that halves the search space each step by comparing a target against the midpoint of a sorted range. Runs in O(log n) time.

Bit manipulation: Using bitwise operators (AND, OR, XOR, shift) to solve problems at the binary level. Common tricks include XOR for uniqueness, bitmasks for subsets, and n & (n − 1) for power-of-two checks.

Cyclic sort: An in-place sorting pattern for arrays whose values map directly to indices (e.g., 1 to n). Each element is swapped to its correct position in O(n) time and O(1) space.

Divide and conquer: A strategy that splits a problem into independent subproblems, solves each recursively, and combines results. Merge sort and quicksort are classic examples.

Dynamic programming: A technique for problems with overlapping subproblems and optimal substructure. Stores previously computed results (memoization or tabulation) to avoid redundant work.

Fast and slow pointers: A two-pointer technique where one pointer advances twice as fast as the other. Detects cycles (Floyd's algorithm) and finds midpoints in linked lists in O(n) time and O(1) space.

Frequency counting: Using a hashmap to tally occurrences of each element. Enables O(n) solutions for problems like finding duplicates, grouping anagrams, and identifying unique elements.

Greedy algorithm: An approach that makes the locally optimal choice at each step, without reconsidering. Correct only when the greedy-choice property holds; otherwise dynamic programming is needed.

Merge intervals: A pattern for combining overlapping ranges by sorting intervals by start time and merging consecutive pairs whose ranges overlap.

Minimum spanning tree: A subset of edges in a weighted graph that connects all vertices with minimum total weight and no cycles. Found with Kruskal's or Prim's algorithm.

Monotonic stack: A stack that maintains elements in increasing or decreasing order. Solves next-greater-element, histogram, and span problems in O(n) time.

Prefix sum: A precomputed array where each entry stores the cumulative sum from the start. Enables O(1) range-sum queries after O(n) preprocessing.

QuickSelect: A selection algorithm based on quicksort partitioning that finds the kth smallest element in average O(n) time without fully sorting the array.

Shortest path: The minimum-cost route between nodes in a graph. Algorithm choice depends on constraints: BFS for unweighted, Dijkstra for non-negative weights, Bellman-Ford for negative weights.

Sliding window: A technique that maintains a contiguous subarray or substring bounded by two indices, expanding and shrinking to satisfy constraints. Solves many subarray problems in O(n) time.

Sweep line: An event-based approach for interval problems. Converts intervals into start and end events, sorts them, and scans to count overlaps or detect conflicts.

Topological sort: A linear ordering of a directed acyclic graph's vertices such that every edge points forward. Used for dependency resolution, build order, and scheduling.

Trie: A tree structure where each node represents a character and paths from root to nodes spell prefixes. Supports O(L) insert and prefix lookup, where L is word length.

Two heaps: A pattern using a max-heap for the smaller half and a min-heap for the larger half of a dataset. Provides O(1) median access and O(log n) insertion for streaming data.

Two pointers: A technique using two indices that move through a sorted sequence to find pairs, remove duplicates, or validate conditions in O(n) time and O(1) space.

Union-Find: A data structure (also called disjoint set) that tracks connected components with near-constant-time union and find operations via path compression and union by rank.

References

Algorithm Pattern Resources

Pattern Recognition and Problem Solving

  • Skiena, S. S. (2020). The Algorithm Design Manual (3rd ed.). Springer. - Comprehensive guide to algorithm design with pattern-based problem solving approach.
  • Laakmann McDowell, G. (2015). Cracking the Coding Interview (6th ed.). CareerCup. - Interview preparation guide that emphasizes pattern recognition.

Practice Platforms

  • LeetCode - Platform for practicing algorithm problems organized by pattern and difficulty.
  • HackerRank - Coding challenges and competitions with pattern-based problem sets.
  • CodeSignal - Interview preparation with pattern-focused practice problems.

Note: Algorithmic patterns remain consistent, but problem variations and best practices evolve. Always verify current approaches for your specific technology stack and use case.