Big O Notation - The Basics

Data Structures & Algorithms

Why Big O Matters

Understanding Big O isn’t about memorizing formulas—it’s about developing intuition for how algorithms scale. When your app works fine with 100 users but crashes with 1,000, that’s a Big O problem. This guide gives you the practical tools to predict and prevent performance issues before they happen.

Real impact: The difference between O(n) and O(n²) might seem small on paper, but with a million items at a billion operations per second, it’s the difference between a scan finishing in about a millisecond (10⁶ operations) and one grinding for roughly 17 minutes (10¹² operations).


What is Big O Notation?

Mathematical notation introduced by Paul Bachmann (1894) and popularized by Edmund Landau (1909), later adopted by computer science for analyzing algorithm complexity

Big O notation describes how an algorithm’s runtime or space requirements grow as input size increases. It gives us a way to compare algorithms without getting bogged down in implementation details or hardware differences.

The “O” stands for “Order of” (as in “order of magnitude”). Formally, f(n) = O(g(n)) means f(n) grows no faster than g(n) as n approaches infinity, ignoring constant factors.

Key insight: We care about the pattern of growth, not exact timing.

The Mathematical Intuition

Think of Big O as asking: “What’s the worst-case scenario as data gets really large?”

  • O(1) - “No matter how much data I have, this takes the same time”
  • O(n) - “Double the data, double the time”
  • O(n²) - “Double the data, quadruple the time”
  • O(log n) - “Even massive data increases time just a little”
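
To make these patterns concrete, here is a small illustrative sketch (in C#, like the rest of this guide) that tabulates operation counts, not timings, for a few input sizes:

foreach (long n in new long[] { 1_000, 2_000, 1_000_000 })
{
    // Operation counts for each complexity class at input size n
    Console.WriteLine($"n = {n,9}:  O(log n) ≈ {Math.Log2(n):F0}   O(n) = {n:N0}   O(n²) = {n * n:N0}");
}
// Doubling n from 1,000 to 2,000: O(n) doubles, O(n²) quadruples,
// and O(log n) barely moves.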

Core Big O Categories

O(1) - Constant Time

What it means: Performance doesn’t change with input size
Examples: Array access by index, hash table lookup
Real world: Getting a book from a specific shelf

public static int GetFirst(int[] array)
{
    return array[0];  // Always takes same time, regardless of array size
}

O(log n) - Logarithmic Time

What it means: Performance grows slowly even with massive input
Examples: Binary search, balanced tree operations
Real world: Finding a word in a dictionary (flip to middle, eliminate half, repeat)

Why it’s so efficient: Logarithmic algorithms eliminate a constant fraction (typically half) of the remaining possibilities with each step. With 1 million items, log₂(1,000,000) ≈ 20 steps. With 1 billion items, only ≈ 30 steps!

The logarithm base: In Big O, we write O(log n) without specifying the base because all logarithms differ only by a constant factor (log₂ n = log₁₀ n / log₁₀ 2), and Big O ignores constants.
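
A quick sanity check of both facts, using .NET’s Math.Log2 and Math.Log10 (illustrative only):

double n = 1_000_000;
// log₂(1,000,000) ≈ 19.93: about 20 halvings reduce a million items to one
Console.WriteLine(Math.Log2(n));
// Changing the base only divides by a constant (log₁₀ 2), so Big O drops it
Console.WriteLine(Math.Log10(n) / Math.Log10(2));  // prints the same value

Binary search is the canonical logarithmic algorithm: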

public static int BinarySearch(int[] sortedArray, int target)
{
    int left = 0, right = sortedArray.Length - 1;

    while (left <= right)
    {
        int mid = left + (right - left) / 2;

        if (sortedArray[mid] == target) return mid;
        if (sortedArray[mid] < target) left = mid + 1;
        else right = mid - 1;
    }

    return -1;
}
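
A quick usage sketch (note the precondition: the input must already be sorted, or the halving logic breaks):

int[] data = { 2, 5, 8, 12, 16, 23, 38, 56, 72, 91 };  // already sorted
Console.WriteLine(BinarySearch(data, 23));   // 5  (found at index 5)
Console.WriteLine(BinarySearch(data, 40));   // -1 (not present)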

O(n) - Linear Time

What it means: Performance grows proportionally with input
Examples: Single loop through data, linear search
Real world: Reading every page in a book

public static int FindMax(int[] array)
{
    int max = array[0];  // Assumes a non-empty array

    foreach (int num in array)  // Visit each element once
    {
        if (num > max) max = num;
    }

    return max;
}

O(n log n) - Linearithmic Time

What it means: Efficient divide-and-conquer; the optimal growth rate for comparison-based sorting
Examples: Merge sort, heap sort
Real world: Organizing a large library by author

// Merge sort example - divides problem in half each time
public static void MergeSort(int[] array, int left, int right)
{
    if (left < right)
    {
        int mid = left + (right - left) / 2;

        MergeSort(array, left, mid);        // Sort left half
        MergeSort(array, mid + 1, right);   // Sort right half
        Merge(array, left, mid, right);     // Combine results
    }
}
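
Merge itself isn’t shown above; here is a minimal sketch of the standard two-pointer merge, assuming the same inclusive left/right index convention:

private static void Merge(int[] array, int left, int mid, int right)
{
    int[] temp = new int[right - left + 1];
    int i = left, j = mid + 1, k = 0;

    // Repeatedly take the smaller head of the two sorted halves
    while (i <= mid && j <= right)
        temp[k++] = (array[i] <= array[j]) ? array[i++] : array[j++];

    while (i <= mid) temp[k++] = array[i++];     // drain left half
    while (j <= right) temp[k++] = array[j++];   // drain right half

    Array.Copy(temp, 0, array, left, temp.Length);
}

Note the temp buffer: merge sort buys its O(n log n) time with O(n) extra space.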

O(n²) - Quadratic Time

What it means: Performance grows quadratically with input (not exponentially - that’s O(2ⁿ))
Examples: Nested loops, bubble sort, selection sort, naive pairwise comparisons
Real world: Comparing every person in a room to every other person (the handshake problem)

The scaling problem: If n doubles, runtime quadruples. With 1,000 items = 1 million operations. With 10,000 items = 100 million operations.

public static List<(int, int)> FindAllPairs(int[] array)
{
    var pairs = new List<(int, int)>();

    for (int i = 0; i < array.Length; i++)        // Outer loop: n times
    {
        for (int j = i + 1; j < array.Length; j++)  // Inner loop: up to n times
        {
            pairs.Add((array[i], array[j]));        // n(n-1)/2 ≈ n²/2 pairs: still O(n²)
        }
    }

    return pairs;
}

O(2ⁿ) - Exponential Time

What it means: Runtime doubles with each additional input element
Examples: Recursive Fibonacci, brute-force password cracking
Real world: Trying every possible combination of a lock

public static long FibonacciNaive(int n)
{
    if (n <= 1) return n;

    // Each call spawns 2 more calls - exponential explosion!
    return FibonacciNaive(n - 1) + FibonacciNaive(n - 2);
}
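
The standard fix is memoization: cache each result so every subproblem is solved once, which drops the cost to O(n). A minimal sketch:

public static long Fibonacci(int n, Dictionary<int, long> memo = null)
{
    memo ??= new Dictionary<int, long>();
    if (n <= 1) return n;
    if (memo.TryGetValue(n, out long cached)) return cached;

    long result = Fibonacci(n - 1, memo) + Fibonacci(n - 2, memo);
    memo[n] = result;   // each value computed once - O(n) total calls
    return result;
}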

Quick Rules for Analysis

1. Drop Constants

  • O(2n) becomes O(n)
  • O(100) becomes O(1)
  • O(n/2) becomes O(n)

Why? Big O cares about growth patterns, not exact coefficients.
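
For example, two sequential passes over the same array cost 2n operations, which is still O(n); an illustrative sketch:

public static (int Min, int Max) MinMax(int[] array)
{
    int min = array[0], max = array[0];   // assumes a non-empty array

    foreach (int num in array)            // first pass: n operations
        if (num < min) min = num;

    foreach (int num in array)            // second pass: n more operations
        if (num > max) max = num;

    return (min, max);                    // 2n total -> O(n)
}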

2. Drop Lower-Order Terms

  • O(n² + n) becomes O(n²)
  • O(n log n + n) becomes O(n log n)
  • O(n + 1000) becomes O(n)

Why? With large inputs, the highest-order term dominates.
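
Concretely, a nested loop followed by a single loop performs n² + n operations; at n = 1,000 that’s 1,000,000 + 1,000, so the lower-order term is noise. A hypothetical sketch:

public static void DominantTermDemo(int n)
{
    int count = 0;

    for (int i = 0; i < n; i++)       // n² operations...
        for (int j = 0; j < n; j++)
            count++;

    for (int i = 0; i < n; i++)       // ...plus n more
        count++;

    Console.WriteLine(count);         // n² + n -> O(n²)
}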

3. Different Inputs = Different Variables

// This is O(a × b), NOT O(n²)
for (int i = 0; i < arrayA.Length; i++)     // a iterations
{
    for (int j = 0; j < arrayB.Length; j++) // b iterations
    {
        // Some operation
    }
}

Space Complexity Basics

Space complexity follows the same notation but measures memory usage:

O(1) Space - Constant Memory

public static int Sum(int[] array)
{
    int total = 0;           // Fixed memory usage
    foreach (int num in array)
    {
        total += num;        // No additional memory per element
    }
    return total;
}

O(n) Space - Linear Memory

public static int[] CopyArray(int[] original)
{
    int[] copy = new int[original.Length];  // Memory grows with input size

    for (int i = 0; i < original.Length; i++)
    {
        copy[i] = original[i];
    }

    return copy;
}
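
One easy-to-miss case: recursion consumes stack memory. A recursive rewrite of Sum, sketched below, keeps one stack frame per element, so it uses O(n) space even though it never allocates an array:

public static int SumRecursive(int[] array, int index = 0)
{
    if (index >= array.Length) return 0;

    // Each pending call holds a stack frame: n frames = O(n) space
    return array[index] + SumRecursive(array, index + 1);
}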

Practical Rules of Thumb

Input Size Guidelines

  • O(1), O(log n): Can handle any reasonable input size
  • O(n), O(n log n): Good up to millions of items
  • O(n²): Acceptable up to thousands of items
  • O(2ⁿ): Only works for very small inputs (n < 25)

Quick Algorithm Assessment

// Single loop = O(n)
for (int i = 0; i < n; i++) { /* ... */ }

// Nested loops = O(n²)
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++) { /* ... */ }

// Halving each iteration = O(log n)
while (n > 1) { n = n / 2; }

// Recursive with two calls = O(2ⁿ)
static long RecursiveFunction(int n)
{
    if (n <= 1) return 1;
    return RecursiveFunction(n - 1) + RecursiveFunction(n - 1);
}

Common Misconceptions

❌ “Big O gives exact runtime”

Reality: Big O shows growth patterns, not precise timing

❌ “O(n) is always faster than O(n²)”

Reality: With small inputs, O(n²) might be faster due to lower overhead

❌ “Better Big O always means better algorithm”

Reality: Consider constants, memory usage, and real-world constraints

❌ “Optimize everything to O(1)”

Reality: Sometimes O(n) with simple code beats O(1) with complex overhead


Quick Reference

Complexity   Growth Rate    Max Practical Input   Common Examples
O(1)         Constant       Any size              Array access, hash lookup
O(log n)     Logarithmic    Billions              Binary search, balanced trees
O(n)         Linear         Millions              Single loop, linear search
O(n log n)   Linearithmic   Millions              Merge sort, heap sort
O(n²)        Quadratic      Thousands             Nested loops, bubble sort
O(2ⁿ)        Exponential    ~20                   Naive recursion, subsets

Pattern Recognition:

  • Single loop → O(n)
  • Nested same-size loops → O(n²)
  • Halving each iteration → O(log n)
  • Divide and conquer → Often O(n log n)
