
Sorting Algorithms — From Bubble Sort to Radix Sort

A scroll-driven visual deep dive into sorting algorithms. Compare bubble, insertion, merge, quick, heap, counting, and radix sort — with complexity analysis, stability, and real-world applications in databases, graphics, and event systems.

Introduction

Unsorted to sorted.
The most studied problem in CS.

Sorting is everywhere: database queries (ORDER BY), search result ranking, rendering layers in games, scheduling events, and binary search preprocessing. On 1 billion elements, at a billion operations per second, the difference between O(n²) and O(n log n) is the difference between about 30 seconds and about 30 years. Understanding WHY quick sort is fast, WHEN merge sort is better, and HOW radix sort breaks the comparison barrier will make you a better engineer.

↓ Scroll through every major sorting algorithm, from simple to mind-bending


O(n²) Sorts: Simple but Slow

Bubble Sort
• Swap adjacent pairs if out of order. Repeat until no swaps occur.
• Time: O(n²) · Space: O(1) · Stable: Yes
• Use: teaching only. Never in production.

Selection Sort
• Find the minimum, put it at position i. Repeat for i+1, i+2, …
• Time: O(n²) always · Space: O(1) · Stable: No
• Use: minimizing swaps (when writes are expensive).

Insertion Sort
• Take the next element and insert it into the sorted portion, shifting larger elements right.
• Time: O(n²) worst, O(n) best · Space: O(1) · Stable: Yes
• Use: small arrays (n ≤ 50) or nearly-sorted data.
Bubble sort, selection sort, and insertion sort — O(n²) average and worst case
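Insertion sort's best case is worth seeing in code: on nearly-sorted input the inner loop exits almost immediately, so a full pass is close to O(n). A minimal Python sketch (the function name and test array are our own, not from any particular library):

```python
def insertion_sort(arr):
    """Sort arr in place: O(n²) worst case, close to O(n) on nearly-sorted input."""
    for i in range(1, len(arr)):
        key = arr[i]  # next element to place into the sorted prefix arr[0:i]
        j = i - 1
        # Shift larger elements one slot right to open a gap for key.
        # On nearly-sorted data this loop exits almost immediately.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # drop key into the gap
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```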
🟢 Knowledge Check: Quick Check

You have an array of 1000 elements that's already 95% sorted (only ~50 elements are out of place). Which O(n²) sort would be fastest?


Divide and Conquer: Merge Sort and Quick Sort

Merge Sort
• [38, 27, 43, 3, 9, 82, 10] → split → [38, 27, 43, 3] and [9, 82, 10] → split recursively to size 1 → merge sorted halves → [3, 9, 10, 27, 38, 43, 82]
• Time: O(n log n) always · Space: O(n), needs an extra array · Stable: Yes
• Predictable: no bad worst case
• Used by: Python (Timsort), Java (Arrays.sort for objects)

Quick Sort
• [38, 27, 43, 3, 9, 82, 10] → choose a pivot (e.g. 27) → [3, 9, 10] | 27 | [38, 43, 82] → partition recursively → no merge step needed, it sorts in place → [3, 9, 10, 27, 38, 43, 82]
• Time: O(n log n) average · Space: O(log n), in place · Stable: No
• Worst case: O(n²) with bad pivots
• Used by: C stdlib qsort, Java (primitives), V8
Merge sort: split evenly, merge sorted halves. Quick sort: partition around pivot, recurse on halves.
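To ground the merge sort column, here is a minimal Python sketch (our own illustrative code, not CPython's actual Timsort):

```python
def merge_sort(arr):
    """Stable O(n log n) sort; returns a new list, using O(n) extra space."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # sort left half
    right = merge_sort(arr[mid:])  # sort right half
    # Merge the two sorted halves; <= keeps equal elements in their
    # original order, which is exactly what stability means.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # one side is exhausted;
    merged.extend(right[j:])  # append the leftovers of the other
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```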

Why divide-and-conquer gives O(n log n)

1. T(n) = 2T(n/2) + O(n) — the recurrence for merge sort.
Split into 2 halves (2T(n/2)), then merge in linear time (O(n)).
2. log₂(n) levels of recursion — each level does O(n) total work.
Level 0: 1 problem of size n. Level 1: 2 problems of size n/2. Level k: 2^k problems of size n/2^k.
3. Total: O(n) work × log₂(n) levels = O(n log n).
For n = 1 billion: n log n ≈ 30 billion ops, while n² = 10^18 ops. That's 33 million times faster.
4. Quick sort: O(n log n) average but O(n²) worst case.
Worst case: the pivot is always the min or max, giving partitions of size 0 and n−1. Fix: a randomized pivot or median-of-three, as in the sketch below.
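Here is what the randomized-pivot fix from step 4 looks like, as a minimal Python sketch (Lomuto partitioning chosen for brevity; production libraries use more refined schemes):

```python
import random

def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort. The random pivot makes the O(n²) worst case
    vanishingly unlikely, even on already-sorted input."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    # Randomized pivot: swap a random element into the last slot.
    p = random.randint(lo, hi)
    arr[p], arr[hi] = arr[hi], arr[p]
    pivot = arr[hi]
    # Lomuto partition: everything <= pivot moves to the front.
    i = lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # pivot lands at its final index i
    quick_sort(arr, lo, i - 1)         # recurse on the two partitions;
    quick_sort(arr, i + 1, hi)         # no merge step is needed
    return arr

print(quick_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```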
🟡 Knowledge Check: Checkpoint

Quick sort is O(n²) in the worst case but usually faster than merge sort in practice. Why?


Heap Sort: The Best of Both Worlds?

Heap sort combines in-place operation with guaranteed O(n log n) time

1. Build a max-heap from the array → O(n).
Bottom-up heapify: process each node starting from the last non-leaf. The total work converges to O(n), not O(n log n).
2. Repeatedly extract the max: swap the root with the last element, then heapify the root → O(n log n).
Each extraction: swap the root to the end (O(1)), shrink the heap by 1, sift the root down (O(log n)). Repeat n times.
3. Time: O(n log n) worst case, guaranteed. Space: O(1) — truly in-place.
Unlike merge sort (which needs O(n) space) or quick sort (which has an O(n²) worst case), heap sort gives both guarantees.
4. But: poor cache locality makes it slower in practice than quick sort.
Heap operations jump between parent (i) and children (2i+1, 2i+2) — non-sequential access. Quick sort scans sequentially. A full sketch follows below.
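Both steps fit in a short program. A minimal in-place heap sort in Python (names and comments are our own):

```python
def heap_sort(arr):
    """In-place, guaranteed O(n log n): build a max-heap, then extract maxima."""
    n = len(arr)

    def sift_down(i, end):
        # Restore the max-heap property for the subtree rooted at i,
        # looking only at indices below end. Note the parent/child jumps
        # (i -> 2i+1, 2i+2): this is the cache-unfriendly access pattern.
        while True:
            largest, left, right = i, 2 * i + 1, 2 * i + 2
            if left < end and arr[left] > arr[largest]:
                largest = left
            if right < end and arr[right] > arr[largest]:
                largest = right
            if largest == i:
                return
            arr[i], arr[largest] = arr[largest], arr[i]
            i = largest

    # Step 1: bottom-up heapify from the last non-leaf node -> O(n) total.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Step 2: swap the max to the end, shrink the heap, sift the new root down.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr

print(heap_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```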
🔴 Knowledge Check: Challenge

Java's Arrays.sort uses dual-pivot quicksort for primitives but merge sort (Timsort variant) for objects. Why different algorithms for different data types?


Breaking the O(n log n) Barrier

Counting Sort
• Count occurrences of each value, then reconstruct the output from the counts.
• Time: O(n + k) · Space: O(k), where k = value range · Stable: Yes
• Best for: small integer ranges (ages, scores)

Radix Sort
• Sort by the least significant digit first, then the next, using a stable sub-sort.
• Time: O(d × (n + k)) · Space: O(n + k) · Stable: Yes
• Best for: fixed-length keys (SSNs, IPs, dates)

Bucket Sort
• Distribute elements into buckets by range, sort each bucket, then concatenate.
• Time: O(n) average · Space: O(n + k) · Stable: Yes (if the sub-sort is)
• Best for: uniformly distributed floats
Counting sort, radix sort, and bucket sort — O(n) when input has bounded range
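Counting sort is the easiest of the three to write down, and it doubles as the stable sub-sort that radix sort runs once per digit. A minimal Python sketch (our own code, assuming non-negative integers below k):

```python
def counting_sort(arr, k):
    """Stable O(n + k) sort for integers in the range [0, k)."""
    counts = [0] * k
    for x in arr:          # pass 1: count occurrences of each value
        counts[x] += 1
    # Pass 2: prefix sums turn counts[v] into the first output index
    # for value v.
    total = 0
    for v in range(k):
        counts[v], total = total, total + counts[v]
    out = [0] * len(arr)
    for x in arr:          # pass 3: place elements; scanning the input
        out[counts[x]] = x # in original order is what makes this stable
        counts[x] += 1
    return out

print(counting_sort([4, 1, 3, 4, 3], k=5))  # [1, 3, 3, 4, 4]
```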

Sorting in Production Systems

🗄️ Databases (ORDER BY)
• External merge sort for data on disk
• Quicksort / Timsort when data fits in memory
• Indexes avoid sorting entirely
PostgreSQL, MySQL, SQLite

🎮 Graphics / Rendering
• Painter's algorithm: sort geometry by depth
• Transparency: back-to-front sort
• Radix sort for particle systems
Unity, Unreal Engine, WebGL

📧 Event Processing
• Timestamp-ordered event logs
• Merge sort for streaming data
• Priority queue (heap) for scheduling
Kafka, Flink, Apache Beam

🔬 Scientific Computing
• Spatial partitioning (k-d tree sort)
• Median finding (quickselect)
• Histogram construction
NumPy, MATLAB, R
Different sorting algorithms optimized for different real-world constraints
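The database row deserves a sketch, since external merge sort is the standard answer when data exceeds RAM: sort memory-sized runs, spill each to disk, then stream a k-way merge. This is a simplified illustration under our own assumptions (newline-terminated string records, an arbitrary run_size), not any engine's actual implementation:

```python
import heapq
import os
import tempfile

def _spill(sorted_lines):
    """Write one sorted run to a temp file and return its path."""
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    f.writelines(sorted_lines)
    f.close()
    return f.name

def external_sort(lines, run_size=1_000_000):
    """Yield all lines in sorted order while holding only one run in memory."""
    run_paths, run = [], []
    for line in lines:                 # phase 1: sort runs in memory
        run.append(line)
        if len(run) >= run_size:
            run_paths.append(_spill(sorted(run)))
            run = []
    if run:
        run_paths.append(_spill(sorted(run)))
    # Phase 2: heapq.merge streams the k sorted runs, keeping a heap of
    # just one candidate line per run.
    files = [open(p) for p in run_paths]
    try:
        yield from heapq.merge(*files)
    finally:
        for f in files:
            f.close()
            os.remove(f.name)
```

With ISO-8601 timestamps at the start of each line, lexicographic order is chronological order, so sorting whole lines suffices.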
🔴 Knowledge Check: Challenge

You have 100GB of log entries that don't fit in RAM (8GB available). Each has a timestamp and you need them sorted by time. Which approach works?

🎓 What You Now Know

O(n²) sorts (bubble, selection, insertion) are only useful for tiny arrays — but insertion sort is great for nearly-sorted data and is used inside hybrid sorts.

Merge sort guarantees O(n log n) and is stable — but needs O(n) extra space. Quick sort is faster in practice but has O(n²) worst case.

Heap sort is optimal in theory (O(n log n), O(1) space) — but poor cache locality makes it slower than quicksort in practice.

Non-comparison sorts (counting, radix, bucket) break the O(n log n) barrier — when data has bounded range, they achieve O(n) time.

Production sorts are hybrids — Timsort (merge + insertion), introsort (quick + heap), dual-pivot quicksort. They combine the best properties of multiple algorithms.

Sorting is a building block: binary search needs sorted data, merge sort powers external sorting, heaps power priority queues, and radix sort powers GPU rendering. ⚡
