Linked Lists — Pointers, Flexibility, and the Tradeoffs vs Arrays
A scroll-driven visual deep dive into linked lists. Singly, doubly, and circular variants. Understand pointer manipulation, O(1) insertion/deletion, and real-world uses in LRU caches, OS schedulers, and undo systems.
🔗
No contiguous memory.
O(1) insert and delete.
Arrays store elements side by side in memory — fast to read, expensive to resize. Linked lists store each element anywhere in memory, connected by pointers. Insert or delete in O(1) if you have the node reference. The tradeoff? No random access — finding the kth element is O(n). Linked lists power LRU caches, OS task schedulers, undo/redo stacks, and more.
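The pointer-chain idea above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article; the names `Node`, `insert_after`, and `to_list` are invented for the example.

```python
class Node:
    """One linked-list cell: a value plus a pointer to the next cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Build 1 -> 2 -> 3. Each node can live anywhere on the heap;
# only the `next` pointers connect them.
head = Node(1, Node(2, Node(3)))

def insert_after(node, value):
    # O(1): two pointer updates, no shifting of other elements.
    node.next = Node(value, node.next)

insert_after(head, 99)  # list is now 1 -> 99 -> 2 -> 3

def to_list(head):
    # No random access: reaching any position is an O(n) walk from head.
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

Contrast this with a Python list, where `lst.insert(1, 99)` must shift every later element right.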
↓ Scroll to understand when pointers beat arrays
Three Flavors of Linked Lists
Linked Lists vs Arrays: The Complete Tradeoff
Time complexity comparison
Random access: Array O(1) vs Linked List O(n). Arrays use pointer arithmetic (addr = base + i × size) for instant access; linked lists must walk from the head node.
Insert at the front: Array O(n) vs Linked List O(1). Arrays must shift ALL elements right to make room; linked lists just update 2 pointers.
Insert after a known node: Array O(n) vs Linked List O(1). Given a pointer to the node, linked list insertion is just 2 pointer updates; arrays must still shift the remaining elements.
Delete: Array O(n) vs Linked List O(1). Arrays shift all subsequent elements left; linked lists simply re-wire pointers around the deleted node.
Cache locality: Arrays win. Arrays store elements contiguously, which is cache-friendly with CPU prefetching; linked lists scatter nodes across memory with per-node pointer overhead, causing frequent cache misses.
You need to implement an LRU (Least Recently Used) cache with O(1) get and O(1) put. Which data structure combination would you use?
💡 You need O(1) for both finding a key AND maintaining recency order...
This is the classic LRU cache design (LeetCode 146). Hash map: key → linked list node (O(1) lookup). Doubly linked list: most recent at head, least recent at tail. On GET: find the node via the hash map (O(1)), then move it to the head (O(1): a constant number of pointer re-wires). On PUT: if the key exists, update its value and move it to the head; if capacity is exceeded, remove the tail node (O(1)) and delete it from the hash map. The doubly linked list is essential: you can't move a node to the front of a singly linked list in O(1), because you'd need a reference to its previous node.
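The design just described can be sketched as a hash map plus a doubly linked list with sentinel nodes. This is an illustrative implementation, not the article's own code; the class and method names (`LRUCache`, `_unlink`, `_push_front`) are assumptions.

```python
class _DNode:
    """Doubly linked list node holding a cache entry."""
    def __init__(self, key=None, val=None):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.map = {}                 # key -> node: O(1) lookup
        self.head = _DNode()          # sentinel: most-recent side
        self.tail = _DNode()          # sentinel: least-recent side
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        # O(1): re-wire the neighbors around `node`.
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        # O(1): splice `node` in right after the head sentinel.
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._unlink(node)            # mark as most recently used
        self._push_front(node)
        return node.val

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.val = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) == self.cap:
            lru = self.tail.prev      # evict least recently used
            self._unlink(lru)
            del self.map[lru.key]
        node = _DNode(key, value)
        self.map[key] = node
        self._push_front(node)
```

The sentinels remove all edge cases: every real node always has a `prev` and a `next`, so `_unlink` and `_push_front` never branch on empty or single-element lists.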
Classic Linked List Operations
Essential pointer manipulation patterns
The most interview-tested operation. Use three pointers (prev, curr, next) and re-point each node backward in a single O(n) pass.
prev=null, curr=head → curr.next=prev → shift right.
Floyd's tortoise & hare: the slow pointer moves 1 step, the fast pointer moves 2. If they meet, a cycle exists; if fast reaches null, there is no cycle. O(n) time, O(1) space.
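The three-pointer reversal can be sketched as follows. A minimal `Node` class is included so the snippet stands alone; the helper names are illustrative.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list in one O(n) pass, O(1) space."""
    prev, curr = None, head
    while curr:
        nxt = curr.next       # save the forward link before overwriting it
        curr.next = prev      # re-point this node backward
        prev, curr = curr, nxt  # shift both pointers one step right
    return prev               # prev ends up at the old tail = new head

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

Saving `curr.next` before overwriting it is the step most often fumbled in interviews: without it, the rest of the list is lost.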
Slow pointer advances 1 step while fast advances 2. When fast reaches the end, slow is at the middle. Used in merge sort on linked lists and palindrome checking.
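The slow/fast middle-finding pattern is a few lines; this sketch reuses an inline `Node` class so it runs standalone.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def middle(head):
    """Return the middle node (second of two middles for even lengths)."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # 1 step
        fast = fast.next.next     # 2 steps
    return slow
```

When `fast` hits the end after n steps, `slow` has taken n/2 steps: exactly the midpoint, found in one pass without knowing the length.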
Compare heads of both lists, append the smaller node, advance that pointer. Runs in O(n + m) time and forms the foundation of merge sort on linked lists.
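Merging two sorted lists can be sketched with a dummy head to avoid special-casing the first node. The `Node` class and helper names are illustrative.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge(a, b):
    """Merge two sorted lists in O(n + m) time by splicing nodes."""
    dummy = tail = Node(0)        # dummy head: no empty-list edge case
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b            # append whichever list has leftovers
    return dummy.next

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

Note that no nodes are copied: the merge just re-wires existing `next` pointers, which is why merge sort is a natural fit for linked lists.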
You have a singly linked list that MAY contain a cycle. How do you detect it in O(1) space?
💡 Can you solve this without any extra data structures?
Floyd's algorithm works because if there's a cycle, the fast pointer 'laps' the slow pointer inside the cycle. Think of it like two runners on a circular track: the faster one eventually catches up. The slow pointer moves 1 step per iteration, the fast moves 2. If the cycle has length C, the fast pointer closes the gap by 1 node per iteration, so they meet after at most C iterations once both are inside the cycle. A hash set of visited nodes also works, but it uses O(n) space. Floyd's is optimal: O(n) time, O(1) space.
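Floyd's tortoise-and-hare check fits in a handful of lines; this sketch includes its own `Node` class so it runs standalone.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    """Floyd's cycle detection: O(n) time, O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # tortoise: 1 step
        fast = fast.next.next     # hare: 2 steps
        if slow is fast:          # identity check, not value equality
            return True
    return False                  # fast fell off the end: no cycle
```

Comparing nodes with `is` rather than `==` matters: two distinct nodes may hold equal values, but only a cycle makes the pointers land on the same object.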
Where Linked Lists Shine in Production
Design an LRU cache that supports get(key) and put(key, value) both in O(1) time. Which data structure combination do you need, and why?
💡 You need O(1) for BOTH finding a key AND updating its position in the access order...
The LRU cache is the classic linked-list interview question. A hash map alone can't track access order. A BST or heap gives O(log n), not O(1). The doubly linked list + hash map combo: the map stores key → list-node pointers, and the list maintains recency. Moving a node to the head is O(1) because the list is doubly linked, so each node knows its predecessor; removing the tail is also O(1). This pattern shows up in real systems: Memcached maintains exactly this kind of LRU list, and browser caches use similar schemes (Redis, by contrast, approximates LRU by sampling rather than keeping an exact list). The insight is that you need two data structures cooperating: one for fast lookup, one for ordered eviction.
🎓 What You Now Know
✓ Linked lists trade random access for O(1) insert/delete — ideal when you frequently add/remove elements and rarely need index access.
✓ Doubly linked lists enable O(1) removal given a node — essential for LRU caches and undo systems.
✓ Cache locality often matters more than Big O — arrays outperform linked lists in most benchmarks due to CPU prefetching.
✓ Floyd’s cycle detection uses O(1) space — the tortoise-and-hare pattern is a fundamental technique.
Linked lists are building blocks for more complex structures — trees, graphs, hash tables with chaining, and skip lists all use linked nodes. ⚡
↗ Keep Learning
Hash Tables — O(1) Access That Powers Everything
A scroll-driven visual deep dive into hash tables. From hash functions to collision resolution (chaining vs open addressing) to load factors — understand the data structure behind caches, databases, and deduplication.
Trees — Hierarchies, Traversals, and Balanced Search
A scroll-driven visual deep dive into tree data structures. Binary trees, BSTs, AVL trees, heaps, and B-trees — with real-world applications in databases, file systems, priority queues, and decision engines.
Sorting Algorithms — From Bubble Sort to Radix Sort
A scroll-driven visual deep dive into sorting algorithms. Compare bubble, insertion, merge, quick, heap, counting, and radix sort — with complexity analysis, stability, and real-world applications in databases, graphics, and event systems.