9 min · deep-dive · data-structures · algorithms

Linked Lists — Pointers, Flexibility, and the Tradeoffs vs Arrays

A scroll-driven visual deep dive into linked lists. Singly, doubly, and circular variants. Understand pointer manipulation, O(1) insertion/deletion, and real-world uses in LRU caches, OS schedulers, and undo systems.

Introduction

No contiguous memory.
O(1) insert and delete.

Arrays store elements side by side in memory — fast to read, expensive to resize. Linked lists store each element anywhere in memory, connected by pointers. Insert or delete in O(1) if you have the node reference. The tradeoff? No random access — finding the kth element is O(n). Linked lists power LRU caches, OS task schedulers, undo/redo stacks, and more.

↓ Scroll to understand when pointers beat arrays

Linked List Types

Three Flavors of Linked Lists

Singly Linked List: A →next→ B →next→ C →next→ null (one direction →)
Doubly Linked List: A ⇄prev|next⇄ B ⇄prev|next⇄ C (← both ways →)
Circular Linked List: A → B → C → back to A (loops back ↻)
Singly linked (forward only), doubly linked (forward + backward), circular (wraps around)
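
To make the three node shapes concrete, here is a minimal Python sketch; the class and field names (SinglyNode, DoublyNode, value, prev, next) are illustrative, not taken from any particular library.

```python
class SinglyNode:
    """Node for a singly linked list: one forward pointer."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next            # -> next node, or None at the tail

class DoublyNode:
    """Node for a doubly linked list: forward and backward pointers."""
    def __init__(self, value, prev=None, next=None):
        self.value = value
        self.prev = prev            # -> previous node, or None at the head
        self.next = next            # -> next node, or None at the tail

# A circular list reuses either node type; only the wiring differs:
# the last node's next points back to the first node instead of None.
a, b, c = SinglyNode("A"), SinglyNode("B"), SinglyNode("C")
a.next, b.next, c.next = b, c, a    # C -> A closes the loop
```
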
vs Arrays

Linked Lists vs Arrays: The Complete Tradeoff

Time complexity comparison

📍 Access by Index

Arrays use pointer arithmetic (addr = base + i × size) for instant O(1) access. Linked lists must walk from the head node, costing O(n).

Array O(1) vs Linked List O(n)
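
A minimal Python sketch of that walk, assuming an illustrative Node class and a get_at helper:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def get_at(head, index):
    """Walk `index` hops from the head: O(n), vs. an array's addr = base + i * size."""
    node = head
    for _ in range(index):
        if node is None:
            raise IndexError("index out of range")
        node = node.next
    if node is None:
        raise IndexError("index out of range")
    return node.value

head = Node(10, Node(20, Node(30)))
print(get_at(head, 2))   # 30, reached only after following two links
```
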
Insert at Head

Arrays must shift all existing elements one slot to the right to make room, costing O(n). A linked list just updates two pointers (the new node's next and the head reference) for O(1) insertion.

Array O(n) vs Linked List O(1)
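
A sketch of head insertion on a singly linked list, again with an illustrative Node class:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def insert_at_head(head, value):
    """O(1): the new node points at the old head and becomes the new head."""
    return Node(value, next=head)

head = Node(2, Node(3))
head = insert_at_head(head, 1)   # list is now 1 -> 2 -> 3, nothing was shifted
```
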
🔗 Insert at Position

Given a pointer to the node, linked-list insertion is just two pointer updates, O(1). Arrays must still shift every element after that position, O(n).

Array O(n) vs Linked List O(1)
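
A sketch of splicing a new node in after a given node, with illustrative names:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def insert_after(node, value):
    """O(1) given the node: splice a new node between `node` and its successor."""
    node.next = Node(value, next=node.next)

head = Node(1, Node(3))
insert_after(head, 2)            # list is now 1 -> 2 -> 3
```
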
✂️ Delete (Given Pointer)

Arrays shift all subsequent elements left, costing O(n). Linked lists simply re-wire the pointers around the deleted node for O(1) removal (in a singly linked list this needs a pointer to the predecessor; a doubly linked list already carries it).

Array O(n) vs Linked List O(1)
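
A sketch of the doubly-linked case, where the node's own prev and next pointers make the re-wiring O(1); names are illustrative:

```python
class DNode:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def unlink(node):
    """O(1) removal from a doubly linked list: re-wire the neighbours around `node`."""
    if node.prev is not None:
        node.prev.next = node.next
    if node.next is not None:
        node.next.prev = node.prev
    node.prev = node.next = None   # fully detach the removed node

a, b, c = DNode("A"), DNode("B"), DNode("C")
a.next, b.prev = b, a
b.next, c.prev = c, b
unlink(b)                          # list is now A <-> C; no elements were shifted
```
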
📦 Memory Layout

Arrays store elements contiguously — cache-friendly with CPU prefetching. Linked lists scatter nodes across memory with per-node pointer overhead, causing frequent cache misses.

🟡 Checkpoint: Knowledge Check

You need to implement an LRU (Least Recently Used) cache with O(1) get and O(1) put. Which data structure combination would you use?

Key Operations

Classic Linked List Operations

Essential pointer manipulation patterns

🔄 Reverse a List

The most interview-tested operation. Use three pointers (prev, curr, next) and re-point each node backward in a single O(n) pass.

prev = null, curr = head → set curr.next = prev → shift prev, curr, and next one step right
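
A sketch of the three-pointer reversal in Python, with illustrative names:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(head):
    """Iteratively reverse a singly linked list in one O(n) pass, O(1) extra space."""
    prev = None
    curr = head
    while curr is not None:
        nxt = curr.next      # remember the rest of the list
        curr.next = prev     # re-point this node backward
        prev = curr          # shift prev and curr one step right
        curr = nxt
    return prev              # prev is the new head
```
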
🔍 Detect a Cycle

Floyd's tortoise & hare: slow pointer moves 1 step, fast moves 2. If they meet, a cycle exists. If fast reaches null, no cycle. O(n) time, O(1) space.
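
A sketch of Floyd's algorithm, with illustrative names:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def has_cycle(head):
    """Floyd's tortoise and hare: O(n) time, O(1) space."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next           # 1 step
        fast = fast.next.next      # 2 steps
        if slow is fast:           # the pointers can only meet inside a loop
            return True
    return False                   # fast fell off the end: no cycle
```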

📍 Find Middle

Slow pointer advances 1 step while fast advances 2. When fast reaches the end, slow is at the middle. Used in merge sort on linked lists and palindrome checking.
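
A sketch of the slow/fast middle-finding pass:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def middle(head):
    """Slow/fast pointers: when fast reaches the end, slow is at the middle."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
    return slow   # for even lengths this is the second of the two middle nodes
```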

🔗 Merge Two Sorted

Compare heads of both lists, append the smaller node, advance that pointer. Runs in O(n + m) time and forms the foundation of merge sort on linked lists.
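
A sketch of the merge step, using a dummy head node to simplify the edge cases:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def merge(a, b):
    """Merge two sorted lists in O(n + m) by repeatedly taking the smaller head."""
    dummy = tail = Node(None)               # dummy head avoids special-casing the first node
    while a is not None and b is not None:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a is not None else b   # append whatever remains of either list
    return dummy.next
```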

🟡 Checkpoint: Knowledge Check

You have a singly linked list that MAY contain a cycle. How do you detect it in O(1) space?

Real-World Uses

Where Linked Lists Shine in Production

⚡ LRU Cache
• HashMap + doubly linked list
• O(1) get, O(1) evict
• Evicts least recently used items
Redis, Memcached, CPU caches (a minimal sketch follows this list)

🔄 OS Task Scheduling
• Round-robin: circular linked list
• O(1) context switch to next task
• O(1) add/remove processes
Linux CFS, Windows scheduler

↩️ Undo/Redo Systems
• Doubly linked list of states
• Undo = move pointer backward
• Redo = move pointer forward
Photoshop, VS Code, browsers

🧱 Memory Management
• Free list: linked list of free blocks
• malloc/free use linked lists
• Garbage collectors track references
glibc, jemalloc, V8 GC
Linked lists power systems where insertion/deletion must be O(1) and traversal is infrequent
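
To make the LRU entry above concrete, here is a minimal sketch of the HashMap + doubly linked list combination; class and method names are illustrative, and production caches (for example Python's functools.lru_cache or Java's LinkedHashMap) differ in detail:

```python
class _Node:
    """Doubly linked node holding one cache entry."""
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                      # key -> node (the "HashMap" half)
        self.head = _Node()                # sentinel on the most-recently-used side
        self.tail = _Node()                # sentinel on the least-recently-used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)                 # O(1) thanks to prev/next pointers
        self._push_front(node)             # mark as most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._unlink(self.map.pop(key))
        if len(self.map) >= self.capacity:
            lru = self.tail.prev           # least recently used lives at the back
            self._unlink(lru)
            del self.map[lru.key]
        node = _Node(key, value)
        self._push_front(node)
        self.map[key] = node

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")          # touches "a", so "b" is now least recently used
cache.put("c", 3)       # evicts "b"
print(cache.get("b"))   # None
```
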
🔴 Challenge: Knowledge Check

Design an LRU cache that supports get(key) and put(key, value) both in O(1) time. Which data structure combination do you need, and why?

🎓 What You Now Know

Linked lists trade random access for O(1) insert/delete — ideal when you frequently add/remove elements and rarely need index access.

Doubly linked lists enable O(1) removal given a node — essential for LRU caches and undo systems.

Cache locality often matters more than Big O — arrays outperform linked lists in most benchmarks due to CPU prefetching.

Floyd’s cycle detection uses O(1) space — the tortoise-and-hare pattern is a fundamental technique.

Linked lists are building blocks for more complex structures — trees, graphs, hash tables with chaining, and skip lists all use linked nodes. ⚡
