Elements stored in adjacent memory addresses. Knowing the base address and element size gives constant-time access to any element.
address(arr[i]) = base + i * sizeof(element)

Each element (node) stores a pointer to the next node. Nodes can live anywhere in the heap.
Fundamental trade-off: arrays optimise for read performance; linked lists optimise for insert/delete flexibility.
A fixed-size, contiguous block of memory, allocated on the stack (size fixed at compile time) or on the heap with a size known at allocation.
Base address: 0x1000
Element size: 4 bytes (int32)
Index: [ 0 ] [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Address: 0x1000 0x1004 0x1008 0x100C 0x1010
Value: 10 25 37 42 58
int scores[5] = {10, 25, 37, 42, 58}; // stack
int *heap_arr = malloc(5 * sizeof(int)); // heap
The formula base + index * element_size is a single arithmetic operation. No matter the array size, access time is constant.
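The formula can be checked directly with pointer arithmetic; a minimal sketch (`element_addr` is an illustrative name, not a standard function):

```c
#include <stddef.h>
#include <stdint.h>

// Address of element i, computed from the base address alone:
//   base + i * sizeof(element)
int* element_addr(int* base, size_t i) {
    return (int*)((uintptr_t)base + i * sizeof(int));
}
```

For any valid index, `element_addr(scores, i)` yields the same pointer as `&scores[i]`.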
Accessing arr[i] loads an entire cache line (64 bytes). If arr[i+1] is next, it is already in L1 cache — a cache hit.
Cache line (64 bytes) loaded on access to arr[3]:
┌────┬────┬────┬────┬────┬────┬────┬────┐
│ [0]│ [1]│ [2]│ [3]│ [4]│ [5]│ [6]│ [7]│
└────┴────┴────┴────┴────┴────┴────┴────┘
▲ requested
entire line loaded into L1
| Level | Latency | Typical size |
|---|---|---|
| L1 cache | ~1 ns | 32–64 KB |
| L2 cache | ~4 ns | 256 KB–1 MB |
| L3 cache | ~12 ns | 4–32 MB |
| Main memory | ~100 ns | 8–64 GB |
Sequential array traversal achieves near-peak memory bandwidth. Linked list traversal pays the full main-memory latency for each node.
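The two traversal patterns can be contrasted in a toy sketch; timings are machine-dependent, so only the results are checked, but timing `sum_array` against `sum_list` on a large input shows the latency gap (`LNode` and both function names are illustrative):

```c
#include <stddef.h>

typedef struct LNode {
    long value;
    struct LNode* next;
} LNode;

// Sum a contiguous array: sequential, cache-friendly access.
long sum_array(const long* a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

// Sum a linked list: each step chases a pointer, potentially a cache miss.
long sum_list(const LNode* head) {
    long s = 0;
    for (const LNode* p = head; p; p = p->next)
        s += p->value;
    return s;
}
```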
A resizable array that grows automatically when capacity is exceeded. The underlying storage is still a contiguous block.
When the array is full and a new element must be added:
new_capacity = k * current_capacity (typically k = 2)

Capacity: 8   Size: 5
[ 10 ][ 25 ][ 37 ][ 42 ][ 58 ][ ][ ][ ]
▲ size ▲ capacity
| Operation | Average | Worst |
|---|---|---|
| Access by index | O(1) | O(1) |
| Push back | O(1)* | O(n) |
| Pop back | O(1) | O(1) |
| Insert at index | O(n) | O(n) |
| Delete at index | O(n) | O(n) |
| Search (unsorted) | O(n) | O(n) |
*Amortised — resize triggers full copy
When capacity doubles on overflow, the total cost of n push-back operations is O(n), giving an amortised cost of O(1) per operation.
Operations: 1 2 3 4 5 6 7 8 9
Cost: 1 1 1 1 1 1 1 1 1 (inserts)
Resize at: +2 +4 +8 (copies)
Total for n inserts:
n (inserts) + 1 + 2 + 4 + ... + n
= n + (2n - 1)
= 3n - 1 = O(n)
Amortised per insert = O(n)/n = O(1)
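A minimal dynamic array with doubling growth, as a sketch under the analysis above (`Vec` and `vec_push` are illustrative names, not a library API):

```c
#include <stdlib.h>

typedef struct {
    int*   data;
    size_t size;
    size_t capacity;
} Vec;

void vec_push(Vec* v, int value) {
    if (v->size == v->capacity) {
        // Full: double the capacity (k = 2) and copy everything over.
        size_t new_cap = v->capacity ? v->capacity * 2 : 1;
        v->data = realloc(v->data, new_cap * sizeof(int));
        v->capacity = new_cap;
    }
    v->data[v->size++] = value;   // the common O(1) path
}
```

Error handling is elided; production code must check the `realloc` result before overwriting `v->data`.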
| Growth factor | Trade-off | Used by |
|---|---|---|
| 2x | Fewer reallocations; simple bit-shift. Up to 50% wasted space. | GCC libstdc++ |
| 1.5x | Less wasted space (~33% max). More frequent reallocations. | MSVC, Java ArrayList |
| ~1.125x | Minimal waste. Very frequent reallocations for small lists. | Python list |
A sequence of nodes where each node stores a value and a pointer to the next node. The last node points to NULL.
typedef struct Node {
    int data;
    struct Node* next;
} Node;
Nodes are scattered across the heap — addresses are not sequential. Every traversal step is potentially a cache miss.
Node* insert_head(Node* head, int value) {
    Node* n = malloc(sizeof(Node));
    n->data = value;
    n->next = head;
    return n;   // new head
}
void insert_after(Node* prev, int value) {
    Node* n = malloc(sizeof(Node));
    n->data = value;
    n->next = prev->next;
    prev->next = n;
}
void delete_after(Node* prev) {
    Node* target = prev->next;
    if (target == NULL) return;   // nothing to delete
    prev->next = target->next;
    free(target);
}
void print_list(Node* head) {
    Node* curr = head;
    while (curr != NULL) {
        printf("%d -> ", curr->data);
        curr = curr->next;
    }
    printf("NULL\n");
}
Key insight: finding the predecessor is the bottleneck. Doubly linked lists solve this by storing a prev pointer.
Each node stores pointers to both the next and previous nodes, enabling bidirectional traversal.
typedef struct DNode {
    int data;
    struct DNode* prev;
    struct DNode* next;
} DNode;
A sentinel (dummy) node simplifies boundary conditions by eliminating NULL checks. Used with doubly linked lists.
Every operation must check for empty list, head insert, and tail insert as special cases.
Empty list:
sentinel.next = sentinel
sentinel.prev = sentinel
After inserting 10, 25, 37:
SEN ──▶ 10 ──▶ 25 ──▶ 37 ──▶ SEN
SEN ◀── 10 ◀── 25 ◀── 37 ◀── SEN
// Insert after any node — no NULL checks
void insert_after(DNode* node, int val) {
    DNode* n = malloc(sizeof(DNode));
    n->data = val;
    n->next = node->next;
    n->prev = node;
    node->next->prev = n;
    node->next = n;
}
node->next is always valid — it may be the sentinel itself. No special cases needed.
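Removal benefits just as much: a node's neighbours always exist, so unlinking needs no NULL checks either. A self-contained sketch, repeating the insert so it compiles on its own (`list_init` and `remove_node` are illustrative names):

```c
#include <stdlib.h>

typedef struct DNode {
    int data;
    struct DNode* prev;
    struct DNode* next;
} DNode;

// Empty list: the sentinel points to itself in both directions.
void list_init(DNode* sentinel) {
    sentinel->next = sentinel;
    sentinel->prev = sentinel;
}

void insert_after(DNode* node, int val) {
    DNode* n = malloc(sizeof(DNode));
    n->data = val;
    n->next = node->next;
    n->prev = node;
    node->next->prev = n;
    node->next = n;
}

// Unlink any real node; the neighbours always exist, so no NULL checks.
void remove_node(DNode* node) {
    node->prev->next = node->next;
    node->next->prev = node->prev;
    free(node);
}
```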
The Linux kernel's list_head uses sentinel-based circular doubly linked lists throughout. See include/linux/list.h.
The last node points back to the first, forming a cycle. Can be singly or doubly linked.
void print_circular(Node* start) {
    if (!start) return;
    Node* curr = start;
    do {
        printf("%d -> ", curr->data);
        curr = curr->next;
    } while (curr != start);
}
- Round-robin scheduling: process scheduler cycles through tasks endlessly
- Circular buffers: fixed-size ring buffer for streaming data
- Josephus problem: classic elimination game, naturally modelled as a circle
- Media playlists: loop playback without special end-of-list logic
- Token ring networks: token passed around a circular topology
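The Josephus elimination game maps directly onto a circular singly linked list; a sketch (`josephus` is an illustrative name, and the 1-indexed counting convention is an assumption):

```c
#include <stdlib.h>

typedef struct CNode {
    int id;
    struct CNode* next;
} CNode;

// n people stand in a circle; every k-th person is eliminated,
// counting from person 1. Returns the id of the survivor.
int josephus(int n, int k) {
    // Build the circle 1..n.
    CNode* head = malloc(sizeof(CNode));
    head->id = 1;
    CNode* prev = head;
    for (int i = 2; i <= n; i++) {
        CNode* node = malloc(sizeof(CNode));
        node->id = i;
        prev->next = node;
        prev = node;
    }
    prev->next = head;            // close the loop

    // Start just before head so the first count begins at person 1.
    CNode* curr = prev;
    while (curr->next != curr) {
        for (int i = 1; i < k; i++)   // advance k-1 steps
            curr = curr->next;
        CNode* victim = curr->next;   // the k-th person
        curr->next = victim->next;
        free(victim);
    }
    int survivor = curr->id;
    free(curr);
    return survivor;
}
```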
A probabilistic data structure that layers multiple sorted linked lists on top of each other. Invented by William Pugh (1989).
Promotion probability: p (typically 0.5), giving an expected height of O(log n).

Search: start at the highest level. Move right while the next node's key is less than the target. Drop down one level and repeat.
Search for 33:
L3: H ─────────────── 42 (too far, drop)
L2: H ────── 17 ───── 42 (too far, drop)
L1: ... 17 ── 25 ──── 42 (too far, drop)
L0: ... 25 ── 33 FOUND
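The search routine itself is short; a sketch assuming a fixed maximum height and a header node whose forward pointers start each level (`skip_search` and `MAX_LEVEL` are illustrative):

```c
#include <stdlib.h>

#define MAX_LEVEL 4

typedef struct SkipNode {
    int key;
    struct SkipNode* forward[MAX_LEVEL];  // forward[i]: next node at level i
} SkipNode;

SkipNode* make_node(int key) {
    SkipNode* n = calloc(1, sizeof(SkipNode));  // all forward pointers NULL
    n->key = key;
    return n;
}

// Start at the top level of the header; move right while the next key is
// still below the target, then drop one level. Check at the bottom level.
SkipNode* skip_search(SkipNode* header, int target) {
    SkipNode* x = header;
    for (int level = MAX_LEVEL - 1; level >= 0; level--)
        while (x->forward[level] && x->forward[level]->key < target)
            x = x->forward[level];
    x = x->forward[0];
    return (x && x->key == target) ? x : NULL;
}
```

Insertion reuses the same descent, remembering the last node visited at each level so the new node can be spliced in.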
| Operation | Expected | Worst |
|---|---|---|
| Search | O(log n) | O(n) |
| Insert | O(log n) | O(n) |
| Delete | O(log n) | O(n) |
| Space | O(n) | O(n log n) |
Redis sorted sets (ZSET) use skip lists internally. LevelDB and RocksDB use skip lists for their in-memory memtable.
A memory-efficient doubly linked list that stores only one pointer per node instead of two, using the XOR of the previous and next addresses.
Each node stores:
link = prev_addr XOR next_addr
Forward traversal:
next = prev_addr XOR node->link
Backward traversal:
prev = next_addr XOR node->link
Node A Node B Node C
link = 0^B link = A^C link = B^0
= B = A^C = B
Forward from A (prev=0):
next = 0 XOR B = B
From B (prev=A):
next = A XOR (A^C) = C
From C (prev=B):
next = B XOR B = 0 (end)
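The traversal above can be sketched in C by storing links as integers (`XNode`, `xlink`, and `xor_traverse` are illustrative names):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct XNode {
    int data;
    uintptr_t link;   // prev_addr XOR next_addr, stored as an integer
} XNode;

// XOR two node addresses into a link value.
uintptr_t xlink(XNode* a, XNode* b) {
    return (uintptr_t)a ^ (uintptr_t)b;
}

// Walk forward from head, writing values into out; returns the count.
// Going backward is the same loop started from the tail.
size_t xor_traverse(XNode* head, int* out) {
    XNode* prev = NULL;
    XNode* curr = head;
    size_t n = 0;
    while (curr) {
        out[n++] = curr->data;
        XNode* next = (XNode*)((uintptr_t)prev ^ curr->link);  // prev XOR link
        prev = curr;
        curr = next;
    }
    return n;
}
```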
XOR linked lists are a clever interview topic but almost never appear in production code. Prefer standard doubly linked lists.
A hybrid that stores multiple elements per node, combining array cache friendliness with linked list insertion flexibility.
Each node stores up to B elements (block size, e.g. 4–16).

| Operation | Unrolled | Standard |
|---|---|---|
| Traversal | Cache-friendly | Pointer chasing |
| Search | O(n) better constants | O(n) |
| Insert | O(B) shift + O(1) splice | O(1) at known pos |
| Overhead | 1 ptr per B elements | 1–2 ptrs per element |
Used in text editors (rope variants), B+ tree leaf chains, and gap buffer hybrids. Delivers 2–5x speedup over standard linked lists for sequential access.
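A minimal unrolled node and a sequential sum over it, as a sketch (`UNode`, `unrolled_sum`, and block size `B = 4` are illustrative choices):

```c
#include <stddef.h>

#define B 4   // block size: elements per node

typedef struct UNode {
    size_t count;        // how many of the B slots are in use
    int data[B];         // up to B elements stored contiguously
    struct UNode* next;  // one pointer per B elements
} UNode;

// Sequential sum: the inner loop runs over a contiguous block
// (cache-friendly); pointer chasing happens only once per B elements.
long unrolled_sum(const UNode* head) {
    long s = 0;
    for (const UNode* n = head; n; n = n->next)
        for (size_t i = 0; i < n->count; i++)
            s += n->data[i];
    return s;
}
```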
| Operation | Array | Dynamic array | Singly linked | Doubly linked |
|---|---|---|---|---|
| Access by index | O(1) | O(1) | O(n) | O(n) |
| Search (unsorted) | O(n) | O(n) | O(n) | O(n) |
| Search (sorted) | O(log n) | O(log n) | O(n) | O(n) |
| Insert at front | O(n) | O(n) | O(1) | O(1) |
| Insert at back | N/A | O(1)* | O(n) | O(1) |
| Insert at middle | O(n) | O(n) | O(n) | O(n) |
| Delete at front | O(n) | O(n) | O(1) | O(1) |
| Delete at back | N/A | O(1) | O(n) | O(1) |
| Delete given node | O(n) | O(n) | O(n) | O(1) |
| Memory per element | 0 extra | 0 extra | +8 bytes | +16 bytes |
| Cache performance | Excellent | Excellent | Poor | Poor |
*Amortised O(1) for dynamic array push-back.
In practice, arrays win more often than theory suggests. Cache locality dominates modern hardware. Linked lists only outperform when insertions/deletions at arbitrary positions are the primary operation and elements are large.
| Language | Default choice | Notes |
|---|---|---|
| C++ | std::vector | push_back amortised O(1) |
| Java | ArrayList | Backed by Object[]; growth 1.5x. LinkedList almost never the right choice. |
| Python | list | append() amortised O(1); deque = unrolled linked list |
| Go | slice | append() may return new header on realloc |
| Rust | Vec<T> | VecDeque<T> = ring buffer for front/back ops |

Decision heuristic: default to a dynamic array. Switch to a linked structure only when profiling shows that insert/delete cost dominates, or when you need O(1) splice/remove with a direct node reference.
| Source | Description |
|---|---|
| Cormen et al. | Introduction to Algorithms (CLRS) — elementary data structures and amortised analysis |
| Pugh, W. | "Skip Lists: A Probabilistic Alternative to Balanced Trees" (1990) |
| Stroustrup, B. | "Why you should avoid Linked Lists" — cache effects on vector vs list |
| Linux kernel | include/linux/list.h — intrusive doubly linked list |
| Sedgewick & Wayne | Algorithms, 4th ed. — linked lists and dynamic arrays |