A mathematical model that defines a data type purely by its behaviour — the operations it supports and their semantics — without specifying how those operations are implemented.
| ADT | Possible implementations |
|---|---|
| Stack | Dynamic array, singly linked list |
| Queue | Circular array buffer, doubly linked list |
| Priority Queue | Binary heap, Fibonacci heap, sorted array |
| Deque | Circular array buffer, doubly linked list |
The ADT is the interface; the data structure is the implementation. A stack is not “an array” — it is an abstraction that can be backed by an array.
Last In, First Out. The most recently added element is the first one removed.
| Operation | Description | Time |
|---|---|---|
| push(x) | Place element on top | O(1) |
| pop() | Remove & return top | O(1) |
| peek() | Return top without removing | O(1) |
| isEmpty() | True if no elements | O(1) |
| size() | Number of elements | O(1) |
push(A) push(B) push(C) pop()
┌───┐
┌───┐ │ C │ ┌───┐
┌───┐ │ B │ │ B │ │ B │ ← top
│ A │ │ A │ │ A │ │ A │
└───┘ └───┘ └───┘ └───┘
returns C
Dynamic array with a top index. Push increments and writes; pop reads and decrements.
Amortised O(1) push with array doubling; an individual resize costs O(n).

class ArrayStack:
def __init__(self):
self._data = []
def push(self, x):
self._data.append(x)
def pop(self):
return self._data.pop()
def peek(self):
return self._data[-1]
Each node holds a value and a pointer to the node below. Push and pop operate on the head.
O(1) worst-case push/pop, with no resizes.

In practice, array-backed stacks win for almost all workloads — the cache advantage dominates. Use the linked variant only when worst-case O(1) per operation matters (real-time systems).
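The linked variant can be sketched as follows (class and method names are illustrative):

```python
class Node:
    __slots__ = ("value", "below")

    def __init__(self, value, below=None):
        self.value = value
        self.below = below  # pointer to the node beneath this one


class LinkedStack:
    def __init__(self):
        self._top = None

    def push(self, x):
        # New node points at the old top; O(1) always, no resizing.
        self._top = Node(x, self._top)

    def pop(self):
        if self._top is None:
            raise IndexError("pop from empty stack")
        node = self._top
        self._top = node.below
        return node.value

    def peek(self):
        if self._top is None:
            raise IndexError("peek at empty stack")
        return self._top.value
```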
Every running thread maintains a call stack — a LIFO structure that tracks function invocations.
Unbounded recursion pushes frames without popping. When the call stack exceeds its allocated memory (typically 1–8 MB), the program crashes with a stack overflow; Python guards against this earlier by raising RecursionError at a configurable recursion limit.
def factorial(n):
if n <= 1: # base case
return 1
return n * factorial(n - 1)
Tail-call optimisation: some languages reuse the current frame for a recursive call in tail position, converting recursion to iteration. Supported in Scheme, Kotlin (tailrec). Not in Python or Java.
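Since Python performs no tail-call optimisation, deep tail recursion must be converted to a loop by hand; a sketch for factorial:

```python
def factorial_iter(n):
    # The accumulator plays the role of the reused stack frame:
    # each loop iteration replaces one recursive call.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```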
Verify that every opening bracket has a correctly nested closing partner.
For each character c in the input:
if c is '(' or '[' or '{':
push(c)
if c is ')' or ']' or '}':
if stack is empty:
return INVALID
if top does not match c:
return INVALID
pop()
After all characters:
return VALID if stack is empty
Example input: {[()]}

| Step | Char | Stack | Action |
|---|---|---|---|
| 1 | { | { | push |
| 2 | [ | {[ | push |
| 3 | ( | {[( | push |
| 4 | ) | {[ | pop — matches ( |
| 5 | ] | { | pop — matches [ |
| 6 | } | empty | pop — matches { |
VALID — stack is empty after processing all characters.
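The pseudocode above translates directly to Python (function name is illustrative):

```python
PAIRS = {')': '(', ']': '[', '}': '{'}

def is_balanced(s):
    stack = []
    for c in s:
        if c in "([{":
            stack.append(c)
        elif c in PAIRS:
            # A closing bracket must match the most recent opener.
            if not stack or stack[-1] != PAIRS[c]:
                return False
            stack.pop()
    # Valid only if every opener was closed.
    return not stack
```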
Postfix eliminates parentheses and precedence rules. Evaluate left to right using a stack.
| Step | Token | Stack | Action |
|---|---|---|---|
| 1 | 3 | [3] | push operand |
| 2 | 4 | [3, 4] | push operand |
| 3 | + | [7] | pop 4,3 → push 7 |
| 4 | 2 | [7, 2] | push operand |
| 5 | * | [14] | pop 2,7 → push 14 |
| 6 | 7 | [14, 7] | push operand |
| 7 | - | [7] | pop 7,14 → push 7 |
Expression: 3 4 + 2 * 7 - Result: 7
For each token:
if token is a number:
push(token)
if token is an operator:
b = pop()
a = pop()
push(a op b)
Return pop()
Used by HP calculators, Forth, PostScript, and the Java bytecode operand stack.
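The evaluation loop is a few lines of Python (a sketch; it assumes whitespace-separated tokens and the four basic operators):

```python
def eval_postfix(tokens):
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()
```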
Dijkstra's shunting-yard algorithm converts infix expressions to postfix using an operator stack.
For each token:
NUMBER → output directly
( → push onto operator stack
) → pop & output until '('
OPERATOR → while top has ≥ precedence:
pop & output
push current operator
After all tokens:
pop & output remaining operators
Example: 3 + 4 * 2 - ( 1 + 5 )

| Token | Output | Op stack |
|---|---|---|
| 3 | 3 | |
| + | 3 | + |
| 4 | 3 4 | + |
| * | 3 4 | + * |
| 2 | 3 4 2 | + * |
| - | 3 4 2 * + | - |
| ( | 3 4 2 * + | - ( |
| 1 | 3 4 2 * + 1 | - ( |
| + | 3 4 2 * + 1 | - ( + |
| 5 | 3 4 2 * + 1 5 | - ( + |
| ) | 3 4 2 * + 1 5 + | - |
| END | 3 4 2 * + 1 5 + - | |
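A sketch of the shunting-yard loop in Python, handling only the left-associative operators +, -, *, / and parentheses:

```python
def to_postfix(tokens):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, stack = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of greater or equal precedence (left-assoc).
            while stack and stack[-1] != '(' and prec[stack[-1]] >= prec[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # discard the '('
        else:
            output.append(tok)  # operand goes straight to output
    while stack:
        output.append(stack.pop())
    return output
```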
First In, First Out. Elements are added at the rear and removed from the front.
| Operation | Description | Time |
|---|---|---|
| enqueue(x) | Add element to the rear | O(1) |
| dequeue() | Remove & return front | O(1) |
| front() | Return front without removing | O(1) |
| isEmpty() | True if no elements | O(1) |
| size() | Number of elements | O(1) |
The queue is the natural structure for fairness — first come, first served. Every message broker, task runner, and print spooler is built on this principle.
enqueue(A) enqueue(B) enqueue(C)
front front
↓ ↓
┌───┐ ┌───┬───┐ ┌───┬───┬───┐
│ A │ │ A │ B │ │ A │ B │ C │
└───┘ └───┴───┘ └───┴───┴───┘
rear
dequeue() → returns A
front
↓
┌───┬───┐
│ B │ C │
└───┴───┘
rear
A fixed-size array with head and tail pointers that wrap around, avoiding the O(n) cost of shifting elements.
Capacity: 5 head=0, tail=0 (empty)
enqueue(A): [A _ _ _ _] h=0, t=1
enqueue(B): [A B _ _ _] h=0, t=2
enqueue(C): [A B C _ _] h=0, t=3
dequeue(): [_ B C _ _] h=1, t=3 → A
enqueue(D): [_ B C D _] h=1, t=4
enqueue(E): [_ B C D E] h=1, t=0 ← wraps
enqueue(F): [F B C D E] h=1, t=1 ← wraps
tail = (tail + 1) % capacity
head = (head + 1) % capacity

With head and tail pointers alone, a full buffer is indistinguishable from an empty one — both leave head == tail (exactly the state after enqueue(F) above). Either keep an explicit size counter, or sacrifice one slot and declare the buffer full when (tail + 1) % capacity == head.
Why circular? A naive array queue wastes all space before the head pointer. Shifting elements on dequeue is O(n). The circular buffer solves both with modular arithmetic.
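A minimal sketch using the explicit-size-counter convention, so all slots can be used (class name illustrative):

```python
class CircularQueue:
    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0
        self._tail = 0
        self._size = 0   # explicit count disambiguates full vs empty

    def enqueue(self, x):
        if self._size == len(self._buf):
            raise OverflowError("queue full")
        self._buf[self._tail] = x
        self._tail = (self._tail + 1) % len(self._buf)  # wrap around
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue empty")
        x = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)  # wrap around
        self._size -= 1
        return x
```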
A deque (pronounced “deck”) supports insertion and removal at both ends in O(1).
| Operation | Description |
|---|---|
| pushFront(x) | Insert at the front |
| pushBack(x) | Insert at the rear |
| popFront() | Remove from the front |
| popBack() | Remove from the rear |
| peekFront() | View front element |
| peekBack() | View rear element |
- Circular array buffer: same as the queue, but the head can move backwards too
- Doubly linked list: a natural fit — insert/remove at either end is O(1)
- Blocked array (C++ std::deque): a map of fixed-size blocks for pointer stability and cache friendliness
A deque restricted to one end is a stack. Restricted to pushBack + popFront, it is a queue. Python's collections.deque is recommended for both.
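For example, Python's collections.deque covers all six operations:

```python
from collections import deque

d = deque()
d.append("B")        # pushBack
d.appendleft("A")    # pushFront
d.append("C")
assert list(d) == ["A", "B", "C"]
assert d.popleft() == "A"   # popFront — deque used as a queue
assert d.pop() == "C"       # popBack  — deque used as a stack
```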
A collection where each element has a priority and the highest-priority element is always dequeued first.
| Operation | Description | Heap time |
|---|---|---|
| insert(x, p) | Add element with priority p | O(log n) |
| extractMin() | Remove highest-priority element | O(log n) |
| peek() | View highest-priority element | O(1) |
| changePriority() | Update an element's priority | O(log n) |
A sorted array gives O(1) extract but O(n) insert. A heap gives O(log n) for both — the right trade-off for dynamic workloads.
A binary heap is a complete binary tree stored in a flat array. No pointers needed.
Index: 0 1 2 3 4 5 6
Value: [10, 15, 20, 25, 30, 35, 40]
10 Parent: (i-1)/2
/ \ Left: 2i + 1
15 20 Right: 2i + 2
/ \ / \
25 30 35 40
Insert: O(log n). Extract: O(log n). Build-heap (heapify): O(n) — most nodes are near the bottom and sift down only 1–2 levels, and the per-level costs sum to a convergent series:
n/4 × 1 + n/8 × 2 + n/16 × 3 + … = O(n)
This is why heapq.heapify() in Python builds a heap in linear time, and why heapsort's build phase costs only O(n) — a non-obvious but critical result.
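Python's heapq module works on exactly this flat-array layout:

```python
import heapq

nums = [25, 40, 10, 35, 20, 30, 15]
heapq.heapify(nums)            # O(n) bottom-up build, in place
assert nums[0] == 10           # minimum always sits at index 0

heapq.heappush(nums, 5)        # O(log n) sift-up
assert heapq.heappop(nums) == 5  # O(log n) sift-down

# Parent/child arithmetic on the flat array:
i = 4
assert (i - 1) // 2 == 1       # parent of index 4
assert 2 * i + 1 == 9          # left child of index 4
```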
A stack that maintains elements in monotonically increasing or decreasing order. When a new element violates the monotonicity, elements are popped until order is restored.
def next_greater(nums):
n = len(nums)
result = [-1] * n
stack = [] # indices; values decreasing
for i in range(n):
while stack and nums[i] > nums[stack[-1]]:
idx = stack.pop()
result[idx] = nums[i]
stack.append(i)
return result
# nums = [2, 1, 4, 3, 5]
# result = [4, 4, 5, 5, -1]
Each element is pushed once and popped at most once. Total: O(n) despite the nested loop.
- Next Greater Element — left or right variant; the foundational pattern
- Largest Rectangle in Histogram — O(n) using a monotonic increasing stack
- Trapping Rain Water — monotonic stack or two-pointer approach
- Daily Temperatures — days until a warmer day; direct application
A deque that maintains elements in monotonic order, enabling sliding-window min/max queries in O(n) total.
from collections import deque
def max_sliding_window(nums, k):
dq = deque() # indices; values decreasing
result = []
for i, x in enumerate(nums):
while dq and dq[0] < i - k + 1:
dq.popleft() # outside window
while dq and nums[dq[-1]] <= x:
dq.pop() # maintain order
dq.append(i)
if i >= k - 1:
result.append(nums[dq[0]])
return result
# nums=[1,3,-1,-3,5,3,6,7], k=3
# result=[3,3,5,5,6,7]
Time O(n): each element enters and leaves the deque at most once. Space O(k): the deque holds at most k elements.
A naive approach checks all k elements per window: O(nk). The monotonic deque eliminates redundant comparisons by discarding elements that can never be the maximum.
Track the minimum (or maximum) at every point in time using an auxiliary stack.
class MinStack:
def __init__(self):
self.stack = []
self.min_stack = [] # parallel minimums
def push(self, x):
self.stack.append(x)
m = min(x, self.min_stack[-1]) \
if self.min_stack else x
self.min_stack.append(m)
def pop(self):
self.min_stack.pop()
return self.stack.pop()
def getMin(self):
return self.min_stack[-1] # O(1)
| Operation | Main | Min stack |
|---|---|---|
| push(x) | push x | push min(x, cur) |
| pop() | pop | pop |
| getMin() | — | peek |
Store 2*x - current_min when x < current_min. On pop, decode the previous min. Uses O(1) extra space but risks integer overflow.
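A sketch of the encoding trick (class name illustrative; Python integers don't overflow, but fixed-width languages must guard against it):

```python
class EncodedMinStack:
    def __init__(self):
        self._data = []
        self._min = None

    def push(self, x):
        if not self._data:
            self._data.append(x)
            self._min = x
        elif x < self._min:
            # Store an encoded value (always below the new minimum),
            # so pop can recognise it and recover the previous minimum.
            self._data.append(2 * x - self._min)
            self._min = x
        else:
            self._data.append(x)

    def pop(self):
        top = self._data.pop()
        if top < self._min:
            # Encoded entry: the popped value was the current minimum.
            value = self._min
            self._min = 2 * self._min - top  # decode previous minimum
            return value
        return top

    def get_min(self):
        return self._min
```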
Implement a FIFO queue using two LIFO stacks. Amortised O(1) per operation.
enqueue(1): IN=[1] OUT=[]
enqueue(2): IN=[1,2] OUT=[]
dequeue(): IN=[] OUT=[2,1] ← pour
IN=[] OUT=[2] → 1
enqueue(3): IN=[3] OUT=[2]
dequeue(): IN=[3] OUT=[] → 2
dequeue(): IN=[] OUT=[3] ← pour
IN=[] OUT=[] → 3
Each element is pushed and popped at most twice (once on IN, once on OUT), so n operations cost O(n) moves in total — amortised O(1) each.
This is a popular interview question, but it also has real uses: functional languages without mutable arrays implement queues this way (e.g. Okasaki's persistent queue).
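A minimal Python sketch of the two-stack queue:

```python
class TwoStackQueue:
    def __init__(self):
        self._in = []    # receives enqueues
        self._out = []   # serves dequeues in reversed (FIFO) order

    def enqueue(self, x):
        self._in.append(x)

    def dequeue(self):
        if not self._out:
            # Pour: reversing IN puts its oldest element on OUT's top.
            while self._in:
                self._out.append(self._in.pop())
        if not self._out:
            raise IndexError("dequeue from empty queue")
        return self._out.pop()
```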
Implement a LIFO stack using two FIFO queues. Two strategies exist.
push(x):
enqueue x into Q2
while Q1 is not empty:
dequeue from Q1 → enqueue into Q2
swap Q1 and Q2
pop():
dequeue from Q1
Best when many more pops than pushes.
push(x):
enqueue x into Q1
pop():
while Q1 has more than 1 element:
dequeue from Q1 → enqueue into Q2
result = dequeue from Q1
swap Q1 and Q2
return result
Best when many more pushes than pops.
Neither approach achieves amortised O(1) for both operations. This is why the two-stack queue is preferred — it achieves amortised O(1) for all operations.
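As a sketch, strategy A's two-queue swap is often collapsed into a single queue that rotates after every push — same costs, O(n) push and O(1) pop:

```python
from collections import deque

class QueueStack:
    """LIFO stack built on a FIFO queue: O(n) push, O(1) pop."""

    def __init__(self):
        self._q = deque()

    def push(self, x):
        # Rotate so the newest element ends up at the front.
        self._q.append(x)
        for _ in range(len(self._q) - 1):
            self._q.append(self._q.popleft())

    def pop(self):
        return self._q.popleft()

    def peek(self):
        return self._q[0]
```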
In multi-threaded systems, queues must handle simultaneous producers and consumers safely.
Uses Compare-And-Swap (CAS) to atomically update head and tail pointers without locks.
CAS(addr, expected, new):
atomically:
if *addr == expected:
*addr = new
return true
return false
Enqueue: CAS the tail node's next from null to the new node, then CAS tail forward. Dequeue: CAS head to head.next.

| Approach | Throughput | Complexity |
|---|---|---|
| Mutex queue | Moderate | Low |
| Lock-free (CAS) | High | High |
| Wait-free | Highest | Very high |
A queue drives BFS: enqueue start, repeatedly dequeue a node, process it, enqueue unvisited neighbours. Guarantees shortest path in unweighted graphs.
queue = [start]; mark start visited
while queue:
    node = dequeue()
    for n in neighbours(node):
        if not visited(n):
            mark n visited
            enqueue(n)
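A runnable version with an adjacency-dict graph (the dict layout is an assumption for illustration):

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in BFS order; graph maps node -> list of neighbours."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for n in graph.get(node, []):
            if n not in visited:
                visited.add(n)   # mark on enqueue to avoid duplicates
                queue.append(n)
    return order
```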
Stack-based: every action is pushed onto the undo stack. Undoing pops it onto the redo stack; redoing does the reverse.
Action → push undo
Undo → pop undo, push redo
Redo → pop redo, push undo
New action → clear redo
| Structure | Applications |
|---|---|
| Stack | Call stack, undo, expression eval, DFS, backtracking |
| Queue | BFS, scheduling, buffering, message passing |
| Priority queue | Dijkstra, A*, Huffman, OS scheduling |
| Deque | Sliding window, work stealing, palindrome checking |
Monotonic stacks and deques reduce O(n²) scans to O(n); an auxiliary stack gives O(1) minimum queries.

| Source | Description |
|---|---|
| Cormen et al. | Introduction to Algorithms (CLRS) — heaps, stacks, queues |
| Okasaki | Purely Functional Data Structures — persistent queues |
| Sedgewick | Algorithms, 4th ed. — practical Java implementations |
| Michael & Scott | Non-blocking concurrent queue algorithms (1996) |
| Skiena | Algorithm Design Manual — when to use which structure |