COMPUTER SCIENCE FUNDAMENTALS SERIES

Stacks and
Queues

LIFO · FIFO · Priority queues · Monotonic structures ·
Deque · Expression evaluation
Mid-level software engineer track  ·  20 slides
02

Abstract Data Types vs Concrete Implementations

Abstract Data Type (ADT)

A mathematical model that defines a data type purely by its behaviour — the operations it supports and their semantics — without specifying how those operations are implemented.

  • Specifies what operations exist and what they return
  • Says nothing about memory layout, pointers, or arrays
  • Language-independent — a contract, not code

ADT → Implementation

ADT              Possible implementations
Stack            Dynamic array, singly linked list
Queue            Circular array buffer, doubly linked list
Priority Queue   Binary heap, Fibonacci heap, sorted array
Deque            Circular array buffer, doubly linked list

The ADT is the interface; the data structure is the implementation. A stack is not “an array” — it is an abstraction that can be backed by an array.

03

The Stack — LIFO

Last In, First Out. The most recently added element is the first one removed.

Core operations

Operation   Description                     Time
push(x)     Place element on top            O(1)
pop()       Remove & return top             O(1)
peek()      Return top without removing     O(1)
isEmpty()   True if no elements             O(1)
size()      Number of elements              O(1)

Mental model

push(A)  push(B)  push(C)  pop()
                   ┌───┐
         ┌───┐    │ C │   ┌───┐
┌───┐   │ B │    │ B │   │ B │ ← top
│ A │   │ A │    │ A │   │ A │
└───┘   └───┘    └───┘   └───┘
                          returns C
04

Array-Backed vs Linked Stack

Array-backed stack

Dynamic array with a top index. Push increments and writes; pop reads and decrements.

  • Pro: cache-friendly contiguous memory
  • Pro: zero pointer overhead
  • Pro: amortised O(1) push with doubling
  • Con: occasional O(n) resize
  • Con: wastes space if stack shrinks
class ArrayStack:
    def __init__(self):
        self._data = []
    def push(self, x):
        self._data.append(x)
    def pop(self):
        return self._data.pop()
    def peek(self):
        return self._data[-1]

Linked stack

Each node holds a value and a pointer to the node below. Push and pop operate on the head.

  • Pro: O(1) worst-case push/pop
  • Pro: memory proportional to actual size
  • Con: pointer overhead per node
  • Con: poor cache locality
  • Con: heap allocation per push
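
For comparison with the ArrayStack above, a linked stack might look like this (a minimal sketch):

```python
class LinkedStack:
    """Stack backed by a singly linked list; push/pop at the head."""

    class _Node:
        __slots__ = ("value", "below")

        def __init__(self, value, below):
            self.value = value
            self.below = below   # pointer to the node beneath

    def __init__(self):
        self._head = None
        self._size = 0

    def push(self, x):
        self._head = self._Node(x, self._head)  # new node becomes top
        self._size += 1

    def pop(self):
        node = self._head
        self._head = node.below
        self._size -= 1
        return node.value

    def peek(self):
        return self._head.value
```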

In practice: array-backed stacks win for almost all workloads. The cache advantage dominates. Use linked only when worst-case O(1) per-op matters (real-time systems).

05

Stack Application — The Call Stack

Every running thread maintains a call stack — a LIFO structure that tracks function invocations.

What lives in a stack frame

1. Return address — where to resume after the function returns
2. Local variables — primitive values, object references
3. Function arguments — passed by caller
4. Saved registers — CPU state to restore on return

Stack overflow

Unbounded recursion pushes frames without popping. When the call stack exceeds its allocated memory (typically 1–8 MB), the program crashes — Java throws StackOverflowError, while Python guards the C stack with a recursion limit and raises RecursionError first.

def factorial(n):
    if n <= 1:         # base case
        return 1
    return n * factorial(n - 1)

Tail-call optimisation: some languages reuse the current frame for a recursive call in tail position, converting recursion to iteration. Supported in Scheme, Kotlin (tailrec). Not in Python or Java.
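
Since Python lacks tail-call optimisation, deep recursion like factorial is typically rewritten by hand as the loop the optimiser would have produced (a sketch):

```python
def factorial_iter(n):
    """Iterative equivalent of the tail-recursive factorial.

    The accumulator plays the role of the reused stack frame:
    each loop iteration corresponds to one recursive call, so the
    stack depth stays constant no matter how large n is.
    """
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```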

06

Stack Application — Parentheses Matching

Verify that every opening bracket has a correctly nested closing partner.

Algorithm

For each character c in the input:
  if c is '(' or '[' or '{':
      push(c)
  if c is ')' or ']' or '}':
      if stack is empty:
          return INVALID
      if top does not match c:
          return INVALID
      pop()
After all characters:
  return VALID if stack is empty
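
The pseudocode above translates almost line for line into Python:

```python
def is_balanced(s):
    """Return True if every bracket is correctly nested and matched."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for c in s:
        if c in '([{':
            stack.append(c)
        elif c in pairs:
            # invalid if nothing to match, or the wrong opener on top
            if not stack or stack.pop() != pairs[c]:
                return False
    return not stack   # valid only if every opener was closed
```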

Trace: {[()]}

Step   Char   Stack   Action
1      {      {       push
2      [      {[      push
3      (      {[(     push
4      )      {[      pop — matches (
5      ]      {       pop — matches [
6      }      empty   pop — matches {

VALID — stack is empty after processing all characters.

07

Stack Application — Expression Evaluation

Postfix (Reverse Polish Notation) evaluation

Postfix eliminates parentheses and precedence rules. Evaluate left to right using a stack.

Step   Token   Stack     Action
1      3       [3]       push operand
2      4       [3, 4]    push operand
3      +       [7]       pop 4, 3 → push 7
4      2       [7, 2]    push operand
5      *       [14]      pop 2, 7 → push 14
6      7       [14, 7]   push operand
7      -       [7]       pop 7, 14 → push 7

Expression: 3 4 + 2 * 7 -   Result: 7

Algorithm

For each token:
  if token is a number:
      push(token)
  if token is an operator:
      b = pop()
      a = pop()
      push(a op b)
Return pop()
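
A runnable version of the algorithm, assuming space-separated tokens and the four basic binary operators:

```python
def eval_postfix(expr):
    """Evaluate a space-separated postfix (RPN) expression."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    stack = []
    for token in expr.split():
        if token in ops:
            b = stack.pop()      # right operand is on top
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()
```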

Used by HP calculators, Forth, PostScript, and the Java bytecode operand stack.

08

Infix to Postfix — Shunting-Yard Algorithm

Dijkstra's algorithm converts infix expressions to postfix using an operator stack.

Rules

For each token:
  NUMBER   → output directly
  (        → push onto operator stack
  )        → pop & output until '(' — discard the '('
  OPERATOR → while top is an operator with ≥ precedence
             (for left-associative operators):
                 pop & output
             push current operator
After all tokens:
  pop & output remaining operators

Trace: 3 + 4 * 2 - ( 1 + 5 )

Token   Output              Op stack
3       3
+       3                   +
4       3 4                 +
*       3 4                 + *
2       3 4 2               + *
-       3 4 2 * +           -
(       3 4 2 * +           - (
1       3 4 2 * + 1         - (
+       3 4 2 * + 1         - ( +
5       3 4 2 * + 1 5       - ( +
)       3 4 2 * + 1 5 +     -
END     3 4 2 * + 1 5 + -
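
The rules above as a sketch, handling only left-associative binary operators and space-separated tokens:

```python
def to_postfix(expr):
    """Convert a space-separated infix expression to postfix."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, ops = [], []
    for token in expr.split():
        if token in prec:
            # pop operators of >= precedence (left-associativity)
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[token]:
                output.append(ops.pop())
            ops.append(token)
        elif token == '(':
            ops.append(token)
        elif token == ')':
            while ops[-1] != '(':
                output.append(ops.pop())
            ops.pop()             # discard the '('
        else:
            output.append(token)  # number goes straight to output
    while ops:
        output.append(ops.pop())
    return ' '.join(output)
```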
09

The Queue — FIFO

First In, First Out. Elements are added at the rear and removed from the front.

Core operations

Operation    Description                      Time
enqueue(x)   Add element to the rear          O(1)
dequeue()    Remove & return front            O(1)
front()      Return front without removing    O(1)
isEmpty()    True if no elements              O(1)
size()       Number of elements               O(1)

The queue is the natural structure for fairness — first come, first served. Every message broker, task runner, and print spooler is built on this principle.

Mental model

enqueue(A)  enqueue(B)  enqueue(C)

front                    front
  ↓                        ↓
┌───┐     ┌───┬───┐     ┌───┬───┬───┐
│ A │     │ A │ B │     │ A │ B │ C │
└───┘     └───┴───┘     └───┴───┴───┘
                                 rear

dequeue() → returns A

  front
    ↓
  ┌───┬───┐
  │ B │ C │
  └───┴───┘
       rear
10

Array-Backed Circular Buffer

A fixed-size array with head and tail pointers that wrap around, avoiding the O(n) cost of shifting elements.

How it works

Capacity: 5     head=0, tail=0 (empty)

enqueue(A): [A _ _ _ _]  h=0, t=1
enqueue(B): [A B _ _ _]  h=0, t=2
enqueue(C): [A B C _ _]  h=0, t=3
dequeue():  [_ B C _ _]  h=1, t=3  → A
enqueue(D): [_ B C D _]  h=1, t=4
enqueue(E): [_ B C D E]  h=1, t=0  ← wraps; now full
enqueue(F): rejected — full (one slot stays unused)

Key formulas

tail = (tail + 1) % capacity

head = (head + 1) % capacity

size = (tail - head + cap) % cap

full = (size == capacity - 1)

One slot is deliberately left unused: if every slot could fill, head == tail would be ambiguous between empty and full. The alternative is to keep an explicit element count.

Why circular? A naive array queue wastes all the space before the head pointer, and shifting elements on dequeue costs O(n). The circular buffer solves both with modular arithmetic.
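
The wrap-around logic as a sketch, using the reserved-slot convention from the formulas above:

```python
class CircularQueue:
    """Fixed-capacity FIFO queue. One slot is left unused so that
    head == tail unambiguously means empty."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0
        self._tail = 0

    def _size(self):
        cap = len(self._buf)
        return (self._tail - self._head + cap) % cap

    def enqueue(self, x):
        if self._size() == len(self._buf) - 1:
            raise OverflowError("queue full")
        self._buf[self._tail] = x
        self._tail = (self._tail + 1) % len(self._buf)

    def dequeue(self):
        if self._head == self._tail:
            raise IndexError("queue empty")
        x = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        return x
```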

11

Double-Ended Queue (Deque)

A deque (pronounced “deck”) supports insertion and removal at both ends in O(1).

Operations

Operation      Description
pushFront(x)   Insert at the front
pushBack(x)    Insert at the rear
popFront()     Remove from the front
popBack()      Remove from the rear
peekFront()    View front element
peekBack()     View rear element

Implementations

Circular array buffer

Same as queue, but head can move backwards too

Doubly linked list

Natural fit — insert/remove at either end is O(1)

Block-based (std::deque)

Map of fixed-size blocks for pointer stability and cache friendliness

A deque restricted to one end is a stack. Restricted to pushBack + popFront, it is a queue. Python's collections.deque is recommended for both.
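
collections.deque in both roles, for illustration:

```python
from collections import deque

# As a stack: push and pop the same end (LIFO)
stack = deque()
stack.append(1)          # pushBack
stack.append(2)
top = stack.pop()        # popBack → 2

# As a queue: push one end, pop the other (FIFO)
queue = deque()
queue.append('a')        # pushBack
queue.append('b')
first = queue.popleft()  # popFront → 'a'
```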

12

Priority Queue

A collection where each element has a priority and the highest-priority element is always dequeued first.

Operations

Operation          Description                        Heap time
insert(x, p)       Add element with priority          O(log n)
extractMin()       Remove highest-priority element    O(log n)
peek()             View highest-priority element      O(1)
changePriority()   Update an element's priority       O(log n)

A sorted array gives O(1) extract but O(n) insert. A heap gives O(log n) for both — the right trade-off for dynamic workloads.

Use cases

Dijkstra's shortest path — extract nearest unvisited vertex
A* search — extract node with lowest f(n) = g(n) + h(n)
Huffman coding — extract two lowest-frequency nodes
OS scheduling — extract highest-priority runnable thread
Event simulation — extract event with earliest timestamp
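
Python's heapq module implements a min-heap over a plain list; priorities can be encoded as (priority, item) tuples (a sketch):

```python
import heapq

tasks = []
heapq.heappush(tasks, (3, "low"))     # insert(x, p)
heapq.heappush(tasks, (1, "urgent"))
heapq.heappush(tasks, (2, "normal"))

# extractMin: heappop always returns the smallest priority first
order = [heapq.heappop(tasks)[1] for _ in range(len(tasks))]
# order == ["urgent", "normal", "low"]
```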
13

Binary Heap Implementation

A binary heap is a complete binary tree stored in a flat array. No pointers needed.

Array layout (min-heap)

Index:   0   1   2   3   4   5   6
Value: [10, 15, 20, 25, 30, 35, 40]

          10          Parent:  (i-1)/2
         /  \         Left:    2i + 1
       15    20       Right:   2i + 2
      / \    / \
    25  30  35  40

Heap operations

  • Sift-up (after insert): place at end, swap with parent while smaller. O(log n)
  • Sift-down (after extract): move last to root, swap with smallest child. O(log n)
  • Heapify (build from array): sift-down on each non-leaf, bottom up. O(n)

Why O(n) heapify?

Most nodes are near the bottom and sift down only 1–2 levels. The sum forms a convergent geometric series:

n/4 × 1 + n/8 × 2 + n/16 × 3 + … = O(n)

This is why heapq.heapify() in Python and std::make_heap in C++ can build a heap in linear time — a non-obvious but critical result, and the reason heapsort's build phase is O(n) rather than O(n log n).
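
Bottom-up heapify as a sketch; sift_down is the same routine used after extractMin:

```python
def sift_down(a, i, n):
    """Restore the min-heap property at index i (heap of size n)."""
    while True:
        smallest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[left] < a[smallest]:
            smallest = left
        if right < n and a[right] < a[smallest]:
            smallest = right
        if smallest == i:
            return                    # both children are larger: done
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest                  # continue sifting down

def heapify(a):
    """Build a min-heap in place in O(n): sift down every
    non-leaf node, bottom up."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
```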

14

Monotonic Stack

A stack that maintains elements in monotonically increasing or decreasing order. When a new element violates the monotonicity, elements are popped until order is restored.

Pattern: Next Greater Element

def next_greater(nums):
    n = len(nums)
    result = [-1] * n
    stack = []    # indices; values decreasing
    for i in range(n):
        while stack and nums[i] > nums[stack[-1]]:
            idx = stack.pop()
            result[idx] = nums[i]
        stack.append(i)
    return result

# nums   = [2, 1, 4, 3, 5]
# result = [4, 4, 5, 5, -1]

Each element is pushed once and popped at most once. Total: O(n) despite the nested loop.

Classic problems

Next greater / smaller element

Left or right variant — the foundational pattern

Largest rectangle in histogram

O(n) using a monotonic increasing stack

Trapping rain water

Monotonic stack or two-pointer approach

Daily temperatures

Days until a warmer day — direct application

15

Monotonic Queue

A deque that maintains elements in monotonic order, enabling sliding-window min/max queries in O(n) total.

Sliding window maximum

from collections import deque

def max_sliding_window(nums, k):
    dq = deque()   # indices; values decreasing
    result = []
    for i, x in enumerate(nums):
        while dq and dq[0] < i - k + 1:
            dq.popleft()        # outside window
        while dq and nums[dq[-1]] <= x:
            dq.pop()            # maintain order
        dq.append(i)
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result

# nums=[1,3,-1,-3,5,3,6,7], k=3
# result=[3,3,5,5,6,7]

Complexity

Time: O(n)

Each element enters and leaves the deque at most once

Space: O(k)

Deque holds at most k elements

A naive approach checks all k elements per window: O(nk). The monotonic deque eliminates redundant comparisons by discarding elements that can never be the maximum.

16

Min-Stack / Max-Stack in O(1)

Track the minimum (or maximum) at every point in time using an auxiliary stack.

Auxiliary stack approach

class MinStack:
    def __init__(self):
        self.stack = []
        self.min_stack = []  # parallel minimums

    def push(self, x):
        self.stack.append(x)
        m = min(x, self.min_stack[-1]) \
            if self.min_stack else x
        self.min_stack.append(m)

    def pop(self):
        self.min_stack.pop()
        return self.stack.pop()

    def getMin(self):
        return self.min_stack[-1]  # O(1)

All operations: O(1)

Operation   Main stack   Min stack
push(x)     push x       push min(x, cur)
pop()       pop          pop
getMin()    —            peek

Alternative: encoded values

Store 2*x - current_min when x < current_min. On pop, decode the previous min. Uses O(1) extra space but risks integer overflow.
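
The encoded-value variant as a sketch (Python integers cannot overflow, but fixed-width languages must take care):

```python
class EncodedMinStack:
    """Min-stack using one extra variable instead of a second stack.

    When pushing x < cur_min, store the sentinel 2*x - cur_min
    (always less than x); popping a value below cur_min signals
    that the previous minimum must be decoded."""

    def __init__(self):
        self.stack = []
        self.cur_min = None

    def push(self, x):
        if not self.stack:
            self.stack.append(x)
            self.cur_min = x
        elif x < self.cur_min:
            self.stack.append(2 * x - self.cur_min)  # encoded value
            self.cur_min = x
        else:
            self.stack.append(x)

    def pop(self):
        top = self.stack.pop()
        if top < self.cur_min:              # encoded: decode prev min
            real = self.cur_min
            self.cur_min = 2 * real - top
            return real
        return top

    def get_min(self):
        return self.cur_min
```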

17

Queue Using Two Stacks

Implement a FIFO queue using two LIFO stacks. Amortised O(1) per operation.

Strategy

  • Stack IN — receives all enqueued elements
  • Stack OUT — serves all dequeue requests
  • When OUT is empty, pour all of IN into OUT (reversing order)

Trace

enqueue(1): IN=[1]     OUT=[]
enqueue(2): IN=[1,2]   OUT=[]
dequeue():  IN=[]      OUT=[2,1] ← pour
            IN=[]      OUT=[2]   → 1
enqueue(3): IN=[3]     OUT=[2]
dequeue():  IN=[3]     OUT=[]    → 2
dequeue():  IN=[]      OUT=[3]   ← pour
            IN=[]      OUT=[]    → 3

Amortised analysis

Each element moves IN → OUT exactly once

Over n operations, total moves = n. Each operation is amortised O(1).

This is a popular interview question, but it also has real uses: functional languages without mutable arrays implement queues this way (e.g. Okasaki's persistent queue).
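
The IN/OUT strategy as a sketch:

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks; amortised O(1) per op."""

    def __init__(self):
        self._in = []     # receives all enqueued elements
        self._out = []    # serves all dequeue requests

    def enqueue(self, x):
        self._in.append(x)

    def dequeue(self):
        if not self._out:
            # pour: popping IN reverses order, so the oldest
            # element ends up on top of OUT
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()
```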

18

Stack Using Two Queues

Implement a LIFO stack using two FIFO queues. Two strategies exist.

Strategy 1: Costly push — O(n) push, O(1) pop

push(x):
    enqueue x into Q2
    while Q1 is not empty:
        dequeue from Q1 → enqueue into Q2
    swap Q1 and Q2

pop():
    dequeue from Q1

Best when many more pops than pushes.

Strategy 2: Costly pop — O(1) push, O(n) pop

push(x):
    enqueue x into Q1

pop():
    while Q1 has more than 1 element:
        dequeue from Q1 → enqueue into Q2
    result = dequeue from Q1
    swap Q1 and Q2
    return result

Best when many more pushes than pops.

Neither approach achieves amortised O(1) for both operations. This is why the two-stack queue is preferred — it achieves amortised O(1) for all operations.
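
Strategy 1 (costly push) as a sketch, using collections.deque strictly as FIFO queues:

```python
from collections import deque

class TwoQueueStack:
    """LIFO stack from two FIFO queues — costly-push strategy:
    O(n) push keeps the newest element at the front of q1,
    so pop is a plain O(1) dequeue."""

    def __init__(self):
        self.q1 = deque()
        self.q2 = deque()

    def push(self, x):
        self.q2.append(x)
        while self.q1:                   # move older elements behind x
            self.q2.append(self.q1.popleft())
        self.q1, self.q2 = self.q2, self.q1

    def pop(self):
        return self.q1.popleft()         # newest element is in front
```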

19

Concurrent Queues

In multi-threaded systems, queues must handle simultaneous producers and consumers safely.

Lock-free queue (Michael & Scott, 1996)

Uses Compare-And-Swap (CAS) to atomically update head and tail pointers without locks.

CAS(addr, expected, new):
    atomically:
        if *addr == expected:
            *addr = new
            return true
        return false
  • Enqueue: CAS tail's next from null to new node, then CAS tail forward
  • Dequeue: CAS head from current to head.next
  • If CAS fails, another thread succeeded — retry
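
Hand-rolling lock-free structures is rarely necessary. In Python, the standard mutex-based option is queue.Queue, which safely connects producer and consumer threads (a minimal sketch; the None sentinel is an illustrative convention, not part of the API):

```python
import queue
import threading

q = queue.Queue()              # internally synchronised with a lock

def producer():
    for i in range(5):
        q.put(i)               # thread-safe enqueue
    q.put(None)                # sentinel: signals "no more items"

def consumer(results):
    while True:
        item = q.get()         # blocks until an item is available
        if item is None:
            break
        results.append(item)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
# results == [0, 1, 2, 3, 4]
```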

Trade-offs

Approach          Throughput   Complexity
Mutex queue       Moderate     Low
Lock-free (CAS)   High         High
Wait-free         Highest      Very high

Standard library support

  • Java — ConcurrentLinkedQueue
  • Go — channels (CSP)
  • Rust — crossbeam::queue
  • C++ — boost::lockfree
20

Applications — BFS, Scheduling, Undo

Breadth-First Search

A queue drives BFS: enqueue start, repeatedly dequeue a node, process it, enqueue unvisited neighbours. Guarantees shortest path in unweighted graphs.

queue = [start]; mark start visited
while queue:
    node = dequeue()
    process(node)
    for n in neighbours(node):
        if not visited(n):
            mark n visited
            enqueue(n)
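
A runnable BFS over an adjacency-list graph; nodes are marked visited as they are enqueued, so no node enters the queue twice:

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in BFS order from start.
    graph: dict mapping node -> list of neighbours."""
    visited = {start}
    q = deque([start])
    order = []
    while q:
        node = q.popleft()
        order.append(node)
        for n in graph[node]:
            if n not in visited:
                visited.add(n)   # mark on enqueue, not on dequeue
                q.append(n)
    return order
```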

Task Scheduling

  • Round-robin CPU — processes cycle through a ready queue
  • Thread pool — tasks enqueued by producers, dequeued by workers
  • Message queues — Kafka, RabbitMQ, SQS: async producer/consumer

Undo / Redo

Stack-based: every action → undo stack. On undo: pop undo, push to redo. On redo: reverse.

Action → push undo
Undo   → pop undo, push redo
Redo   → pop redo, push undo
New action → clear redo
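
The rules above as a sketch:

```python
class History:
    """Undo/redo using two stacks of actions."""

    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def do(self, action):
        self.undo_stack.append(action)
        self.redo_stack.clear()          # new action invalidates redo

    def undo(self):
        action = self.undo_stack.pop()
        self.redo_stack.append(action)
        return action

    def redo(self):
        action = self.redo_stack.pop()
        self.undo_stack.append(action)
        return action
```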

Structure → Application mapping

Structure        Applications
Stack            Call stack, undo, expression eval, DFS, backtracking
Queue            BFS, scheduling, buffering, message passing
Priority queue   Dijkstra, A*, Huffman, OS scheduling
Deque            Sliding window, work stealing, palindrome checking
21

Summary & Further Reading

Key takeaways

  • Stacks and queues are ADTs — define by operations and invariants, not implementation
  • Array-backed implementations dominate due to cache locality
  • The circular buffer solves the dequeue-shift problem with modular arithmetic
  • Monotonic stacks/queues reduce common problems from O(n²) to O(n)
  • A min-stack augments the stack ADT with O(1) minimum queries
  • Two stacks make a queue with amortised O(1) — a foundational result
  • Binary heaps power shortest-path and greedy algorithms
  • Lock-free concurrent queues use CAS to avoid mutex bottlenecks

Recommended reading

Source            Description
Cormen et al.     Introduction to Algorithms (CLRS) — heaps, stacks, queues
Okasaki           Purely Functional Data Structures — persistent queues
Sedgewick         Algorithms, 4th ed. — practical Java implementations
Michael & Scott   Non-blocking concurrent queue algorithms (1996)
Skiena            Algorithm Design Manual — when to use which structure