From Stacks and Queues to Graph Algorithms
Primitive Containers as the Hidden Machinery of Computer Science
One of the beautiful things about classic algorithm books is that they begin with very humble objects: stacks, queues, arrays, linked lists. At first sight this can feel almost too elementary. But then, a few chapters later, you are doing graph traversal, shortest paths, dynamic programming, scheduling, search, parsing, caching, distributed systems. And strangely, those same humble objects are still there.
That is the main thesis of this post: most higher-order algorithms are not built from nothing; they are built from primitive containers. A graph algorithm is often just a graph plus a queue, or a graph plus a stack and a set, or a graph plus a heap and a dictionary.
1. Primitive Containers: the Small Objects Behind Big Algorithms
Here are the main primitive containers you see again and again:
| Container | Main Idea | Typical Operations | Why It Matters |
|---|---|---|---|
| Stack | LIFO ordering | push, pop | Backtracking, recursion, DFS |
| Queue | FIFO ordering | append, popleft | BFS, level-order exploration, scheduling |
| Deque | Both ends available | append, appendleft, pop, popleft | Sliding windows, 0-1 BFS, caches |
| Set | Fast membership, uniqueness | add, in | Visited nodes, deduplication, cycle detection |
| Dictionary / Hash Map | Key → value mapping | get, [key] | Distances, parents, counts, memoization |
| Heap / Priority Queue | Extract best element quickly | heappush, heappop | Dijkstra, Prim, event simulation, greedy search |
| Array / List | Indexed storage | a[i], append | DP tables, adjacency lists, dense state |
If you get these objects deeply into your fingers, a lot of "advanced algorithms" start feeling less mysterious. What changes is often not the soul of the computation, but the container controlling it.
2. The First Great Distinction: Stack vs Queue
Let us start with the two classic siblings. Both store a frontier of work to do. But each imposes a different temporal structure on the algorithm.
Stack: depth, recursion, backtracking
A stack returns the most recently added element. It behaves like a pile of plates. This is why it naturally models recursion and depth-first exploration.
```python
stack = []
stack.append("A")  # push
stack.append("B")
stack.append("C")

print(stack.pop())  # C
print(stack.pop())  # B
```
Typical uses:
- Depth-first search
- Recursion simulation
- Backtracking
- Parsing and expression evaluation
- Undo mechanisms
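As a quick illustration of the last item, here is a minimal undo sketch; the `history` and `apply_edit` names are invented for this example:

```python
# Minimal undo sketch: each edit pushes the previous state onto a stack.
history = []  # stack of earlier states
text = ""

def apply_edit(new_text):
    """Apply an edit, remembering the old state for undo."""
    global text
    history.append(text)  # push current state before overwriting it
    text = new_text

def undo():
    """Revert to the most recent previous state."""
    global text
    if history:
        text = history.pop()  # pop restores the last state

apply_edit("hello")
apply_edit("hello world")
undo()
print(text)  # hello
```

The stack guarantees that undo always reverts the most recent change first, which is exactly the "last unfinished work" mental model.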
Why DFS loves the stack: DFS explores one path as far as possible, then backtracks. That is exactly a stack-shaped behavior.
```python
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": []
}

stack = ["A"]
visited = set()

while stack:
    node = stack.pop()
    if node in visited:
        continue
    visited.add(node)
    print(node)
    for neighbor in reversed(graph[node]):
        stack.append(neighbor)
```
The reversed() call is not required for correctness. It just makes neighbors come off the stack in left-to-right order, which keeps the printed traversal visually intuitive.
Queue: breadth, layers, wavefronts
A queue returns the oldest element first. It behaves like a line of people. This is why it naturally models breadth-first exploration and level-order processes.
```python
from collections import deque

queue = deque()
queue.append("A")  # enqueue
queue.append("B")
queue.append("C")

print(queue.popleft())  # A
print(queue.popleft())  # B
```
Typical uses:
- Breadth-first search
- Shortest path in unweighted graphs
- Multi-source propagation
- Task scheduling
- Topological sort (Kahn's algorithm)
```python
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": []
}

queue = deque(["A"])
visited = {"A"}

while queue:
    node = queue.popleft()
    print(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            visited.add(neighbor)
            queue.append(neighbor)
```
BFS feels like a wave expanding outward. The queue is what makes that wave possible.
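Kahn's topological sort, mentioned in the list above, is the same queue pattern with an indegree dictionary added. A minimal sketch on a small DAG:

```python
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": []
}

# Count incoming edges for each node.
indegree = {node: 0 for node in graph}
for node in graph:
    for neighbor in graph[node]:
        indegree[neighbor] += 1

# Start from every node with no prerequisites.
queue = deque(node for node in graph if indegree[node] == 0)
order = []

while queue:
    node = queue.popleft()
    order.append(node)
    for neighbor in graph[node]:
        indegree[neighbor] -= 1
        if indegree[neighbor] == 0:
            queue.append(neighbor)  # all prerequisites done

print(order)  # ['A', 'B', 'C', 'D']
```

The queue holds exactly the nodes whose prerequisites are satisfied, so the output respects every edge direction.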
3. Sets: the Memory of the Explored World
A set is usually not the star of the show, but it is often the difference between a working algorithm and an infinite loop. In graph problems, sets commonly store what has already been seen.
```python
visited = set()
if node not in visited:
    visited.add(node)
```
Typical uses:
- Visited tracking in DFS/BFS
- Cycle detection
- Removing duplicates
- Checking whether a state has already been processed
Conceptually, a set is the algorithm's memory of the already-known universe.
Set membership is average-case O(1) because it is hash-based. But that depends on the elements being hashable and on hash-table behavior; in a tree-based set in another language, membership might instead be O(log n).
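Cycle detection, mentioned in the list above, is a good example of sets as algorithmic memory: one set for finished nodes, one for the current DFS path. A minimal sketch for directed graphs (the function and set names are illustrative):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph using two sets."""
    done = set()     # nodes fully explored, known to be safe
    on_path = set()  # nodes on the current DFS path

    def dfs(node):
        if node in on_path:
            return True   # back edge: we returned to an active node
        if node in done:
            return False  # already explored, no cycle through here
        on_path.add(node)
        for neighbor in graph[node]:
            if dfs(neighbor):
                return True
        on_path.remove(node)  # leaving the path
        done.add(node)
        return False

    return any(dfs(node) for node in graph)

print(has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
print(has_cycle({"A": ["B"], "B": ["C"], "C": []}))     # False
```

The two sets correspond to the "active vs done" distinction in the mapping table later in this post.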
4. Dictionaries: state, index, memoization
A dictionary maps keys to values. This makes it a perfect tool whenever an algorithm needs to attach information to entities: distance to node, parent pointer, count, score, cached subproblem answer.
```python
dist = {"A": 0}
parent = {"A": None}
count = {"apple": 3}
```
Typical uses:
- Distance maps in shortest path algorithms
- Parent reconstruction in BFS/DFS
- Memoization in dynamic programming
- Frequency counting
- Adjacency lists
Example: BFS with distances
```python
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": []
}

queue = deque(["A"])
dist = {"A": 0}

while queue:
    node = queue.popleft()
    for neighbor in graph[node]:
        if neighbor not in dist:
            dist[neighbor] = dist[node] + 1
            queue.append(neighbor)

print(dist)
```
Here the dictionary is not just storage. It is the mathematical state of the algorithm.
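The same dictionary becomes a memoization cache almost for free. A minimal sketch with Fibonacci:

```python
memo = {}  # dictionary as the subproblem cache

def fib(n):
    """Fibonacci with dictionary memoization."""
    if n in memo:
        return memo[n]  # reuse a cached answer
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    memo[n] = result    # cache before returning
    return result

print(fib(30))  # 832040
```

Without the dictionary this recursion is exponential; with it, each subproblem is solved exactly once.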
5. Deque: when you need both ends
A deque is a double-ended queue. It often appears when an algorithm needs to push or pop from both left and right efficiently.
```python
from collections import deque

dq = deque([2, 3])
dq.append(4)      # right
dq.appendleft(1)  # left
print(dq)         # deque([1, 2, 3, 4])
```
Typical uses:
- Sliding window maximum
- 0-1 BFS
- LRU-like policies
- Bidirectional processing
Example: why deque matters in Python
Python lists are great for append() and pop() at the end, but removing from the front with pop(0) is O(n), because all remaining elements shift. By contrast, collections.deque gives O(1) appends and pops from both ends.
```python
from collections import deque

# Good queue
q = deque()
q.append("task")
item = q.popleft()  # O(1)

# Bad large-queue pattern with a list
lst = ["task1", "task2", "task3"]
item = lst.pop(0)   # O(n)
```
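The sliding window maximum mentioned above is the classic deque showcase: a monotonic deque of indices keeps the current maximum at the front. A minimal sketch:

```python
from collections import deque

def sliding_window_max(nums, k):
    """Maximum of each window of size k, via a monotonic deque of indices."""
    dq = deque()  # indices; their values decrease from front to back
    result = []
    for i, x in enumerate(nums):
        # Drop indices whose values can never be a window maximum again.
        while dq and nums[dq[-1]] <= x:
            dq.pop()
        dq.append(i)
        # Drop the front index once it falls outside the window.
        if dq[0] <= i - k:
            dq.popleft()
        if i >= k - 1:
            result.append(nums[dq[0]])  # front holds the maximum
    return result

print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]
```

Each index is appended and popped at most once, so the whole pass is O(n) even though every window is answered.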
6. Heap / Priority Queue: when order is not time, but score
A queue gives "oldest first". A stack gives "newest first". A heap gives "best first", according to a priority.
In Python, the standard tool is heapq, which implements a min-heap.
```python
import heapq

heap = []
heapq.heappush(heap, (5, "low"))
heapq.heappush(heap, (1, "high"))
heapq.heappush(heap, (3, "medium"))

print(heapq.heappop(heap))  # (1, 'high')
```
Typical uses:
- Dijkstra's algorithm
- Prim's minimum spanning tree
- A* search
- Event simulation
- Scheduling by priority
Example: Dijkstra skeleton
```python
import heapq

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": []
}

dist = {"A": 0}
heap = [(0, "A")]

while heap:
    current_dist, node = heapq.heappop(heap)
    if current_dist > dist[node]:
        continue  # stale entry, a shorter path was already found
    for neighbor, weight in graph[node]:
        new_dist = current_dist + weight
        if neighbor not in dist or new_dist < dist[neighbor]:
            dist[neighbor] = new_dist
            heapq.heappush(heap, (new_dist, neighbor))

print(dist)
```
Dijkstra is a graph plus a heap plus a dictionary. That is already a powerful way to remember it.
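Adding a parent dictionary to the skeleton above recovers the actual path, not just the distance. A sketch on the same graph:

```python
import heapq

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": []
}

dist = {"A": 0}
parent = {"A": None}  # parent pointers for path reconstruction
heap = [(0, "A")]

while heap:
    current_dist, node = heapq.heappop(heap)
    if current_dist > dist[node]:
        continue
    for neighbor, weight in graph[node]:
        new_dist = current_dist + weight
        if neighbor not in dist or new_dist < dist[neighbor]:
            dist[neighbor] = new_dist
            parent[neighbor] = node  # remember how we reached neighbor
            heapq.heappush(heap, (new_dist, neighbor))

# Walk parent pointers backward from the target.
path, node = [], "D"
while node is not None:
    path.append(node)
    node = parent[node]
print(path[::-1])  # ['A', 'B', 'C', 'D']
```

The parent dictionary is the same "state indexed by meaning" idea as the distance map, just storing edges instead of numbers.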
7. Arrays and Lists: dense state and direct indexing
Arrays or array-like lists are indispensable when the state space is dense and indexable. Dynamic programming often lives here.
```python
# Fibonacci bottom-up
n = 10
dp = [0] * (n + 1)
dp[1] = 1
for i in range(2, n + 1):
    dp[i] = dp[i - 1] + dp[i - 2]
print(dp[n])
```
Typical uses:
- Dynamic programming tables
- Adjacency matrices
- Prefix sums
- Counting arrays
- Sorting buffers
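Prefix sums from the list above show the array idea at its cleanest: one preprocessing loop turns every range-sum query into a subtraction. A minimal sketch:

```python
nums = [3, 1, 4, 1, 5, 9]

# prefix[i] holds the sum of the first i elements of nums.
prefix = [0] * (len(nums) + 1)
for i, x in enumerate(nums):
    prefix[i + 1] = prefix[i] + x

def range_sum(left, right):
    """Sum of nums[left:right] in O(1) after O(n) preprocessing."""
    return prefix[right] - prefix[left]

print(range_sum(1, 4))  # 1 + 4 + 1 = 6
```

Direct indexing is what makes this work: the dense, contiguous state is addressed by position, not by key.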
8. A Map from Primitive Containers to Higher-Order Algorithms
| Higher-Level Algorithm / Pattern | Main Primitive Container(s) | Why |
|---|---|---|
| DFS | Stack + Set | Depth exploration with visited tracking |
| BFS | Queue + Set/Dictionary | Layered expansion with memory of seen nodes |
| Dijkstra | Heap + Dictionary | Always expand cheapest known state next |
| A* | Heap + Dictionary + Heuristic | Priority by estimated total cost |
| Topological Sort (Kahn) | Queue + Dictionary | Process indegree-zero nodes in order |
| Memoized DP | Dictionary | Cache subproblem results |
| Tabulation DP | Array / Matrix | Structured state table |
| Sliding Window Maximum | Deque | Efficient front and back maintenance |
| Cycle Detection | Set + Stack/Recursion | Track what is active and what is done |
| Frequency Counting | Dictionary | Map item to count |
| Caching | Dictionary + Deque/Linked Order | Fast lookup plus eviction order |
9. Complexity: not just asymptotics, but language reality
This part is crucial. It is not enough to know the abstract container. You also need to know how your language implements it. In interviews and production code alike, the same abstract idea can have very different runtime behavior depending on the language feature you pick.
Complexity cheat sheet in Python
| Container / Operation | Average Time | Space Notes | Language / Implementation Notes |
|---|---|---|---|
| list.append(x) | Amortized O(1) | May over-allocate capacity | Python list is a dynamic array |
| list.pop() | O(1) | No extra space | Good stack primitive |
| list.pop(0) | O(n) | Shifts elements | Bad queue primitive for large workloads |
| deque.append() | O(1) | Efficient both ends | Use for queues |
| deque.popleft() | O(1) | Efficient both ends | Main reason deque beats list for BFS queues |
| x in set | Average O(1) | Hash table overhead | Requires hashable keys |
| d[key] | Average O(1) | Hash table overhead | Worst-case can degrade, though rarely in practice |
| heapq.heappush/pop | O(log n) | Stored in list-backed heap | No custom heap object; functions operate on lists |
| list[i] | O(1) | Contiguous dynamic array | Excellent for dense DP and buffers |
Why language features matter
Consider two implementations of BFS:
Bad BFS queue choice
```python
# Avoid for large BFS
queue = ["A"]
while queue:
    node = queue.pop(0)  # O(n)
```
Good BFS queue choice
```python
from collections import deque

queue = deque(["A"])
while queue:
    node = queue.popleft()  # O(1)
```
At the algorithm level, both are "BFS with a queue". But at the language level, one scales well and the other quietly sabotages the algorithm.
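The sabotage is easy to measure. A hedged micro-benchmark sketch with timeit (absolute numbers vary by machine, but the gap grows with N because the list version is quadratic overall):

```python
from collections import deque
import timeit

N = 20_000

def drain_list():
    q = list(range(N))
    while q:
        q.pop(0)       # O(n) per pop: shifts every remaining element

def drain_deque():
    q = deque(range(N))
    while q:
        q.popleft()    # O(1) per pop

list_time = timeit.timeit(drain_list, number=3)
deque_time = timeit.timeit(drain_deque, number=3)
print(f"list.pop(0): {list_time:.3f}s  deque.popleft(): {deque_time:.3f}s")
```

Running this on any recent CPython should show the deque draining far faster, which is the whole point of the table above.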
Space complexity is part of the story too
DFS and BFS may both visit all nodes and edges in O(V + E) time, but their space profile differs:
- DFS: uses stack depth proportional to current exploration depth, worst-case O(V)
- BFS: uses queue proportional to frontier width, worst-case O(V)
The difference is practical, not only symbolic. On wide graphs, BFS can hold a large frontier. On deep recursive traversals, recursive DFS can hit language recursion limits.
10. The "container + invariant" way of reading algorithms
A powerful way to learn algorithms is to stop memorizing them as opaque recipes and instead identify three things:
- What is the frontier?
- What is the memory?
- What is the ordering principle?
Examples:
| Algorithm | Frontier | Memory | Ordering Principle |
|---|---|---|---|
| BFS | Queue | Set / Dictionary | Oldest discovered first |
| DFS | Stack | Set | Most recent branch first |
| Dijkstra | Heap | Distance dictionary | Smallest tentative distance first |
| Memoization | Recursive calls | Dictionary | Only solve unseen subproblems |
This framing is especially useful in interviews, systems design, and production debugging. Often the bug is not in the "algorithm idea". It is in the wrong container choice, or in violating the container's intended invariant.
11. Small Python examples by container
Stack example: balanced parentheses
```python
def is_balanced(s: str) -> bool:
    stack = []
    pairs = {')': '(', ']': '[', '}': '{'}
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack
```
Pattern: last-opened must match first-closed. That is stack logic.
Queue example: shortest path in unweighted graph
```python
from collections import deque

def shortest_steps(graph, start, target):
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None
```
Pattern: wavefront exploration discovers shortest unweighted paths.
Set example: dedup preserving only uniqueness information
```python
items = ["a", "b", "a", "c", "b"]
unique = set(items)
print(unique)
```
Pattern: you care about membership, not order or multiplicity.
Dictionary example: frequency counter
```python
text = "banana"
freq = {}
for ch in text:
    freq[ch] = freq.get(ch, 0) + 1
print(freq)
```
Pattern: symbolic state accumulation.
Heap example: top-k smallest values
```python
import heapq

nums = [8, 2, 5, 1, 9, 3]
heapq.heapify(nums)
k = 3
result = [heapq.heappop(nums) for _ in range(k)]
print(result)  # [1, 2, 3]
```
Pattern: repeatedly extract the current best candidate.
12. Space/time complexity, but with behavior in mind
It is tempting to stop at asymptotic notation, but there is a more mature way to think: complexity tells you the broad class; language features tell you the lived behavior.
For example:
- A Python dict gives average O(1) lookup, but uses significant overhead compared with a tight array.
- A Python list is excellent for stack behavior, but poor for front-removal queue behavior.
- A heap gives O(log n) insertion/removal, but may still be much faster than sorting the whole collection each iteration.
- A recursive algorithm may be elegant mathematically but undesirable operationally in Python if recursion depth is large.
So when choosing a container, ask both:
- What is the asymptotic complexity?
- What is the concrete runtime behavior in this language?
13. A philosophical closing: big algorithms, small organs
There is a quiet lesson here. Computer science often looks like it is about giant abstractions: graphs, parsers, operating systems, databases, machine learning systems, compilers. But again and again, the machinery underneath is built from small, disciplined objects.
A stack is not just a stack. It is depth, backtracking, nested time. A queue is not just a queue. It is breadth, layering, wave propagation. A set is not just a container. It is memory of what the algorithm already knows. A dictionary is not just lookup. It is state indexed by meaning. A heap is not just an optimization. It is a way to make "best next" operational.
That is why introductory data structures never really disappear. They become the hidden organs of higher reasoning.
14. Final cheat sheet
| Primitive Container | Best Mental Model | Classic Algorithms | Python Best Choice |
|---|---|---|---|
| Stack | Depth / last unfinished work | DFS, parsing, backtracking | list |
| Queue | Wavefront / oldest unfinished work | BFS, scheduling | collections.deque |
| Deque | Two-ended control | 0-1 BFS, sliding window | collections.deque |
| Set | Membership memory | Visited tracking, deduplication | set |
| Dictionary | State by key | Distances, counts, memoization | dict |
| Heap | Best candidate first | Dijkstra, Prim, A* | heapq |
| Array / List | Dense indexed state | DP tables, buffers | list |
If you want to extend this post later, the natural sequel is: "From Primitive Containers to Algorithmic Patterns" — frontier search, dynamic programming, greedy choice, divide and conquer, and connectivity.