Threads do not operate in isolation. The most common interactions:
T5 × T7 — Caching and State Machines: caches are state. If you cache the result of a stateful computation, the cache must be invalidated whenever the state machine transitions. Forgetting this is how stale data produces wrong answers.
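A minimal sketch of the coupling: the class, state names, and cached field below are all illustrative, but the shape is the point — every transition clears the cache, so a stale summary can never outlive the state that produced it.

```python
from enum import Enum, auto

class OrderState(Enum):
    PENDING = auto()
    PAID = auto()
    SHIPPED = auto()

class Order:
    """Hypothetical order whose summary is cached until the state changes."""
    def __init__(self):
        self.state = OrderState.PENDING
        self._summary_cache = None  # cached result of a stateful computation

    def transition(self, new_state):
        self.state = new_state
        self._summary_cache = None  # the transition invalidates the cache

    def summary(self):
        # Recompute only on a cache miss; the transition above forces the miss.
        if self._summary_cache is None:
            self._summary_cache = f"order in state {self.state.name}"
        return self._summary_cache
```

The invalidation lives inside `transition`, not beside each call site: if callers must remember to clear the cache, one of them eventually will not.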
T1 × T6 — Hashing and Redundancy: consistent hashing (T1) is the mechanism that makes distributed replication (T6) tractable. Without it, adding a replica requires remapping all keys. With it, only the adjacent segment migrates.
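A toy ring illustrating the claim, with illustrative node names and a small virtual-node count; real implementations differ in hash choice and vnode placement, but the property is the same: adding a node remaps only the keys in the segments it takes over, not the whole keyspace.

```python
import bisect
import hashlib

def _h(key: str) -> int:
    # Any uniform hash works; md5 is used here only for determinism.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring with virtual nodes (a sketch, not production code)."""
    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes
        self._hashes = []  # sorted vnode positions on the ring
        self._owners = []  # node owning each position
        for n in nodes:
            self.add(n)

    def add(self, node):
        for i in range(self.vnodes):
            h = _h(f"{node}#{i}")
            j = bisect.bisect(self._hashes, h)
            self._hashes.insert(j, h)
            self._owners.insert(j, node)

    def lookup(self, key):
        # The first vnode clockwise from the key's position owns the key.
        j = bisect.bisect(self._hashes, _h(key)) % len(self._hashes)
        return self._owners[j]
```

Mapping a few hundred keys, adding a fourth node, and remapping shows only a minority of keys move (roughly 1/4 in expectation for 3 → 4 nodes); a naive `hash(key) % n` would move almost all of them.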
T9 × T7 — Consensus and State: distributed systems replicate state (T7) across machines. Consensus (T9) is what determines which replica’s state is authoritative when they diverge.
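The smallest illustration of "authoritative when they diverge" is a majority quorum. This is a toy stand-in, not a consensus protocol: real systems (Raft, Paxos) also handle leader election, terms, and log replication, but the quorum rule beneath them is this.

```python
from collections import Counter

def quorum_value(replica_values):
    """Return the value held by a strict majority of replicas, or None if no quorum exists.

    A deliberately minimal sketch of the quorum rule at the heart of consensus (T9):
    divergent replicas (T7) are reconciled by whichever value a majority holds.
    """
    counts = Counter(replica_values)
    value, n = counts.most_common(1)[0]
    return value if n > len(replica_values) // 2 else None
```

The `None` branch is the interesting one: with no majority, no replica's state is authoritative, and the system must refuse to answer rather than guess.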
T11 × T12 — Feedback and Tradeoffs: every feedback loop encodes a tradeoff. A circuit breaker trades availability (some requests are refused) for stability (the system does not collapse). A rate limiter trades throughput for fairness. Name the tradeoff or the feedback loop’s behaviour will surprise you.
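The circuit-breaker tradeoff can be made concrete. This is a deliberately small sketch with illustrative thresholds (`max_failures`, `reset_after` are assumptions, not canonical values): the open state is exactly where availability is traded away, one refused request at a time, in exchange for not hammering a failing dependency.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: trades availability for stability (T11 × T12)."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None = closed; timestamp = open

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # The tradeoff, made explicit: refuse the request to protect the system.
                raise RuntimeError("circuit open: request refused")
            self.opened_at = None  # half-open: let one request probe the dependency
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip: feedback closes the loop
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Naming the tradeoff up front is what makes the behaviour unsurprising: anyone reading the `RuntimeError` branch knows refusals during the open window are the design, not a bug.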
Concept: The Twelve Recurring Threads
Thread: T12 ← Constrained optimisation (Book 1) → Series-wide architectural reasoning (all books)
Core Idea: Twelve patterns — hashing, trees, graphs, queues, caching, redundancy, state machines, divide & conquer, consensus, encoding, feedback loops, tradeoffs — recur across every layer of the computing stack. Name the thread and you navigate across layers without getting lost.
Tradeoff: Generality vs Specialisation (F4 #6) — the threads are general patterns; their specific applications are specialised; mastering both is the goal
Failure Mode: Observability Blindness (F3 #11) — engineers who cannot see the pattern beneath the technology reinvent the wheel at every layer
Signal: You see the same problem in three different systems; you are debugging a cross-layer incident; you are reading an unfamiliar codebase and want to understand its architecture quickly
Maps to: Reference Book Ch 2 (Knowledge Stack); all 7 series books — each book traces the threads through its layer