The Computing Series

Introduction

This is the most important chapter in Books 1 and 2.

Not because the algorithm is complicated — it is not. The algorithm is simple: before computing something, check whether you have already computed it. If yes, return the stored result. If no, compute it, store the result, and return.
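That check-then-compute pattern fits in a few lines. Here is a minimal sketch in Python (the decorator name and the example function are illustrative, not from the book):

```python
def memoize(fn):
    """Wrap fn so each distinct argument is computed at most once."""
    cache = {}  # argument tuple -> stored result

    def wrapper(*args):
        if args in cache:       # already computed? return the stored result
            return cache[args]
        result = fn(*args)      # otherwise compute it,
        cache[args] = result    # store the result,
        return result           # and return it
    return wrapper

@memoize
def square(n):
    return n * n

square(4)  # computed on the first call
square(4)  # served from the cache on the second
```

The wrapper never changes what the function returns; it only changes how often the underlying work runs.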

This chapter is important because it is the bridge chapter. Every piece of infrastructure you will study in Book 3 — caching layers, CDNs, read replicas, materialized views — is built on this idea. And every engineer who understands the idea at the algorithm level will understand infrastructure caching at the design level, without memorising a catalogue of systems.

In Book 1, Chapter 21, you added a dictionary to a recursive function and called it memoization. You saw it reduce an exponential-time computation to linear. Now you will see that the dictionary is a cache. The subproblem is the cache key. The computed result is the cached value. The only difference between a DP memo table and a Redis instance is this: Redis cannot rely on the underlying data remaining constant. The problem that follows from that difference is where all the complexity lives.
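The Book 1 construction can be recalled in a few lines (a sketch, assuming Python and Fibonacci as the recursive example; the book's own code may differ). The dictionary is the cache, the subproblem `n` is the cache key, and the computed Fibonacci number is the cached value:

```python
def fib(n, memo=None):
    """Fibonacci with a dictionary memo: the subproblem n is the cache key."""
    if memo is None:
        memo = {}
    if n in memo:                    # cache hit: no recomputation
        return memo[n]
    result = n if n < 2 else fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result                 # cache the value under its key
    return result

print(fib(90))  # 2880067194370816120 — linear time; naive recursion would take ~2^90 calls
```

Nothing here ever invalidates an entry, and that is exactly the luxury Redis lacks: `fib(40)` is `fib(40)` forever, but a cached database row can go stale the moment someone writes to the table.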
