The Computing Series

Exercises

Level 1 — Understand

  1. What consistency model does the distributed KV store provide when W=1, R=1, N=3? (Note that R + W = 2 ≤ N here.) Separately, why does the condition R + W > N guarantee that at least one node in any read quorum holds the latest write?
  2. Name three properties of consistent hashing that make it preferable to simple modulo hashing for distributed storage.
  3. What is the purpose of virtual nodes in a consistent hash ring, and what problem do they solve?
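The quorum-intersection property behind question 1 can be checked exhaustively rather than taken on faith. A minimal Python sketch (illustrative, not from the text) that enumerates every possible write quorum and read quorum over N replicas:

```python
from itertools import combinations

def quorums_always_overlap(n, w, r):
    """True iff every write quorum of size w shares at least one replica
    with every read quorum of size r (by pigeonhole, this holds iff w + r > n)."""
    replicas = range(n)
    return all(set(ws) & set(rs)
               for ws in combinations(replicas, w)
               for rs in combinations(replicas, r))

print(quorums_always_overlap(3, 1, 1))  # False: a read can miss the latest write
print(quorums_always_overlap(3, 2, 2))  # True: R+W=4 > N=3, quorums must intersect
```

With W=1, R=1, N=3 the check finds disjoint quorums, which is exactly why that configuration is only eventually consistent.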

Level 2 — Apply

  1. A distributed KV store has N=5 replicas. You set W=3 and R=3. (a) Does R + W > N hold? (b) Is this configuration strongly consistent? (c) If you change to W=1, R=1, what consistency guarantee does the system provide? (d) What FM code describes the failure risk of the W=1, R=1 configuration?

  2. A consistent hash ring has three nodes at positions 100, 300, and 500 on a 0–1000 ring. A new node joins at position 200. Which keys are reassigned? Which nodes are affected? How does this compare to a simple modulo scheme with N=3 adding one node to get N=4?
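The ring scenario in exercise 2 can be worked through with a short sketch. This is an illustrative model (not from the text), assuming the common convention that a key belongs to the first node clockwise at or after its position, wrapping around the ring:

```python
from bisect import bisect_left

RING_SIZE = 1000

def owner(key_pos, nodes):
    """Return the node responsible for key_pos: the first node at or after
    the key's position on the ring, wrapping past the end back to the start."""
    nodes = sorted(nodes)
    i = bisect_left(nodes, key_pos % RING_SIZE)
    return nodes[i % len(nodes)]

before = [100, 300, 500]        # original three nodes
after  = [100, 200, 300, 500]   # new node joins at position 200

# Only keys whose owner changed are reassigned.
moved = [k for k in range(RING_SIZE) if owner(k, before) != owner(k, after)]
print(min(moved), max(moved), len(moved))  # 101 200 100
```

Only keys in (100, 200] move, from the node at 300 to the new node at 200; every other key keeps its owner. Under modulo hashing, going from N=3 to N=4 would instead reassign most keys, since `k % 3` and `k % 4` rarely agree.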

Level 3 — Design

  1. A gaming platform stores player session state (current level, inventory, score) in a distributed KV store. Sessions are read on every game action (200ms intervals), written on every level-up or purchase. 10M concurrent players. Session reads must be strongly consistent — a player must always see their latest inventory. Design the quorum configuration. What AT1 tradeoff does strong consistency impose? What FM12 risk exists if the data centre hosting the primary quorum experiences a network partition? How would you detect and recover from it?

A complete answer will:

  1. Specify a concrete quorum configuration — e.g., N=3 replicas with W=2, R=2, satisfying W+R > N for strong consistency — and calculate the read/write QPS required (10M players at 200ms intervals = 50M reads/sec, requiring a sharded cluster).
  2. Name AT1 (Consistency/Availability) and state the cost precisely: strong quorum writes block until W replicas confirm and reads block until R replicas respond, so latency increases and availability drops during node failures.
  3. Name FM12 (network partition) and explain that a partition isolating the primary replica forces the quorum to either reject reads (preserving consistency) or serve stale data (violating it) — and specify which side of CAP the design chooses.
  4. Propose a concrete detection and recovery mechanism — e.g., heartbeat-based leader election with fencing tokens to prevent split-brain writes, and a read-repair protocol to reconcile diverged replicas after the partition heals.
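The configuration and throughput arithmetic in the rubric can be sanity-checked with a short sketch. The replica-level fan-out line is an assumption about how quorum reads are counted (each client read contacts R replicas), not something the text specifies:

```python
def strongly_consistent(n, w, r):
    """R + W > N forces every read quorum to intersect every write quorum."""
    return r + w > n

N, W, R = 3, 2, 2              # the rubric's example configuration
assert strongly_consistent(N, W, R)

players = 10_000_000
interval_s = 0.2               # one session read per 200 ms game action
client_reads = players / interval_s
replica_reads = client_reads * R   # assumption: each quorum read contacts R replicas

print(f"client reads/sec:  {client_reads:,.0f}")   # 50,000,000
print(f"replica reads/sec: {replica_reads:,.0f}")  # 100,000,000
```

The 50M/sec figure matches the rubric; the 100M/sec replica-level rate makes the case for sharding concrete, since no single quorum group can absorb that load.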
