Amazon DynamoDB provides tunable consistency per read (eventually consistent or strongly consistent). It adds automatic partitioning, DAX (a dedicated caching layer), and global tables for multi-region replication. The operational complexity of the underlying Dynamo-style design is hidden behind a managed service API.
Apache Cassandra exposes the quorum parameters directly. It uses leaderless replication (no primary node) and last-write-wins (LWW) conflict resolution by default. It excels at write-heavy workloads and time-series data. Read performance requires careful data modelling — secondary indexes are costly.
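A minimal sketch of last-write-wins reconciliation (illustrative only, not Cassandra's implementation): each replica response carries a write timestamp, and the coordinator resolves divergent replicas by keeping the highest one.

```python
# Hypothetical LWW helper: replica responses are (value, write_timestamp) pairs.
def lww_reconcile(replica_responses):
    """Pick the value with the highest write timestamp; ties are broken by
    value so every coordinator converges on the same answer."""
    return max(replica_responses, key=lambda r: (r[1], r[0]))[0]

# Three replicas disagree after a brief outage; the newest write wins.
replicas = [("v1", 1000), ("v3", 1002), ("v2", 1001)]
print(lww_reconcile(replicas))  # prints "v3"
```

Note the silent-data-loss hazard this implies: a write with an older timestamp is simply discarded, which is why LWW suits metrics and caches better than data requiring merge semantics.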
Redis Cluster uses a different sharding approach: 16,384 hash slots, each assigned to a primary node. Clients maintain a routing table and contact the correct primary directly. It provides strong consistency within a single shard but no cross-shard transactions. Suitable for caching and session storage, not general-purpose distributed storage.
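The slot mapping above is easy to reproduce: Redis Cluster hashes the key (or, if present, the substring inside the first `{...}` hash tag) with CRC16 and takes the result modulo 16,384. A self-contained sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16,384 hash slots, honouring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, enabling multi-key commands:
print(key_slot("{user:1000}.following") == key_slot("{user:1000}.followers"))  # True
```

Hash tags are the escape hatch for the no-cross-shard-transactions limitation: co-locating related keys in one slot allows atomic multi-key operations on that one shard.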
Concept: Distributed Key-Value Store
Thread: T1 (Hashing) ← Book 2, Ch 3 → Ch 4 (Distributed Cache); T6 (Redundancy) ← Book 3, Ch 5 → Ch 6 (API Gateway); T9 (Consensus) ← Book 3, Ch 8 → Ch 17 (Payment Processing)
Core Idea: Consistent hashing distributes keys across nodes so that topology changes affect only adjacent key ranges. Quorum reads and writes (R + W > N) provide tunable consistency without a central coordinator.
Tradeoff: AT1 — Consistency vs Availability: tunable quorum parameters let operators shift between strong consistency (high R+W) and high availability (low R+W) per operation.
Failure Mode: FM12 — Split-Brain: network partitions allow both sides to accept conflicting writes; vector clocks surface the conflict, but application logic must resolve it.
Signal: When a system needs to store billions of key-value pairs with high write throughput and must remain available during node failures.
Maps to: Reference Book, Framework 6 (System Archetypes)
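The R + W > N overlap guarantee in the Core Idea above can be demonstrated with a toy replica set (a sketch, not any particular database's protocol): a write succeeds once W replicas store the new version, and any read of R replicas must then include at least one of them.

```python
import random

N, R, W = 3, 2, 2  # R + W > N: read and write quorums always intersect

class Replica:
    def __init__(self):
        self.version, self.value = 0, None

replicas = [Replica() for _ in range(N)]

def quorum_write(version, value):
    # Only W replicas acknowledge; the rest may be slow or partitioned away.
    for r in random.sample(replicas, W):
        r.version, r.value = version, value

def quorum_read():
    # Contact R replicas and keep the freshest version seen.
    sample = random.sample(replicas, R)
    newest = max(sample, key=lambda r: r.version)
    return newest.version, newest.value

quorum_write(1, "a")
quorum_write(2, "b")
print(quorum_read())  # always (2, 'b'): N - W = 1 stale replica < R
```

Because at most N − W = 1 replica missed the last write and every read samples R = 2 replicas, a stale-only read sample is impossible; dropping R or W below that bound trades this guarantee for availability and latency, which is exactly tradeoff AT1.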
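The split-brain failure mode (FM12 above) can be sketched with a hypothetical vector-clock comparison — per-node counter maps that let the store detect, but not resolve, conflicting writes:

```python
def compare(vc_a: dict, vc_b: dict) -> str:
    """Compare two vector clocks: 'before', 'after', 'equal', or 'concurrent'."""
    nodes = set(vc_a) | set(vc_b)
    a_le_b = all(vc_a.get(n, 0) <= vc_b.get(n, 0) for n in nodes)
    b_le_a = all(vc_b.get(n, 0) <= vc_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"
    if b_le_a:
        return "after"
    return "concurrent"  # conflicting writes: application logic must reconcile

# During a partition, nodes A and B each accept a write to the same key,
# advancing only their own counter:
left  = {"A": 2, "B": 1}
right = {"A": 1, "B": 2}
print(compare(left, right))  # prints "concurrent"; the store returns both siblings
```

When neither clock dominates the other, the store surfaces both versions ("siblings") to the reader, and the application merges them — e.g. a shopping cart takes the union of items.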