1. What are the three properties that make an abstraction good, as described in this chapter?
2. What does the Law of Leaky Abstractions state, and what does it imply for interface design?
3. Name two real-world examples from the chapter where an abstraction has survived significant underlying change, and identify what made each interface stable.
4. A team exposes a UserRepository interface with methods findById, findByEmail, save, delete, and findAllByCreatedDateBetween. Which methods are likely to leak implementation details? Rewrite the interface to expose only what callers need, and describe what you removed and why.
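As a concrete starting point, the interface as described might be sketched like this (TypeScript is used for illustration; the User fields and their types are assumptions, since the question names only the methods):

```typescript
// Hypothetical sketch of the interface described in the question.
// The User shape is an assumption for illustration.
interface User {
  id: string;
  email: string;
  createdDate: Date;
}

interface UserRepository {
  findById(id: string): User | undefined;
  findByEmail(email: string): User | undefined;
  save(user: User): void;
  delete(id: string): void;
  // Candidate for scrutiny: this query shape mirrors how the backing
  // store indexes rows, not what callers conceptually need.
  findAllByCreatedDateBetween(start: Date, end: Date): User[];
}

// Minimal in-memory implementation, included only so the sketch runs.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();
  findById(id: string) { return this.users.get(id); }
  findByEmail(email: string) {
    return [...this.users.values()].find(u => u.email === email);
  }
  save(user: User) { this.users.set(user.id, user); }
  delete(id: string) { this.users.delete(id); }
  findAllByCreatedDateBetween(start: Date, end: Date) {
    return [...this.users.values()].filter(
      u => u.createdDate >= start && u.createdDate <= end,
    );
  }
}
```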
5. A logging abstraction currently defines log(level, message, timestamp, hostname, pid). A new implementation — a cloud logging service — does not use pid or hostname (it derives them automatically). What does this tell you about the interface design? Propose a corrected interface and explain the tradeoff.
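The signature in question, written out as a runnable sketch (TypeScript is illustrative; the LogLevel values are an assumption):

```typescript
// Assumed level set; the question does not enumerate levels.
type LogLevel = "debug" | "info" | "warn" | "error";

// Interface exactly as described: every caller must supply timestamp,
// hostname, and pid, even when the implementation derives them itself.
interface Logger {
  log(level: LogLevel, message: string, timestamp: Date,
      hostname: string, pid: number): void;
}

// Capturing implementation so the contract can be exercised without a
// real logging backend.
class CapturingLogger implements Logger {
  entries: string[] = [];
  log(level: LogLevel, message: string, timestamp: Date,
      hostname: string, pid: number): void {
    this.entries.push(
      `${timestamp.toISOString()} ${hostname}[${pid}] ${level}: ${message}`,
    );
  }
}
```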
6. Two services share a PaymentResult type defined in a shared library. Team A adds a new field processorFee to the type and deploys. Team B has not deployed yet. Describe exactly what breaks and which failure mode this represents.
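To make the version skew concrete, here is a hedged sketch of the two shapes of the shared type (TypeScript illustrative; every field other than processorFee is an assumption):

```typescript
// The shape Team B still compiles against.
interface PaymentResultV1 {
  transactionId: string;
  amountCents: number;
}

// Team A's deployed shape adds the new field.
interface PaymentResultV2 extends PaymentResultV1 {
  processorFee: number;
}

// What happens at the boundary depends on the serialization format.
// A decoder written against V1 (as Team B's is) simply never sees the
// new field, whatever the producer sends.
function decodeAsV1(json: string): PaymentResultV1 {
  const raw = JSON.parse(json);
  return { transactionId: raw.transactionId, amountCents: raw.amountCents };
}
```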
7. Design a CacheStore interface that could be implemented by an in-memory LRU cache, Redis, and a distributed consistent cache (like etcd). Identify which operations are safe to include in a shared interface, which are implementation-specific and must be excluded, and where leakage is inevitable under failure scenarios. Name the tradeoffs using AT codes.

A complete answer will: (1) define a minimal safe interface (get, set, delete, exists) and justify why distributed-specific operations (e.g., distributed locking, watch/notify, CAS) must be excluded — callers that assume these operations exist cannot safely use the in-memory implementation, violating the Liskov Substitution Principle; (2) name AT3 (Simplicity/Flexibility): a minimal interface is easy for all implementations to satisfy but forces callers to work at the lowest common denominator, while a richer interface enables callers to use more powerful features but narrows the set of valid implementations; (3) identify where leakage is inevitable: failure semantics differ across implementations — an in-memory LRU never throws a network error, Redis may throw a connection timeout, and etcd may throw a quorum failure; a shared interface cannot fully abstract these differences, so callers must either handle implementation-specific exceptions or the interface must define a unified exception hierarchy; and (4) name FM1 (single point of failure) for the Redis implementation — a single Redis node behind the interface is a SPOF that the in-process LRU never is — and FM12 (network partition) for etcd, stating that an abstraction hiding these failure modes from callers produces callers that cannot handle the failures correctly.
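The minimal safe interface named in point (1) could be sketched as follows (TypeScript illustrative; the method signatures are assumptions beyond the four operation names the answer lists):

```typescript
// Minimal shared contract: only operations every backend (in-memory
// LRU, Redis, etcd) can satisfy. Distributed-only operations (locking,
// watch/notify, CAS) are deliberately excluded.
interface CacheStore {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
  delete(key: string): void;
  exists(key: string): boolean;
}

// In-memory implementation, included to show the contract is
// satisfiable with no network features at all.
class InMemoryCache implements CacheStore {
  private store = new Map<string, string>();
  get(key: string) { return this.store.get(key); }
  set(key: string, value: string) { this.store.set(key, value); }
  delete(key: string) { this.store.delete(key); }
  exists(key: string) { return this.store.has(key); }
}
```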
8. A team's codebase calls a legacy payment processor's SDK directly at forty call sites, and the business must migrate to a new payment processor without a flag-day cutover. Design the migration, naming the relevant AT tradeoffs and FM failure modes.

A complete answer will: (1) name the Façade (or Adapter) pattern
and describe a PaymentProcessor interface that wraps both
the old and new processor SDKs — the forty call sites are updated once
to call the interface instead of the old SDK directly, and the interface
implementation routes to old or new processor based on a feature flag,
(2) design the incremental migration: route a configurable percentage of
transactions to the new processor (e.g., 1%, 10%, 50%, 100%) with the
ability to roll back to 0% without a code deployment — specify the AT3
tradeoff (the routing layer adds code complexity but eliminates flag-day
risk), (3) name FM2 (cascading failure) during the migration window: if
the new processor’s API returns unexpected errors, the calling code must
not crash — the interface must implement a fallback that retries on the
old processor for recoverable errors, with the AT1 tradeoff (consistency
of which processor charged the card vs. availability of the transaction)
stated explicitly, and (4) identify the FM4 risk (stale data /
dual-write inconsistency): during the window when both processors are
active, refund and dispute queries must check both processor histories —
propose a transaction log that records which processor handled each
charge, enabling correct routing of refunds.
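The façade and routing logic of points (1)–(3) could be sketched as follows (TypeScript illustrative; the SDK names, ChargeResult shape, and flag source are assumptions, and the fallback here retries every error rather than distinguishing recoverable ones):

```typescript
interface ChargeResult { ok: boolean; processor: "old" | "new"; }

// Common shape both processor SDKs are adapted to (hypothetical).
interface ProcessorSdk {
  charge(amountCents: number): ChargeResult;
}

// Façade: the forty call sites call this once-updated interface; it
// routes to the old or new processor based on a runtime percentage.
class PaymentProcessor {
  constructor(
    private oldSdk: ProcessorSdk,
    private newSdk: ProcessorSdk,
    // Percentage of traffic sent to the new processor; changing this
    // value (0, 1, 10, 50, 100) requires no code deployment.
    private newProcessorPercent: () => number,
    private random: () => number = Math.random,
  ) {}

  charge(amountCents: number): ChargeResult {
    const useNew = this.random() * 100 < this.newProcessorPercent();
    if (!useNew) return this.oldSdk.charge(amountCents);
    try {
      return this.newSdk.charge(amountCents);
    } catch {
      // Fallback from point (3): an unexpected error from the new
      // processor retries on the old one instead of failing the charge.
      return this.oldSdk.charge(amountCents);
    }
  }
}

// Stub SDKs (hypothetical) used to exercise the routing logic.
class StubSdk implements ProcessorSdk {
  constructor(private name: "old" | "new", private fail = false) {}
  charge(_amountCents: number): ChargeResult {
    if (this.fail) throw new Error("unexpected processor error");
    return { ok: true, processor: this.name };
  }
}
```

Injecting the percentage and randomness as functions keeps the routing decision testable and lets the rollback to 0% happen through configuration alone, which is the AT3 tradeoff the answer names.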