F1 (Mental Models) is the entry point. Every analysis begins by asking what kind of thing this is. Not the specifics yet — the category. A distributed system has different natural failure modes than a monolith. A batch pipeline behaves differently than a request-response service. F1 primes the rest of the analysis by narrowing which patterns are relevant.
F2 (Constraints) runs in parallel with F1. Once the mental model is active, constraints become visible. What cannot change? What is the system obligated to guarantee? What is the operating environment that the architecture cannot escape? Constraints are not limitations on what you can build — they are the load-bearing walls. Identifying them early prevents the mistake of designing a solution that requires moving them.
F3 (Failure Modes) is where the system becomes honest. Every architecture is a bet that certain things will not fail together. F3 surfaces the failure modes the design is exposed to. This is not pessimism — it is structural honesty. A system that has been analysed through F3 has named its risks. One that has not carries hidden risks that feel safe until they are not.
F4 (Tradeoffs) names what F3 reveals. Every decision that creates a failure mode also creates a tradeoff. The architectural decisions that produce FM1 (SPOF) exposure are also the decisions that chose AT5 (Centralization) over distribution. F4 makes explicit what every architecture has already decided implicitly.
F5 (Review Questions) provides the structured process for interrogating a specific system. The seven questions are a protocol, not a checklist — they produce useful output only when answered in order, because each answer changes the space of the next question.
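The order dependence can be made concrete with a small sketch. Here the protocol is modelled as a chain of question functions, each one consuming all earlier answers; the three question names and their answers are invented placeholders, not the canonical seven from F5.

```python
# A minimal sketch of F5 as a protocol rather than a checklist:
# each question is a function of every earlier answer, so order matters.
# Question names and return values are illustrative, not canonical.

def q1_what_is_it(answers):
    return "request-response service"

def q2_what_cannot_change(answers):
    # The constraint question only makes sense once the system
    # kind (q1) is known.
    if answers["q1_what_is_it"] == "request-response service":
        return "p99 latency budget"
    return "batch completion deadline"

def q3_what_fails(answers):
    # The failure question is scoped by the constraint from q2.
    return f"anything that threatens the {answers['q2_what_cannot_change']}"

PROTOCOL = [q1_what_is_it, q2_what_cannot_change, q3_what_fails]

def run_review():
    answers = {}
    for question in PROTOCOL:
        # Each answer becomes input to every later question.
        answers[question.__name__] = question(answers)
    return answers
```

Reordering `PROTOCOL` breaks the chain: `q3` has nothing meaningful to scope against until `q2` has answered, which is the sense in which a protocol differs from a checklist.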
F6 (Archetypes) is the pattern library. Having identified what the system is (F1), what it cannot change (F2), what can go wrong (F3), and what decisions shaped it (F4), F6 says which category of system this most resembles. The archetype carries inherited implications — known failure modes, natural tradeoff surfaces, structural patterns that either fit or have been deliberately violated.
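One way to picture "inherited implications" is archetype matching as a lookup: once a system's observed traits match an archetype, that archetype's known failure modes arrive as starting hypotheses for F3. The archetype names, traits, and failure modes below are invented for illustration.

```python
# Hedged sketch of F6: an archetype bundles matching traits with the
# failure modes a system inherits once classified. All names here are
# hypothetical examples, not entries from the framework's library.
from dataclasses import dataclass

@dataclass(frozen=True)
class Archetype:
    name: str
    traits: frozenset      # structural signals used for matching
    failure_modes: tuple   # inherited implications once matched

ARCHETYPES = [
    Archetype("event-log pipeline",
              frozenset({"append-only", "async consumers"}),
              ("consumer lag", "replay divergence")),
    Archetype("request-response service",
              frozenset({"synchronous", "latency budget"}),
              ("retry storms", "cascading timeouts")),
]

def match(observed_traits):
    # Most-overlapping archetype wins; its failure modes become
    # starting hypotheses for F3, not conclusions.
    return max(ARCHETYPES, key=lambda a: len(a.traits & observed_traits))
```

The point of the sketch is the direction of inference: classification flows from traits to archetype, and risk hypotheses flow back from archetype to system.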
F7 (Communication) translates the analysis into something actionable for different audiences. The technical findings from F3 and F4 mean different things to an engineer, a product manager, and a CFO. F7 is not softening the message — it is selecting the abstraction level that makes the decision actionable for the person receiving it.
F8 (Vocabulary) is what makes F7 possible at scale. Shared vocabulary eliminates the coordination cost of every team needing to rediscover the same concepts. When engineers across an organisation use AT5 to mean the same thing, the conversation about centralisation vs distribution moves faster and stays precise.
F9 (Empirical Grounding) anchors the analysis in observed laws rather than opinion. Conway’s Law, Goodhart’s Law, Amdahl’s Law — these are not recommendations. They are regularities that hold regardless of intent. When an analysis contradicts one of these laws, the analysis is wrong.
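Amdahl's Law is the most directly computable of the three, and it shows what "regularity that holds regardless of intent" means: if a fraction p of the work is parallelisable across n workers, overall speedup is bounded by 1 / ((1 - p) + p / n). A quick sketch:

```python
# Amdahl's Law: the serial fraction of the work caps total speedup,
# no matter how many workers the parallel fraction is spread across.

def amdahl_speedup(p, n):
    """Speedup when fraction p of the work runs on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelisable, even a thousand workers
# cannot reach 10x: the serial 10% sets the ceiling (1 / 0.1 = 10).
```

An analysis that promises a 20x speedup from parallelising a workload that is 90% parallel contradicts the law, and it is the analysis that is wrong.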