These are empirical observations about how teams, estimates, and organisations actually behave. They are tendencies, not laws — they can be mitigated, but they cannot be safely ignored.
The Observation: Adding engineers to a late software project makes it later.
Why it is true: New engineers require onboarding time from existing engineers (reducing existing capacity). Communication overhead grows non-linearly with team size (N engineers produce N(N-1)/2 communication channels). New engineers must learn the codebase and its context before they can contribute; they are a net drag on the team before they become an asset.
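The N(N-1)/2 figure is worth seeing concretely — doubling a team roughly quadruples the coordination surface. A minimal sketch:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n engineers."""
    return n * (n - 1) // 2

# Team size doubles; channels more than quadruple.
for n in (5, 10, 20):
    print(f"{n:>2} engineers -> {channels(n):>3} channels")
# 5 -> 10, 10 -> 45, 20 -> 190
```

This is why the mitigation below favours small, separate workstreams: two teams of five have 2 × 10 = 20 channels plus one inter-team channel, versus 45 channels for a single team of ten.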
The mitigation: Do not add engineers to a project late in a delivery cycle. Onboard new engineers on separate, parallel work. Invest in documentation and tooling that reduces onboarding cost.
The Observation: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
Why it is true: Engineers are optimistic about their own tasks and tend to underestimate the non-coding work: debugging, integration, testing, review cycles, deployment, and incidents. Estimates are based on the happy path. Reality includes the unhappy path.
The mitigation: Use historical data, not intuition, for estimates. Add explicit time for integration, testing, and deployment. Run retros on why the last project took longer than estimated and address the actual causes.
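One simple way to use historical data is to scale gut estimates by the team's median overrun factor. The numbers below are invented for illustration; the technique (a crude form of reference-class forecasting) is the point:

```python
from statistics import median

# Hypothetical past projects: (estimated days, actual days).
history = [(10, 16), (5, 7), (20, 34), (8, 11)]

# Median actual/estimate ratio — the team's typical overrun factor.
factor = median(actual / est for est, actual in history)

def calibrated(estimate_days: float) -> float:
    """Scale a fresh gut estimate by the historical overrun factor."""
    return estimate_days * factor

print(factor)          # 1.5 for this data
print(calibrated(12))  # a "12-day" task is more honestly 18 days
```

The median is deliberately used instead of the mean so that one catastrophic outlier project does not dominate the correction.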
The Observation: Work expands to fill the time available for its completion.
Why it is true: In the absence of a firm deadline, the scope of “done” expands: more edge cases are handled, more refactoring happens, more polish is added. This is not laziness — it is the natural tendency to improve work when time permits.
The mitigation: Time-box deliverables. The question is not “is it perfect?” but “is it good enough to ship?” Define “done” before starting.
The Observation: When a measure becomes a target, it ceases to be a good measure.
Why it is true: Optimising for a metric produces behaviour that improves the metric without improving the underlying objective. A team measured on “bugs closed” will close bugs without fixing them. A model measured on “user engagement” will learn to maximise engagement, not user satisfaction. The metric stops measuring the objective it was chosen to track.
The mitigation: Choose metrics whose only natural path to improvement is to improve the underlying objective. Retention is hard to game. Number of commits is easy to game. Measure the outcome, not the activity.
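A toy example (all numbers invented) makes the "bugs closed" failure mode concrete: two strategies look identical on the target metric while producing very different outcomes for users:

```python
def week(closed_fixed: int, closed_unfixed: int, open_bugs: int):
    """One week of bug triage.

    closed_fixed:   bugs closed because they were actually fixed
    closed_unfixed: bugs closed as duplicate/wontfix without a fix
    open_bugs:      bugs users could hit at the start of the week
    """
    metric = closed_fixed + closed_unfixed   # what the dashboard shows
    outcome = open_bugs - closed_fixed       # bugs users can still hit
    return metric, outcome

# Honest team: fixes 10 bugs.
print(week(closed_fixed=10, closed_unfixed=0, open_bugs=50))  # (10, 40)
# Gaming team: fixes 2, closes 8 without fixing. Same metric, worse outcome.
print(week(closed_fixed=2, closed_unfixed=8, open_bugs=50))   # (10, 48)
```

The dashboard metric (10 closed) cannot distinguish the two teams; the outcome metric can. That asymmetry is what "measure the outcome, not the activity" buys you.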