The Computing Series

Introduction

A team ships a feature that passes all tests. In testing, n = 1,000. Everything is fast. They deploy to production, where n = 10,000,000. The feature times out. They roll back and examine the code. It has not changed.

The code was always slow. They never gave it an input large enough to find out.

Big-O notation is the tool that lets you predict this failure before it happens. It is a language for naming how an algorithm’s resource consumption grows. If you can read Big-O, you can look at code and say: “This will fail at n = 100,000” — before you deploy it.
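That kind of prediction comes from counting operations rather than timing them. A minimal sketch (a hypothetical example, not taken from the book): two ways to check a list for duplicates, one quadratic and one linear, each instrumented to report how much work it does. The counts grow exactly as Big-O says they will.

```python
def count_quadratic(items):
    """Duplicate check that compares every pair: O(n^2) comparisons."""
    comparisons = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if items[i] == items[j]:
                return comparisons
    return comparisons

def count_linear(items):
    """Duplicate check using a set: O(n) lookups."""
    lookups = 0
    seen = set()
    for x in items:
        lookups += 1
        if x in seen:
            return lookups
        seen.add(x)
    return lookups

for n in (100, 1_000):
    data = list(range(n))  # worst case: no duplicates, so both run to the end
    print(f"n={n:>5}: quadratic = {count_quadratic(data):>7} comparisons, "
          f"linear = {count_linear(data):>5} lookups")
```

Growing n by 10x multiplies the quadratic count by roughly 100x while the linear count grows by only 10x. Extrapolate that to n = 10,000,000 and the production timeout is no longer a surprise.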
Read in the book →