estimation and sanity checks
Fermi estimation, order-of-magnitude thinking, and the discipline of checking whether your answer makes sense.
Fermi estimation
named after Enrico Fermi, who was famous for estimating quantities with almost no data. the method: break a hard question into sub-questions you can roughly answer, multiply the estimates together, and get within an order of magnitude (factor of 10) of the real answer.
the point isn't precision — it's getting close enough to be useful with minimal data.
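the classic Fermi example is "how many piano tuners are in Chicago?" — a sketch of it, where every number is a rough guess rather than data:

```python
# classic Fermi example: how many piano tuners are in Chicago?
# every number below is a rough assumption, not data.
population = 3_000_000           # people in Chicago, roughly
people_per_household = 2         # ~2 people per household
households_with_piano = 1 / 20   # maybe 1 in 20 households owns a piano
tunings_per_piano_per_year = 1   # a piano gets tuned about once a year
tunings_per_tuner_per_day = 4    # ~2 hours per tuning, including travel
workdays_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * workdays_per_year

tuners = tunings_needed / tunings_per_tuner
print(round(tuners))  # → 75, i.e. "on the order of tens of tuners"
```

each sub-estimate could easily be off by a factor of 2-3, but the errors tend to partially cancel, so the product usually lands within a factor of 10 of reality.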
why this matters for modeling
every mathematical model should pass a Fermi sanity check before you trust its output. if your model says a city of 100k people needs 50,000 hospital beds, something is broken. if your optimization says the answer is negative when it should be positive, something is broken.
this sounds obvious but i've seen it fail — including in my own work. you get deep into the math, the equations look right, the code runs, and you accept the output without asking "does this number make sense in the real world?"
the sanity check habit
after getting any result — analytical, simulated, or estimated — i try to check:
- order of magnitude — is the number roughly the right size? if you're modeling the weight of a car and get 50 kg, stop.
- sign and direction — does it go the right way? if increasing price decreases demand in your model, good. if it increases demand, probably a bug.
- limiting cases — what happens at extremes? if you set a parameter to 0 or infinity, does the model behave sensibly?
- dimensional analysis — do the units work out? this catches a surprising number of errors.
- comparison to known values — is there a real-world benchmark you can compare to?
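the first three checks are easy to automate. a minimal sketch (the function names and the toy demand model are my own, just for illustration):

```python
def check_range(value, low, high, label=""):
    """order-of-magnitude check: fail loudly if value is outside [low, high]."""
    assert low <= value <= high, (
        f"{label}: {value} outside plausible range [{low}, {high}]"
    )

def demand(price, a=100.0, b=2.0):
    # toy linear demand model: demand should fall as price rises
    return a - b * price

# order of magnitude: a car should weigh ~500-5000 kg, not 50
check_range(1500, 500, 5000, "car mass (kg)")

# sign and direction: raising price should lower demand
assert demand(10) > demand(20), "demand should decrease with price"

# limiting case: at price 0, demand should equal the baseline a
assert demand(0) == 100.0
```

wiring checks like these directly into model code means they re-run every time the model does — the habit survives even when you're rushed.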
estimation as a competition skill
in competition-strategy, estimation serves two purposes:
- guiding model development — before building a complex model, estimate the answer. this tells you what order of magnitude to expect and helps catch errors early.
- validating results — after your model produces numbers, check them against Fermi estimates. if they're wildly different, either your model or your estimate has a problem — figure out which.
in timed competitions, Fermi estimation is also a time management tool. if a quick estimate tells you the answer, you might not need the complex model at all.
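the validation step can be mechanized too: compare the model's output to the Fermi estimate on a log scale and flag anything that's off by more than an order of magnitude. a sketch, using the hospital-bed example from above (the helper name and the ~2.5-beds-per-1000-people benchmark are my assumptions):

```python
import math

def fermi_check(model_value, estimate, tolerance_orders=1.0):
    """return True if model_value is within `tolerance_orders` orders of
    magnitude of the Fermi estimate (one order = a factor of 10)."""
    gap = abs(math.log10(model_value) - math.log10(estimate))
    return gap <= tolerance_orders

# model says a 100k-person city needs 50,000 beds;
# Fermi benchmark: roughly 2.5 beds per 1000 people
fermi_estimate = 100_000 * 2.5 / 1000     # ≈ 250 beds
print(fermi_check(50_000, fermi_estimate))  # → False: off by >2 orders of magnitude
```

a False here doesn't say which side is wrong — only that the model and the estimate can't both be right, which is exactly the prompt to go figure out which.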
connection to other thinking
Fermi estimation is decomposition — breaking a hard question into easier sub-questions. this is the same skill as identifying the critical-path in a project, or problem-framing in modeling. the meta-skill is always the same: "what simpler questions can i answer that combine to answer the hard question?"