Create wiki/math/estimation-and-sanity-checks.md
3ecd547f37f0 harrisonqian 2026-04-12 1 file
new file mode 100644
index 0000000..e4f4f8e
@@ -0,0 +1,48 @@
+---
+visibility: public-edit
+---
+
+# estimation and sanity checks
+
+Fermi estimation, order-of-magnitude thinking, and the discipline of checking whether your answer makes sense.
+
+## Fermi estimation
+
+named after Enrico Fermi, who was famous for estimating quantities with almost no data. the classic: "how many piano tuners are in Chicago?" you don't know the answer, but you can break it down:
+
+- population of Chicago (~3 million)
+- fraction of people with a piano in their household (~5% → 150k pianos)
+- how often a piano needs tuning (~once a year → 150k tunings/year)
+- how many tunings a tuner can do per day (~4) × working days (~250) → ~1000 tunings/year per tuner
+- answer: ~150 tuners
+
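the decomposition above is just a few lines of arithmetic. a sketch, where every number is the rough assumption from the list, not data:

```python
# Fermi estimate: piano tuners in Chicago.
# All inputs are rough assumptions, good to maybe a factor of 2 each.
population = 3_000_000
piano_fraction = 0.05                    # ~1 in 20 people lives with a piano
pianos = population * piano_fraction     # ~150k pianos
tunings_per_year = pianos * 1            # each piano tuned ~once a year
tunings_per_tuner = 4 * 250              # ~4 tunings/day over ~250 working days
tuners = tunings_per_year / tunings_per_tuner
print(round(tuners))  # → 150
```

writing it out like this makes each assumption a named variable you can argue with individually, which is the whole point of the decomposition.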
+the point isn't precision. it's getting within an order of magnitude — within a factor of 10. if your estimate says 150 and the real answer is 100 or 300, you're doing fine. if it says 15 or 1500, something is wrong with your decomposition.
+
+## why this matters for modeling
+
+every mathematical model should pass a Fermi sanity check before you trust its output. if your model says a city of 100k people needs 50,000 hospital beds, something is broken. if your optimization says the answer is negative when it should be positive, something is broken.
+
+this sounds obvious but i've seen it fail — including in my own work. you get deep into the math, the equations look right, the code runs, and you accept the output without asking "does this number make sense in the real world?"
+
+## the sanity check habit
+
+after getting any result — analytical, simulated, or estimated — i try to check:
+
+1. **order of magnitude** — is the number roughly the right size? if you're modeling the weight of a car and get 50 kg, stop.
+2. **sign and direction** — does it go the right way? if increasing price decreases demand in your model, good. if it increases demand, probably a bug.
+3. **limiting cases** — what happens at extremes? if you set a parameter to 0 or infinity, does the model behave sensibly?
+4. **dimensional analysis** — do the units work out? this catches a surprising number of errors.
+5. **comparison to known values** — is there a real-world benchmark you can compare to?
+
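checks 1–3 of the list above can be written as executable assertions. a sketch against a made-up linear demand model (the function and its parameters are invented for illustration, not from any real analysis):

```python
# toy demand model: demand falls linearly with price, floored at zero.
# base and slope are made-up parameters.
def demand(price, base=1000.0, slope=20.0):
    return max(base - slope * price, 0.0)

# 1. order of magnitude: demand at a typical price should be roughly right-sized
typical = demand(10.0)
assert 10 <= typical <= 100_000, "order of magnitude looks wrong"

# 2. sign and direction: raising price must not raise demand
assert demand(20.0) <= demand(10.0), "demand should fall as price rises"

# 3. limiting cases: free product gives maximum demand; absurd price gives zero
assert demand(0.0) == 1000.0
assert demand(1e9) == 0.0
```

the value of encoding the checks is that they re-run every time the model changes, instead of being a one-time glance at the output.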
+## estimation as a competition skill
+
+in [[competition-strategy]], estimation serves two purposes:
+
+- **guiding model development** — before building a complex model, estimate the answer. this tells you what order of magnitude to expect and helps catch errors early.
+- **validating results** — after your model produces numbers, check them against Fermi estimates. if they're wildly different, either your model or your estimate has a problem — figure out which.
+
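the validation step can be mechanized as a within-a-factor-of-10 comparison. a minimal sketch — the helper name and the factor-of-10 threshold are my own choices, not a standard:

```python
def within_order_of_magnitude(model_value, fermi_estimate, factor=10.0):
    """True if the two values agree to within a factor of `factor`."""
    if model_value <= 0 or fermi_estimate <= 0:
        return False  # a sign disagreement is its own red flag
    ratio = model_value / fermi_estimate
    return 1.0 / factor <= ratio <= factor

# model says 240 tuners, Fermi estimate said ~150 → plausible
print(within_order_of_magnitude(240, 150))     # True
# model says 15,000 → the model or the estimate is broken; find out which
print(within_order_of_magnitude(15_000, 150))  # False
```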
+in timed competitions, Fermi estimation is also a time management tool. if a quick estimate tells you the answer, you might not need the complex model at all.
+
+## connection to other thinking
+
+Fermi estimation is decomposition — breaking a hard question into easier sub-questions. this is the same skill as identifying [[critical-path]] in a project, or [[problem-framing]] in modeling. the meta-skill is always: "what simpler questions can i answer that combine to answer the hard question?"
\ No newline at end of file