monte carlo and simulation
when to simulate versus when to solve analytically, and why simulation is underrated as a modeling tool.
the case for simulation
analytical solutions are elegant but they require the problem to be tractable. most real-world problems aren't. the moment you have nonlinear interactions, stochastic elements, or enough variables, closed-form solutions disappear.
simulation doesn't care. throw random numbers at the problem 10,000 times and see what happens. you trade mathematical elegance for computational brute force, and in practice that trade is almost always worth it.
monte carlo basics
the core idea: use random sampling to estimate quantities that are hard to compute directly.
- define the system and its random inputs
- sample those inputs from their distributions
- run the model for each sample
- aggregate the results — mean, variance, percentiles, whatever you need
the power: as sample size grows, your estimate converges on the true value (the law of large numbers). the standard error shrinks as 1/sqrt(n), which means 100x more samples gives only 10x more precision. diminishing returns, but predictable diminishing returns.
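the four steps above, sketched in python. estimating pi is a toy stand-in for a hard integral, but the workflow is identical: sample random inputs, run the model, aggregate.

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded so the run is reproducible

def estimate_pi(n):
    # sample the random inputs: n points uniform in the unit square
    x = rng.uniform(0, 1, n)
    y = rng.uniform(0, 1, n)
    # run the model for each sample: does the point land inside the quarter circle?
    inside = x**2 + y**2 <= 1.0
    # aggregate: the fraction inside, times 4, estimates pi
    return 4 * inside.mean()

# error shrinks roughly as 1/sqrt(n): each 100x in samples buys ~10x precision
for n in [100, 10_000, 1_000_000]:
    est = estimate_pi(n)
    print(f"n={n:>9,}  estimate={est:.4f}  error={abs(est - np.pi):.4f}")
```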
when to simulate vs solve analytically
solve analytically when:
- the problem is tractable (linear, well-behaved)
- you need to understand why the result is what it is, not just what it is
- the analytical solution reveals structure (sensitivities, critical thresholds)
- judges/reviewers expect it (in competitions, an analytical result carries more weight)
simulate when:
- the problem has too many interacting variables for closed-form
- you need distributions, not just point estimates — "what's the range of outcomes?"
- the system has feedback loops or emergent behavior
- you want to stress-test your analytical solution (see estimation-and-sanity-checks)
- time pressure makes deriving a closed-form impractical (see competition-strategy)
in competitions
in MCM/ICM, i've found the strongest papers often combine both: an analytical model that captures the core dynamics, validated and extended by simulation. the analytical model shows you understand the structure. the simulation shows it actually works under realistic conditions.
the trap: using simulation as a black box. judges want to see that you understand what your simulation is doing, not just that it produces numbers. always pair simulation results with sensitivity analysis and sanity checks.
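a minimal sketch of pairing simulation with sensitivity analysis, using a made-up toy model (a stock-and-demand profit model — the model and all its parameters are illustrative, not from any particular problem). vary one input, hold the rest fixed, and report distribution summaries, not just the mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_profit(stock, sigma, n=10_000, price=5.0, cost=3.0, mu=100.0):
    # hypothetical toy model: buy `stock` units at `cost`,
    # sell up to random demand ~ Normal(mu, sigma) at `price`
    demand = rng.normal(mu, sigma, n)
    sold = np.minimum(np.maximum(demand, 0.0), stock)
    return sold * price - stock * cost

# one-at-a-time sensitivity: sweep demand volatility, everything else fixed.
# the mean moves a little; the downside tail moves a lot — which is exactly
# the kind of structure a single point estimate would hide.
for sigma in [5, 15, 30]:
    profit = simulate_profit(stock=100, sigma=sigma)
    print(f"sigma={sigma:>2}  mean={profit.mean():7.1f}  "
          f"5th pct={np.percentile(profit, 5):7.1f}")
```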
practical simulation tips
- start with a tiny simulation — 100 runs, simple model. get the code working and the output format right. then scale up.
- validate against known cases — if your problem has a special case with a known answer, simulate that case first. if the simulation doesn't match, you have a bug.
- vary one thing at a time — understand the effect of each parameter before varying everything at once.
- visualize intermediate results — don't just look at final numbers. plot distributions, time series, spatial patterns. the shape of the distribution often tells you more than the mean.
- seed your random number generator — for reproducibility. nothing worse than "it worked yesterday but not today" because of random variation.
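two of these tips in one small sketch: a seeded generator for reproducibility, and validation against a special case with a known exact answer (two fair dice sum to 7 with probability exactly 1/6). if the estimate isn't close, the simulation has a bug:

```python
import numpy as np

rng = np.random.default_rng(2024)  # fixed seed: identical numbers every run

def p_sum_is_7(n_rolls=100_000):
    # known case: P(two fair dice sum to 7) = 6/36 = 1/6
    dice = rng.integers(1, 7, size=(n_rolls, 2))  # upper bound is exclusive
    return (dice.sum(axis=1) == 7).mean()

est = p_sum_is_7()
print(f"estimated {est:.4f} vs exact {1/6:.4f}")
```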
connection to modeling
simulation is a form of modeling where the model is expressed as code rather than equations. the same principles apply: all simulations are wrong, but some are useful. the question is still "does this capture the dynamics that matter for my question?"