\begin{align} p_{\text{fail}} &= \mathbb{E}_{\tau \sim p\left(\cdot\right)} \left[\mathbb{1}\left\{\tau \notin \psi\right\}\right] \\ &= \int \mathbb{1}\left\{\tau \notin \psi\right\} p\left(\tau\right) \, d\tau \end{align}

that is, the probability of failure is just the normalizing constant of the failure distribution. As with the failure distribution itself, computing this integral exactly is intractable, so we estimate it. A few methods:

- direct estimation: approximate the failure probability directly with samples from the nominal distribution: draw $\tau_{i} \sim p\left(\cdot\right)$ and compute $\hat{p}_{\text{fail}} = \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\left\{\tau_{i} \notin \psi\right\}$
- importance sampling: design a distribution to probe failure, namely a proposal distribution $q$, then reweight each sample by how different $q$ is from $p$: draw $\tau_{i} \sim q\left(\cdot\right)$ and compute $\hat{p}_{\text{fail}} = \frac{1}{m}\sum_{i=1}^{m} w_{i} \mathbb{1}\left\{\tau_{i}\notin \psi\right\}$, where $w_{i} = \frac{p\left(\tau_{i}\right)}{q\left(\tau_{i}\right)}$ is the "importance weight"
- adaptive importance sampling
- multiple importance sampling
- sequential Monte Carlo

How do you pick a proposal distribution? See [[proposal distribution]].

evaluating estimators

- bias: an estimator is unbiased if it equals the true value in expectation, namely $\mathbb{E}\left[\hat{p}_{\text{fail}}\right] = p_{\text{fail}}$
- consistency: an estimator is consistent if it converges to the true value as the number of samples goes to infinity, namely $\lim_{m \to \infty} \hat{p}_{\text{fail}} = p_{\text{fail}}$
- variance: we want the variance of the estimates around the true value to be low

evaluation of direct estimation

\begin{equation} \text{Var}\left[\hat{p}_{\text{fail}}\right] = \frac{p_{\text{fail}} \left(1-p_{\text{fail}}\right)}{m} \end{equation}

- as the number of samples $m$ increases, the variance of the estimate decreases (yay!)
- as the probability of failure decreases, the variance only shrinks linearly in $p_{\text{fail}}$, so the *relative* error $\sqrt{\text{Var}\left[\hat{p}_{\text{fail}}\right]} / p_{\text{fail}} \approx 1/\sqrt{m \, p_{\text{fail}}}$ actually blows up: rare failures need enormous sample counts. baaaad vibes!
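The two estimators above can be sketched on a toy problem. This is a minimal sketch under assumed specifics not in the note: trajectories are scalars $\tau \sim \mathcal{N}(0,1)$, the failure event is $\tau > 3$ (so the true $p_{\text{fail}} = 1 - \Phi(3) \approx 1.35 \times 10^{-3}$), and the proposal $q = \mathcal{N}(3.5, 1)$ is an arbitrary illustrative choice, as are the function names.

```python
import math
import random

random.seed(0)

# assumed toy setup: tau ~ N(0,1) under the nominal distribution p,
# and "failure" means tau exceeds a threshold (tau not in psi)
THRESHOLD = 3.0

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def p_fail_true():
    # exact tail probability 1 - Phi(3) for reference
    return 0.5 * math.erfc(THRESHOLD / math.sqrt(2.0))

def direct_estimate(m):
    # tau_i ~ p(.), then average the failure indicator
    return sum(random.gauss(0.0, 1.0) > THRESHOLD for _ in range(m)) / m

def importance_estimate(m, mu_q=3.5):
    # tau_i ~ q(.) = N(mu_q, 1), reweight by w_i = p(tau_i) / q(tau_i);
    # samples that don't fail contribute zero to the sum
    total = 0.0
    for _ in range(m):
        tau = random.gauss(mu_q, 1.0)
        if tau > THRESHOLD:
            total += normal_pdf(tau) / normal_pdf(tau, mu_q)
    return total / m

m = 10_000
print("true      :", p_fail_true())
print("direct    :", direct_estimate(m))
print("importance:", importance_estimate(m))
```

With $m = 10{,}000$ the direct estimator only expects about 13 failure samples, so its estimate is noisy; the importance-sampling estimator concentrates samples near the failure region and lands much closer to the true value, which is exactly the point of choosing a proposal $q$.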
