Recall Bayes' rule in Bayesian parameter learning:
\begin{equation} P\left(\theta | D\right) = \frac{P\left(D | \theta\right) p \left(\theta\right)}{\int_{\theta}P\left(D | \theta\right) p \left(\theta\right) \dd{\theta}} \end{equation}
We can't easily compute the denominator without evaluating the integral analytically; instead, we can draw samples from the posterior. If you want a closed-form posterior, you should hope that your prior is conjugate to your likelihood function, which lets us update the prior analytically.
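A minimal sketch of both routes, under assumptions not in the source: a coin-flip (Bernoulli) likelihood with a flat Beta(1, 1) prior. The Beta prior is conjugate to this likelihood, so the analytic update is just adding counts; the sampling route uses a simple Metropolis random walk, where the intractable normalizing integral cancels in the acceptance ratio. The function names and step size here are illustrative choices.

```python
import math
import random

def beta_binomial_update(alpha, beta, heads, tails):
    """Conjugate route: Beta(alpha, beta) prior + Bernoulli data
    gives a Beta(alpha + heads, beta + tails) posterior analytically."""
    return alpha + heads, beta + tails

def log_unnorm_posterior(theta, heads, tails):
    """log P(D | theta) + log p(theta) with a flat prior on (0, 1).
    The normalizing integral is omitted; it cancels in Metropolis."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    return heads * math.log(theta) + tails * math.log(1.0 - theta)

def metropolis(heads, tails, n_samples=5000, step=0.1, seed=0):
    """Sampling route: random-walk Metropolis over theta, needing only
    the unnormalized posterior, never the denominator integral."""
    rng = random.Random(seed)
    theta, samples = 0.5, []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        log_ratio = (log_unnorm_posterior(prop, heads, tails)
                     - log_unnorm_posterior(theta, heads, tails))
        if rng.random() < math.exp(min(0.0, log_ratio)):
            theta = prop  # accept the proposal
        samples.append(theta)
    return samples

# Observe 7 heads, 3 tails: posterior is Beta(8, 4), mean 8/12.
print(beta_binomial_update(1, 1, 7, 3))       # (8, 4)
print(sum(metropolis(7, 3)) / 5000)           # roughly 8/12
```

Both routes agree: the Metropolis sample mean approximates the analytic posterior mean, but the conjugate update is exact and free.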