Consider a general non-linear first-order ODE: \begin{equation} x' = f(x) \end{equation}
Suppose we have some time interval on which a solution to this ODE exists. Given x(t_0) = x_0, can we say what x(t_0+T) would be? Can we approximate it with explicit numbers? The solution has to exist for all of this time: blow-up cannot be present during numerical estimation. Explicit Euler Method \begin{equation} x(t_i+h) \approx x_{i+1} = x_{i} + h f(x_i) \end{equation} motivation Recall that given x(t_0) = x_0, we desire x(t_0+T). Divide your solution interval into N small intervals; each interval has length h = \frac{T}{N}. Let t_{i} = t_0 + i \frac{T}{N}, so that t_{N} = t_{0}+T. For each time t_{i}, we compute an estimate x_{i}, and we'd like to bound the error between x_{i} and x(t_{i}). In the explicit Euler method, we make piecewise linear approximations: at each x_i, we follow the slope estimated via the ODE at that point.
For some small h, we approximate x(t+h) \approx x(t) + h\, x'(t), where h is the step size we computed before; this is a first-order Taylor approximation. Since our ODE is x' = f(x), this gives us: \begin{equation} x_{i+1} = x_i + h\, f(x_i) \end{equation}
Following this scheme, we can calculate from x_0 all the way stepwise to x_{N}. evaluation Situation: we have x_{N} and we have x(t_{N}); how close are they? In fact: \begin{equation} |x_{N} - x(t_{N})| \leq C h \end{equation}
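The scheme above can be sketched in a few lines; this is a minimal illustration (function and variable names are my own, not from the text):

```python
import math

def explicit_euler(f, x0, T, N):
    """Step the autonomous ODE x' = f(x) from x(t0) = x0 to time t0 + T
    using N steps of size h = T / N."""
    h = T / N
    x = x0
    for _ in range(N):
        x = x + h * f(x)  # follow the slope f(x_i) for one step
    return x

# x' = -x with x(0) = 1 has exact solution e^{-t}
approx = explicit_euler(lambda y: -y, 1.0, 1.0, 1000)
print(abs(approx - math.exp(-1.0)))  # small; shrinks roughly like C * h
```

Doubling N (halving h) roughly halves the final error, consistent with the Ch bound.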
We have some constant C(x_0, t_0, T, f), which gives bounds specific to the problem you are solving. stiffness Certain parts of a solution may be decaying/oscillating very differently from another part of the solution. example Consider a system: \begin{equation} x' = -x, \qquad y' = -10y \end{equation}
our solutions look like: \begin{equation} x(t) = x_0 e^{-t}, \qquad y(t) = y_0 e^{-10t} \end{equation}
so the top expression gives x_i = (1-h)^{i} x_0 and the bottom y_{i} = (1-10h)^{i} y_0, which means the two components have different requirements on h to be able to converge: |1-h| < 1 needs h < 2, while |1-10h| < 1 needs h < \frac{1}{5}. example 2 \begin{equation} y' = -5 (y-\cos x) \end{equation} With the method of undetermined coefficients, we obtain: \begin{equation} y = \frac{25}{26}\cos x + \frac{5}{26}\sin x + C e^{-5x} \end{equation}
The first two parts are fine and not stiff at all; for the third part, the explicit Euler iterates pick up a factor of (1-5h)^{i}, meaning we need |1-5h| < 1, i.e. h < \frac{2}{5}. motivation Let's consider: \begin{equation} x' = -\lambda x, \qquad \lambda > 0 \end{equation}
The explicit Euler method, applied to x' = -\lambda x, gives: \begin{equation} x_{i+1} = x_i + h(-\lambda x_i) = (1 - \lambda h)\, x_i \end{equation}
meaning, in general: \begin{equation} x_{i} = (1 - \lambda h)^{i}\, x_0 \end{equation}
We know the true solution is bound to decay, yet the explicit Euler iterates decay only when: \begin{equation} |1 - \lambda h| < 1, \qquad \text{i.e. } h < \frac{2}{\lambda} \end{equation}
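This stability condition is easy to observe numerically; the following sketch (names mine) iterates the update x_{i+1} = (1 - λh)x_i on either side of the threshold:

```python
def euler_decay(lam, h, steps, x0=1.0):
    """Explicit Euler on x' = -lam * x: each step multiplies by (1 - lam * h)."""
    x = x0
    for _ in range(steps):
        x = (1.0 - lam * h) * x
    return x

# lam = 10: stable only when |1 - 10h| < 1, i.e. h < 1/5
print(abs(euler_decay(10.0, 0.05, 100)))  # factor 0.5 -> decays toward 0
print(abs(euler_decay(10.0, 0.30, 100)))  # factor -2  -> blows up
```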
The Implicit Euler Method doesn't have this problem. Applied to x' = -\lambda x, it reads: \begin{equation} x_{i+1} = x_i - h \lambda\, x_{i+1} \quad\Longrightarrow\quad x_{i+1} = \frac{x_i}{1 + \lambda h} \end{equation}
meaning: \begin{equation} x_{i} = \frac{x_0}{(1+\lambda h)^{i}} \end{equation} which decays for every step size h > 0.
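For this linear ODE the implicit update can be applied in closed form, so the unconditional decay is easy to check (a sketch; names mine):

```python
def implicit_euler_decay(lam, h, steps, x0=1.0):
    """Implicit Euler on x' = -lam * x: solving x_{i+1} = x_i - h*lam*x_{i+1}
    in closed form gives x_{i+1} = x_i / (1 + lam * h)."""
    x = x0
    for _ in range(steps):
        x = x / (1.0 + lam * h)
    return x

# even an enormous step size still decays, unlike explicit Euler
print(implicit_euler_decay(10.0, 1.0, 10))  # (1/11)^10: tiny and positive
```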
Implicit Euler Method A small twist on the Explicit Euler Method. To use this method, we formulate the update as: \begin{equation} x_{i+1} = x_i + h\, f(x_{i+1}) \end{equation}
where we use Newton's Method to solve for the x_{i+1} that satisfies this equation given x_{i}. evaluation We actually didn't lose much accuracy; the error is still bounded by: \begin{equation} |x_{N} - x(t_{N})| \leq C h \end{equation}
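One implicit step can be sketched as follows, assuming we also have f' available for Newton's method (function names are my own):

```python
import math

def implicit_euler_step(f, fprime, x_i, h, tol=1e-12, max_iter=50):
    """One implicit Euler step: find x with G(x) = x - h*f(x) - x_i = 0
    via Newton's method, seeded with an explicit Euler step."""
    x = x_i + h * f(x_i)  # cheap initial guess
    for _ in range(max_iter):
        g = x - h * f(x) - x_i
        gp = 1.0 - h * fprime(x)  # G'(x)
        x_next = x - g / gp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# integrate x' = -5x from x(0) = 1 over [0, 1] with h = 0.01
x = 1.0
for _ in range(100):
    x = implicit_euler_step(lambda y: -5.0 * y, lambda y: -5.0, x, 0.01)
print(x, math.exp(-5.0))  # close, with O(h) error
```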
Derivation \begin{equation} \frac{x((t+h)-h) - x(t+h)}{-h} \approx x'(t+h) \end{equation} This is a first-order Taylor approximation written backwards. It also yields: \begin{equation} x(t) \approx x(t+h) - h\, x'(t+h) \end{equation}
Now, let t = t_0, and therefore t_1 = t_0 + h; this gives us: \begin{equation} x(t_0) \approx x(t_1) - h\, x'(t_1) \end{equation} Now, recall that, because f is the ODE: \begin{equation} x'(t_1) = f(x(t_1)) \end{equation}
Multiplying both sides by h gives: \begin{equation} h\, x'(t_1) = h\, f(x(t_1)) \end{equation}
which gives: \begin{equation} x(t_0) \approx x(t_1) - h\, f(x(t_1)) \end{equation}
we will now attempt to estimate x(t_1) by declaring x_1 := x(t_{1}), which gives us: \begin{equation} x_0 = x_1 - h\, f(x_1) \end{equation}
Let us call G(x_{1}) = x_1 - h f(x_1); we want the x_1 with G(x_1) = x_0. Finally, we run Newton's Method to solve for this x_1 by finding the zero of G(x_1) - x_0. Because h is small, a good initial guess is x_0 itself (or the explicit Euler step x_0 + h f(x_0)), and then we can optimize. Trapezoidal Method \begin{equation} x_{i+1} = x_i + h \frac{f(x_{i+1})+f(x_i)}{2} \end{equation} motivation "averaging smooths things out": the forward difference \frac{x(t+h)-x(t)}{h} approximates both x'(t) (explicit Euler) and x'(t+h) (implicit Euler),
meaning we have: \begin{equation} \frac{x(t+h)-x(t)}{h} \approx \frac{x'(t) + x'(t+h)}{2} \end{equation}
which averages our derivatives out. Cross-multiplying by h, this gives: \begin{equation} x(t+h) - x(t) \approx h\, \frac{x'(t) + x'(t+h)}{2} \end{equation}
which, writing x' = f(x) in terms of the discrete iterates, can also be written as: \begin{equation} x_{i+1} = x_i + h\, \frac{f(x_i) + f(x_{i+1})}{2} \end{equation}
explicitly For x' = -\lambda x, solving the update gives \begin{equation} x_{i} = \left( \frac{1- \frac{1}{2}\lambda h}{1+ \frac{1}{2}\lambda h}\right)^{i} x_0 \end{equation} evaluation Importantly, this ratio has magnitude less than 1 for every h > 0, so the iterates decay unconditionally; the global error is bounded by \begin{equation} |x_{N} - x(t_{N})| \leq C h^{2} \end{equation}
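For the test equation x' = -λx the trapezoidal update solves in closed form, so both the unconditional decay and the second-order error can be checked directly (a sketch; names mine):

```python
import math

def trapezoid_decay(lam, h, steps, x0=1.0):
    """Trapezoidal rule on x' = -lam * x; solving the implicit update gives
    x_{i+1} = ((1 - lam*h/2) / (1 + lam*h/2)) * x_i."""
    r = (1.0 - 0.5 * lam * h) / (1.0 + 0.5 * lam * h)
    return x0 * r ** steps

print(abs(trapezoid_decay(10.0, 1.0, 50)))   # |r| < 1 even for huge h: decays
print(abs(trapezoid_decay(1.0, 0.01, 100) - math.exp(-1.0)))  # O(h^2) error
```

Halving h cuts the final error by roughly a factor of four, as the Ch² bound predicts.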
Modified Euler Method This is also called the "Midpoint Method." It is second-order accurate: halving h cuts the error by roughly a factor of four. The update is: \begin{equation} x_{i+1} = x_i + h\, f\!\left(x_i + \frac{h}{2} f(x_i)\right) \end{equation}
This is motivated by the Trapezoidal Method, but "A thorough introduction to these methods requires additional background in approximation theory and numerical analysis" (The Book). error \begin{equation} |x_{N} - x(t_{N})| \leq Ch^{2} \end{equation} motivation we take a half step ahead of our original point using its slope, and compute the slope there. Improved Euler Method This is also called "Heun's Method": \begin{equation} x_{i+1} = x_i + \frac{h}{2}\bigl(f(x_i) + f(x_i + h f(x_i))\bigr) \end{equation}
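Both second-order updates can be sketched side by side (function names are my own):

```python
import math

def midpoint_step(f, x, h):
    """Modified Euler: take the slope half a step ahead of x."""
    return x + h * f(x + 0.5 * h * f(x))

def heun_step(f, x, h):
    """Improved Euler (Heun): average the slopes at x and a full step ahead."""
    k1 = f(x)
    k2 = f(x + h * k1)
    return x + 0.5 * h * (k1 + k2)

# both reach e^{-1} on x' = -x over [0, 1] with O(h^2) error
x_m = x_h = 1.0
for _ in range(100):
    x_m = midpoint_step(lambda y: -y, x_m, 0.01)
    x_h = heun_step(lambda y: -y, x_h, 0.01)
print(abs(x_m - math.exp(-1.0)), abs(x_h - math.exp(-1.0)))
```

On this linear example the two steps agree exactly; they differ for non-linear f.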
error \begin{equation} |x_{N} - x(t_{N})| \leq Ch^{2} \end{equation} motivation we take the slope at the current location and the slope a full step ahead, and average them. Runge-Kutta Method Instead of contending with only the forward, backward, or middle slope from f, we just average all of them: \begin{equation} \begin{aligned} k_1 &= f(x_i), \\ k_2 &= f\!\left(x_i + \tfrac{h}{2} k_1\right), \\ k_3 &= f\!\left(x_i + \tfrac{h}{2} k_2\right), \\ k_4 &= f(x_i + h k_3) \end{aligned} \end{equation}
and then: \begin{equation} x_{i+1} = x_i + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) \end{equation}
the weights 1, 2, 2, 1 form a Simpson's-rule-style average of the four slope estimates. error \begin{equation} |x_{N} - x(t_{N})| \leq Ch^{4} \end{equation} motivation this is essentially like "fitting a parabola" against our curve
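The classical fourth-order step, and its error behavior, can be sketched as follows (names mine):

```python
import math

def rk4_step(f, x, h):
    """Classical Runge-Kutta step: weighted average of four slope estimates."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rk4_error(N):
    """Error against e^{-1} after integrating x' = -x over [0, 1] in N steps."""
    x, h = 1.0, 1.0 / N
    for _ in range(N):
        x = rk4_step(lambda y: -y, x, h)
    return abs(x - math.exp(-1.0))

print(rk4_error(10), rk4_error(20))  # halving h cuts the error ~16x
```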