Optimization inequalities cannot be strict. Consider a problem whose feasible set is cut out by strict inequalities: it can have NO SOLUTION, because the would-be optimum (1,1) isn't actually in the feasible set. So we usually specify optimization problems without strict inequalities, writing \geq instead, so that the minimum is attained.
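A concrete instance (my own example; the original display is elided above):
\begin{equation} \min_{x,y} \; x + y \quad \text{subject to} \quad x > 1, \; y > 1 \end{equation}
The infimum is 2, approached as (x,y) \to (1,1), but (1,1) itself violates the strict constraints, so no feasible point attains it. Replacing the constraints with x \geq 1, y \geq 1 makes the minimum attained at (1,1).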
Univariate Conditions

First-order necessary condition:
\begin{equation} \nabla f(x^{*}) = 0 \end{equation}
Second-order necessary condition:
\begin{equation} \nabla^{2} f(x^{*}) \geq 0 \end{equation}

Derivative

\begin{equation} f'(x) = \lim_{\Delta x \to 0} \frac{\Delta f(x)}{\Delta x} \end{equation}
Or the gradient; our convention is that gradients are a COLUMN vector:
\begin{equation} \nabla f(x) = \begin{bmatrix} \frac{\partial f}{\partial x_{1}} & \cdots & \frac{\partial f}{\partial x_{n}} \end{bmatrix}^{\top} \end{equation}
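The two necessary conditions can be sanity-checked on a toy quadratic (my own example, not from the notes):

```python
# First- and second-order necessary conditions on the toy quadratic
# f(x, y) = (x - 2)^2 + 3(y + 1)^2, whose minimizer is x* = (2, -1).
def grad_f(x, y):
    """Gradient (a column vector by our convention), derived by hand."""
    return [2 * (x - 2), 6 * (y + 1)]

def hess_f(x, y):
    """Hessian: rows are the first index, columns the second."""
    return [[2, 0],
            [0, 6]]

gx, gy = grad_f(2, -1)
assert gx == 0 and gy == 0      # first-order: gradient vanishes at x*
H = hess_f(2, -1)
# second-order: Hessian positive semidefinite at x* (2x2 check via a
# nonnegative leading entry and a nonnegative determinant)
assert H[0][0] >= 0 and H[0][0] * H[1][1] - H[0][1] * H[1][0] >= 0
```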
Hessian matrix (2nd-order partials); it's just this, where columns are the second index and rows are the first index:
\begin{equation} \nabla^{2} f(x) = \begin{bmatrix} \frac{\partial^{2} f}{\partial x_{1} \partial x_{1}} & \cdots & \frac{\partial^{2} f}{\partial x_{1} \partial x_{n}} \\ \vdots & \ddots & \vdots \\ \frac{\partial^{2} f}{\partial x_{n} \partial x_{1}} & \cdots & \frac{\partial^{2} f}{\partial x_{n} \partial x_{n}} \end{bmatrix} \end{equation}

Directional Derivative

\begin{align} \nabla_{s} f(x) &= \lim_{h \to 0} \frac{f(x+hs) - f(x)}{h} \\ &= \lim_{h \to 0} \frac{f(x+\frac{hs}{2}) - f(x - \frac{hs}{2})}{h} \end{align}
i.e. this is the "derivative along a direction."

Numerical Method

Finite-Difference Method

All of these methods suffer from the fact that f(x+h) - f(x) catastrophically cancels at small values of h because of floating-point error. To fix this, use the Complex-Difference Method.

Forward Difference

Recall the Taylor series about x, evaluated at f(x+h):
\begin{equation} f(x+h) = f(x) + f'(x)h + \frac{f''(x)}{2!}h^{2} + \frac{f'''(x)}{3!}h^{3} + \dots \end{equation}
Moving it around to get f'(x) by itself:
\begin{equation} f'(x)h = f(x+h) - f(x) - \frac{f''(x)}{2!}h^{2} - \dots \end{equation}
So:
\begin{equation} f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{f''(x)}{2!}h - \dots \end{equation}
where the trailing \dots error terms are O(h). So:
\begin{equation} f'(x) \approx \frac{f(x+h) - f(x)}{h} \end{equation}
Error Analysis

The dropped terms are \frac{f''(x)}{2!}h + \dots, with a term at each power of h; the biggest error term lives with h, so this scheme has asymptotic error O(h).

Central Difference

A slightly different formulation, which gives quadratic error O(h^{2}), because the non-squared h term cancels out:
\begin{equation} f'(x) \approx \frac{f(x + \frac{h}{2}) - f(x - \frac{h}{2})}{h} \end{equation}
Backward Difference

Forward difference, backward:
\begin{equation} f'(x) \approx \frac{f(x) - f(x-h)}{h} \end{equation}
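The error orders above can be checked numerically (a sketch; f(x) = sin x is my example, chosen because f' = cos is known exactly):

```python
import math

def f(x):
    return math.sin(x)      # f'(x) = cos(x), known exactly

x = 1.0
exact = math.cos(x)
for h in (1e-1, 1e-2, 1e-3):
    fwd = (f(x + h) - f(x)) / h                  # forward:  error O(h)
    ctr = (f(x + h / 2) - f(x - h / 2)) / h      # central:  error O(h^2)
    bwd = (f(x) - f(x - h)) / h                  # backward: error O(h)
    print(h, abs(fwd - exact), abs(ctr - exact), abs(bwd - exact))
```

Shrinking h by 10x shrinks the forward and backward errors by roughly 10x, and the central error by roughly 100x.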
Complex-Difference Method

Consider the Taylor approximation with a complex step, f(x + ih):
\begin{equation} f(x+ih) = f(x) + f'(x)ih - \frac{f''(x)}{2!}h^{2} - \frac{f'''(x)}{3!}ih^{3} + \dots \end{equation}
Let's again try to get f'(x) by itself; taking the imaginary part and rearranging picks out every other term of the expression above:
\begin{equation} f'(x) = \frac{\operatorname{Im}(f(x+ih))}{h} + \frac{f'''(x)}{3!}h^{2} - \dots \end{equation}
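A quick numeric check of the complex step (a sketch; the function is my example), which also shows the cancellation that hurts the forward difference at tiny h:

```python
import cmath, math

def f(z):
    return cmath.exp(z) * cmath.sin(z)   # accepts real or complex z

x, h = 1.0, 1e-12                        # h far below sqrt(machine epsilon)
exact = math.exp(x) * (math.sin(x) + math.cos(x))   # f'(x), derived by hand

cstep = f(x + 1j * h).imag / h           # complex step: no subtraction
fwd = (f(x + h) - f(x)).real / h         # forward difference at the same h

print(abs(cstep - exact))                # stays tiny even at tiny h
print(abs(fwd - exact))                  # typically much worse at this h
assert abs(cstep - exact) < 1e-9
```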
Where the omitted \dots terms are O(h^{2}). NOTICE: we no longer have the cancellation error, because we no longer have a subtraction.

Automatic Differentiation

See Automatic Differentiation.

Bracketing

Given a unimodal function, the global minimum is guaranteed to be within [a,c], with b \in [a,c], if we have that f(a) > f(b) < f(c). So let's find such a bracket.

Unimodality

A function f is unimodal if there exists a unique x^{*} such that f is monotonically decreasing for x \leq x^{*} and monotonically increasing for x \geq x^{*}.

Bracketing Procedure

If we don't know anything, we might as well start with a = -1, b = 0, c = 1. One of three things happens:
- we already satisfy f(a) > f(b) < f(c), and we are done
- our left side is too low (f(a) \leq f(b)): move a to the left without moving c, doubling the step size each time, until f(a) > f(b)
- our right side is too low (f(c) \leq f(b)): likewise, move c to the right, doubling the step size each time, until f(b) < f(c)

Fibonacci Search

Say you wanted to evaluate your function only a finite number of times, and to maximally shrink the bracketing interval.

Two Evaluations

At two evaluations, you should pick two points right down the middle, very close together; this will cut your interval in half.

n Evaluations

Say you are allowed n evaluations. Evaluate at intervals with lengths given by the Fibonacci sequence: take \{F_{1}, \dots, F_{n}\}, partition the space between a and b into F_{n} slices, evaluate at locations \frac{F_{n-1}}{F_{n}} and 1 - \frac{F_{n-1}}{F_{n}} of the way across, and lop off the piece on the far side of the higher of the two points.

Golden Section Search

Because of Binet's Formula, we can write:
\begin{equation} \lim_{n \to \infty} \frac{F_{n-1}}{F_{n}} = \frac{1}{\varphi} \approx 0.618 \end{equation}
the inverse of the golden ratio. So just shrink intervals by that factor instead.
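A sketch of golden section search under these conventions (assumes f is unimodal on [a, b]; `n` counts evaluations of f, and the function is my example):

```python
import math

def golden_section(f, a, b, n):
    """Shrink [a, b] around the minimizer of a unimodal f, spending n
    evaluations of f and shrinking the interval by 1/phi per step."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    x1 = b - invphi * (b - a)                # the two interior probes
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):                   # two evaluations already spent
        if f1 < f2:                          # minimizer lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)        # old x1 is reused as the new x2
            f1 = f(x1)
        else:                                # minimizer lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)        # old x2 is reused as the new x1
            f2 = f(x2)
    return a, b

# e.g. shrinking the bracket [0, 3] around the minimizer of (x - 1)^2:
lo, hi = golden_section(lambda x: (x - 1.0) ** 2, 0.0, 3.0, 20)
```

Each pass reuses one previously evaluated point, so only one new evaluation is needed per shrink; that reuse is exactly what the golden ratio placement buys.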