The gradient ∇f(X) is a vector function defined as:
The gradient vector points in the direction of the greatest increase in f(X). Since we are looking for the minimum, we will use -∇f(X), which points in the direction of the greatest decrease.
Basic Gradient Method
- Start at point p_0 and move along the line in the direction -∇f(X)
p_1 = p_0 - ∇f(p_0)h,   . . . ,   p_k = p_{k-1} - ∇f(p_{k-1})h,   where h is a small increment
EXAMPLE
f(x) = x^2 + 1,   f'(x) = 2x,   ∇f(x) = 2x i,   let h = .1
Using the iteration equation x_k = x_{k-1} - 2x_{k-1}h, start with x_0 = -1 and ∇f(x_0) = -2.
x_1 = -1 - 2(-1)(.1) = -.8
x_2 = -.8 - 2(-.8)(.1) = -.64
x_3 = -.64 - 2(-.64)(.1) = -.512
This converges to the solution x = 0.
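A minimal Python sketch of this iteration (not part of the original notes; the function names and step counts are assumptions) reproduces the numbers above:

```python
# Gradient descent sketch for f(x) = x^2 + 1, using the iteration
# x_k = x_{k-1} - f'(x_{k-1}) * h from the example above.

def grad(x):
    # f(x) = x^2 + 1, so f'(x) = 2x
    return 2 * x

def gradient_descent(x0, h=0.1, steps=3):
    x = x0
    for _ in range(steps):
        x = x - grad(x) * h   # step in the direction of -grad f
    return x

print(gradient_descent(-1.0, h=0.1, steps=3))    # -0.512, matching x_3 above
print(gradient_descent(-1.0, h=0.1, steps=100))  # near the minimizer x = 0
```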
Analytical EXAMPLE
y' = ky,   i.e.,   dy/dt = ky
∫ dy/y = ∫ k dt
ln y = kt + k_2
e^{ln y} = e^{kt + k_2}
y = (e^{k_2})(e^{kt})
y = k_3 e^{kt}
Let [a, b] be the interval for y' = f(t, y) with y(a) = y_0. We will find a set of points (t_0, y_0), (t_1, y_1), . . . , (t_M, y_M) where each y_k approximates y(t_k).
h = (b - a)/M, where h is the step size, and t_k = a + kh for k = 0, 1, 2, . . . , M
We want to solve y' = f(t, y) over {t_0, t_1, . . . , t_M} with y(t_0) = y_0. Using Taylor's expansion to approximate y(t) around t_0 we have:
And in general:
EXAMPLE
y' = t^2 - y,   y(0) = 1,   h = .2
t_1 = 0 + .2 = .2,   y_1 = y_0 + h f(t_0, y_0) = 1 + .2(0^2 - 1) = .8
t_2 = .2 + .2 = .4,   y_2 = .8 + .2((.2)^2 - .8) = .648
t_3 = .4 + .2 = .6,   y_3 = .648 + .2((.4)^2 - .648) = .5504
t_4 = .6 + .2 = .8,   y_4 = .5504 + .2((.6)^2 - .5504) = .5123
The exact solution at t = .8 is .5907.
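A short Python sketch of Euler's method applied to this example (illustrative only; the function and variable names are assumptions):

```python
# Euler's method for y' = f(t, y) on [a, b] with y(a) = y0:
# y_{k+1} = y_k + h * f(t_k, y_k), with h = (b - a) / M.

def euler(f, a, b, y0, M):
    h = (b - a) / M          # step size
    t, y = a, y0
    for _ in range(M):
        y = y + h * f(t, y)  # Euler update
        t = t + h
    return y

# y' = t^2 - y, y(0) = 1, h = 0.2, four steps up to t = 0.8
print(euler(lambda t, y: t**2 - y, 0.0, 0.8, 1.0, 4))  # about 0.5123 (exact: 0.5907)
```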
We want to solve y'(t) = f(t, y(t)) over [a, b] with y(t_0) = y_0. We can integrate y'(t) over [t_0, t_1]:
∫_{t_0}^{t_1} y'(t) dt = ∫_{t_0}^{t_1} f(t, y(t)) dt = y(t_1) - y(t_0)
We use numerical integration to approximate the integral on the right-hand side. Using the trapezoidal rule:
Then we have:
Observe that we still need to know f(t_1, y(t_1)). That involves y(t_1), which is what we want to solve for. To eliminate this circular dependence, we use Euler's approximation to obtain y(t_1).
So we have:
With p_1 = y_0 + h f(t_0, y_0) as the predictor, we have in general:
EXAMPLE
f(t, y) = y' = t^2 - y,   y(0) = 1,   h = .2
__________________________________________________
For k = 0 and t_0 = 0 we have y_0 = 1
__________________________________________________
For k = 1 and t_1 = 0 + .2 = .2
p_1 = y_0 + h f(t_0, y_0) = y_0 + h(t_0^2 - y_0) = 1 + .2(0^2 - 1) = .8
y_1 = y_0 + (h/2)[(t_0^2 - y_0) + (t_1^2 - p_1)]
y_1 = 1 + (.2/2)[(0^2 - 1) + ((.2)^2 - .8)] = .8240
__________________________________________________
For k = 2 and t_2 = .4
p_2 = .8240 + .2((.2)^2 - .8240) = .6672
y_2 = .8240 + (.2/2)[((.2)^2 - .8240) + ((.4)^2 - .6672)] = .6949
__________________________________________________
For k = 3 and t_3 = .6
p_3 = .6949 + .2((.4)^2 - .6949) = .5879
y_3 = .6949 + (.2/2)[((.4)^2 - .6949) + ((.6)^2 - .5879)] = .6186
__________________________________________________
For k = 4 and t_4 = .8
p_4 = .6186 + .2((.6)^2 - .6186) = .5669
y_4 = .6186 + (.2/2)[((.6)^2 - .6186) + ((.8)^2 - .5669)] = .6001
The exact solution at t = .8 is .5907.
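A Python sketch of Heun's method, Euler predictor followed by trapezoidal corrector (illustrative only; names are assumptions):

```python
# Heun's method for y' = f(t, y) on [a, b] with y(a) = y0:
# predictor p = y_k + h f(t_k, y_k),
# corrector y_{k+1} = y_k + (h/2) [f(t_k, y_k) + f(t_{k+1}, p)].

def heun(f, a, b, y0, M):
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        p = y + h * f(t, y)                        # Euler predictor
        y = y + (h / 2) * (f(t, y) + f(t + h, p))  # trapezoidal corrector
        t = t + h
    return y

print(heun(lambda t, y: t**2 - y, 0.0, 0.8, 1.0, 4))  # about 0.6001 (exact: 0.5907)
```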
Using the Taylor approximation to approximate the solution, where y(t_1) = y(t_0 + h), we have:
We want to solve y' = f(t, y), where y(t_0) = y_0. From the Taylor expansion we can find the successive points:
We still need to compute y''_k, y'''_k, . . . using y' = f(t, y). The error in the approximation will be O(h^{N+1}). The higher the order of the Taylor series, the smaller the error.
EXAMPLE
With N = 3, solve y' = t^2 - y,   y(0) = 1,   h = .2
y_{k+1} = y_k + h y'_k + h^2 y''_k/2 + h^3 y'''_k/6
y' = t^2 - y
y'' = 2t - y'
y''' = 2 - y''
__________________________________________________
For k = 0 and t = 0
y_0 = 1
y'_0 = 0^2 - 1 = -1
y''_0 = 2(0) - (-1) = 1
y'''_0 = 2 - 1 = 1
__________________________________________________
For k = 1 and t = .2
y_1 = 1 + (.2)(-1) + (.2)^2(1)/2 + (.2)^3(1)/6 = .821333
y'_1 = (.2)^2 - .821333 = -.781333
y''_1 = 2(.2) - (-.781333) = 1.181333
y'''_1 = 2 - 1.181333 = .818667
__________________________________________________
For k = 2 and t = .4
y_2 = .821333 + (.2)(-.781333) + (.2)^2(1.181333)/2 + (.2)^3(.818667)/6 = .689785
y'_2 = (.4)^2 - .689785 = -.529785
y''_2 = 2(.4) - (-.529785) = 1.329785
y'''_2 = 2 - 1.329785 = .670215
__________________________________________________
For k = 3 and t = .6
y_3 = .689785 + (.2)(-.529785) + (.2)^2(1.329785)/2 + (.2)^3(.670215)/6 = .611317
y'_3 = (.6)^2 - .611317 = -.251317
y''_3 = 2(.6) - (-.251317) = 1.451317
y'''_3 = 2 - 1.451317 = .548683
__________________________________________________
For k = 4
y_4 = .611317 + (.2)(-.251317) + (.2)^2(1.451317)/2 + (.2)^3(.548683)/6 = .590812
The exact solution at t = .8 is .5907.
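A Python sketch of this order-3 Taylor method (illustrative only; the derivatives y'' = 2t - y' and y''' = 2 - y'' are hard-coded for this particular equation):

```python
# Taylor method of order N = 3 for y' = t^2 - y, y(0) = 1:
# y_{k+1} = y_k + h y'_k + h^2 y''_k / 2 + h^3 y'''_k / 6

def taylor3(a, b, y0, M):
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        d1 = t**2 - y        # y'
        d2 = 2 * t - d1      # y''
        d3 = 2 - d2          # y'''
        y = y + h * d1 + h**2 * d2 / 2 + h**3 * d3 / 6
        t = t + h
    return y

print(taylor3(0.0, 0.8, 1.0, 4))  # about 0.590812 (exact: 0.5907)
```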
To compare with the exact solution of .5907, the Taylor series method gave .590812, the Heun method gave .6001, and the Euler method gave .5123. To reduce the approximation error, increase the order N of the Taylor expansion or reduce the step size h. It is more effective to increase N; recall that the error is O(h^{N+1}).
The Taylor method gives a good approximation, but it requires the derivatives of the function. The Runge-Kutta method of order 4 achieves accuracy comparable to a Taylor series method of order N = 4, but it does not require the computation of derivatives.
EXAMPLE
f(t, y) = y' = t^2 - y,   y(0) = 1,   h = .2
For k = 0 and t_0 = 0
y_0 = 1
f_1 = 0^2 - 1 = -1
f_2 = f(0 + .2/2, 1 + (.2/2)(-1)) = f(.1, .9) = (.1)^2 - .9 = -.89
f_3 = f(0 + .2/2, 1 + (.2/2)(-.89)) = f(.1, .911) = (.1)^2 - .911 = -.901
f_4 = f(0 + .2, 1 + .2(-.901)) = f(.2, .8198) = (.2)^2 - .8198 = -.7798
__________________________________________________
For k = 1 and t_1 = .2
y_1 = 1 + (.2/6)(-1 + 2(-.89) + 2(-.901) + (-.7798)) = .821273
f_1 = (.2)^2 - .821273 = -.781273
f_2 = f(.2 + .2/2, .821273 + (.2/2)(-.781273)) = f(.3, .743146) = (.3)^2 - .743146 = -.653146
f_3 = f(.2 + .2/2, .821273 + (.2/2)(-.653146)) = f(.3, .755958) = (.3)^2 - .755958 = -.665958
f_4 = f(.2 + .2, .821273 + .2(-.665958)) = f(.4, .688081) = (.4)^2 - .688081 = -.528081
__________________________________________________
For k = 2 and t_2 = .4
y_2 = .821273 + (.2/6)(-.781273 + 2(-.653146) + 2(-.665958) - .528081) = .689688
The exact solution y(.4) is .6897; the Taylor method at t = .4 gave .689785.
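A Python sketch of the classical fourth-order Runge-Kutta update (illustrative only; names are assumptions):

```python
# Classical Runge-Kutta of order 4 for y' = f(t, y):
# y_{k+1} = y_k + (h/6)(f1 + 2 f2 + 2 f3 + f4)

def rk4(f, a, b, y0, M):
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        f1 = f(t, y)
        f2 = f(t + h / 2, y + (h / 2) * f1)
        f3 = f(t + h / 2, y + (h / 2) * f2)
        f4 = f(t + h, y + h * f3)
        y = y + (h / 6) * (f1 + 2 * f2 + 2 * f3 + f4)
        t = t + h
    return y

print(rk4(lambda t, y: t**2 - y, 0.0, 0.4, 1.0, 2))  # about 0.689688 (exact: 0.6897)
```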
Assume we have dx/dt = f(t, x, y) and dy/dt = g(t, x, y) with x(t_0) = x_0 and y(t_0) = y_0. The solution to this system is a pair of functions x(t) and y(t) that, when differentiated and substituted into the system of equations, satisfy both equations.
An example of a system of differential equations:
x' = x + 2y,   x(0) = 6
y' = 3x + 2y,   y(0) = 4
Analytical solution:
x(t) = 4e^{4t} + 2e^{-t}
y(t) = 6e^{4t} - 2e^{-t}
We can extend Euler's method from a single differential equation to a system of equations.
dx/dt = f(t, x, y)   so   dx = f(t, x, y) dt
dy/dt = g(t, x, y)   so   dy = g(t, x, y) dt
dx_k = x_{k+1} - x_k
dy_k = y_{k+1} - y_k
dt_k = t_{k+1} - t_k
x_{k+1} - x_k = f(t_k, x_k, y_k)(t_{k+1} - t_k)
y_{k+1} - y_k = g(t_k, x_k, y_k)(t_{k+1} - t_k)
With h = t_{k+1} - t_k we have:
x_{k+1} = x_k + f(t_k, x_k, y_k) h,   x(t_0) = x_0
y_{k+1} = y_k + g(t_k, x_k, y_k) h,   y(t_0) = y_0
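A Python sketch of Euler's method for this system (illustrative; the step size h = 0.01 and the comparison point t = 0.1 are assumptions), checked against the analytical solution above:

```python
# Euler's method for the system x' = f(t, x, y), y' = g(t, x, y):
# x_{k+1} = x_k + f(t_k, x_k, y_k) h,  y_{k+1} = y_k + g(t_k, x_k, y_k) h

import math

def euler_system(f, g, t0, x0, y0, h, steps):
    t, x, y = t0, x0, y0
    for _ in range(steps):
        x, y = x + h * f(t, x, y), y + h * g(t, x, y)  # simultaneous update
        t = t + h
    return x, y

# x' = x + 2y, y' = 3x + 2y, x(0) = 6, y(0) = 4; approximate x(0.1), y(0.1)
x, y = euler_system(lambda t, x, y: x + 2 * y,
                    lambda t, x, y: 3 * x + 2 * y,
                    0.0, 6.0, 4.0, 0.01, 10)
print(x, 4 * math.exp(0.4) + 2 * math.exp(-0.1))  # Euler vs exact x(0.1)
print(y, 6 * math.exp(0.4) - 2 * math.exp(-0.1))  # Euler vs exact y(0.1)
```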
The Euler method does not give good accuracy because it keeps only the first-derivative term of the Taylor expansion.
The Runge-Kutta method can also be extended to systems of differential equations. The formulas for Runge-Kutta of order 4 are the following:
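Alongside those formulas, here is a minimal Python sketch of the standard order-4 update for a pair of equations f and g (illustrative only; names and test values are assumptions). Each stage evaluates both right-hand sides at the same intermediate point:

```python
# Classical RK4 for the system x' = f(t, x, y), y' = g(t, x, y).

def rk4_system(f, g, t0, x0, y0, h, steps):
    t, x, y = t0, x0, y0
    for _ in range(steps):
        f1 = f(t, x, y)
        g1 = g(t, x, y)
        f2 = f(t + h/2, x + h/2*f1, y + h/2*g1)
        g2 = g(t + h/2, x + h/2*f1, y + h/2*g1)
        f3 = f(t + h/2, x + h/2*f2, y + h/2*g2)
        g3 = g(t + h/2, x + h/2*f2, y + h/2*g2)
        f4 = f(t + h, x + h*f3, y + h*g3)
        g4 = g(t + h, x + h*f3, y + h*g3)
        x = x + (h/6) * (f1 + 2*f2 + 2*f3 + f4)
        y = y + (h/6) * (g1 + 2*g2 + 2*g3 + g4)
        t = t + h
    return x, y

# One step of size h = 0.1 on the example system x' = x + 2y, y' = 3x + 2y
x, y = rk4_system(lambda t, x, y: x + 2*y, lambda t, x, y: 3*x + 2*y,
                  0.0, 6.0, 4.0, 0.1, 1)
print(x, y)  # close to the exact x(0.1) ≈ 7.777 and y(0.1) ≈ 7.141
```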
Higher order differential equations involve higher derivatives such as x''(t) and y''(t). For example: m x''(t) + c x'(t) + k x(t) = g(t). These higher order differential equations can be transformed into a system of first-order differential equations.
Use the substitution y(t) = x'(t).
From the equation above we solve for x''(t):
Substituting y(t) = x'(t) gives the first of the two first-order differential equations in the system:
The second equation is simply the substitution y(t) = x'(t).
EXAMPLE
4x'' + 3x' + 5x = 2,   x(0) = 1,   x'(0) = 3
x'' = (2 - 3x' - 5x)/4.   Let y = x'. Then:
y' = (2 - 3y - 5x)/4
x' = y
The system of differential equations uses the last two equations from above y' = (2 - 3y - 5x)/4 and x' = y with x (0) = 1 and y (0) = 3.
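A Python sketch that solves this second-order equation through the first-order system above with Euler's method (the step size and number of steps are illustrative assumptions):

```python
# Solve 4x'' + 3x' + 5x = 2, x(0) = 1, x'(0) = 3 via the system
# x' = y, y' = (2 - 3y - 5x)/4, using Euler's method.

def solve_second_order(h=0.01, steps=100):
    x, y = 1.0, 3.0   # x(0) = 1, y(0) = x'(0) = 3
    for _ in range(steps):
        x, y = x + h * y, y + h * (2 - 3 * y - 5 * x) / 4
    return x          # approximation of x at t = h * steps

print(solve_second_order())  # approximate x(1)
```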
CURVE FITTING
- Least Squares Line
- Least Squares for non-linear equations
- Transformation for data linearization
- Polynomial fitting
- Splines
- 5 properties of splines
- Proof
- How to use formulas to get spline coefficients
- End point constraints
NUMERICAL DIFFERENTIATION
- Limit of Difference Quotient
- Central Difference Formula of Order O(h2)
- Central Difference Formula of Order O(h4)
NUMERICAL INTEGRATION
- Trapezoidal Rule
- Simpson's Rule
NUMERICAL OPTIMIZATION
- Local/global minimum/maximum
- Properties of minimum/maximum
- Golden Ratio Method
- Minimization using derivatives and solving with methods like Newton-Raphson
- Steepest Descent or Gradient Method
SOLUTION OF DIFFERENTIAL EQUATIONS
- Euler's Method
- Heun's Method
- Taylor Series Method
- Runge-Kutta Method of Order 4
- Systems of differential equations
- Euler's Method
- Runge-Kutta Method of Order 4
- Higher order differential equations
The final exam is Friday August 8 at 10:20-12:20 AM in WTHR 104.