Minimization using the derivative:

Assume we want to minimize f(x), which has a unique minimum at p with a<=p<=b. We start the search at a point p0 in [a,b].

if f'(p0)>0, the minimum p is to the left of p0

if f'(p0)<0, the minimum p is to the right of p0

if the derivative of f(x) is available, then we can apply any of the root-finding methods we covered for non-linear equations to solve f'(x)=0

Finding the Minimum for Multiple Variables:

Steepest Descent or Gradient Method

Assume we want to minimize f(x) of n variables, where x=(x1,x2,...,xn). The gradient is grad f(x)=(df(x)/dx1, df(x)/dx2, ..., df(x)/dxn)

From the concept of the gradient, we know that the gradient vector points in the direction of the greatest increase of f(x). Since we are looking for the minimum, we move in the direction of -grad f(x)

Basic gradient method:

* Start at point p0 and move along the line in the direction -grad f(p0).

p1 = p0 - h*grad f(p0)

pk = pk-1 - h*grad f(pk-1)        where h is a very small increment (the step size)

Example:

f(x)=x^2+1

f'(x)=2x

grad f(x)=2xi

let x0=-1 and h=0.1

x1 = -1 - 2(-1)0.1 = -0.8

x2 = -.8 -2(-.8)(.1) =  -.64

x3 = -.64-2(-.64)(.1) = -.512
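The iteration above can be sketched in Python (a minimal sketch; the function, starting point, and step size are the ones from the example):

```python
# Gradient descent on the worked example f(x) = x^2 + 1, whose
# gradient is f'(x) = 2x, starting at x0 = -1 with step size h = 0.1.

def grad_f(x):
    return 2.0 * x  # gradient of f(x) = x^2 + 1

def gradient_descent(x0, h, steps):
    x = x0
    for _ in range(steps):
        x = x - h * grad_f(x)  # x_k = x_{k-1} - h * f'(x_{k-1})
    return x

print(round(gradient_descent(-1.0, 0.1, 3), 3))  # -0.512, as in the hand computation
```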

Numerical Solution of Differential Equations:

introduction:

example:

y'=k*y

dy/dt=k*y        (exact solution: y(t)=y(0)*e^(k*t))

Euler's Method:

let [a,b] be the interval over which we want to find the solution of y'=f(t,y), y(a)=y0

we will find a set of points (t0,y0) up to (tn,yn) that are used to approximate y(t)

Also, tk=a+k*h for k=0,1,2,...,m over [t0,t1,...,tm], with y(t0)=y0

Using Taylor's expansion to approximate y(t) at t0:

y(t)=y(t0)+y'(t0)(t-t0)+y''(t0)(t-t0)^2/2+...

we use this equation to obtain y(t1)=y(t0)+y'(t0)*(t1-t0)+error

if the step size is small enough we can neglect the second order error.

y1=y0+h*y'(t0)

y1=y0+h*f(t0,y0)

in general tk=tk-1+h

               yk=yk-1+h*f(tk-1,yk-1)

example:

y'=t^2-y      y(0)=1 h=0.2

t1=0.2

y1=1+0.2(-1)=0.8

t2=0.4

y2=0.8+0.2(0.2^2-0.8)=0.648

t3=0.6

y3=0.5504

The exact solution is y(t)=t^2-2t+2-e^(-t), so y(0.6)≈0.6112; the Euler approximations approach this value as the step size h is reduced.
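A short Python sketch of the recurrence yk = yk-1 + h*f(tk-1, yk-1), applied to this example:

```python
import math

def f(t, y):
    return t * t - y  # right-hand side of y' = t^2 - y

def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)  # y_k = y_{k-1} + h * f(t_{k-1}, y_{k-1})
        t = t + h
    return y

y3 = euler(f, 0.0, 1.0, 0.2, 3)                # three steps reach t = 0.6
exact = 0.6**2 - 2 * 0.6 + 2 - math.exp(-0.6)  # exact solution at t = 0.6
print(round(y3, 4), round(exact, 4))           # 0.5504 vs. 0.6112
```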

Heun's Method:

we want to solve y'(t)=f(t,y(t)) over [a,b] with y(t0)=y0

we can integrate y'(t) over [t0,t1]

∫ y'(t) dt=∫ f(t,y(t)) dt

y(t1)-y(t0)=∫  f(t,y(t)) dt

using numerical integration to approximate the integral on the right side; use the trapezoidal rule:

y(t1)-y(t0)=(h/2)(f(t0,y0)+f(t1,y(t1)))

observe that we still need f(t1,y(t1)), which involves y(t1), the very value we are trying to find. To eliminate this circular reference, we use Euler's approximation for y(t1)

y(t1)=y(t0)+h*f(t0,y0)

y1=y0+(h/2)(f(t0,y0)+f(t1,y0+h*f(t0,y0)))

example

y'=t^2-y      y(0)=1 h=0.2

t1=0.2

p1=y0+hf(t0,y0)=0.8

y1=0.824

continuing in the same way through

t4=0.8

y4=0.6001
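The predictor-corrector pair above, sketched in Python for the same example:

```python
def f(t, y):
    return t * t - y  # right-hand side of y' = t^2 - y

def heun(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        p = y + h * f(t, y)                        # Euler predictor for y(t+h)
        y = y + (h / 2) * (f(t, y) + f(t + h, p))  # trapezoidal corrector
        t = t + h
    return y

print(round(heun(f, 0.0, 1.0, 0.2, 1), 4))  # y1 = 0.824
print(round(heun(f, 0.0, 1.0, 0.2, 4), 4))  # y4 = 0.6001
```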

Taylor Series Method:

Using the Taylor Series expansion to approximate the solution

y(t0+h)=y(t0)+h*y'(t0)+(h^2/2!)y''(t0)+...+(h^n/n!)y^(n)(t0)

we want to solve

y'=f(t,y) with y(t0)=a

from the Taylor expansion we can find the successive points yk+1=yk+h*y'k+(h^2/2!)y''k+...+(h^n/n!)y^(n)k

but we still need to compute y''k, y'''k, etc. by repeatedly differentiating y'=f(t,y)
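As a sketch, an order-2 Taylor step for the running example y' = t^2 - y; here y'' is worked out by hand by differentiating f: y'' = 2t - y' = 2t - (t^2 - y).

```python
def taylor2(t0, y0, h, steps):
    # Order-2 Taylor method: y_{k+1} = y_k + h*y'_k + (h^2/2)*y''_k
    t, y = t0, y0
    for _ in range(steps):
        d1 = t * t - y   # y'  = f(t, y) = t^2 - y
        d2 = 2 * t - d1  # y'' = 2t - y' (found by differentiating f by hand)
        y = y + h * d1 + (h * h / 2) * d2
        t = t + h
    return y

print(round(taylor2(0.0, 1.0, 0.2, 1), 4))  # y1 = 0.82
```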

Runge-Kutta Method of Order 4:

the Taylor method gives a good approximation, but it requires the derivatives of the function. The Runge-Kutta method of order 4 matches the accuracy of the Taylor series method of order N=4, but it does not require the computation of derivatives

yk+1=yk+(h/6)(f1+2f2+2f3+f4)

f1=f(tk,yk)

f2=f(tk+h/2,yk+(h/2)f1)

f3=f(tk+h/2,yk+(h/2)f2)

f4=f(tk+h,yk+h*f3)

example

y'=t^2-y      y(0)=1 h=0.2

k=0 t0=0

f1=-1

f2=f(0.1,0.9)=-.89

f3=f(0.1,0.911)=-.901

f4=f(0.2,0.8198)=-0.7798

k=1 t1=0.2 y1=0.821273

f1=-.781273

f2=-0.653146

f3=-0.665958

f4=-0.528081
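One RK4 step in Python, reproducing the k=0 computation above:

```python
def f(t, y):
    return t * t - y  # right-hand side of y' = t^2 - y

def rk4_step(f, t, y, h):
    f1 = f(t, y)
    f2 = f(t + h / 2, y + (h / 2) * f1)
    f3 = f(t + h / 2, y + (h / 2) * f2)
    f4 = f(t + h, y + h * f3)
    return y + (h / 6) * (f1 + 2 * f2 + 2 * f3 + f4)

y1 = rk4_step(f, 0.0, 1.0, 0.2)
print(round(y1, 6))  # 0.821273, so f1 at k=1 is 0.2^2 - y1 = -0.781273
```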

Systems of Differential Equations:

Assume we have dx/dt=f(t,x,y), dy/dt=g(t,x,y) with x(t0)=x0, y(t0)=y0. The solution of this system is the pair of functions x(t) and y(t) that, when differentiated and substituted into the system of equations, satisfy both equations.

example

x'=x+2y x(0)=6

y'=3x+2y  y(0)=4

Euler's Method:

we can extend Euler's method for a single differential equation to a system of equations.

dx/dt=f(t,x,y) dx=f(t,x,y)dt

dy/dt=g(t,x,y)  dy=g(t,x,y)dt

dxk=xk+1-xk

dyk=yk+1-yk

dtk=tk+1-tk

xk+1-xk=f(tk,xk,yk)(tk+1-tk)

yk+1-yk=g(tk,xk,yk)(tk+1-tk)

the Euler method does not give a good accuracy because it approximates the Taylor Expansion only to the first derivative.
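Euler's method applied to the example system x'=x+2y, y'=3x+2y with x(0)=6, y(0)=4; the step size h = 0.1 below is an assumed value for illustration (the notes do not give one).

```python
def f(t, x, y):
    return x + 2 * y  # dx/dt

def g(t, x, y):
    return 3 * x + 2 * y  # dy/dt

def euler_system(f, g, t0, x0, y0, h, steps):
    t, x, y = t0, x0, y0
    for _ in range(steps):
        # both updates use the values from the previous step
        x, y = x + h * f(t, x, y), y + h * g(t, x, y)
        t = t + h
    return x, y

x1, y1 = euler_system(f, g, 0.0, 6.0, 4.0, 0.1, 1)
print(x1, y1)  # one step: x1 = 6 + 0.1*14 = 7.4, y1 = 4 + 0.1*26 = 6.6
```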

Runge-Kutta for systems of differential equations:

The Runge-Kutta method can be extended to systems of differential equations. The formulas are:

xk+1=xk+(h/6)*(f1+2f2+2f3+f4)

yk+1=yk+(h/6)*(g1+2g2+2g3+g4)

f1=f(tk,xk,yk)

f2=f(tk+h/2,xk+(h/2)f1,yk+(h/2)g1)

f3=f(tk+h/2,xk+(h/2)f2,yk+(h/2)g2)

f4=f(tk+h,xk+h*f3,yk+h*g3)

g1=g(tk,xk,yk)

g2=g(tk+h/2,xk+(h/2)f1,yk+(h/2)g1)

g3=g(tk+h/2,xk+(h/2)f2,yk+(h/2)g2)

g4=g(tk+h,xk+h*f3,yk+h*g3)
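A sketch of one RK4 step for the same example system (again, h = 0.1 is an assumed step size). Note that the slopes must be computed in f/g pairs, since each stage uses both:

```python
import math

def f(t, x, y):
    return x + 2 * y  # dx/dt

def g(t, x, y):
    return 3 * x + 2 * y  # dy/dt

def rk4_system_step(f, g, t, x, y, h):
    f1, g1 = f(t, x, y), g(t, x, y)
    f2, g2 = (f(t + h/2, x + (h/2)*f1, y + (h/2)*g1),
              g(t + h/2, x + (h/2)*f1, y + (h/2)*g1))
    f3, g3 = (f(t + h/2, x + (h/2)*f2, y + (h/2)*g2),
              g(t + h/2, x + (h/2)*f2, y + (h/2)*g2))
    f4, g4 = (f(t + h, x + h*f3, y + h*g3),
              g(t + h, x + h*f3, y + h*g3))
    return (x + (h/6) * (f1 + 2*f2 + 2*f3 + f4),
            y + (h/6) * (g1 + 2*g2 + 2*g3 + g4))

x1, y1 = rk4_system_step(f, g, 0.0, 6.0, 4.0, 0.1)
# for reference, the exact solution is x(t) = 4e^(4t) + 2e^(-t),
# y(t) = 6e^(4t) - 2e^(-t), so x(0.1) ≈ 7.77697 and y(0.1) ≈ 7.14127
```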

Higher order differential Equations:

Higher order differential equations involve higher derivatives such as x''(t), for example mx''(t)+cx'(t)+kx(t)=g(t). These higher order differential equations can be transformed into a system of first order differential equations using the substitution y(t)=x'(t). For the previous example, first solve for x''(t):

x''(t)=(g(t)-cx'(t)-kx(t))/m

y'(t)=(g(t)-c*y(t)-k*x(t))/m    with x'(t)=y(t)

example

4x''(t)+3x'+5x=2   x(0)=1, x'(0)=3

x''=(2-3x'-5x)/4

y'=(2-3y-5x)/4    x'=y
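The substitution turns the second-order example into a first-order system that any of the earlier methods can integrate. Below is a minimal sketch using one Euler step, assuming the intended initial conditions are x(0) = 1 and x'(0) = 3, with an illustrative step size h = 0.1:

```python
def step(t, x, y, h):
    """One Euler step of the first-order system x' = y, y' = (2 - 3y - 5x)/4."""
    return t + h, x + h * y, y + h * (2 - 3 * y - 5 * x) / 4

# assumed initial conditions: x(0) = 1, y(0) = x'(0) = 3
t1, x1, y1 = step(0.0, 1.0, 3.0, 0.1)
print(x1, y1)  # x1 = 1 + 0.1*3 = 1.3, y1 = 3 + 0.1*(2 - 9 - 5)/4 = 2.7
```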

Final Exam Material:

-least squares line

-least squares for non-linear equations

-transformation for data linearization

-polynomial fitting

-splines

    *5 properties of splines

    *proofs

    *how to use the spline formulas

    *end point constraints

-numerical differentiation

   *limit of difference quotient

   *central difference formulas

-numerical integration

   *Trapezoidal rule

   *Simpson rule

-numerical optimization

   *local and global minima and maxima

   *golden ratio

   *gradient method

-Solution of differential equations

   *Euler's Method

   *Heun's Method

   *Taylor Series

   *Runge-kutta

   *systems of differential equations (Euler's method, Runge-kutta)

  Final Exam Time and locations

   Friday Aug 8th , 10:20-12:20