7/19-7/23 notes
7/19
Numerical Differentiation
Limit of the Difference Quotient
ƒ'(x) = limit(h→0) [ƒ(x+h)-ƒ(x)]/h
We approximate the derivative by choosing a small h
How small does h need to be?
-depends on ƒ(x)
we start with a large h and make it smaller and smaller, stopping before Dk (the approximation) starts losing precision.
Example stopping criterion: |Dk - Dk-1| < ε
Dk = [ ƒ(x+hk) - ƒ(x) ] / hk
Example: ƒ(x) = e^x; we want to approximate ƒ'(x) at x = 1 (exact value e ≈ 2.7183)
Dk = (e^(x+hk) - e^x) / hk
k hk Dk
0 .1 2.8588
1 .01 2.7319
2 .001 2.7196
3 .0001 2.7184
4 .00001 2.7183
5 .000001 2.7183
we stop at k = 5, hk = .000001
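A minimal Python sketch of this iterative scheme (the function name diff_quotient_iter, the factor-of-10 shrink, and the tolerance eps are illustrative choices, not from the notes); with eps = 1e-4 it stops near h = .000001 for the example above.

```python
import math

def diff_quotient_iter(f, x, h0=0.1, eps=1e-4, max_iter=20):
    """Shrink h by 10x each step until successive approximations
    D_k = [f(x + h_k) - f(x)] / h_k differ by less than eps."""
    h = h0
    d_prev = (f(x + h) - f(x)) / h
    for _ in range(max_iter):
        h /= 10.0
        d = (f(x + h) - f(x)) / h
        if abs(d - d_prev) < eps:
            return d
        d_prev = d
    return d_prev

# Example from the notes: f(x) = e^x at x = 1 (exact derivative is e ≈ 2.7183)
print(diff_quotient_iter(math.exp, 1.0))
```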
The iterative approach is slow. There are alternatives:
Central Differences Formulas
-Centered Formula of order O(h²)
start with Taylor expansion
let h = x - x0, so x = x0 + h
1. ƒ(x+h) = ƒ(x) + ƒ'(x)h + ƒ''(x)h²/2! + ƒ'''(c1)h³/3! ← truncation error
2. ƒ(x-h) = ƒ(x) - ƒ'(x)h + ƒ''(x)h²/2! - ƒ'''(c2)h³/3! ← truncation error
subtracting 2 from 1:
ƒ(x+h) - ƒ(x-h) = 2ƒ'(x)h + [ƒ'''(c1)+ƒ'''(c2)]h³/3!
ƒ'(x) = 1/(2h) [ ƒ(x+h) - ƒ(x-h) - [ƒ'''(c1)+ƒ'''(c2)]h³/3! ]
= [ƒ(x+h)-ƒ(x-h)]/(2h) - [ƒ'''(c1)+ƒ'''(c2)]h²/(2*3!) ← error O(h²)
ƒ'(x) ≈ [ƒ(x+h)-ƒ(x-h)]/(2h)
Example: ƒ(x) = e^x , h = .0001
ƒ'(1) ≈ [e^(1+.0001) - e^(1-.0001)] / [2(.0001)] = 2.7183
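A quick Python check of the O(h²) centered formula on this example (the helper name central_diff is just illustrative):

```python
import math

def central_diff(f, x, h=1e-4):
    """O(h^2) centered difference: [f(x+h) - f(x-h)] / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example from the notes: f(x) = e^x at x = 1, h = .0001
print(central_diff(math.exp, 1.0))  # ~2.7183 (exact value is e)
```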
-Centered Formula of order O(h⁴)
Do the Taylor expansion
ƒ(x+h) = ƒ(x) + ƒ'(x)h + ƒ''(x)h²/2! + ƒ'''(x)h³/3! + ƒ^(4)(x)h⁴/4! + ƒ^(5)(c1)h⁵/5!
ƒ(x-h) = ƒ(x) - ƒ'(x)h + ƒ''(x)h²/2! - ƒ'''(x)h³/3! + ƒ^(4)(x)h⁴/4! - ƒ^(5)(c2)h⁵/5!
1. ƒ(x+h) - ƒ(x-h) = 2ƒ'(x)h + 2ƒ'''(x)h³/3! + [ƒ^(5)(c1)+ƒ^(5)(c2)]h⁵/5!
using step size 2h instead of h in 1 we get
2. ƒ(x+2h) - ƒ(x-2h) = 4ƒ'(x)h + 16ƒ'''(x)h³/3! + [ƒ^(5)(c3)+ƒ^(5)(c4)]32h⁵/5!
now multiply 1 by 8 and subtract 2:
8[ƒ(x+h) - ƒ(x-h)] - [ƒ(x+2h) - ƒ(x-2h)] = 8ƒ(x+h) - 8ƒ(x-h) - ƒ(x+2h) + ƒ(x-2h)
= 12ƒ'(x)h + kƒ^(5)(c5)h⁵ ← error term (k constant)
so:
ƒ'(x) = [ -ƒ(x+2h) + 8ƒ(x+h) - 8ƒ(x-h) + ƒ(x-2h) ] / (12h) - kƒ^(5)(c5)h⁴/12 ← error O(h⁴)
Example: let ƒ(x) = sin(x), h = .001, centered formula O(h⁴)
ƒ'(π/3) ≈ [ -sin(π/3+.002) + 8sin(π/3+.001) - 8sin(π/3-.001) + sin(π/3-.002) ] / [12(.001)] = .5
centered formula O(h²)
ƒ'(π/3) ≈ [ sin(π/3+.001) - sin(π/3-.001) ] / [2(.001)] = .499999917
using the difference quotient
ƒ'(π/3) ≈ [ sin(π/3+.001) - sin(π/3) ] / .001 = .499566904
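For comparison, a Python sketch of the O(h⁴) centered formula on the sin example (the helper name central_diff_o4 is illustrative):

```python
import math

def central_diff_o4(f, x, h=1e-3):
    """O(h^4) centered difference:
    [-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)] / (12h)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

# Example from the notes: f(x) = sin(x) at x = pi/3, h = .001 (exact value .5)
print(central_diff_o4(math.sin, math.pi / 3))
```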
7/20
Numerical Integration
Trapezoidal Rule: approximates area under the curve with trapezoids

A1 = (ƒ0+ƒ1)h/2
A2 = (ƒ1+ƒ2)h/2
A3 = (ƒ2+ƒ3)h/2
A1+A2+A3 = (ƒ0+ƒ1)h/2 + (ƒ1+ƒ2)h/2 + (ƒ2+ƒ3)h/2
= h [ ƒ0/2 + ƒ1/2 + ƒ1/2 + ƒ2/2 + ƒ2/2 + ƒ3/2 ]
= h [ ƒ0/2 + ƒ1 + ƒ2 + ƒ3/2 ]
In general: ∫a→b ƒ(x)dx ≈ h[ (ƒ(a)+ƒ(b))/2 + ∑k=1→M-1 ƒ(xk) ] for M+1 equally spaced points x0...xM, with h = (b-a)/M
Example: ∫-1→1 e^(-x²)dx , M = 4 (4 areas)
h = (b-a)/M = [1-(-1)]/4 = 1/2
∫-1→1 e^(-x²)dx ≈ 1/2 [ (e^(-(-1)²) + e^(-(1)²))/2 + e^(-(-.5)²) + e^(-(0)²) + e^(-(.5)²) ]
= 1.4627
Example: ∫0→π/2 sin(x)dx , M = 3 (3 areas)
h = π/6
∫0→π/2sin(x)dx ≈ π/6 [ [sin(0)+sin(π/2)]/2 + sin(π/6) + sin(π/3) ]
= .9770486
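A Python sketch of the composite trapezoidal rule, checked against the two examples above (the function name trapezoid is illustrative):

```python
import math

def trapezoid(f, a, b, M):
    """Composite trapezoidal rule with M subintervals of width h = (b-a)/M."""
    h = (b - a) / M
    total = (f(a) + f(b)) / 2
    for k in range(1, M):
        total += f(a + k * h)
    return h * total

# Examples from the notes
print(trapezoid(lambda x: math.exp(-x**2), -1, 1, 4))  # ~1.4627
print(trapezoid(math.sin, 0, math.pi / 2, 3))          # ~.9770486
```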
7/21
Simpson Rule
-It approximates the integral using the area under a parabola every 3 points.
-The quadratic polynomial gives a better approximation than the line used in the trapezoidal rule.

x1-x0= h
x2-x0= 2h
x2-x1= h
Using Lagrange Polynomials we can obtain P2(x)
P2(x) = ƒ0 ((x-x1)(x-x2)) / ((x0-x1)(x0-x2)) + ƒ1 ((x-x0)(x-x2)) / ((x1-x0)(x1-x2)) + ƒ2 ((x-x0)(x-x1)) / ((x2-x0)(x2-x1))
∫x0→x2 P2(x)dx = ∫x0→x2 [ ƒ0 ((x-x1)(x-x2)) / ((-h)(-2h)) + ƒ1 ((x-x0)(x-x2)) / ((h)(-h)) + ƒ2 ((x-x0)(x-x1)) / ((2h)(h)) ] dx
To do the integral, we do the substitution x = x0+ht
→ t = (x-x0)/h , dx = hdt
t0 = (x0-x0)/h = 0
t2 = (x2-x0)/h = 2
∫x0→x2 P2(x)dx = ∫0→2 [ ƒ0 ((x0+ht-x1)(x0+ht-x2)) / (2h²) + ƒ1 ((x0+ht-x0)(x0+ht-x2)) / (-h²) + ƒ2 ((x0+ht-x0)(x0+ht-x1)) / (2h²) ] h dt
= ∫0→2 [ ƒ0 ((ht-h)(ht-2h)) / (2h²) + ƒ1 ((ht)(ht-2h)) / (-h²) + ƒ2 ((ht)(ht-h)) / (2h²) ] h dt
= ∫0→2 [ƒ0/2 * (t-1)(t-2)] h dt + ∫0→2 [ƒ1/(-1) * (t)(t-2)] h dt + ∫0→2 [ƒ2/2 * (t)(t-1)] h dt
= ƒ0h/2 ∫0→2 (t²-3t+2) dt - ƒ1h ∫0→2 (t²-2t) dt + ƒ2h/2 ∫0→2 (t²-t) dt
= ƒ0h/2 * (2/3) - ƒ1h * (-4/3) + ƒ2h/2 * (2/3)
= (ƒ0+4ƒ1+ƒ2 )h/3
This formula uses only 3 points. If the interval [a,b] is subdivided into 2M subintervals [xk, xk+1] of equal width, we can apply the Simpson Rule to every group of 3 points.
Example: 2M=6, M=3
2M intervals
2M+1 points
M parabolas

A = A1 + A2 + A3
= h/3(ƒ0+4ƒ1+ƒ2) + h/3(ƒ2+4ƒ3+ƒ4) + h/3(ƒ4+4ƒ5+ƒ6)
= h/3( ƒ0 + 4ƒ1 + 2ƒ2 + 4ƒ3 + 2ƒ4 + 4ƒ5 + ƒ6 )
In general (Simpson Rule):
A = h/3( ƒ0 + 4ƒ1 + 2ƒ2 + 4ƒ3 + 2ƒ4 + ... + 4ƒ(2M-1) + ƒ(2M) )
Example: ∫0→π/2sin(x)dx , 2M=2, M=1
h = π/4
A = (π/4)/3 [ sin(0) + 4sin(π/4) + sin(π/2) ] = 1.002279878
Example: ∫0→π/2sin(x)dx , 2M=4, M=2
h = π/8
A = (π/8)/3 [ sin(0) + 4sin(π/8) + 2sin(π/4) + 4sin(3π/8) + sin(π/2) ] = 1.000134585
Exact solution: ∫0→π/2sin(x)dx = 1
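A Python sketch of the composite Simpson rule, checked against the two examples above (the function name simpson is illustrative):

```python
import math

def simpson(f, a, b, M):
    """Composite Simpson rule: 2M subintervals, 2M+1 points, M parabolas."""
    n = 2 * M
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 == 1 else 2) * f(a + k * h)
    return h * total / 3

# Examples from the notes: integral of sin(x) on [0, pi/2], exact value 1
print(simpson(math.sin, 0, math.pi / 2, 1))  # ~1.002279878
print(simpson(math.sin, 0, math.pi / 2, 2))  # ~1.000134585
```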
7/22
Numerical Optimization
-Minimization of a function f(x)
let I = [a,b] be an interval
-Local maximum value at x=p, in the interval I=[a,b]
if ƒ(x)≤ƒ(p) for all x є I
-Local minimum value at x=p, in the interval I=[a,b]
if ƒ(x)≥ƒ(p) for all x є I
-A function is increasing in the interval I = [a,b]
if x1 < x2 implies ƒ(x1) < ƒ(x2) for all x1, x2 є I
if ƒ'(x) > 0 for all x є I then ƒ(x) is increasing in the interval I
-A function is decreasing in the interval I = [a,b]
if x1 < x2 implies ƒ(x1) > ƒ(x2) for all x1, x2 є I
if ƒ'(x) < 0 for all x є I then ƒ(x) is decreasing in the interval I
-If there is a maximum or a minimum value at x=p then ƒ'(p) = 0

If ƒ''(p) < 0 then ƒ(p) is a local maximum
If ƒ''(p) > 0 then ƒ(p) is a local minimum
Minimization Using Derivatives
Assume we want to minimize ƒ(x) and it has a unique minimum at p, a<p<b
We can solve the equation ƒ'(x) = 0 using any method for non-linear equations: Newton, Secant, Bisection, Regula Falsi
ƒ'(x) can be computed numerically using the centered formulas.
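A minimal sketch of this idea, using bisection on a numerically computed ƒ' (the O(h²) centered formula from 7/19); the function names and the assumption that ƒ'(a) < 0 < ƒ'(b) are illustrative, not from the notes:

```python
def minimize_by_derivative(f, a, b, h=1e-4, tol=1e-8):
    """Minimize f on [a, b] by solving f'(x) = 0 with bisection,
    assuming the minimum is bracketed, i.e. f'(a) < 0 < f'(b)."""
    def df(x):
        # centered difference of order O(h^2)
        return (f(x + h) - f(x - h)) / (2 * h)

    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if df(mid) < 0:
            lo = mid   # derivative negative: minimum is to the right
        else:
            hi = mid   # derivative positive: minimum is to the left
    return (lo + hi) / 2

# Example: f(x) = x^2 + 1 on [-2, 1]; the minimum is at x = 0
print(minimize_by_derivative(lambda x: x**2 + 1, -2, 1))
```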
Golden Ratio
Assume ƒ(x) is unimodal (one minimum) in the interval [a,b]
we divide the interval into three subintervals with a < c < d < b

Then we evaluate ƒ(x) in c and d
-if ƒ(c) < ƒ(d) then the new interval to search will be reduced to [a,d]
-if ƒ(c) > ƒ(d) then the new interval to search will be reduced to [c,b]
Why? if ƒ(c) < ƒ(d) then the minimum lies to the left of d
if ƒ(c) > ƒ(d) then the minimum lies to the right of c
c and d could be any two values such that a<c<d<b
We choose c and d in such a way that we reduce the number of computations done.
we choose c = a+(1-r)(b-a)
d = a+r(b-a)

we want:
(1-r)/r = r/1 → 1-r = r²
r² + r - 1 = 0
Golden Ratio: r = (-1±√(1+4))/2, taking the positive root
= (√5 - 1)/2 ≈ .61803
By using the Golden Ratio to choose c,d we reduce the number of operations that we need to do
when ƒ(c) < ƒ(d)
the new interval becomes [a,d]
the old d will be the new b
d → b
the old c becomes the new d
c → d
the new c will have to be recomputed as well as ƒ(c)
when ƒ(c) > ƒ(d)
the new interval becomes [c,b]
the old c will be the new a
c → a
the old d becomes the new c
d → c
the new d will have to be recomputed as well as ƒ(d)
So at every iteration we need to compute only one new value, either ƒ(c) or ƒ(d), instead of two
Example: ƒ(x) = x²+1, r = (√5 - 1)/2 = .61803, start interval [-2,1]
i=1 a1=-2 b1=1
c1 = -2+(1-.61803)(1-(-2)) = -.8541
d1 = -2+.61803(1-(-2)) = -.14589
ƒ(a1)=5
ƒ(b1)=2
ƒ(c1)=1.72949
ƒ(d1)=1.02128
ƒ(c1) > ƒ(d1) → new interval [c1,b1]
7/23
continue example:
i=2 a2=-.8541 b2=1
c2 = -.8541+(1-.61803)(1-(-.8541)) = -.14589 (equal to d1)
d2 = -.8541+.61803(1-(-.8541)) = .291789
ƒ(a2)=1.72949 →ƒ(c1)
ƒ(b2)=2 →ƒ(b1)
ƒ(c2)=1.02128 →ƒ(d1)
ƒ(d2)=1.08514 →new value
ƒ(c2) < ƒ(d2) → new interval [a2,d2]
i=3 a3=-.8541 b3=.291789
c3 = -.8541+(1-.61803)(.291789-(-.8541)) = -.4164
d3 = c2 = -.14589
ƒ(a3)=1.72949 →ƒ(a2)
ƒ(b3)=1.08514 →ƒ(d2)
ƒ(c3)=1.17339 →new value
ƒ(d3)=1.02128 →ƒ(c2)
ƒ(c3) > ƒ(d3) → new interval [c3,b3]
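A Python sketch of this golden-ratio search, reusing one function value per iteration as described above (names and tolerance are illustrative):

```python
import math

def golden_search(f, a, b, tol=1e-5):
    """Golden-ratio search for the minimum of a unimodal f on [a, b].
    Each iteration keeps one of f(c), f(d), so only one new evaluation is needed."""
    r = (math.sqrt(5) - 1) / 2            # golden ratio, ~.61803
    c = a + (1 - r) * (b - a)
    d = a + r * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                       # new interval [a, d]
            b, d, fd = d, c, fc           # old d -> b, old c -> d
            c = a + (1 - r) * (b - a)
            fc = f(c)                     # only f(c) is recomputed
        else:                             # new interval [c, b]
            a, c, fc = c, d, fd           # old c -> a, old d -> c
            d = a + r * (b - a)
            fd = f(d)                     # only f(d) is recomputed
    return (a + b) / 2

# Example from the notes: f(x) = x^2 + 1 on [-2, 1]; the minimum is at x = 0
print(golden_search(lambda x: x**2 + 1, -2, 1))
```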
Steepest Descent or Gradient Method:
-Assume we want to minimize ƒ(X), a function of N variables, where X = (x1, x2, ..., xN)
-The gradient of ƒ(X) is a vector defined as:
∇ƒ(X) = (∂ƒ(X)/∂x1, ∂ƒ(X)/∂x2, ..., ∂ƒ(X)/∂xN)
From the concept of gradient we know that the gradient vector points in the direction of greatest increase of ƒ(X)
Then -∇ƒ(X) will point in the direction of greatest decrease.
To obtain the minimum using the gradient method, starting at point P0 we move along the line in the direction of greatest decrease, that is -∇ƒ(X)
P1 = P0 - G0h where G0 is the gradient at P0 and h is a small constant (step size)
Pk+1 = Pk - Gkh
Example: y = x²+1
y' = 2x, G = 2x
let h = .1
x0 = -1
G0 = 2(-1) = -2     x1 = x0 - G0h
= -1 - (-2)(.1) = -.8
G1 = 2(x1) = -1.6 x2 = x1-G1h
= -.8-(-1.6)(.1) = -.64
G2 = 2(-.64) = -1.28 x3 = -.64-(-1.28)(.1) = -.512
G3 = 2(-.512) = -1.024     x4 = -.512 - (-1.024)(.1) = -.4096
G4 = 2(-.4096) = -.8192     x5 = -.4096 - (-.8192)(.1) = -.32768
... ...
as k→∞: Gk → 0 and xk → 0 (the minimum of y = x²+1 is at x = 0)
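A short Python sketch of these iterations for the 1-D example (names are illustrative; for N variables, x and the gradient would be vectors):

```python
def gradient_descent(grad, x0, h=0.1, iters=50):
    """Steepest descent iteration: x_{k+1} = x_k - h * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - h * grad(x)
    return x

# Example from the notes: y = x^2 + 1, y' = 2x, starting at x0 = -1, h = .1
print(gradient_descent(lambda x: 2 * x, -1.0))  # -> approaches 0
```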
Numerical Methods for Differential Equations
Consider the differential equation y' = g(t,y)
Example: dy/dt = ky → we look for a solution y = f(t)
∫ dy/y = ∫ k dt
ln(y) = kt + c → solution y = e^(kt+c) = C e^(kt)
Some differential equations do not have an analytical solution so they have to be approximated using numerical methods.
Euler's Method
Let [a,b] be the interval over which we want to find the solution of y' = f(t,y)