7-19-04

 

Homework 4 is posted (curve fitting, polynomials, and splines)


Midterm solution is posted (see lab handouts)


------------------------------------------------------------------------------


Numerical Differentiation
- The derivative can be approximated directly from its definition, f'(x) ≈ (f(x+h) - f(x))/h, using a small h
- h can be chosen by starting with a relatively large value and decreasing it until floating-point precision loss sets in
Example


f(x) = e^x
k        h[k]        D[k]
0        .1          2.8588
1        .01         2.7319
2        .001        2.7196
3        .0001       2.7184
4        .00001      2.7183
5        .000001     2.7183
6        .0000001    2.7183


h[k] = 10^(-(k+1)); D[k] = (f(1+h[k]) - f(1))/h[k], converging to f'(1) = e ≈ 2.7183
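
A minimal Python sketch of this table (reading D[k] as the forward difference of f(x) = e^x at x = 1, which is what the values above match; the function name is mine):

import math

def forward_diff(f, x, h):
    # Forward-difference approximation from the definition: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

for k in range(7):
    h = 10.0 ** (-(k + 1))              # h[k] = 10^(-(k+1))
    D = forward_diff(math.exp, 1.0, h)  # D[k] -> f'(1) = e ≈ 2.7183
    print(k, h, round(D, 4))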


Centered formula of O(h^2)
- f'(x) ≈ (f(x+h) - f(x-h))/(2h)
- The error is O(h^2), so the approximation is better than the O(h) forward-difference definition
- For example, f'(1) = 2.7183 with h = .0001 and f(x) = e^x


Centered formula of O(h^4)
- f'(x) ≈ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h))/(12h)
- The error is O(h^4), better again than the O(h^2) formula
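
Both centered formulas as a short Python sketch, again for f(x) = e^x at x = 1 (exact answer e ≈ 2.71828; function names are mine):

import math

def centered_h2(f, x, h):
    # O(h^2) centered formula: (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

def centered_h4(f, x, h):
    # O(h^4) five-point centered formula
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

h = 1e-4
print(centered_h2(math.exp, 1.0, h))   # ≈ 2.71828..., matching f'(1) = e
print(centered_h4(math.exp, 1.0, h))   # even closer for the same h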

 

7-20-04

 

Numerical Integration
- Trapezoidal Rule
    * Approximates area under the curve using trapezoids
    * A[1] = ((f[0]+f[1])/2)(Δx), etc.
    * ∫[a;b] f(x)dx ≈ h[(f(a) + f(b))/2 + ∑[k=1;m-1] f(x[k])], with h = Δx = (b-a)/m and m+1 points
    * Example:
    ∫[-1;1] e^(-x^2)dx, m = 4
    Δx = (b-a)/m = .5
    ∫[-1;1] e^(-x^2)dx ≈ .5[(e^(-1)+e^(-1))/2 + e^(-.25) + e^0 + e^(-.25)]
    ≈ 1.4627
    * Example:
    ∫[0;π/2] sin(x)dx, m = 3
    Δx = (b-a)/m = π/6
    ∫[0;π/2] sin(x)dx ≈ π/6[(sin(0) + sin(π/2))/2 + sin(π/6) + sin(π/3)]
    ≈ .9770486
    * Can be used effectively to calculate Fourier transforms
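
A Python sketch of the composite trapezoidal rule above; it reproduces both examples (the function name is mine):

import math

def trapezoid(f, a, b, m):
    # Composite trapezoidal rule with m subintervals (m+1 points)
    h = (b - a) / m
    total = (f(a) + f(b)) / 2
    for k in range(1, m):
        total += f(a + k * h)          # interior points x[k]
    return h * total

print(trapezoid(lambda x: math.exp(-x**2), -1, 1, 4))  # ≈ 1.4627
print(trapezoid(math.sin, 0, math.pi / 2, 3))          # ≈ .9770486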

 

7-21-04

 

Simpson's Rule
- Approximates the integral by fitting a parabola through each successive group of 3 points and integrating it
- P[2](x) = y[0](((x-x[1])(x-x[2]))/((x[0]-x[1])(x[0]-x[2])))
    + y[1](((x-x[0])(x-x[2]))/((x[1]-x[0])(x[1]-x[2])))
    + y[2](((x-x[0])(x-x[1]))/((x[2]-x[0])(x[2]-x[1])))
- With the substitution x = x[0] + tΔx, ∫[x[0];x[2]] P[2](x)dx = y[0]Δx/2 * ∫[0;2] (t^2-3t+2)dt
    - y[1]Δx * ∫[0;2] (t^2-2t)dt
    + y[2]Δx/2 * ∫[0;2] (t^2-t)dt
    = y[0]Δx/3 + 4y[1]Δx/3 + y[2]Δx/3
- Given 2m+1 points, there are 2m intervals and m parabolas
- A = (Δx/3)(y[0] + 4y[1] + 2y[2] + 4y[3] + ... + 4y[2m-1] + y[2m])
- Example:
sin(x), from 0 to π/2, 2m = 2
Δx = π/4
A = (π/12)(sin(0) + 4sin(π/4) + sin(π/2))
    = 1.002279878
2m = 4
Δx = π/8
A = (π/24)(sin(0) + 4sin(π/8) + 2sin(π/4) + 4sin(3π/8) + sin(π/2))
    = 1.000134585
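
A Python sketch of the composite formula above (the function name is mine; m counts parabolas, so 2m = 2 means m = 1):

import math

def simpson(f, a, b, m):
    # Composite Simpson rule: m parabolas over 2m subintervals (2m+1 points)
    n = 2 * m
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)  # weights 4, 2, 4, ...
    return h * total / 3

print(simpson(math.sin, 0, math.pi / 2, 1))  # 2m = 2: ≈ 1.002279878
print(simpson(math.sin, 0, math.pi / 2, 2))  # 2m = 4: ≈ 1.000134585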

 

7-22-04

Numerical Optimization
- Minimization of a function f(x), interval I = [a,b]
- f has a local maximum value at x = p in the interval if f(x) ≤ f(p) for all x in the interval; a local minimum is the same with the inequality reversed (f(x) ≥ f(p))
- A function is increasing over an interval if f(x[2]) > f(x[1]) whenever x[2] > x[1] in the interval (equivalently, f'(x) > 0 over the interval); decreasing is the opposite
- A maximum or minimum is indicated where f'(x) = 0
- If f"(x) < 0 where f'(x) = 0, it is a maximum (f"(x) > 0 means a minimum)


The Golden Ratio
- Assume that f(x) is unimodal (one minimum) over the interval I = [a,b]
- Divide the interval into 3 subintervals so that a < c < d < b
- Evaluate f(x) at c and d
- If f(c) < f(d), then the interval is reduced to [a,d]
- Else, if f(c) > f(d), then the interval is reduced to [c,b]
- c = a + (1-r)(b-a)
- d = a + r(b-a)
- r = (√5 - 1)/2 ≈ .618, the reciprocal of the Golden Ratio (1+√5)/2
- The number of function evaluations is reduced: with this choice of r, one of the old interior points is reused as an interior point of the new interval (c becomes the new d, or d becomes the new c), so its function value need not be recomputed
- Therefore, at every iteration beyond the first, only one new function evaluation is done


Example
- f(x) = x^2 + 1
- a = -2, b = 1 to start
- c = -.8541, d = -.14589
- f(a) = 5, f(b) = 2
- f(c) = 1.72949, f(d) = 1.02128
- Since f(c) > f(d), the interval is reduced to [c,b] = [-.8541, 1]

 

7-23-04

Example (continued)
-------------------
i = 2
a[2] = -.8541, b[2] = 1
c[2] = -.14589, d[2] = .291796
f(a[2]) = 1.72949
f(b[2]) = 2
f(c[2]) = 1.02128
f(d[2]) = 1.08514
Interval changed to [a[2], d[2]]


i = 3
a[3] = -.8541, b[3] = .291796
c[3] = -.4164, d[3] = -.14589
f(a[3]) = 1.72949
f(b[3]) = 1.08514
f(c[3]) = 1.17339
f(d[3]) = 1.02128
Interval changed to [c[3],b[3]]
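
A Python sketch of the search, reusing one function value per iteration as described above; it prints the same iterates as this example (the function name is mine):

import math

def golden_section(f, a, b, iterations):
    # Golden-section search for the minimum of a unimodal f on [a, b]
    r = (math.sqrt(5) - 1) / 2              # ≈ .618034
    c = a + (1 - r) * (b - a)
    d = a + r * (b - a)
    fc, fd = f(c), f(d)
    for i in range(1, iterations + 1):
        print(i, a, b, c, d)
        if fc < fd:                         # minimum lies in [a, d]
            b, d, fd = d, c, fc             # old c is reused as the new d
            c = a + (1 - r) * (b - a)
            fc = f(c)                       # the only new evaluation
        else:                               # minimum lies in [c, b]
            a, c, fc = c, d, fd             # old d is reused as the new c
            d = a + r * (b - a)
            fd = f(d)                       # the only new evaluation
    return (a + b) / 2

golden_section(lambda x: x**2 + 1, -2, 1, 10)   # iterates shrink toward x = 0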


------------------------------------------------------------------------------


Steepest Descent or Gradient Method
- Assume we want to minimize f(x) of n variables, x = (x[1], x[2], ... x[n])
- The gradient function is a vector function defined as (∂f(x)/∂x[1], ∂f(x)/∂x[2], ... ∂f(x)/∂x[n])
- The gradient vector points in the direction of greatest increase of f(x)
- Therefore, -G (where G is the gradient vector) points in the direction of greatest decrease
- Start at point p[0] and move along in the direction of greatest decrease
- p[k] = p[k-1] - G[k-1]h, where G[k-1] is the gradient evaluated at p[k-1] and h is a small constant < 1


Example
-------
y = x^2 + 1
y' = 2x
G = 2x
h = .1
x[0] = -1
G[0] = -2
x[1] = x[0] - G[0]h
x[1] = -.8
G[1] = -1.6
x[2] = -.64
G[2] = -1.28
x[3] = -.512
G[3] = -1.024
x[4] = -.4096
G[4] = -.8192
x[5] = -.32768
G[∞] = 0
x[∞] = 0
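
A Python sketch of this iteration (function names are mine; for n variables, x and G would be vectors):

def gradient_descent(grad, x0, h, steps):
    # Steepest descent: x[k] = x[k-1] - h * G(x[k-1])
    x = x0
    for k in range(steps):
        G = grad(x)
        print(k, x, G)
        x = x - h * G
    return x

# y = x^2 + 1, so G = y' = 2x; h = .1 and x[0] = -1 as in the example
gradient_descent(lambda x: 2 * x, -1.0, 0.1, 6)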


------------------------------------------------------------------------------


Numerical Methods to solve Differential Equations
- Some differential equations have no analytical solution, so their solutions must be approximated using numerical methods
- Euler's method