CS314 – Notes (Week 7)

 

 

Created by: Hans Kristian Hartono

CS314 – Summer 2010

Monday, July 26, 2010

Numerical Integration:

We want to estimate the area A under a curve y = f(x) over an interval [a, b].

 

Trapezoidal Rule:

In General:

Divide [a, b] into M subintervals of equal width h = (b - a)/M, with nodes x_k = a + k*h for k = 0, 1, ..., M. Then

    Integral[a, b] f(x) dx ~ (h/2) [f(x0) + 2 f(x1) + 2 f(x2) + ... + 2 f(x_{M-1}) + f(xM)]    (Trapezoidal Rule)
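The composite trapezoidal rule can be sketched as follows (a minimal Python sketch; the function name and parameters are illustrative, not from the notes):

```python
def trapezoidal(f, a, b, M):
    """Composite trapezoidal rule with M subintervals of width h = (b - a)/M."""
    h = (b - a) / M
    # endpoints get weight 1/2, interior nodes get weight 1
    total = (f(a) + f(b)) / 2.0
    for k in range(1, M):
        total += f(a + k * h)
    return h * total
```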

 

Example:

a.

M = 4 (x0, x1, x2, x3, x4)

h = 0.5

 

b.

M = 3

h =

Exact Solution:

 

Simpson Rule:

Approximates the function by a quadratic polynomial through every 3 consecutive points.

 

Using Lagrange Polynomials to obtain P2(x):

Let,

Now if the interval [a, b] is subdivided into 2M subintervals of equal width h = (b - a)/(2M), with nodes x_k = a + k*h:

In General:

    Integral[a, b] f(x) dx ~ (h/3) [f(x0) + 4 f(x1) + 2 f(x2) + 4 f(x3) + ... + 2 f(x_{2M-2}) + 4 f(x_{2M-1}) + f(x_{2M})]    (Simpson Rule)
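The composite Simpson rule can be sketched the same way (a Python sketch with illustrative names; interior odd-indexed nodes get weight 4, even-indexed ones weight 2):

```python
def simpson(f, a, b, M):
    """Composite Simpson rule over 2M subintervals of width h = (b - a)/(2M)."""
    h = (b - a) / (2 * M)
    total = f(a) + f(b)
    for k in range(1, 2 * M):
        # odd nodes are the midpoints of each parabola: weight 4; even nodes: weight 2
        total += (4 if k % 2 == 1 else 2) * f(a + k * h)
    return h * total / 3.0
```

Since Simpson's rule is exact for cubics, simpson(lambda x: x**3, 0, 1, 2) returns 0.25 up to rounding.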

Tuesday, July 27, 2010

Example Simpson Rule:

a. 2M = 4; M = 2

*where the exact value is 1

b. with 2M = 2; M = 1 (using only 1 parabola)

 

Numerical Optimization:

Obtain the minimum (or maximum) of a function.

Definitions:

- A local maximum value at x = p exists if f(x) <= f(p) for all x in some interval containing p.

- A local minimum value at x = p exists if f(x) >= f(p) for all x in some interval containing p.

If there is a maximum or a minimum value at x = p (and f is differentiable there), then f'(p) = 0.

- If f”(p) > 0 then f(p) is a local minimum.

- If f”(p) < 0 then f(p) is a local maximum.

 

Minimization Using Derivatives:

We can obtain the minimum or maximum of a function f(x) by differentiating it and solving the equation f'(x) = 0 with the bisection, secant, or Newton method.
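As a sketch of this idea, Newton's method can be applied to f'(x) = 0 (the function and starting point below are illustrative examples, not from the notes):

```python
def newton_min(df, d2f, x0, tol=1e-10, max_iter=100):
    """Find a critical point of f by applying Newton's method to f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)   # Newton step for the equation f'(x) = 0
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^2 - 4x has f'(x) = 2x - 4 and f''(x) = 2 > 0,
# so the critical point x = 2 is a local minimum.
x_min = newton_min(lambda x: 2 * x - 4, lambda x: 2.0, x0=0.0)
```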

 

Minimization using steepest Descent / Gradient Method:

Assume we want to minimize f(X), a function of N variables, where X = (x1, x2, ..., xN). The gradient of f(X) is a vector function defined as:

grad f(X) = (df/dx1, df/dx2, ..., df/dxN)

From the concept of gradient we know that the gradient vector points in the direction of greatest increase of f(X).

We can therefore design an optimization method where we take a small step in the direction of the gradient, reevaluate the gradient, take another step, and so on until we reach the top.

- Maximization: X_{k+1} = X_k + h grad f(X_k)  (h is a small constant)

- Minimization: X_{k+1} = X_k - h grad f(X_k)
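The minimization update X_{k+1} = X_k - h grad f(X_k) can be sketched as (the test function and step size below are illustrative assumptions):

```python
def gradient_descent(grad, x0, h=0.01, tol=1e-8, max_iter=100000):
    """Minimize f by repeatedly stepping against its gradient:
    X_{k+1} = X_k - h * grad f(X_k), until the gradient is nearly zero."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        x = [xi - h * gi for xi, gi in zip(x, g)]
        if max(abs(gi) for gi in g) < tol:
            break
    return x

# Example: f(x, y) = (x - 1)^2 + (y + 2)^2 with gradient (2(x - 1), 2(y + 2));
# the minimum is at (1, -2).
xmin = gradient_descent(lambda p: [2 * (p[0] - 1), 2 * (p[1] + 2)], [0.0, 0.0])
```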

 

Example:

Find minimum:

 

Numerical Solution of Differential Equations:

Consider the equation:

Solution:

Some differential equations are impossible to solve analytically, so we have to approximate the solution with numerical methods.

 

Euler’s Method:

Let [a, b] be the interval over which we want to find the solution of y' = f(t, y) with y(a) = y0.

We will find a set of points (t0, y0), (t1, y1), ..., (tM, yM) that approximate the solution y(t).

First we divide the interval [a, b] into M equal subintervals:

    h = (b - a)/M    (h is called the step size)

Also we make t_k = a + k*h; for k = 0, 1, 2, ..., M

We want to solve y' = f(t, y) over [t0, ..., tM] with y(t0) = y0.

Using Taylor expansion to approximate y(t) around t0:

    y(t0 + h) = y(t0) + h y'(t0) + (h^2/2) y''(c)    for some c between t0 and t0 + h

Now obtain, using y'(t0) = f(t0, y(t0)):

    y(t1) ~ y(t0) + h f(t0, y0)

If the step size h is small enough we can neglect the second-order error term.

    y1 = y0 + h f(t0, y0)    (Euler's approximation)

In general:

    y_{k+1} = y_k + h f(t_k, y_k)    for k = 0, 1, ..., M-1
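The recurrence y_{k+1} = y_k + h f(t_k, y_k) can be sketched as (the test equation y' = -y is an illustrative choice, not from the notes):

```python
def euler(f, t0, y0, b, M):
    """Euler's method: y_{k+1} = y_k + h * f(t_k, y_k) on [t0, b] with M steps."""
    h = (b - t0) / M
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(M):
        y = y + h * f(t, y)
        t = t + h
        points.append((t, y))
    return points

# Example: y' = -y, y(0) = 1; the exact solution is y(t) = e^{-t}.
pts = euler(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
```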

 

Wednesday, July 28, 2010

Example:

* Analytical solution of  is

Exact = 0.6897 ; where the Euler's estimate = 0.648

Exact = 0.5907 ; where the Euler's estimate = 0.5123

 

Heun’s Method:

We want to solve y' = f(t, y) over [a, b] with y(a) = y0.

We can use the fundamental theorem of calculus and integrate y'(t) over [t0, t1]:

    y(t1) - y(t0) = Integral[t0, t1] f(t, y(t)) dt

or

    y(t1) = y(t0) + Integral[t0, t1] f(t, y(t)) dt

Now we can use any numerical integration method to approximate the integral.

 

Using the trapezoidal rule:

    y(t1) ~ y(t0) + (h/2) [f(t0, y0) + f(t1, y(t1))]

Observe that we still need to know y(t1). For this value we use Euler's approximation:

    p1 = y0 + h f(t0, y0)

In general:

Predictor:    p_{k+1} = y_k + h f(t_k, y_k)

Corrector:    y_{k+1} = y_k + (h/2) [f(t_k, y_k) + f(t_{k+1}, p_{k+1})]

Euler's approximation is used as a predictor and the trapezoidal integral is used as a corrector.
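The predictor-corrector pair can be sketched as (again using y' = -y as an illustrative test equation):

```python
def heun(f, t0, y0, b, M):
    """Heun's method: Euler predictor followed by a trapezoidal-rule corrector."""
    h = (b - t0) / M
    t, y = t0, y0
    for _ in range(M):
        p = y + h * f(t, y)                          # predictor (Euler step)
        y = y + (h / 2) * (f(t, y) + f(t + h, p))    # corrector (trapezoidal rule)
        t = t + h
    return y

# Example: y' = -y, y(0) = 1; exact y(1) = e^{-1} = 0.367879...
y1 = heun(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
```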

Example:

Exact = 0.6897 ; Euler = 0.648

Exact = 0.5907 ; Euler = 0.5123

 

Taylor Expansion Method:

Using Taylor's approximation, we want to solve y' = f(t, y) with y(t0) = y0.

From Taylor expansion:

    y_{k+1} = y_k + h y'(t_k) + (h^2/2!) y''(t_k) + (h^3/3!) y'''(t_k) + ... + (h^N/N!) y^(N)(t_k)

However, we need y'', y''', etc. We can obtain them by differentiating y' = f(t, y).

Example:

Solve

 use N = 3 (up to y’’’ term)

Exact = 0.5907 ; Taylor = 0.59123 ; Heun = 0.6001

 

Thursday, July 29, 2010

Runge-Kutta Method of order 4:

It matches the accuracy of the Taylor series method of order N = 4 but does not require the computation of higher derivatives.

    y_{k+1} = y_k + (h/6) (k1 + 2 k2 + 2 k3 + k4)

Where,

    k1 = f(t_k, y_k)
    k2 = f(t_k + h/2, y_k + (h/2) k1)
    k3 = f(t_k + h/2, y_k + (h/2) k2)
    k4 = f(t_k + h, y_k + h k3)

*see an advanced text on numerical analysis for proof.
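The classical fourth-order Runge-Kutta step can be sketched as (y' = -y is an illustrative test equation):

```python
def rk4(f, t0, y0, b, M):
    """Classical fourth-order Runge-Kutta method with M steps on [t0, b]."""
    h = (b - t0) / M
    t, y = t0, y0
    for _ in range(M):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + (h / 2) * k1)
        k3 = f(t + h / 2, y + (h / 2) * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return y
```

Even with only 10 steps the result for y' = -y, y(0) = 1 agrees with e^{-1} to several decimal places.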

Example:

Exact = 0.6897

 

Predictor Corrector Methods:

The methods of Heun, Taylor and Runge-Kutta are single-step methods; that is, they use only (t_k, y_k) to compute (t_{k+1}, y_{k+1}).

A multistep method uses more than one of the previously computed points (t_j, y_j).

Adams-Bashforth-Moulton Method:

It uses a Lagrange polynomial to approximate f(t, y(t)) and then integrates over the interval [t_k, t_{k+1}] to obtain the predictor p_{k+1}.

Then a corrector is applied to p_{k+1}. The corrector is also developed by constructing a second-order Lagrange polynomial.
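As an illustration of the predictor-corrector idea, here is a sketch of the widely used four-step Adams-Bashforth predictor with the Adams-Moulton corrector, bootstrapped with RK4 for the first three points. Note this is the standard fourth-order variant, an assumption on my part; the notes describe a lower-order construction:

```python
def abm4(f, t0, y0, b, M):
    """Adams-Bashforth-Moulton predictor-corrector (4-step variant).
    The first three points are bootstrapped with RK4 steps."""
    h = (b - t0) / M
    ts = [t0 + k * h for k in range(M + 1)]
    ys = [y0]
    for k in range(3):                       # bootstrap with RK4
        t, y = ts[k], ys[k]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + (h / 2) * k1)
        k3 = f(t + h / 2, y + (h / 2) * k2)
        k4 = f(t + h, y + h * k3)
        ys.append(y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4))
    fs = [f(ts[k], ys[k]) for k in range(4)]
    for k in range(3, M):
        # Adams-Bashforth predictor over [t_k, t_{k+1}]
        p = ys[k] + (h / 24) * (55 * fs[k] - 59 * fs[k-1] + 37 * fs[k-2] - 9 * fs[k-3])
        # Adams-Moulton corrector using f at the predicted point
        y_next = ys[k] + (h / 24) * (9 * f(ts[k+1], p) + 19 * fs[k] - 5 * fs[k-1] + fs[k-2])
        ys.append(y_next)
        fs.append(f(ts[k+1], y_next))
    return ys[-1]
```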

 

Systems of differential Equations:

Assume we have the equations:

with

The solution consists of functions x(t) and y(t) whose derivatives satisfy x'(t) = f(t, x, y) and y'(t) = g(t, x, y).

 

Example:

1) x’ = x + 2y       x(0) = 6

2) y’ = 3x + 2y     y(0) = 4

Exact solution:

3) x(t) = 4e^{4t} + 2e^{-t}

4) y(t) = 6e^{4t} - 2e^{-t}

You can verify this by differentiating 3) and 4) and substituting into 1) and 2).

 

Euler Method for Systems of Differential Equations:

We can extend Euler’s method for systems of differential equations.

We can substitute:

    x_{k+1} = x_k + h f(t_k, x_k, y_k)

    y_{k+1} = y_k + h g(t_k, x_k, y_k)

In general, applying these two update formulas for k = 0, 1, ..., M-1 gives Euler's method for a system of differential equations.
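The coupled updates can be sketched as, applied to the example system x' = x + 2y, y' = 3x + 2y above (the step count is an illustrative choice):

```python
def euler_system(f, g, t0, x0, y0, b, M):
    """Euler's method for the system x' = f(t, x, y), y' = g(t, x, y)."""
    h = (b - t0) / M
    t, x, y = t0, x0, y0
    for _ in range(M):
        # update x and y simultaneously from the current values
        x, y = x + h * f(t, x, y), y + h * g(t, x, y)
        t = t + h
    return x, y

# x' = x + 2y, y' = 3x + 2y with x(0) = 6, y(0) = 4, integrated to t = 0.1;
# the exact solution gives x(0.1) = 4e^{0.4} + 2e^{-0.1} = 7.77697...
x1, y1 = euler_system(lambda t, x, y: x + 2 * y,
                      lambda t, x, y: 3 * x + 2 * y,
                      0.0, 6.0, 4.0, 0.1, 1000)
```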

 

Friday, July 30, 2010

However, the Euler method does not give good accuracy.

Runge-Kutta:

Runge-Kutta can be extended to systems of differential equations.

The formulas are:

    x_{k+1} = x_k + (h/6) (f1 + 2 f2 + 2 f3 + f4)
    y_{k+1} = y_k + (h/6) (g1 + 2 g2 + 2 g3 + g4)

Where,

    f1 = f(t_k, x_k, y_k)                                  g1 = g(t_k, x_k, y_k)
    f2 = f(t_k + h/2, x_k + (h/2) f1, y_k + (h/2) g1)      g2 = g(t_k + h/2, x_k + (h/2) f1, y_k + (h/2) g1)
    f3 = f(t_k + h/2, x_k + (h/2) f2, y_k + (h/2) g2)      g3 = g(t_k + h/2, x_k + (h/2) f2, y_k + (h/2) g2)
    f4 = f(t_k + h, x_k + h f3, y_k + h g3)                g4 = g(t_k + h, x_k + h f3, y_k + h g3)

 

 

Higher order Differential Equations:

Higher order differential equations involve higher derivatives, not only x’(t) but also x’’(t), x’’’(t)…

 

For example:

We can turn a second order differential equation into a system of two first order differential equations by substitution. First solve for x'':

    x'' = g(t, x, x')

Introducing the new variable y = x', the second order differential equation becomes the system:

    x' = y
    y' = g(t, x, y)

 

Example:

With x(0) = 1 and x'(0) = 3

Let,

    y = x', so that y(0) = x'(0) = 3

So, we have the system of differential equations:

    x' = y, with x(0) = 1
    y' = (the original equation solved for x''), with y(0) = 3

So, now this system can be solved using the Euler or Runge-Kutta methods.
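Since the equation in this example is not reproduced in the notes, here is a sketch of the same reduction applied to the hypothetical equation x'' = -x with the stated initial conditions x(0) = 1, x'(0) = 3 (whose exact solution is x(t) = cos t + 3 sin t), solved with Euler steps:

```python
def solve_second_order(accel, t0, x0, v0, b, M):
    """Reduce x'' = accel(t, x, x') to the first-order system
    x' = v, v' = accel(t, x, v), and integrate it with Euler's method."""
    h = (b - t0) / M
    t, x, v = t0, x0, v0
    for _ in range(M):
        x, v = x + h * v, v + h * accel(t, x, v)
        t = t + h
    return x, v

# Hypothetical example: x'' = -x with x(0) = 1, x'(0) = 3;
# exact solution x(t) = cos(t) + 3 sin(t), so x(1) = 3.0647...
xT, vT = solve_second_order(lambda t, x, v: -x, 0.0, 1.0, 3.0, 1.0, 10000)
```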

 

Final Exam – Wednesday, August 4, 2010:

90% second part of the course

10% first part

- Curve fitting:

   - least squares line

   - least squares for non-linear equations

   - transformations for data linearization

   - polynomial fitting

   - spline functions

      + The 5 properties I to V

      + proof

      + How to obtain the coefficient using formulas

      + end point constraints

- Numerical Differentiation:

   - Limit of the difference quotient

   - central difference formula of order O(h^2)

   - central difference formula of order O(h^4)

- Numerical Integration:

   - Trapezoidal rule

   - Simpson rule (use quadratic equation using 3 points)

- Numerical Optimization:

   - Minimization using derivatives f’(x) = 0

   - Steepest Descent / Gradient method

- Solution of Differential Equations:

   - Euler’s method

   - Heun’s method

   - Taylor series method

   - Runge-Kutta method of order 4

   - Predictor-corrector methods

   - Systems of differential equations

      + Euler method

      + Runge-Kutta of order 4

   - Higher order Differential equations

      + Transformation to system of first order differential equations.

* In the exam you can bring one page (one side) with formulas from the 2nd half of the course and one page (one side) with formulas from the 1st half of the course.