7/14/03

CS314 Midterm Review

·        Representation of Floating Point Numbers

o       Floating Point Numbers

§         Mantissa and Exponent

o       Fixed Point Numbers

o       SPARC Representation

·        Propagation Errors

·        Solution of nonlinear equations

o       Fixed point of an equation

o       Fixed point theorem

o       Types of convergence and divergence

§         Monotone convergence

§         Oscillating convergence

§         Monotone divergence

§         Oscillation divergence

o       Bisection Method

o       False Position Method (Regula Falsi)

§         Criteria of convergence

·        Horizontal Convergence: |y_k| < ε

·        Vertical Convergence: |x_k - x_{k-1}| < ε

o       Newton Raphson Method

§         Similarities of Newton Raphson and Taylor Expansion

§         Numerical Approximation of the derivative of a function

§         Order of Convergence

§         Limits of Newton Raphson Method

o       Secant Method

·        Solution of linear systems AX=B

o       Properties of Vectors

o       Vector Algebra

o       Matrices

o       Properties of Matrices

o       Special Matrices

§         Zero Matrix

§         Identity Matrix

o       Matrix Multiplication

o       Inverse of a Matrix

o       Determinants

o       Upper Triangular Systems and solution using back substitution

o       Elementary Matrix Transformations

o       Gauss Elimination

§         Pivoting and choosing pivots

o       LU Factorization (Triangular Factorization)

§         Forward and back substitution

o       Gauss Elimination vs. LU Factorization (when to use each one)

o       Iterative Methods for linear equations

§         Jacobi

§         Gauss-Seidel

§         Criteria of Convergence using strictly diagonal matrices

o       Solutions of systems of nonlinear equations using the Newton Method

·        Interpolation and polynomial approximation

o       Taylor expansion

o       Horner's method for evaluating polynomials (factor out the x's)

o       Lagrange approximation

o       Newton polynomials

§         Divided Differences

o       Padé Approximation using the quotient of two polynomials

 

Materials to study

·        Notes

·        Book

·        Homework (Problems from homework 2 and 3)

·        HW1 (FPRepresentation)

·        Do more problems in the book

 

Exam:

Thursday 7-9 pm in BRNG 2280

 

Bring:

Calculator

Half-page 1 side with anything you want written on it

 

7/15/03

Curve Fitting

 

Suppose we have data points (x_1, y_1), …, (x_N, y_N) and we want to fit them to an exponential curve

f(x) = y = C e^(A x)

 

We want to obtain the A and C that best fit the data points.

 

Using the least-squares procedure,

E(C,A) = Σ_{k=1}^{N} (f(x_k) - y_k)^2 = Σ_{k=1}^{N} (C e^(A x_k) - y_k)^2

∂E/∂C = Σ_{k=1}^{N} 2 (C e^(A x_k) - y_k) e^(A x_k) = 2 Σ_{k=1}^{N} (C e^(2A x_k) - e^(A x_k) y_k) = 0

(1)  C Σ_{k=1}^{N} (e^(2A x_k)) = Σ_{k=1}^{N} (e^(A x_k) y_k)

∂E/∂A = Σ_{k=1}^{N} 2 (C e^(A x_k) - y_k) C x_k e^(A x_k) = 0

Σ_{k=1}^{N} (C^2 x_k e^(2A x_k) - y_k C x_k e^(A x_k)) = 0

(2)  C^2 Σ_{k=1}^{N} (x_k e^(2A x_k)) - C Σ_{k=1}^{N} (y_k x_k e^(A x_k)) = 0

 

(1) and (2) give us a system of two nonlinear equations in C and A that can be solved using Newton's method. 

 

This method gives an accurate result for minimizing the least-squares error.  However, a slightly less accurate result can be obtained with the data-linearization method covered next, which does not need Newton's method.

 

Transformations Using Data Linearization

 

It is possible to reduce the problem of fitting data to curves like y = C e^(A x) or y = A ln(x) + B by transforming these equations into a linear equation.  Then the constants can be obtained using least squares for a line. 

 

Example:

We want to fit data to the curve y = C e^(A x).

ln y = ln(C e^(A x))

ln y = ln C + ln(e^(A x))

ln y = ln C + A x

Let Z = ln y and B = ln C; then

Z = A x + B

 

Then we try to fit data into the linearized function and obtain A and B. 

 

Example:

Assume the data

x_k       y_k

0          1.5

1          2.5

2          3.5

3          5.0

4          7.5

 

We want to fit this data to the curve y = C e^(A x)

ln y = ln C + A x

Z = A x + B

Find the least-squares line for Z = A x + B. 

 

x_k       y_k       z_k = ln(y_k)      x_k^2     x_k z_k

0         1.5       0.4055             0         0

1         2.5       0.9163             1         0.9163

2         3.5       1.2528             4         2.5056

3         5.0       1.6094             9         4.8282

4         7.5       2.0149             16        8.0596

Σ: 10               6.1989             30        16.3097

 

Least squared line solution

 

0 = A Σ_{k=1}^{N} (x_k^2) + B Σ_{k=1}^{N} (x_k) - Σ_{k=1}^{N} (x_k z_k)

0 = A Σ_{k=1}^{N} (x_k) + B·N - Σ_{k=1}^{N} (z_k)

 

Plugging the summations into the least-squares line solution:

 

0 = A (30) + B (10) – (16.3097)

0 = A (10) + B(5) – (6.1989)

 

Multiplying the top row by 1 and the second row by -2 and then adding yields

 

0 = A(30-20) + B(10-10) -16.3097 + 2(6.1989)

A = (16.3097 – 2(6.1989))/10 = .3912

From the first equation

 

B = (16.3097 – A(30))/10 = (16.3097 – .3912(30))/10 = .4574

 

We have that B = ln C

C = e^B = e^0.4574 = 1.579

Therefore, the equation that best fits the data is

y = 1.579 e^(0.3912 x)

 

Using the nonlinear least-squares method with Newton's method (the first method explained for fitting y = C e^(A x)) we get

y = 1.6108995 e^(0.3875705 x)
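The linearized fit in the worked example can be checked in a few lines of Python; a minimal sketch (function and variable names are my own, not from the course):

```python
import math

# Data from the example above
xs = [0, 1, 2, 3, 4]
ys = [1.5, 2.5, 3.5, 5.0, 7.5]

def fit_exponential(xs, ys):
    """Fit y = C*e^(A*x) by linearizing: Z = ln y, B = ln C, so Z = A*x + B."""
    n = len(xs)
    zs = [math.log(y) for y in ys]
    sx = sum(xs)
    sz = sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    # Solve the 2x2 least-squares system for the line Z = A*x + B
    A = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    B = (sz - A * sx) / n
    C = math.exp(B)
    return A, C

A, C = fit_exponential(xs, ys)
print(A, C)   # A ≈ 0.3912, C ≈ 1.58
```

This reproduces the hand computation above (the tiny difference in C comes from rounding B to four digits by hand).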

 

Other substitutions that you can use to linearize an equation:

y = A/x + B        →        Z = 1/x  and  y = A·Z + B

 

y = C x^A                →        Z = ln x

ln y = ln(C x^A)                  W = ln y

ln y = ln C + A ln x              B = ln C

                                  W = A·Z + B

 

Other substitutions can be found in the book. 
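As one more worked case, the y = C x^A substitution above can be coded the same way as the exponential fit; a sketch assuming all x_k, y_k > 0 (names are my own):

```python
import math

def fit_power(xs, ys):
    """Fit y = C*x^A by linearizing: W = ln y, Z = ln x, B = ln C, so W = A*Z + B."""
    n = len(xs)
    zs = [math.log(x) for x in xs]
    ws = [math.log(y) for y in ys]
    sz, sw = sum(zs), sum(ws)
    szz = sum(z * z for z in zs)
    szw = sum(z * w for z, w in zip(zs, ws))
    # Least-squares line W = A*Z + B, then undo the substitution C = e^B
    A = (n * szw - sz * sw) / (n * szz - sz * sz)
    B = (sw - A * sz) / n
    return A, math.exp(B)

A, C = fit_power([1, 2, 3, 4], [2, 8, 18, 32])   # data lies exactly on y = 2*x^2
print(A, C)   # A ≈ 2, C ≈ 2
```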

 

7/16/03

Linear Least Squares

 

Suppose that you have N data points (x_1, y_1), …, (x_N, y_N) and you want to fit these points to the sum of M independent functions:

 

f(x) = c_1 f_1(x) + c_2 f_2(x) + … + c_M f_M(x)

 

We want c_1, c_2, …, c_M to minimize the square error in approximating (x_1, y_1), …, (x_N, y_N):

 

E(c_1, c_2, …, c_M) = Σ_{k=1}^{N} (f(x_k) - y_k)^2 = Σ_{k=1}^{N} (c_1 f_1(x_k) + c_2 f_2(x_k) + … + c_M f_M(x_k) - y_k)^2 = Σ_{k=1}^{N} (Σ_{j=1}^{M} c_j f_j(x_k) - y_k)^2

 

∂E/∂c_i = Σ_{k=1}^{N} 2·(Σ_{j=1}^{M} c_j f_j(x_k) - y_k)·f_i(x_k) = 0

Σ_{k=1}^{N} Σ_{j=1}^{M} (c_j f_j(x_k) f_i(x_k)) - Σ_{k=1}^{N} (y_k f_i(x_k)) = 0

for i = 1, …, M

 

This gives M linear equations in the M unknowns c_1, c_2, …, c_M.

Solving this system we get the best c_1, c_2, …, c_M to fit (x_1, y_1), …, (x_N, y_N) with

f(x) = c_1 f_1(x) + c_2 f_2(x) + … + c_M f_M(x)
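The M normal equations above can be assembled and solved directly with the Gauss elimination method from earlier in the course; a minimal sketch (function names are my own):

```python
def linear_least_squares(xs, ys, fs):
    """Fit f(x) = c_1 f_1(x) + ... + c_M f_M(x) by solving the M normal
    equations  sum_k sum_j c_j f_j(x_k) f_i(x_k) = sum_k y_k f_i(x_k)."""
    M = len(fs)
    # Build the M x M normal-equation system a·c = b
    a = [[sum(fj(x) * fi(x) for x in xs) for fj in fs] for fi in fs]
    b = [sum(y * fi(x) for x, y in zip(xs, ys)) for fi in fs]
    # Gauss elimination with partial pivoting, then back substitution
    for i in range(M):
        p = max(range(i, M), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, M):
            factor = a[r][i] / a[i][i]
            for col in range(i, M):
                a[r][col] -= factor * a[i][col]
            b[r] -= factor * b[i]
    c = [0.0] * M
    for i in range(M - 1, -1, -1):
        c[i] = (b[i] - sum(a[i][j] * c[j] for j in range(i + 1, M))) / a[i][i]
    return c

# Usage: fit a quadratic f(x) = c_1 + c_2*x + c_3*x^2 to data from y = 1 + x^2
coeffs = linear_least_squares([0, 1, 2, 3], [1, 2, 5, 10],
                              [lambda x: 1, lambda x: x, lambda x: x * x])
print(coeffs)   # ≈ [1.0, 0.0, 1.0]
```

Because the data here lies exactly on a quadratic, the least-squares solution recovers the true coefficients.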

 

Specific case of linear least squares: polynomial fitting

 

Given a set of data points (x_1, y_1), …, (x_N, y_N), we want to find the c_1, c_2, …, c_{M+1} in the function

f(x) = c_1 + c_2 x + c_3 x^2 + … + c_{M+1} x^M

that best fits the data.  (f_1(x) = 1, f_2(x) = x, f_3(x) = x^2, etc.)

 

Example:

We want to find A, B, C for

f(x) = A x^2 + B x + C

that best fits (x_1, y_1), …, (x_N, y_N), minimizing the square error. 

 

E(A,B,C) = Σ_{k=1}^{N} (A x_k^2 + B x_k + C - y_k)^2

 

∂E/∂A = Σ_{k=1}^{N} 2·(A x_k^2 + B x_k + C - y_k)·x_k^2 = 0

(1) A Σ_{k=1}^{N} x_k^4 + B Σ_{k=1}^{N} x_k^3 + C Σ_{k=1}^{N} x_k^2 = Σ_{k=1}^{N} x_k^2 y_k

 

∂E/∂B = Σ_{k=1}^{N} 2·(A x_k^2 + B x_k + C - y_k)·x_k = 0

(2) A Σ_{k=1}^{N} x_k^3 + B Σ_{k=1}^{N} x_k^2 + C Σ_{k=1}^{N} x_k = Σ_{k=1}^{N} x_k y_k

 

∂E/∂C = Σ_{k=1}^{N} 2·(A x_k^2 + B x_k + C - y_k) = 0

(3) A Σ_{k=1}^{N} x_k^2 + B Σ_{k=1}^{N} x_k + C·N = Σ_{k=1}^{N} y_k

 

(1), (2), and (3) form a system of 3 linear equations in 3 variables whose solution gives the best A, B, C to fit the function

f(x) = A x^2 + B x + C

 

The problem with using high-degree polynomials to approximate a set of data is that the higher the degree of the polynomial, the more maxima and minima it can have, making the curve wiggle. 

 

To avoid this problem, we use spline functions. 

 

Interpolation by spline functions

 

Splines are a set of curves that fit a set of points piecewise.  Instead of having one single polynomial to fit all the points, we have multiple polynomials connected together. 

S_1(x) and S_2(x) are continuous in y but they are not continuous in y′:

S_1(x_2) = S_2(x_2)

S_1′(x_2) ≠ S_2′(x_2)

 

The simplest spline uses polynomials of degree 1. 

S_1(x) = y_0 + ((y_1 - y_0)/(x_1 - x_0))·(x - x_0)

when x_0 ≤ x ≤ x_1

S_2(x) = y_1 + ((y_2 - y_1)/(x_2 - x_1))·(x - x_1)

when x_1 ≤ x ≤ x_2

etc.
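A degree-1 spline is just linear interpolation between consecutive points; a minimal Python sketch (helper name is my own):

```python
def linear_spline(xs, ys, x):
    """Evaluate the piecewise-linear spline through (xs[k], ys[k]) at x.
    Assumes xs is sorted and xs[0] <= x <= xs[-1]."""
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            # S_k(x) = y_k + ((y_{k+1} - y_k)/(x_{k+1} - x_k)) * (x - x_k)
            return ys[k] + (ys[k + 1] - ys[k]) / (xs[k + 1] - xs[k]) * (x - xs[k])
    raise ValueError("x outside the data range")

print(linear_spline([0, 1, 2], [1, 3, 2], 0.5))   # 2.0, halfway from y=1 to y=3
```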

 

We could take the same approach using quadratic splines.  In that case, we use a different spline through every 3 points.  However, where one spline connects to the next, there will be an abrupt change in the derivative. 

 

Piecewise Cubic Splines

 

To have continuity also in the first and second derivatives, we use points before and beyond the interval of the spline.

Cubic splines are the most common because they can be built to satisfy the following properties:

 

S(x_k) = y_k

Guarantees that the splines pass through the data points

 

S_k(x_{k+1}) = S_{k+1}(x_{k+1})

Continuous in y

 

S′_k(x_{k+1}) = S′_{k+1}(x_{k+1})

Continuity in the first derivative

 

S″_k(x_{k+1}) = S″_{k+1}(x_{k+1})

Continuity in the second derivative

 

7/17/03

Piecewise Cubic Splines

We want to fit N+1 data points (x_0, y_0), …, (x_N, y_N), where a = x_0 < x_1 < … < x_N = b, with N cubic polynomials S_k(x) with coefficients S_{k,0}, S_{k,1}, S_{k,2}, and S_{k,3}.

We want S_k(x), k = 0, …, N-1, to satisfy

I.                    S(x) = S_k(x) = S_{k,0} + S_{k,1}(x - x_k) + S_{k,2}(x - x_k)^2 + S_{k,3}(x - x_k)^3

for x_k ≤ x ≤ x_{k+1} and k = 0, 1, …, N-1

We will use a different cubic polynomial between each consecutive pair of points.

II.                 S(x_k) = y_k  for k = 0, 1, …, N

The spline passes through all data points.

III.               S_k(x_{k+1}) = S_{k+1}(x_{k+1})  for k = 0, 1, …, N-2

The end of a spline touches the beginning of the next spline.

IV.              S′_k(x_{k+1}) = S′_{k+1}(x_{k+1})  for k = 0, 1, …, N-2

Continuity in the first derivative at the point of contact between 2 splines.

V.                 S″_k(x_{k+1}) = S″_{k+1}(x_{k+1})  for k = 0, 1, …, N-2

Continuity in the second derivative at the point of contact between two splines. 

 

For each polynomial S_k(x) there are four coefficients to be determined:

S_{k,0}, S_{k,1}, S_{k,2}, S_{k,3} (4 degrees of freedom)

 

Also we have N splines S_0(x), S_1(x), …, S_{N-1}(x)

 

Therefore there are 4N coefficients to be determined. 

 

How do we obtain these coefficients?

 

We need 4N equations that have to come from 4N constraints. 

 

From II, with N+1 data points, we get N+1 equations

 

From III, IV, and V each supplies N-1 equations = 3(N-1)

 

N+1+3(N-1) = 4N-2 equations. 

Two equations are missing for the problem to be fully constrained.  This leaves two degrees of freedom, which will involve S′(x) and S″(x) at x_0 and x_N; they will be discussed later. 

 

Since S_k(x) is cubic, its second derivative S″_k(x) is a line.  We can obtain the equation of this line using Lagrange approximation. 

 

S″_k(x) = S″(x_k)·(x - x_{k+1})/(x_k - x_{k+1}) + S″(x_{k+1})·(x - x_k)/(x_{k+1} - x_k)

so S″_k(x) = S″(x_k) when x = x_k

and S″_k(x) = S″(x_{k+1}) when x = x_{k+1}

Define constants:

m_k = S″(x_k)

m_{k+1} = S″(x_{k+1})

h_k = x_{k+1} - x_k

 

S″_k(x) = m_k(x - x_{k+1})/(-h_k) + m_{k+1}(x - x_k)/h_k

= m_k(x_{k+1} - x)/h_k + m_{k+1}(x - x_k)/h_k

 

To obtain S_k(x) we integrate twice:

S_k(x) = ∫∫ S″_k(x) dx dx = ∫∫ [m_k(x_{k+1} - x)/h_k + m_{k+1}(x - x_k)/h_k] dx dx

= ∫ [-(1/2)(m_k/h_k)(x_{k+1} - x)^2 + (1/2)(m_{k+1}/h_k)(x - x_k)^2 + C_1] dx

= (1/6)(m_k/h_k)(x_{k+1} - x)^3 + (1/6)(m_{k+1}/h_k)(x - x_k)^3 + C_1 x + C_2

(We express C_1 x + C_2 as p_k(x_{k+1} - x) + q_k(x - x_k))

= (m_k/6h_k)(x_{k+1} - x)^3 + (m_{k+1}/6h_k)(x - x_k)^3 + p_k(x_{k+1} - x) + q_k(x - x_k)

 

Substitute x = x_k and x = x_{k+1}, using S(x_k) = y_k and S(x_{k+1}) = y_{k+1}:

S_k(x_k) = y_k = (m_k/6h_k)(x_{k+1} - x_k)^3 + (m_{k+1}/6h_k)(x_k - x_k)^3 + p_k(x_{k+1} - x_k) + q_k(x_k - x_k)

y_k = (m_k/6h_k)(h_k)^3 + p_k h_k = (1/6)m_k h_k^2 + p_k h_k

S_k(x_{k+1}) = y_{k+1} = (m_k/6h_k)(x_{k+1} - x_{k+1})^3 + (m_{k+1}/6h_k)(x_{k+1} - x_k)^3 + p_k(x_{k+1} - x_{k+1}) + q_k(x_{k+1} - x_k)

= (m_{k+1}/6h_k)(h_k)^3 + q_k h_k = (1/6)m_{k+1} h_k^2 + q_k h_k

 

(1) y_k = (1/6)m_k h_k^2 + p_k h_k

(2) y_{k+1} = (1/6)m_{k+1} h_k^2 + q_k h_k

 

We use (1) to obtain p_k:

p_k = (y_k - m_k h_k^2/6)(1/h_k) = y_k/h_k - m_k h_k/6

 

We use (2) to obtain q_k:

q_k = (y_{k+1} - m_{k+1} h_k^2/6)(1/h_k) = y_{k+1}/h_k - m_{k+1} h_k/6

 

Substituting p_k and q_k into

S_k(x) = (m_k/6h_k)(x_{k+1} - x)^3 + (m_{k+1}/6h_k)(x - x_k)^3 + p_k(x_{k+1} - x) + q_k(x - x_k)

yields

S_k(x) = (m_k/6h_k)(x_{k+1} - x)^3 + (m_{k+1}/6h_k)(x - x_k)^3 + (y_k/h_k - m_k h_k/6)(x_{k+1} - x) + (y_{k+1}/h_k - m_{k+1} h_k/6)(x - x_k)

 

We are almost done.  The only terms we still need to find are the m_k (the second derivatives at the nodes).
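Once the m_k are known (the equations that determine them come next), each piece can be evaluated directly from the formula above. A minimal sketch, with the m_k taken as given inputs (function name is my own):

```python
def eval_cubic_spline(xs, ys, ms, x):
    """Evaluate the cubic spline piece
       S_k(x) = (m_k/6h_k)(x_{k+1}-x)^3 + (m_{k+1}/6h_k)(x-x_k)^3
                + p_k(x_{k+1}-x) + q_k(x-x_k)
    where ms[k] = S''(x_k) and h_k = x_{k+1} - x_k."""
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            h = xs[k + 1] - xs[k]
            p = ys[k] / h - ms[k] * h / 6          # p_k from equation (1)
            q = ys[k + 1] / h - ms[k + 1] * h / 6  # q_k from equation (2)
            return (ms[k] / (6 * h) * (xs[k + 1] - x) ** 3
                    + ms[k + 1] / (6 * h) * (x - xs[k]) ** 3
                    + p * (xs[k + 1] - x) + q * (x - xs[k]))
    raise ValueError("x outside the data range")

# Sanity check: with all m_k = 0 the cubic terms vanish and each piece
# reduces to the straight line through its endpoints.
print(eval_cubic_spline([0, 1, 2], [0, 2, 4], [0, 0, 0], 0.5))   # 1.0
```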