Notes for July 14th to July 18th

 

 

Monday, July 14

 

Midterm Review

 

Representation of floating point numbers

            Floating point numbers

                        Mantissa & exponent

            Fixed point numbers

            SPARC representation of floating point numbers

 

Propagation of Errors

 

Solution of Non-Linear Equations

            Fixed point of an equation

            Fixed point theorem

            Types of convergence & divergence

                        Monotone convergence & monotone divergence

                        Oscillating convergence & oscillating divergence

            Bisection Method

            False Position Method [AKA Regula Falsi]

            Criteria for Convergence

                        Horizontal Convergence – Point is within a horizontal band (|Yk| < Eps)

                        Vertical Convergence – Difference between two successive X’s is small (|Xk+1 – Xk| < Eps)

            Newton-Raphson

                        Similarities with Taylor expansion

                        Numerical Approximation of the derivative of a function

                        Limitations of Newton Raphson

            Secant Method – Similar to Newton-Raphson, but uses a secant line through 2 points instead of the tangent, so no derivative is needed.


Solution of Linear Systems of the form Ax = B

            Properties of Vectors

            Vector Algebra

            Matrices

            Properties of Matrices

            Special Matrices (Zero & Identity Matrices)

            Matrix Multiplication

            Inverse of a Matrix (and how to obtain it)

            Determinants

            Upper Triangular System & Solution via Back substitution

            Elementary Matrix Transformations (three kinds)

            Gaussian Elimination & choosing pivots

            LU Factorization/Triangular Factorization

            Gaussian Elimination v. LU, when to use which

            Iterative Methods for Approximate Solutions of Linear Systems

                        Gauss Seidel

                        Jacobi

                        When to use – sparse matrices

                        Convergence criterion – strictly diagonally dominant matrices

 

Solutions of Systems of Non-Linear Equations

            Newton Method

Interpolation & Polynomial Approximation

            Taylor Expansion

            Horner’s Method for evaluating polynomials [repeatedly factor out x]

            Lagrange Approximation – Easy to build, hard to evaluate

            Newton Polynomials

                        Divided Differences

            Padé Approximations using a quotient of two polynomials [Q(x)/R(x)]

 

Material to Study

-         Notes

-         Book

-         Homework

-         Projects

-         Online Notes

 

Midterm is on Thursday, 7:00-9:00 PM, BRNG 2280

 

 

Tuesday, July 15th

 

Curve Fitting

 

Suppose we have (X1, Y1) .. (Xn, Yn) & we want to fit them to an exponential curve, e.g. f(x) = y = C*e^(A*x)

 

We want to find the A & C that best fit the data, again using the least-squares procedure.

Thus we want to minimize the error function:

            E (C, A) = ∑(for k = 1 to n) (f(Xk) – Yk)^2

            E (C, A) = ∑(for k = 1 to n) (C*e^(A*Xk) – Yk)^2

 

Take the partial derivative of E with respect to C & A to get the two equations needed:

1)      C * ∑ (for k = 1 to n) e^(2*A*Xk) = ∑ (for k = 1 to n) Yk*e^(A*Xk)

2)      C^2*∑ (for k = 1 to n) Xk*e^(2*A*Xk) – C*∑ (for k = 1 to n) Yk*Xk*e^(A*Xk) = 0

 

Now we have a system of two non-linear equations that is solvable, in this case using Newton's method. This will give an accurate result for the least-squares error. However, it is also possible to get an approximate solution by using linearization, which is less accurate but simpler.
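
A minimal sketch of the direct non-linear solve, assuming NumPy/SciPy are available. The data points and the starting guess are made up for illustration, and scipy.optimize.fsolve is a Newton-like hybrid solver rather than the textbook Newton iteration:

import numpy as np
from scipy.optimize import fsolve

xk = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # hypothetical data
yk = np.array([1.5, 2.5, 3.5, 5.0, 7.5])      # roughly exponential

def normal_equations(p):
    C, A = p
    e = np.exp(A * xk)
    f1 = C * np.sum(e**2) - np.sum(yk * e)                    # equation 1)
    f2 = C**2 * np.sum(xk * e**2) - C * np.sum(yk * xk * e)   # equation 2)
    return [f1, f2]

C, A = fsolve(normal_equations, x0=[1.0, 0.5])   # assumed starting guess
print(f"least-squares fit: y = {C:.4f} * e^({A:.4f} * x)")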

 

Transformations using data linearization

 

It is possible to reduce the problem of fitting data to a curve such as y = C*e^(A*x) or y = A*ln(x) + B by transforming these equations into linear ones. The constants can then be found using the least-squares method for a line.

 

Ex: fitting data to y = C * e^(A*x)

            First, transform the function by taking ln of both sides:

            ln(y) = ln(C*e^(A*x))

            ln(y) = ln C + ln(e^(A*x))

 

            Then linearize by substituting as necessary:

            ln(y) = ln C + A*x

            z = B + A*x  [where B = ln C and z = ln(y)]

            z = A*x + B

 

At this point you have the equation of a line that can be easily solved using the linear least squares method.
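
A minimal sketch of the linearized fit, assuming NumPy (the data are made up; np.polyfit computes the least-squares line):

import numpy as np

xk = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hypothetical data
yk = np.array([1.5, 2.5, 3.5, 5.0, 7.5])

z = np.log(yk)                  # z = ln(y)
A, B = np.polyfit(xk, z, 1)     # least-squares line z = A*x + B
C = np.exp(B)                   # B = ln(C)  =>  C = e^B
print(f"linearized fit: y = {C:.4f} * e^({A:.4f} * x)")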

 

Other substitutions to linearize an equation

            Y = A/x + B  →  z = 1/x; Y = A*z + B

            Y = C*x^A  →  ln(Y) = ln C + A*ln(x); z = ln C; w = ln(x); s = ln(Y); s = A*w + z

            Etc. More substitutions are listed in the table in the book, which should be used for reference (p. 269).
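
As one more illustration, a minimal sketch of the power-law substitution above (Y = C*x^A), again with made-up data:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # hypothetical data
y = np.array([2.1, 8.2, 17.8, 32.5, 49.9])   # roughly y = 2*x^2

w, s = np.log(x), np.log(y)     # w = ln(x), s = ln(y)
A, z = np.polyfit(w, s, 1)      # least-squares line s = A*w + z
C = np.exp(z)                   # z = ln(C)
print(f"power-law fit: y = {C:.4f} * x^{A:.4f}")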

 

Wednesday, July 16th

 

Polynomial Curve Fitting

 

Essentially a specific case of linear least squares

Given a set of data points, find the best constants for the polynomials of:

            f(x) = c0 + c1*x + c2*x^2 + c3*x^3 + … + cn*x^n

 

Different from the earlier polynomial approximations because the number of data points does not determine the degree of the polynomial f(x)

 

As before, you wish to minimize the error function E, where E is the summation of (f(Xk) – Yk)^2. For example, with an order-2 polynomial you would have

 

            E = ∑ [ (A*Xk^2 + B*Xk + C – Yk)^2 ]

 

Minimizing is done by taking the partial derivatives of E, in this case with respect to A, B, and C, and setting each equal to zero.

 

You will get three equations similar to:

 

A*∑ [Xk^4] + B*∑ [Xk^3] + C*∑ [Xk^2] = ∑ [Xk^2*Yk]

A*∑ [Xk^3] + B*∑ [Xk^2] + C*∑ [Xk] = ∑ [Xk*Yk]

A*∑ [Xk^2] + B*∑ [Xk] + C*N = ∑ [Yk]

 

So all that one needs to do to find said constants is calculate the summations of these various items (Xk^4, Xk^3, Xk^2, etc.) and then solve the resulting system of equations.
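
A minimal sketch, assuming NumPy (the data are made up; np.polyfit(xk, yk, 2) would give the same answer):

import numpy as np

xk = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # hypothetical data
yk = np.array([1.1, 2.9, 9.2, 19.1, 32.8])   # roughly y = 2x^2 + 1
N = len(xk)

# Build the 3x3 normal equations from the sums above and solve them.
M = np.array([[np.sum(xk**4), np.sum(xk**3), np.sum(xk**2)],
              [np.sum(xk**3), np.sum(xk**2), np.sum(xk)],
              [np.sum(xk**2), np.sum(xk),    N]])
rhs = np.array([np.sum(xk**2 * yk), np.sum(xk * yk), np.sum(yk)])

A, B, C = np.linalg.solve(M, rhs)
print(f"fit: y = {A:.4f}*x^2 + {B:.4f}*x + {C:.4f}")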


This concept is the same no matter the order of the polynomial; it simply involves more variables & equations. It should be noted that one does not want to use a higher-degree polynomial than the data requires, as this introduces a phenomenon called polynomial wiggle: the approximation curve oscillates between the data points, going up and down far more often than the data warrant. This happens because the degree of the polynomial determines how many local minima & maxima its graph can have (up to n – 1 for degree n).
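
A quick numerical illustration of the wiggle, assuming NumPy (the function and sample points are made up; this is essentially the classic Runge phenomenon):

import numpy as np

f = lambda x: 1.0 / (1.0 + x**2)    # smooth underlying curve
xk = np.linspace(-5, 5, 11)
p = np.polyfit(xk, f(xk), 10)       # degree-10 polynomial through 11 points

mid = (xk[:-1] + xk[1:]) / 2        # points between the samples
print(np.max(np.abs(np.polyval(p, mid) - f(mid))))   # large error = wiggle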

 

Since we can’t simply crank the order of the polynomial up to achieve better precision, the next best step is to use spline functions.

 

Spline Functions

 

Popular in computer graphics

 

Splines are a set of curves that fit a function in a piecewise manner

 

In other words, instead of only one polynomial to approximate a curve, you get multiple polynomials fitting together to approximate the curve.

 

The simplest splines are polynomials of degree 1 (lines). To achieve better precision, though, one normally takes it to the next step and uses quadratic splines. These, however, still do not solve the problem of abrupt changes where the splines connect: the curvature can still jump at the joints. To resolve this problem, we need cubic splines.

 

Piecewise Cubic Splines

 

Most common splines in use

 

To have continuity in the 1st & 2nd derivatives, we use points before and beyond the interval of each spline. Continuity in the 1st & 2nd derivatives guarantees smooth transitions between splines instead of the sharp transitions seen with first- and second-degree splines.

 

Most common because they satisfy the following properties:

            Passes through every data point [ S(Xk) = Yk ]

            Continuous between splines [Sk(Xk+1) = Sk+1(Xk+1)]

            Derivatives continuous between splines [S’k(Xk+1) = S’k+1(Xk+1)]

            2nd derivatives continuous between splines [S’’k(Xk+1) = S’’k+1(Xk+1)]

 

Thursday, July 17th

 

Pulling from the 4 properties covered previously, we get five sets of equations

 

I           S(x) = Sk(x) = Sk,0 + Sk,1*(x – Xk) + Sk,2*(x – Xk)^2 + Sk,3*(x – Xk)^3

                        For k = 0 to n-1

II          S(Xk) = Yk

                        For k = 0 to n

III        Sk(Xk+1) = Sk+1(Xk+1)

                        For k = 0 to n-2

IV        S’k(Xk+1) = S’k+1(Xk+1)

                        For k = 0 to n-2

V         S’’k(Xk+1) = S’’k+1(Xk+1)

                        For k = 0 to n-2

 

For each spline, we must solve for Sk,0, Sk,1, Sk,2 and Sk,3.

 

Given 4 coefficients per spline and N splines, we need 4N coefficients in total, and therefore 4N equations. Using the above constraints, we find we have all but two of these equations (4N – 2 from above). The remaining two typically come from end conditions, e.g. the natural-spline choice S’’(X0) = S’’(Xn) = 0.
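
A minimal sketch, assuming SciPy (the data are made up; scipy.interpolate.CubicSpline builds exactly this kind of piecewise cubic, and its 'natural' end conditions supply the two missing equations):

import numpy as np
from scipy.interpolate import CubicSpline

xk = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hypothetical data
yk = np.array([1.0, 2.7, 5.8, 6.6, 7.5])

S = CubicSpline(xk, yk, bc_type='natural')

print(np.allclose(S(xk), yk))       # property II: interpolates every point
eps = 1e-8                          # properties IV & V: S' and S'' agree
print(np.isclose(S(2 - eps, 1), S(2 + eps, 1)),   # across the knot x = 2
      np.isclose(S(2 - eps, 2), S(2 + eps, 2)))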

 

[Long derivation not in my notes, please see others’ notes. Sorry]