Monday, July 12, 2010
CS314 Midterm on Wednesday (25% of grade)
Review
Floating Point Representation vs. Fixed
Propagation of Error
Solution of non-linear Equations f(x)=0
Fixed Point Theorem
Types of convergence/divergence
Monotone/oscillating
Bisection Method
False Position Method
Horizontal/Vertical Convergence
Well/Ill Conditioned function
Newton-Raphson
Similarities of NR and Taylor Expansion
Order of convergence
Solutions of linear equations
Properties of Vectors
Vector Algebra
Matrices
Properties of Matrices
Special Matrices
Zero Matrix
Identity Matrix
Matrix Multiplication
Inverse of Matrix
Determinants
Upper Triangular Systems
Backward Substitution
Elementary Substitution
Gauss Elimination
LU (Triangulation) Factorization
Gauss vs. LU Factorization
Interpolation & Polynomial Approximation
Taylor Approximation
Lagrange Approximation
Evaluation of polynomial using Horner’s Method
Newton Polynomials
Divided differences
Padé Approximations with quotient of polynomials
*(amendment) Additionally:
Jacobi Iteration
Backward Substitution
Gauss-Seidel
Strictly Diagonally Dominant Matrix
Solution of System
Convergence and Strictly Diagonally Dominant Matrices
Bring notes on one side of a standard piece of paper, and it will be stapled to the exam
Study from notes, book, homework, and old exams
Curve Fitting
Given a set of points, come up with the curve that best fits the points
Polynomial approximation
Polynomial that passes through all points
Curve Fitting
Does not necessarily touch all points, but passes very close to them
Ex:
An experiment produces a set of points and we would like to determine a function y = f(x) that relates these variables.
Try to build a function that matches these points as best as possible.
Least Squares Line
Assume we have recorded values
(x_1, y_1), (x_2, y_2), … (x_N, y_N)
And we want to find the line
y = f(x) = Ax + B that best fits the recorded values
So we have that
f(x_k) = y_k + e_k
appx = sample + error
There are several norms that tell us how far the curve f(x) is from the data.
Max Error: E_inf(f) = max over 1<=k<=N of |f(x_k) - y_k|
Average Error: E_1(f) = (1/N) * Sum(|f(x_k) - y_k|, 1, N) --> absolute value is not differentiable, so hard to minimize
Root Mean Square Error: E_2(f) = sqrt((1/N) * Sum(|f(x_k) - y_k|^2, 1, N)) --> easy to take the derivative; we don't use other norms due to complexity/cancellation
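A small Python sketch of these three error measures (the candidate line and the data below are made up for illustration):

```python
import numpy as np

# The three error measures for a candidate curve f against data (x_k, y_k).
def max_error(f, x, y):
    return np.max(np.abs(f(x) - y))           # E_inf(f)

def avg_error(f, x, y):
    return np.mean(np.abs(f(x) - y))          # E_1(f)

def rms_error(f, x, y):
    return np.sqrt(np.mean((f(x) - y) ** 2))  # E_2(f)

# Made-up example: how far is the line f(x) = 2x + 1 from this data?
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.2, 2.9, 5.1, 6.8])
f = lambda t: 2 * t + 1
print(max_error(f, x, y), avg_error(f, x, y), rms_error(f, x, y))
```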
Tuesday, July 13, 2010
Given the points (x_1, y_1), (x_2, y_2), …, (x_N, y_N), the least squares line y=f(x) = Ax+B is the line that minimizes the square error E_2(f).
The square error of using A, B is
E(A, B) =Sum(|f(x_k) – y_k|^2, 1, N)
= Sum((Ax_k + B – y_k)^2, 1, N)
To obtain the minimum,
(1) *del E(A,B)/ *del A = Sum(2*(Ax_k + B – y_k)*x_k, 1, N)
= Sum(2*(A(x_k)^2 + Bx_k – y_k*x_k), 1, N)
(2) *del E(A,B)/ *del B = Sum(2*(Ax_k + B – y_k), 1, N)
Set the partials equal to zero,
From (1), 0 = Sum(2*(A(x_k)^2 + Bx_k – y_k*x_k), 1, N)
0 = Sum((A(x_k)^2 + Bx_k – y_k*x_k), 1, N)
(3) 0 = A*Sum((x_k)^2) + B*Sum(x_k) – Sum(x_k*y_k)*
*From here, I may abbreviate the sum with this notation unless specified otherwise.
From (2), 0 = Sum(2*(Ax_k + B – y_k), 1, N)
(4) 0 = A*Sum(x_k) + B*Sum(1) – Sum(y_k)
(3) and (4) give us 2 eq’s that will allow us to find A and B.
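A minimal Python sketch that solves the normal equations (3) and (4) for A and B; the function name and the sample points are my own choices:

```python
import numpy as np

# Least squares line y = Ax + B: solve the 2x2 system from equations (3) and (4).
def least_squares_line(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    M = np.array([[np.sum(x**2), np.sum(x)],
                  [np.sum(x),    len(x)  ]])
    rhs = np.array([np.sum(x * y), np.sum(y)])
    A, B = np.linalg.solve(M, rhs)
    return A, B

# Made-up points lying near y = 2x + 1
A, B = least_squares_line([0, 1, 2, 3], [1.2, 2.9, 5.1, 6.8])
print(A, B)
```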
Ex: Assume the following data
[Data table not reproduced; the sums computed from it and used below are Sum((x_k)^2) = 1043, Sum(x_k) = 67, Sum(x_k*y_k) = 155, Sum(y_k) = 17, with N = 5.]
From (3) and (4),
{0 = A*Sum((x_k)^2) + B*Sum(x_k) – Sum(x_k*y_k), and
0 = A*Sum(x_k) + B*Sum(1) – Sum(y_k)}
Using (3) and (4) and subbing,
{1043A +67B -155 = 0 = R_1 = (5)
67A + 5B -17 = 0 = R_2 = (6)}
R_1*-5 + R_2*67
=A(1043*-5 +67^2) + 155*5 – 17*67 = 0
A = -.5013
Substituting A into (5):
1043A +67B -155 = 0
-> B = (155 - 1043A)/67 = 10.12
Therefore, f(x) = -.5013x + 10.12 is the least squares line.
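A quick Python check of this computation, solving (5) and (6) directly from the sums:

```python
import numpy as np

# Sums taken from the example: Sum(x^2)=1043, Sum(x)=67, Sum(x*y)=155, Sum(y)=17, N=5.
M = np.array([[1043.0, 67.0],
              [  67.0,  5.0]])
rhs = np.array([155.0, 17.0])
A, B = np.linalg.solve(M, rhs)
print(A, B)   # approximately -0.5014 and 10.12
```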
Process is very important. What would happen if the data wasn’t supposed to be a line?
Wednesday, July 14, 2010
EXAM
Thursday, July 15, 2010
Sometimes we would like to fit data points that are not necessarily in a line.
Ex:
f(x) = Ax^M, where M is known, A is not. For data such as (x_0,y_0),…,(x_N,y_N)
E(A) = Sum((A(x_k)^M – y_k)^2)
Minimum:
*del E(A)/ *del A = Sum(2*(A*(x_k)^M - y_k)*(x_k)^M) = 0
--> A*Sum((x_k)^(2M)) - Sum(y_k*(x_k)^M) = 0
--> A*Sum((x_k)^(2M)) = Sum(y_k*(x_k)^M)
A = Sum(y_k*(x_k)^M) / Sum((x_k)^(2M))
Given pts:
(x_0, y_0), …, (x_N, y_N). Find the value of A s.t. y = Ax^M with minimum error.
Ex: y = Ax^3 therefore M=3
| x_k | y_k | y_k*(x_k)^3 | (x_k)^6 |
| 2 | 5.9 | 47.2 | 64 |
| 2.3 | 8.3 | 100.99 | 148.04 |
| 2.6 | 10.7 | 188.06 | 308.92 |
| 2.9 | 13.7 | 334.13 | 594.82 |
| 3.2 | 17 | 557.06 | 1073.74 |
| sum= | | 1227.44 | 2189.52 |
A = 1227.44/2189.52 = .5606
The f(x) = Ax^3 that minimizes error is
y = .5606*x^3
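A short Python sketch of this power fit; the function name is my own choice, and the data comes from the table above:

```python
import numpy as np

# Power fit y = A*x^M with M known: A = Sum(y_k*x_k^M) / Sum(x_k^(2M)).
def power_fit(x, y, M):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(y * x**M) / np.sum(x**(2 * M))

# Data from the table above, with M = 3
x = [2.0, 2.3, 2.6, 2.9, 3.2]
y = [5.9, 8.3, 10.7, 13.7, 17.0]
print(power_fit(x, y, 3))   # approximately 0.5606
```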
Curve Fitting
Nonlinear Least Squares
Method for y = C*exp(Ax)
We want to fit the points (x_0, y_0), ..., (x_N, y_N) to the exponential function y = C*exp(Ax).
Using the nonlinear least-squares procedure, the square error is
E(A,C) = Sum((f(x_k) - y_k)^2)
= Sum((C*exp(A*x_k) - y_k)^2)
To find the minimum we need to differentiate E(A,C) with respect to A and C.
Set *del E(A,C)/ *del A = 0 and *del E(A,C)/ *del C = 0, then solve for A and C.
0 = *del E(A,C) / *del A = Sum(2*(C*exp(A*x_k) - y_k)*C*x_k*exp(A*x_k))
= 2C*(C*Sum(x_k*exp(2*A*x_k)) - Sum(y_k*x_k*exp(A*x_k)))
(1) 0 = C*Sum(x_k*exp(2*A*x_k)) - Sum(y_k*x_k*exp(A*x_k))
0 = *del E(A,C) / *del C = Sum(2*(C*exp(A*x_k) - y_k)*exp(A*x_k))
(2) 0 = C*Sum(exp(2*A*x_k)) - Sum(y_k*exp(A*x_k))
--> (1) and (2) give a system of nonlinear equations that can be solved using Newton’s method for a system of non-linear equations
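A minimal Python sketch of Newton's method applied to this two-equation system, with the Jacobian formed analytically; the function name, the starting guess, and the small data set (the same points used in the linearization example below) are chosen here only for illustration:

```python
import numpy as np

def exp_fit_newton(x, y, A0, C0, tol=1e-10, max_iter=50):
    """Fit y = C*exp(A*x) by solving equations (1) and (2) with Newton's
    method for a system of two nonlinear equations."""
    A, C = A0, C0
    for _ in range(max_iter):
        e  = np.exp(A * x)        # exp(A*x_k)
        e2 = np.exp(2 * A * x)    # exp(2*A*x_k)
        # F = [equation (1), equation (2)]
        F = np.array([C * np.sum(x * e2) - np.sum(y * x * e),
                      C * np.sum(e2)     - np.sum(y * e)])
        # Jacobian of F with respect to (A, C)
        J = np.array([
            [2 * C * np.sum(x**2 * e2) - np.sum(y * x**2 * e), np.sum(x * e2)],
            [2 * C * np.sum(x * e2)    - np.sum(y * x * e),    np.sum(e2)],
        ])
        step = np.linalg.solve(J, F)
        A, C = A - step[0], C - step[1]
        if np.max(np.abs(step)) < tol:
            break
    return A, C

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.5, 3.3, 5.0, 7.5])
# A reasonable starting guess could come from the data linearization below.
A, C = exp_fit_newton(x, y, A0=0.4, C0=1.6)
print(A, C)
```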
Friday, July 16, 2010
Transformations for Data Linearization
It is possible to fit curves such as y = c*exp(Ax), y = A*ln(x) + B and y = A/x + B by transforming these equations into a linear curve.
Ex:
y = A*ln(x) + B: let z = ln(x) --> y = Az + B
y = A/x + B: let z = 1/x --> y = Az + B
Now, a little harder
y = c*exp(Ax)
ln(y) = ln(c) + ln(exp(Ax))
ln(y) = ln(c) + Ax
Let z = ln(y) --> z = ln(c) + Ax
Let B = ln(c)
z = B + Ax
Ex:
Assume the pts:
(x_k,y_k)
(0, 1.5)
(1, 2.5)
(2, 3.3)
(3, 5.0)
(4, 7.5)
We want to fit the points to the function y = c*exp(Ax)
Solution. Try the transformation
Z = Ax+B, where z = ln(y), B = ln(c)
| k | x_k | y_k | z_k = ln(y_k) | (x_k)^2 | x_k*z_k |
| 1 | 0 | 1.5 | .4055 | 0 | 0 |
| 2 | 1 | 2.5 | .9163 | 1 | .9163 |
| 3 | 2 | 3.3 | 1.2528 | 4 | 2.5056 |
| 4 | 3 | 5 | 1.6094 | 9 | 4.8282 |
| 5 | 4 | 7.5 | 2.0149 | 16 | 8.0596 |
| sum= | 10 | | 6.1989 | 30 | 16.3097 |
Using the least squares line normal equations (with z_k in place of y_k):
{0 = A*sum((x_k)^2) + B*sum(x_k) – sum(x_k*z_k)
0 = A*sum(x_k) + BN – sum(z_k)}
-->
0 = 30A + 10B – 16.3097
0 = 10A + 5B – 6.1989
--> Multiply the second equation by 2 and subtract it from the first to eliminate B:
0 = (30 - 20)A - 16.3097 + 2*6.1989
-->
A = .3912, B = .4574
So the least square line is
z = .3912x +.4574 with B = ln(c)
--> c = exp(B) = exp(.4574) = 1.5799
So we have that
y = 1.5799*exp(.3912*x)
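A short Python sketch of this data linearization procedure (the variable names are mine); compare the printed A and c with the hand computation above:

```python
import numpy as np

# Fit y = c*exp(A*x) by linearization: take z = ln(y), fit the least squares
# line z = A*x + B, then recover c = exp(B).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.5, 3.3, 5.0, 7.5])
z = np.log(y)

# Normal equations for the least squares line z = A*x + B
M = np.array([[np.sum(x**2), np.sum(x)],
              [np.sum(x),    len(x)  ]])
rhs = np.array([np.sum(x * z), np.sum(z)])
A, B = np.linalg.solve(M, rhs)
c = np.exp(B)
print(A, c)
```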
Polynomial Fitting
When least squares is adapted to a polynomial we have
f(x) = c_1 + c_2*x + c_3*x^2 + ... + c_(M+1)*x^M
We want to find c_1, c_2, c_3, ..., c_(M+1) that best fit (x_0, y_0), (x_1, y_1), ..., (x_N, y_N).
Ex: Find A, B, C for f(x) = Ax^2 + Bx + C that fits (x_0, y_0), (x_1, y_1), ..., (x_N, y_N)
The square error is
E(A,B,C) = sum((f(x_k) - y_k)^2) = sum((A*(x_k)^2 + B*x_k + C - y_k)^2),
The error is minimized by setting the partial derivatives with respect to A, B, and C equal to zero, obtaining:
*del E / *del A = sum(2*(A*(x_k)^2 + B*x_k + C - y_k)*(x_k)^2) = 0,
*del E / *del B = sum(2*(A*(x_k)^2 + B*x_k + C - y_k)*x_k) = 0,
*del E / *del C = sum(2*(A*(x_k)^2 + B*x_k + C - y_k)) = 0,
Yields:
A*sum((x_k)^4) + B*sum((x_k)^3) + C*sum((x_k)^2) - sum(y_k*(x_k)^2) = 0
A*sum((x_k)^3) + B*sum((x_k)^2) + C*sum(x_k) - sum(y_k*x_k) = 0
A*sum((x_k)^2) + B*sum(x_k) + C*sum(1) - sum(y_k) = 0
Respectively.
We have 3 eq’s, use them to find A, B, C.
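A minimal Python sketch that forms these three normal equations and solves them; the function name and the sample points are made up for illustration:

```python
import numpy as np

# Least squares fit of f(x) = A*x^2 + B*x + C via the three normal equations.
def quadratic_fit(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    M = np.array([
        [np.sum(x**4), np.sum(x**3), np.sum(x**2)],
        [np.sum(x**3), np.sum(x**2), np.sum(x)],
        [np.sum(x**2), np.sum(x),    len(x)],
    ])
    rhs = np.array([np.sum(y * x**2), np.sum(y * x), np.sum(y)])
    return np.linalg.solve(M, rhs)   # returns A, B, C

# Made-up points lying near y = x^2 + 1
A, B, C = quadratic_fit([0, 1, 2, 3, 4], [1.1, 1.9, 5.2, 9.8, 17.1])
print(A, B, C)
```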