Midterm
- 7:00-9:00PM
- Wednesday, July 14th, 2004
- WTHR104
- Will not involve large amounts of handwritten code
- Old exams will be available for study
- HW3 (preparation for the exam) is posted, due 7-13-04
- Bring a half-page of notes (will be stapled to exam)
------------------------------------------------------------------------------
Notes about approximation
- Each additional level of refinement adjusts the previous approximation,
  bringing it closer to the true value
- a[1] is the slope of the line between 2 points, a[2] is the slope of
the slope (2nd derivative), a[3] is the slope of the slope of the slope,
etc.
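(Not from lecture) A minimal Python sketch of this "slope of slopes" idea,
computing Newton's divided-difference coefficients; the function name is mine:

    def divided_differences(x, y):
        # Returns the Newton coefficients a[0..n-1]: a[1] is a slope,
        # a[2] a slope of slopes, and so on.
        a = list(y)
        n = len(x)
        for level in range(1, n):
            for i in range(n - 1, level - 1, -1):
                a[i] = (a[i] - a[i - 1]) / (x[i] - x[i - level])
        return a

    # Samples of f(x) = x^2 give coefficients 0, 1, 1:
    # 0 + 1*x + 1*x(x-1) = x^2
    print(divided_differences([0, 1, 2], [0, 1, 4]))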
Padé approximation
- Approximations of functions using a quotient of 2 polynomials
- R[n,m](x) = P[n](x)/Q[m](x)
P[n](x) = p[0] + p[1]x + p[2]x^2 + ... p[n]x^n
Q[m](x) = 1 + q[1]x + q[2]x^2 + ... q[m]x^m
- Often more accurate than Newton or Lagrange polynomials using the same
  number of coefficients
- The polynomials P and Q are constructed so that f(x) and R[n,m](x) agree at
  x = 0 and their derivatives also agree at x = 0, up to the (n+m)th derivative
- The Maclaurin expansion (the Taylor expansion centered at x = 0) is
  f(x) = a[0] + a[1]x + a[2]x^2 + ... + a[k]x^k + ...
- Require f(x)Q[m](x) - P[n](x) = 0 through the x^(n+m) term; substituting the
  series for f gives
  (a[0]-p[0]) + x(a[1]+a[0]q[1]-p[1]) + x^2(a[2]+a[1]q[1]+a[0]q[2]-p[2]) + ... = 0
To make the left side 0 independently of the value of x, we need each coefficient
to be 0, meaning that we solve the following:
a[0]-p[0] = 0
a[1]+a[0]q[1]-p[1] = 0
....
We must solve n + m + 1 equations to get n + m + 1 coefficients
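(Not from lecture) A sketch of this construction in Python with NumPy: the m
equations that zero the x^(n+1)..x^(n+m) coefficients are solved for the q's,
and the remaining equations then give the p's directly. The function name and
layout are my own:

    import numpy as np

    def pade_coefficients(a, n, m):
        # a[0..n+m] are the Maclaurin coefficients of f; q[0] is fixed at 1
        a = np.asarray(a, dtype=float)
        M = np.zeros((m, m))
        b = np.zeros(m)
        # sum[j=1;m] q[j]*a[k-j] = -a[k]  for k = n+1 .. n+m
        for i, k in enumerate(range(n + 1, n + m + 1)):
            for j in range(1, m + 1):
                if k - j >= 0:
                    M[i, j - 1] = a[k - j]
            b[i] = -a[k]
        q = np.concatenate(([1.0], np.linalg.solve(M, b)))
        # Matching the x^0 .. x^n coefficients gives the p's
        p = [float(sum(q[j] * a[k - j] for j in range(min(k, m) + 1)))
             for k in range(n + 1)]
        return p, q

    # Demo: R[1,1] for e^x (a = [1, 1, 1/2]) is (1 + x/2)/(1 - x/2)
    p, q = pade_coefficients([1, 1, 0.5], 1, 1)
    print(p, q)   # p = [1.0, 0.5], q = [1.0, -0.5]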
Example
-------
Find the Padé approximation R[2,2] for f(x) = ln(1 + x)/x
f(x) = 1 - x/2 + x^2/3 - x^3/4 + x^4/5 - ...
R[2,2](x) = (p[0] + p[1]x + p[2]x^2)/(1 + q[1]x + q[2]x^2)
(1-p[0]) = 0
(-1/2+q[1]-p[1]) = 0
(q[2]-q[1]/2+1/3-p[2]) = 0
(-1/4+q[1]/3-q[2]/2) = 0
(1/5-q[1]/4+q[2]/3) = 0
q[2] = 3/10
q[1] = 6/5
p[1] = 7/10
p[2] = 1/30
p[0] = 1
R[2,2](1) = .6933 (compare ln(2) ≈ .6931, since f(1) = ln(2))
Evaluating ln(1 + x) ≈ x·R[2,2](x) takes 6 multiplications, 5 additions, and 1 division
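(Not from lecture) A quick numeric check of this result in Python, using the
coefficients found above:

    import math

    def R22(x):
        # R[2,2](x) = (1 + 7x/10 + x^2/30) / (1 + 6x/5 + 3x^2/10)
        return (1 + 0.7*x + x*x/30) / (1 + 1.2*x + 0.3*x*x)

    print(R22(1))        # 0.693333... (= 52/75)
    print(math.log(2))   # 0.693147..., and f(1) = ln(2)/1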
Today: Review of material for next Wednesday's midterm
-----------------------------------------------------
Binary representation
- Floating point
- Fixed point
- SPARC
Propagation of error
- In sums
- In products
Solution of nonlinear equations of the form f(x) = 0
- Fixed point theorem
- Types of convergence/divergence
* Monotone convergence
  * Oscillating convergence
* Monotone divergence
  * Oscillating divergence
- Bisection method
- Regula Falsi (False Position) method
- Newton-Raphson method
* Relation to Taylor expansion
* Order of convergence
* Conditions for non-convergence
- Secant method
- Criteria for convergence
* Horizontal
* Vertical
- Well and ill conditioned functions
Solution of linear equations of the form Ax = B
- Upper triangular systems
- Back substitution
- Elementary matrix transformations
- Gaussian Elimination
- Pivoting/choosing pivots (largest remaining magnitude in the column)
- LU factorization for multiple systems of equations
- Comparison of Gaussian Elimination vs. LU factorization
Iterative methods for systems of linear equations
- When to use (ans: large sparse matrices)
- Gauss-Seidel method
- Jacobi method
- Convergence for strictly diagonally dominant matrices
Interpolation and polynomial approximation
- Taylor series
- Horner's method to evaluate a polynomial with the fewest multiplications
- Lagrange polynomials
- Newton polynomials (w/ divided differences)
- Padé approximations (w/ ratio of 2 polynomials)
------------------------------------------------------------------------------
Bring to the exam
- Calculator
- Half-page (one side) note sheet, will be stapled to exam
How to study
- Do homework #3 (due Tuesday in class, solutions posted same day)
- Study notes
- Review project 2 briefly (for familiarity, but complex details won't be tested)
- Solve previous midterm
- Do exercises in the book
Reminder: If you are assigned to submit lecture notes for weeks 1-4 (your name
appears in the list of note-takers) and have not yet done so, send them in.
Start studying immediately!
------------------------------------------------------------------------------
Curve fitting
- Given a set of points, come up with a curve that best fits the points
- The curve DOES NOT need to touch the points
- Least Squares method for a line
* Have recorded values (x[1],y[1]),(x[2],y[2]), ... (x[n],y[n])
* Want to find the line y = f(x) = Ax+B that best fits the data
  * For a given (x[k],y[k]), f(x[k]) = y[k] + e[k], where e[k] is the error
* Would like to minimize the error for every point
* Maximum error (E[∞]) = max(|f(x[k]) - y[k]|) over all k
* Average error (E[1]) = (1/n) sum[k=1;n] |f(x[k]) - y[k]|
  * Root mean square error (E[2]) = sqrt((1/n) sum[k=1;n] (f(x[k]) - y[k])^2)
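(Not from lecture) A minimal Python sketch computing the three error measures
for a candidate fit; the names are mine:

    import math

    def fit_errors(f, pts):
        # pts is a list of (x, y) pairs; f is the candidate function
        r = [abs(f(x) - y) for x, y in pts]
        n = len(r)
        e_inf = max(r)                               # maximum error
        e_1 = sum(r) / n                             # average error
        e_2 = math.sqrt(sum(d*d for d in r) / n)     # root mean square error
        return e_inf, e_1, e_2

    # How well does y = -0.5x + 10 fit a few points?
    print(fit_errors(lambda x: -0.5*x + 10, [(6, 7), (9, 6), (14, 3)]))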
Curve fitting (continued from yesterday)
- Error is the vertical distance between the fitted curve and a particular point
- First, choose the type of function to be used for the approximation
- Want to find the parameters of the chosen function so that the sum of the squares
of the error at each point is as small as possible
- Minimize the mean squared error (1/n)(sum[k=1;n] (f(x[k])-y[k])^2); since
  the square root is monotonic, this also minimizes E[2]
- Least Square Line
* Given the points (x[1],y[1]), (x[2],y[2]), ... (x[n],y[n])
* Line is the form f(x) = Ax + B
  * E(A,B) = (1/n)(sum[k=1;n] (Ax[k] + B - y[k])^2); the constant factor (1/n)
    does not change where the minimum occurs, so it is dropped below
* To obtain the minimum, we take the partial derivative with respect to A and
B, and then set these equal to 0
    d E(A,B)/d A = sum[k=1;n] 2(Ax[k] + B - y[k])(x[k]) = 0
    (1) A(sum[k=1;n] x[k]^2) + B(sum[k=1;n] x[k]) = sum[k=1;n] y[k]x[k]
    d E(A,B)/d B = sum[k=1;n] 2(Ax[k] + B - y[k])(1) = 0
    (2) A(sum[k=1;n] x[k]) + Bn = sum[k=1;n] y[k]
By solving (1) and (2) as a system of equations, we are able to find the line
y = Ax + B that minimizes the error.
------------------------------------------------------------------------------
Example
-------
Points: (6,7), (9,6), (14,3), (17,1), (21,0)
sum[k=1;n] x[k] = 67
sum[k=1;n] y[k] = 17
sum[k=1;n] x[k]^2 = 1043
sum[k=1;n] x[k]y[k] = 155
1043A + 67B = 155
67A + 5B = 17
Eliminate B: multiply (1) by 5 and (2) by 67, then subtract:
(1043*5 - 67^2)A = 155*5 - 17*67
726A = -364
A = -.5013
B = (17 - 67A)/5 = 10.11
Least square line is y = -.5013x + 10.11
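(Not from lecture) A short Python sketch that solves the normal equations (1)
and (2) by Cramer's rule and reproduces this example; the names are mine:

    def least_squares_line(pts):
        # System: (1) A*sum(x^2) + B*sum(x) = sum(xy)
        #         (2) A*sum(x)   + B*n      = sum(y)
        n = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        det = sxx * n - sx * sx
        A = (sxy * n - sx * sy) / det
        B = (sxx * sy - sx * sxy) / det
        return A, B

    print(least_squares_line([(6, 7), (9, 6), (14, 3), (17, 1), (21, 0)]))
    # -> A ≈ -.5013, B ≈ 10.11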
------------------------------------------------------------------------------
Least squares for y = Ax^m
- m is a known constant, A is unknown
- E(A) = sum[k=1;n] (Ax[k]^m - y[k])^2
- Setting dE/dA = sum[k=1;n] 2(Ax[k]^m - y[k])x[k]^m = 0 gives
  A = (sum[k=1;n] y[k]x[k]^m)/(sum[k=1;n] x[k]^(2m))
------------------------------------------------------------------------------
Example
-------
y = Ax^3
Points: (2.0, 5.9), (2.3, 8.3), (2.6, 10.7), (2.9, 13.7), (3.2, 17.0)
sum[k=1;n] y[k]x[k]^3 = 1227.44
sum[k=1;n] x[k]^6 = 2189.52
A = .5606
Least squares cubic polynomial is y = .5606x^3
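(Not from lecture) The same computation as a Python sketch; the name is mine:

    def least_squares_power(pts, m):
        # A = sum(y[k]x[k]^m) / sum(x[k]^(2m)) for the model y = Ax^m
        num = sum(y * x**m for x, y in pts)
        den = sum(x**(2 * m) for x, _ in pts)
        return num / den

    pts = [(2.0, 5.9), (2.3, 8.3), (2.6, 10.7), (2.9, 13.7), (3.2, 17.0)]
    print(least_squares_power(pts, 3))   # -> .5606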