* No Class
* Midterm
* Homework 3: Prep for Midterm, due Tuesday the 13th in class
Example from July 2nd, continued:
Ex.
Build degree 1, 2, and 3 polynomials to approximate f(x) = sin(x) over the interval (0, π/2)
P1(x) = a0 + a1(x - x0)
P2(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1)
P3(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) + a3(x - x0)(x - x1)(x - x2)
x0 = 0, x1 = π/6, x2 = π/3, x3 = π/2
xk     f(xk)     f[xk-1,xk]   f[xk-2,xk-1,xk]   f[xk-3,xk-2,xk-1,xk]
---    ------    ----------   ---------------   --------------------
0      0 ← a0
π/6    0.5       0.9549 ← a1
π/3    0.8659    0.699        -0.244 ← a2
π/2    1         0.2561       -0.4230           -0.1137 ← a3
P1(x) = a0 + a1(x - x0) = 0 + 0.9549(x - 0) = 0.9549x
P2(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) = P1(x) + a2(x - x0)(x - x1) = 0.9549x - 0.244(x - 0)(x - π/6)
      = 0.9549x - 0.244x² + (π/6)(0.244)x
      = -0.244x² + (0.9549 + 0.244(π/6))x
P3(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) + a3(x - x0)(x - x1)(x - x2) = P2(x) + a3(x - x0)(x - x1)(x - x2)
      = -0.244x² + (0.9549 + 0.244(π/6))x - 0.1137(x(x - π/6)(x - π/3))
Assume: x = π/4 = 0.7854, sin(π/4) = 0.7071
P1(π/4) = 0.749978
P2(π/4) = 0.749978 - 0.244(0.7854)(0.7854 - 0.5236) = 0.699725   { adjustment }
P3(π/4) = 0.699725 + (-0.1137)(0.7854 - 0)(0.7854 - 0.5236)(0.7854 - 1.0472) = 0.705841   { adjustment }
* P3(π/4) is very close to the actual value sin(π/4) = 0.7071!
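As a check, the divided-difference table and the Newton-form evaluation above can be sketched in Python (the function names are illustrative, not from the course):

```python
import math

def divided_differences(xs, ys):
    """Build the divided-difference coefficients a0..an (the table's diagonal)."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # Update in place from the bottom so earlier entries still hold
        # the previous column's values.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[k] = f[x0, ..., xk] = ak

def newton_eval(coef, xs, x):
    """Evaluate the Newton-form polynomial at x by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0, math.pi / 6, math.pi / 3, math.pi / 2]
ys = [math.sin(x) for x in xs]
coef = divided_differences(xs, ys)   # a0, a1, a2, a3
p3 = newton_eval(coef, xs, math.pi / 4)
print(p3)  # ≈ 0.7059, close to sin(pi/4) ≈ 0.7071
```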
Polynomial Approximation:
→ Lagrange Polynomials
   * They approximate: f(x) ≈ a0 + a1x + a2x² + …
→ Padé Approximation
- They are approximations to functions using a quotient of 2 polynomials:
RN,M(x) = PN(x)/QM(x) for a ≤ x ≤ b
(1) PN(x) = P0 + P1x + P2x² + … + PNx^N
(2) QM(x) = 1 + q1x + q2x² + … + qMx^M
* Padé approximations are often more precise than a simple polynomial approximation with the same number of coefficients.
- The polynomials P and Q are constructed so that f(x) and RN,M(x) agree at x = 0, and their derivatives also agree at x = 0, up to the (N+M)th derivative.
- Assume the Maclaurin expansion (a Maclaurin expansion is a Taylor expansion with x0 = 0):
(3) f(x) = a0 + a1x + a2x² + … + akx^k + …
- Assume that we want:
f(x) = RN,M(x) = (PN(x))/(QM(x))
f(x)QM(x) = PN(x)
(4) f(x)QM(x) - PN(x) = 0
* Substitute (1), (2), and (3) into (4):
(a0 + a1x + a2x² + … + akx^k)(1 + q1x + q2x² + … + qMx^M) - (P0 + P1x + P2x² + … + PNx^N) = 0
Factor by powers of x (x, x², x³, …):
(a0 - P0) + x(a1 + a0q1 - P1) + x²(a2 + a1q1 + a0q2 - P2) + … = 0
* For the left side to equal 0 independently of the value of x, each coefficient must be 0.
Therefore:
a0 - P0 = 0
a1 + a0q1 - P1 = 0
a2 + a1q1 + a0q2 - P2 = 0
…
qM aN + qM-1 aN+1 + … + aN+M = 0
- We then solve these N+M+1 equations to obtain the N+M+1 coefficients.
Example: Find the Padé Approximation R2,2(x) for f(x) = ln(1+x)/x
* Start with the Maclaurin expansion:
f(x) = 1 - (x/2) + (x²/3) - (x³/4) + (x⁴/5) - …
RN,M(x) = PN(x)/QM(x) → R2,2(x) = P2(x)/Q2(x) = (P0 + P1x + P2x²)/(1 + q1x + q2x²)
- We want f(x) = R2,2(x) → f(x) = P2(x)/Q2(x) → f(x)Q2(x) - P2(x) = 0
- Substitute for f(x), P2(x), Q2(x):
(1 - (x/2) + (x²/3) - (x³/4) + (x⁴/5))(1 + q1x + q2x²) - (P0 + P1x + P2x²) = 0
(1 - P0) + x((-1/2) + q1 - P1) + x²((1/3) - (q1/2) + q2 - P2) + x³((-1/4) + (q1/3) - (q2/2)) + x⁴((1/5) - (q1/4) + (q2/3)) = 0   *Ignore higher-order terms
1 - P0 = 0 → P0 = 1
(-1/2) + q1 - P1 = 0 → P1 = q1 - (1/2)
(1/3) - (q1/2) + q2 - P2 = 0 → P2 = q2 - (q1/2) + (1/3)
(-1/4) + (q1/3) - (q2/2) = 0 → (×3) → (-3/4) + q1 - (3/2)q2 = 0
(1/5) - (q1/4) + (q2/3) = 0 → (×4) → (4/5) - q1 + (4/3)q2 = 0
Adding the last two equations:
(-3/4) + (4/5) + (-(3/2) + (4/3))q2 = 0 → (1/20) - (1/6)q2 = 0
q2 = 3/10
q1 = (3/2)q2 + (3/4) → q1 = 6/5
P1 = q1 - (1/2) = (6/5) - (1/2) → P1 = 7/10
P2 = q2 - (q1/2) + (1/3) = (3/10) - ((6/5)/2) + (1/3) → P2 = 1/30
Therefore, we get:
f(x) ≈ R2,2(x) = (1 + (7/10)x + (1/30)x²) / (1 + (6/5)x + (3/10)x²)
Numeric Example:
f(1) = (ln(1+1))/1 = 0.6931
R2,2(1) = 0.6933
- We can obtain ln(y) from f(x) = ln(1+x)/x:
ln(1+x)/x = R2,2(x) → ln(1+x) = xR2,2(x)
y = 1 + x → x = y - 1
ln(y) = (y-1)R2,2(y-1)
ln(y) ≈ (y-1)((1 + (7/10)(y-1) + (1/30)(y-1)²) / (1 + (6/5)(y-1) + (3/10)(y-1)²))
Therefore, this expression can be used by the computer to approximate ln(y)
using 6 multiplications, 5 additions/subtractions, and 1 division.
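The coefficients derived above can be verified with a short Python sketch (the helper names are mine, not from the course):

```python
import math

def r22(x):
    """The R2,2 Pade approximation to ln(1+x)/x derived above."""
    num = 1 + (7 / 10) * x + (1 / 30) * x * x
    den = 1 + (6 / 5) * x + (3 / 10) * x * x
    return num / den

def ln_approx(y):
    """Approximate ln(y) via ln(y) = (y - 1) * R2,2(y - 1)."""
    return (y - 1) * r22(y - 1)

print(r22(1))          # ≈ 0.6933 vs ln(2) = 0.6931
print(ln_approx(1.5))  # ≈ 0.405473 vs ln(1.5) = 0.405465
```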
Midterm Review
- Floating Point Binary Representation
o Floating-Point
o Fixed-Point
o SPARC Floating-Point representation
- Propagation of Errors
o In Sums
o In Products
- Solution of Non-linear equations of the form f(x) = 0
o Fixed Point Theorem
o Types of convergence/divergence
§ Monotone convergence/divergence
§ Oscillating convergence/divergence
o Bisection Method (4)
o False Position Method (Regula-Falsi) (3)
o Newton-Raphson Method (2)
§ Similarities of N-R with Taylor Expansion
§ Numerical approximation of the derivative
§ Order of convergence
§ Examples of when N-R may not converge
o Secant Method
o Criteria for convergence
§ Horizontal convergence
§ Vertical convergence
o Well / Ill conditioned functions
- Solution of Linear Equations (Ax = B)
o Upper/Lower Triangular Systems
o Back Substitution
o Elementary Transformations to make upper triangular matrices
§ Scale
§ Swap
§ Addition of 2 rows
o Gaussian Elimination
§ Make A upper triangular: O(n³)
§ Use Back Substitution: O(n²)
o Pivoting / Choosing pivot
o LU Factorization: O(n³) + mO(n²) for m systems of equations
o Gaussian Elimination vs. LU Factorization
- Iterative Methods for systems of Linear Equations
o When to use iterative methods
§ Large, sparse matrices
o Gauss-Seidel
o Jacobi
o Convergence for strictly diagonally dominant matrices
- Interpolation and Polynomial Approximation
o Taylor Series
o Horner's Method to optimally compute a polynomial
o Lagrange Polynomials
o Divided Differences
o Padé Approximations using the ratio of 2 polynomials
Bring to Exam:
-Calculator
-1/2 page, one side formula sheet with anything you want (stapled to exam)
Study:
-Do homework #3 (Due Tuesday in class)
-Study the notes
-Solve previous midterm
-Send lecture notes (weeks 1-4)
-Solve Exercises in the book
Curve Fitting:
-Given a set of points, come up with a curve that best fits the points
*The curve does not need to touch the points
-Polynomial Approximation passes through all the points
-Curve Fitting does not necessarily touch all the points, but passes close by them
-Example of when to use curve fitting: assume you have an experiment that produces a set of points and you would like to find the best line that fits the data points
Least Squares Line
-Assume we have recorded values (x1,y1), (x2,y2), …, (xn,yn) and we want to find the line y = f(x) = Ax + B that best fits the recorded data
-For each value (xk,yk) we have the error: f(xk) = yk + ek, where ek is the error
-We would like to minimize the error over all points
-We can use several norms that tell us how far f(x) is from the data:
Maximum Error → E∞(f) = max |f(xk) - yk|, 1 ≤ k ≤ n
Average Error → E1(f) = (1/n) Σ(k=1 to n) |f(xk) - yk|
Root Mean Square Error → E2(f) = ((1/n) Σ(k=1 to n) (f(xk) - yk)²)^(1/2)
-We want to find the parameters of the function so the total error is minimized
-We choose to minimize the sum of the squares of the errors at each point (Root Mean Square Error)
-We want to find the parameters that minimize E2(f)
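The three norms can be sketched in Python (the function name and sample data are hypothetical):

```python
def error_norms(f, pts):
    """Compute (E_inf, E_1, E_2) of f against the data points (xk, yk)."""
    residuals = [abs(f(x) - y) for x, y in pts]
    n = len(residuals)
    e_inf = max(residuals)                            # maximum error
    e_1 = sum(residuals) / n                          # average error
    e_2 = (sum(r * r for r in residuals) / n) ** 0.5  # root mean square error
    return e_inf, e_1, e_2

# Hypothetical data points near the line y = 2x (every residual is 0.1):
pts = [(0.0, 0.1), (1.0, 1.9), (2.0, 4.1)]
e_inf, e_1, e_2 = error_norms(lambda x: 2 * x, pts)
print(e_inf, e_1, e_2)  # all three ≈ 0.1 since every residual is 0.1
```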
Minimum Square Line
-Given the points (x1,y1), (x2,y2), …, (xn,yn), the least squares line y = f(x) = Ax + B is the line that minimizes the square error E2(f)
E(A,B) = Σ(k=1 to n) (Axk + B - yk)²
-To obtain the minimum, we take the partial derivatives of E(A,B) with respect to A and B and set them equal to 0:
(1) ∂E/∂A = Σ 2xk(Axk + B - yk) = 0 → (Σxk²)A + (Σxk)B = Σxkyk
(2) ∂E/∂B = Σ 2(Axk + B - yk) = 0 → (Σxk)A + nB = Σyk
-(1) and (2) give us a system of linear equations to solve for A and B
-Solve for A and B to find the line y = Ax + B that minimizes the error
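Solving the 2×2 system of normal equations can be sketched in Python (the function name and data are illustrative):

```python
def least_squares_line(pts):
    """Solve the 2x2 normal equations for the least-squares line y = A*x + B."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    # Normal equations:
    #   sxx*A + sx*B = sxy
    #   sx*A  + n*B  = sy
    det = sxx * n - sx * sx
    A = (sxy * n - sx * sy) / det
    B = (sxx * sy - sx * sxy) / det
    return A, B

# Hypothetical data lying exactly on y = 2x + 1:
pts = [(0, 1), (1, 3), (2, 5), (3, 7)]
print(least_squares_line(pts))  # → (2.0, 1.0)
```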
Example:
-Assume the following points (xk, yk). We want to obtain the line y = Ax + B that minimizes the error.

-We can apply least squares to other functions as well
