Newton-Raphson
- It is used to find a solution of f(x) = 0
- It also uses the derivative f'(x)
- It uses f(x0), f'(x0), and a starting point x0 to build a line that approximates the function f(x)

Using the points (x0, f(x0)) and (x1, 0), the slope of the tangent line is
m = f'(x0) = (f(x0) - 0) / (x0 - x1)
f'(x0)(x0 - x1) = f(x0)
x0 - x1 = f(x0) / f'(x0)
x1 = x0 - f(x0) / f'(x0)
- We can arrive at the same Newton-Raphson iteration equation using the Taylor expansion
- We can approximate a continuous function using a Taylor polynomial expansion around x0
f(x) = f(x0) + f'(x0)(x - x0) + f''(x0)(x - x0)^2 / 2! + f'''(x0)(x - x0)^3 / 3! + ... + f^(n)(x0)(x - x0)^n / n!
- To derive Newton-Raphson, we are going to use only the first two terms of the expansion
- Newton-Raphson approximates f(x) using a line; the quadratic, cubic, etc. terms are dropped from the Taylor expansion
f(x) = f(x0) + f'(x0)(x - x0) + e   (e is the error of using a line instead of the complete polynomial)
We want to obtain the point x1 where the linear approximation is zero:
0 = f(x0) + f'(x0)(x1 - x0)
-f(x0) = f'(x0)(x1 - x0)
-f(x0) / f'(x0) = x1 - x0
x1 = x0 - f(x0) / f'(x0)
In the same way you could create a new numerical method, faster than
Newton-Raphson, that uses three terms in the approximation instead of two
- The approximation will be a quadratic instead of a line
- The approximation will require a second derivative

f(x) = f(x0) + f'(x0)(x - x0) + f''(x0)(x - x0)^2 / 2
0 = f(x0) + f'(x0)(x1 - x0) + f''(x0)(x1 - x0)^2 / 2
Solve the quadratic for x1 and use the root closer to x0
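As an illustrative sketch (not from the notes), this three-term scheme can be coded by solving the quadratic for the step h = x1 - x0 and keeping the root closer to x0; the function names here are hypothetical:

```python
import math

def second_order_step(f, df, d2f, x0):
    # Solve 0 = f(x0) + f'(x0)*h + f''(x0)*h^2/2 for h = x1 - x0
    a = d2f(x0) / 2.0
    b = df(x0)
    c = f(x0)
    if abs(a) < 1e-12:
        return x0 - c / b  # quadratic term vanishes: plain Newton step
    disc = math.sqrt(b * b - 4 * a * c)
    h1 = (-b + disc) / (2 * a)
    h2 = (-b - disc) / (2 * a)
    # Keep the root of the quadratic closer to x0 (the smaller step)
    return x0 + (h1 if abs(h1) < abs(h2) else h2)

# Solve sin(x) - 1/2 = 0 starting from x0 = 0
x = 0.0
for _ in range(4):
    x = second_order_step(lambda t: math.sin(t) - 0.5,
                          math.cos,
                          lambda t: -math.sin(t),
                          x)
print(x)  # converges toward pi/6
```

Note the extra cost: each step needs f, f', and f''.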
Example of Newton-Raphson
f(x) = sin(x) - 1/2 = 0,  f'(x) = cos(x)  (in radians)
Start with x0 = 0
Iteration equation of Newton-Raphson: x1 = x0 - f(x0) / f'(x0)
| i | xi    | f(xi)                     | f'(xi)             | x_{i+1}                           |
| 0 | 0     | sin(0) - .5 = -.5         | cos(0) = 1         | 0 - (-.5 / 1) = .5                |
| 1 | .5    | sin(.5) - .5 = -.0206     | cos(.5) = .8776    | .5 - (-.0206 / .8776) = .5235     |
| 2 | .5235 | sin(.5235) - .5 = -.00004 | cos(.5235) = .8661 | .5235 - (-.00004 / .8661) = .5236 |
The exact solution of sin(x) - .5 = 0 in radians is 30 degrees, that is, 30 * pi / 180 rad ≈ .5236
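The iterations above can be sketched in Python (the notes use Matlab later; this translation and the helper name are mine):

```python
import math

def newton_raphson(f, df, x0, tol=1e-6, max_iter=20):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i) / f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# f(x) = sin(x) - 1/2, f'(x) = cos(x), starting at x0 = 0 (radians)
root = newton_raphson(lambda x: math.sin(x) - 0.5, math.cos, 0.0)
print(root)  # ~0.5236, i.e. 30 degrees = pi/6
```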
- One of the disadvantages of Newton-Raphson is that you need the derivative f'(x)
- It is possible to approximate the derivative of f(x) numerically:
f'(x) ≈ (f(x + e) - f(x)) / e
The exact derivative requires e -> 0; we can approximate it by using a small e (like 1x10^-5)

Exact derivative: f'(.5) = cos(.5) ≈ .8776
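A quick check of the forward-difference approximation against the exact derivative (a sketch; the step size e = 1e-5 follows the notes):

```python
import math

def numerical_derivative(f, x, e=1e-5):
    """Forward-difference approximation: f'(x) ~= (f(x + e) - f(x)) / e."""
    return (f(x + e) - f(x)) / e

approx = numerical_derivative(math.sin, 0.5)
exact = math.cos(0.5)
print(approx, exact)  # both ~0.8776
```

The error shrinks as e gets smaller, until floating-point round-off starts to dominate.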
When Newton-Raphson may not give a good approximation, or may not converge:
Newton-Raphson uses a line to approximate the function. If the function is not approximated well by a line at the starting point, the method may not converge.

Order of convergence
- Some methods are faster than others at finding the solution of f(x) = 0
- Assume P is a zero (solution) of f(x) = 0 and En = P - Pn is the error at iteration n
- If two positive constants A ≠ 0 and R > 0 exist such that

lim (n -> infinity) |En+1| / |En|^R = A

then the sequence is said to converge to P with an order of convergence R
This means that |En+1| ≈ A |En|^R
Example: Let En = .001, A = 1, R = 2
-> En+1 = 1 * |.001|^2 = 1 x 10^-6 = .000001
If R = 1, the convergence is called "linear"
If R = 2, the convergence is called "quadratic"
If f(x) has a simple root and we use Newton-Raphson, then R = 2 (quadratic convergence)
Usually, the higher the multiplicity of the root, the slower the convergence will be.
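As a sketch, the quadratic convergence of Newton-Raphson can be observed by tracking En = P - xn on sin(x) - 1/2 = 0 (the starting point 0.6 is an arbitrary choice of mine):

```python
import math

P = math.pi / 6   # exact root of sin(x) - 1/2 = 0
x = 0.6           # arbitrary starting point near the root
errors = []
for _ in range(4):
    errors.append(abs(P - x))
    x = x - (math.sin(x) - 0.5) / math.cos(x)

# For quadratic convergence, |E_{n+1}| / |E_n|^2 approaches a constant A
ratios = [e1 / e0 ** 2 for e0, e1 in zip(errors, errors[1:])]
print(ratios)  # roughly constant => R = 2
```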
Secant Method
- The Newton-Raphson method requires the evaluation of two functions, f(x) and f'(x), at each iteration
- The secant method requires only one new evaluation of f(x) per iteration
- The secant method's order of convergence with a simple root is R ≈ 1.618
- The secant method uses two initial points P0, P1 close to the root:

P_{k+1} = P_k - f(P_k)(P_k - P_{k-1}) / (f(P_k) - f(P_{k-1}))


The secant method approximates Newton-Raphson: as P_{k-1} -> P_k, the secant slope (f(P_k) - f(P_{k-1})) / (P_k - P_{k-1}) tends to the derivative f'(P_k)
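A sketch of the secant iteration in Python (function and variable names are mine), arranged so that only one new evaluation of f is made per iteration:

```python
import math

def secant(f, p0, p1, tol=1e-6, max_iter=50):
    """Secant method: Newton-Raphson with f'(x) replaced by the slope
    through the last two points."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)  # only one new evaluation of f per iteration
    return p1

root = secant(lambda x: math.sin(x) - 0.5, 0.0, 1.0)
print(root)  # ~0.5236 (pi/6)
```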


In summary, to solve f(x) = 0 you may use:
1. Bisection
2. False Position (Regula Falsi)
3. Newton-Raphson
4. Secant Method
The Solution of Linear Systems
Ex)
6x+3y+2z = 29
3x+ y+2z = 17
1x+3y+2z = 21
This can be expressed as Ax = B, where A is a matrix and B is a vector

- Each linear equation represents a plane (a hyperplane in n dimensions)
- The solution x, y, z represents the intersection of the planes. Not all
systems of linear equations have solutions

Geometrically, these equations represent two planes that never intersect
You can also have a system of equations with an infinite number of solutions

(1) and (2) are the same equation, so both are the same plane, and the number of
solutions is infinite (multiply one of them by 2 and you obtain the other)
Upper triangular systems of linear equations
A matrix A is called upper triangular if aij = 0 when i > j

Example) This A is upper triangular

A matrix A is called lower triangular if aij = 0 when i < j

We can have an upper triangular system of linear equations

- An upper triangular system of linear equations is easy to solve: obtain the value of xn from the last equation, substitute xn into the (n-1)th equation to obtain xn-1, and so on. This is called back substitution
Back Substitution

Example
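Back substitution can be sketched in Python (the 3x3 upper triangular system below is a made-up example of mine):

```python
def back_substitution(U, b):
    """Solve U x = b where U is upper triangular (U[i][j] = 0 for i > j):
    get x_n from the last equation, substitute into equation n-1, and so on."""
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(U[k][i] * x[i] for i in range(k + 1, n))
        x[k] = (b[k] - s) / U[k][k]
    return x

# Hypothetical upper triangular system:
# 2x +  y +  z = 9
#      3y -  z = 1
#           2z = 4
U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, -1.0],
     [0.0, 0.0, 2.0]]
b = [9.0, 1.0, 4.0]
x = back_substitution(U, b)
print(x)  # [3.0, 1.0, 2.0]
```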

Gauss Elimination
- It is a method to solve systems of linear equations
- It transforms an arbitrary system of linear equations into an upper triangular one
- It solves the upper triangular system of equations using back substitution
- The transformation to upper triangular form uses the following properties, which do not affect the solution:
1. Interchange
The order of two equations does not affect the solution

2. Scaling
Multiplying an equation by a non-zero constant does not affect the solution
(1) x - y = 5    multiply (1) by 2    2x - 2y = 10
(2) 3x + 2 = 6  <----------------->   3x + 2 = 6
3. Replacement
An equation can be replaced by the sum of itself and a non-zero multiple of another equation

We will use these three properties to transform an arbitrary system of linear equations into an upper triangular system and then solve it with back substitution
Ex)
1x + 3y + 4z = 19
8x + 9y + 3z = 25
1x + 1y + 1z = 6
Solve the system using Gaussian elimination.
Augmented matrix:
| 1 3 4 | 19 |
| 8 9 3 | 25 |
| 1 1 1 |  6 |
We need to make element a21 = 8 become 0, so we multiply the first equation by 8 and subtract the result from the second equation (and likewise subtract the first equation from the third to make a31 = 0).
Now we need to make element a32 = 0, so we multiply the second equation (after dividing it by its pivot) by 2 and add it to the third equation.

Implementing Gauss elimination using Matlab

% Input  - A is an NxN nonsingular matrix
%        - B is an Nx1 matrix (the right-hand side of Ax = B)
% Output - X is an Nx1 matrix that is the solution of Ax = B
% Get dimensions of A
[N, N] = size(A);
% Initialize X with zeros
X = zeros(N, 1);
% Obtain the augmented matrix
Aug = [A, B];
% Gaussian elimination
% For all rows: pivot 1 to N
for P = 1:N
    % Choose a pivot. We will ignore the case when the pivot is 0 for now
    Piv = Aug(P, P);
    % Divide the pivotal row by the pivot
    for k = P:N+1
        Aug(P, k) = Aug(P, k) / Piv;
    end
    % Short notation: Aug(P, P:N+1) = Aug(P, P:N+1) / Piv;
    % Make 0's in the other elements of the pivotal column
    % For all remaining rows
    for k = P+1:N
        m = Aug(k, P);  % element that we want to be 0
        % For all remaining columns
        for i = P:N+1
            Aug(k, i) = Aug(k, i) - m * Aug(P, i);
        end
    end
end
% Back substitution
for k = N:-1:1
    Sum = 0;
    for i = k+1:N
        Sum = Sum + Aug(k, i) * X(i, 1);
    end
    X(k, 1) = Aug(k, N+1) - Sum;
end
% Result is in X
- Preventing pivot = 0
- If the pivot is 0, switch rows to prevent division by zero
- To reduce computing error, you may choose as the next pivot the largest number (in absolute value) in the rows (or columns) that are still left to transform
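The Matlab routine above can be sketched in Python with the pivoting strategy added: at each step, the row with the largest |value| in the pivot column is swapped in (a translation of mine, not from the notes):

```python
def gauss_solve(A, B):
    """Solve A x = B by Gaussian elimination with partial pivoting,
    then back substitution. A is NxN (list of lists), B has length N."""
    N = len(B)
    Aug = [row[:] + [b] for row, b in zip(A, B)]  # augmented matrix [A | B]
    for p in range(N):
        # Partial pivoting: pick the row with the largest |Aug[r][p]|
        r = max(range(p, N), key=lambda i: abs(Aug[i][p]))
        Aug[p], Aug[r] = Aug[r], Aug[p]
        # Eliminate below the pivot
        for k in range(p + 1, N):
            m = Aug[k][p] / Aug[p][p]
            for i in range(p, N + 1):
                Aug[k][i] -= m * Aug[p][i]
    # Back substitution
    x = [0.0] * N
    for k in range(N - 1, -1, -1):
        s = sum(Aug[k][i] * x[i] for i in range(k + 1, N))
        x[k] = (Aug[k][N] - s) / Aug[k][k]
    return x

# The example system from the notes
A = [[1.0, 3.0, 4.0], [8.0, 9.0, 3.0], [1.0, 1.0, 1.0]]
B = [19.0, 25.0, 6.0]
sol = gauss_solve(A, B)
print(sol)
```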

Triangular Factorization
- A matrix A has a triangular factorization if it can be expressed as the product of a lower triangular matrix L and an upper triangular matrix U
- Triangular factorization is useful when you want to solve multiple systems of linear equations that have the same matrix A but different right-hand sides B

Triangular factorization will save time compared with running Gauss elimination for each system

Assume the linear system Ax = B, where A has a triangular factorization A = LU. Then
Ax = B
(LU)x = B
If we define Ux = Y (1), then
L(Ux) = B
LY = B (2)
Then we can solve for x in two steps:
- We solve (2) for Y using forward substitution (it is a lower triangular system of linear equations)
- Once Y is found, we solve (1) for x using back substitution
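The two-step solve can be sketched in Python; the 2x2 factorization L, U and right-hand side b below are a hypothetical example of mine:

```python
def forward_substitution(L, b):
    """Solve L y = b where L is lower triangular (top equation first)."""
    n = len(b)
    y = [0.0] * n
    for k in range(n):
        s = sum(L[k][i] * y[i] for i in range(k))
        y[k] = (b[k] - s) / L[k][k]
    return y

def back_substitution(U, y):
    """Solve U x = y where U is upper triangular (bottom equation first)."""
    n = len(y)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(U[k][i] * x[i] for i in range(k + 1, n))
        x[k] = (y[k] - s) / U[k][k]
    return x

def lu_solve(L, U, b):
    # A x = (L U) x = b: first solve L Y = b (forward), then U x = Y (back)
    return back_substitution(U, forward_substitution(L, b))

# Hypothetical factorization A = L U of a 2x2 system A x = b
L = [[1.0, 0.0], [0.5, 1.0]]
U = [[2.0, 4.0], [0.0, 1.0]]
b = [6.0, 4.0]
sol = lu_solve(L, U, b)
print(sol)  # [1.0, 1.0]
```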
LU factorization
Ex)
6x1 + 1x2 - 4x3 = -3
5x1 + 3x2 + 2x3 = 21
1x1 - 4x2 + 3x3 = 10
A =
| 6  1 -4 |
| 5  3  2 |
| 1 -4  3 |
Write the identity matrix to the left:
| 1 0 0 | | 6  1 -4 |
| 0 1 0 | | 5  3  2 |
| 0 0 1 | | 1 -4  3 |
Apply Gaussian elimination to the matrix on the right-hand side (this produces U) and record the corresponding steps in the matrix on the left (this produces L)
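The factorization step can be sketched in Python as a Doolittle-style LU (no pivoting): the same elimination as Gauss, but the multipliers that zero each column are stored in L (a sketch of mine, applied to the example matrix):

```python
def lu_factor(A):
    """Doolittle LU factorization (no pivoting): A = L U with 1's on
    the diagonal of L. The Gaussian-elimination multipliers go into L."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for p in range(n):
        for k in range(p + 1, n):
            m = U[k][p] / U[p][p]   # multiplier that zeros U[k][p]
            L[k][p] = m             # record it in L
            for i in range(p, n):
                U[k][i] -= m * U[p][i]
    return L, U

# The matrix from the notes' example
A = [[6.0, 1.0, -4.0],
     [5.0, 3.0, 2.0],
     [1.0, -4.0, 3.0]]
L, U = lu_factor(A)
# Check: L times U reproduces A
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(LU)
```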

LU factorization is better than Gaussian elimination when you have more than one system of equations with the same A:
Factor A into LU = O(N^3)   (done once)
Gaussian elimination = O(N^3)   (repeated for every new B)
Back substitution = O(N^2)
Forward substitution = O(N^2)