CS 314 CLASS NOTES
Created by John A. Cotiguala
Upper Triangular System
A matrix A = [a_ij] where a_ij = 0 when i > j
Lower Triangular System
A matrix A = [a_ij] where a_ij = 0 when i < j
An Upper Triangular System can be solved easily
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
a_33 x_3 + ... + a_3n x_n = b_3
...
a_nn x_n = b_n
The last equation gives x_n = b_n / a_nn; substituting upward solves for the remaining unknowns.
This is called "Back Substitution"
- # operations:
- # divisions = N
- # subtractions = 1 + 2 + ... + (N-1) = N*(N-1)/2 = O(N^2)
- # multiplications = O(N^2)
- Total: O(N^2)
Back Substitution
- 3x + 4y + z = 2
- 3y + 2z = 1
- 5z = 10
z = 10/5 = 2
substitute in 2nd equation: 3y + 2(2) = 1 => 3y = -3 => y = -1
substitute in 1st equation: 3x + 4(-1) + 2 = 2 => 3x = 4 => x = 4/3
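The worked example above can be sketched in Python (an illustrative translation, not part of the original notes; the function name is mine):

```python
def back_substitution(U, b):
    """Solve Ux = b where U is upper triangular (list of lists)."""
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):          # last row first
        s = sum(U[k][i] * x[i] for i in range(k + 1, n))
        x[k] = (b[k] - s) / U[k][k]
    return x

# The system from the example:
# 3x + 4y + z = 2,  3y + 2z = 1,  5z = 10
U = [[3, 4, 1],
     [0, 3, 2],
     [0, 0, 5]]
b = [2, 1, 10]
x = back_substitution(U, b)
print(x)   # solution: x = 4/3, y = -1, z = 2
```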
- Solving an Upper Triangular system of linear equations is O(N^2)
- We can convert an arbitrary system of linear equations to Upper Triangular form and then use Back Substitution
This is what Gauss Elimination does
3 elementary transformations that do not affect a linear system of equations:
- Interchange
   The order of equations can be changed without affecting the solution
- Scaling
   A linear equation can be multiplied by a non-zero constant without affecting the solution
   Ex:
   - x + y = 3
   - 2(x + y) = 2(3)
   - 2x + 2y = 6
- Replacement
   A linear equation can be replaced by the sum of itself and a non-zero multiple of another equation in the system
Gauss Elimination
- Convert a system of linear equations to Upper Triangular using the transformations
- Solve it with back substitution
x + 3y + 4z = 19
8x + 9y + 3z = 35
x + y + z = 6
Augmented Matrix
- Choose a pivot row and pivot element in that row
- Divide the entire row by the pivot element
- Apply replacement transformations to make the other elements in the pivot's column equal to 0

Back Substitution
z = [(2*117) - (13*15)] / [(2*29) - (3*15)] = 39/13 = 3
y + (29/15)z = (117/15) => y = (117/15) - (29/15)(3) = 2
x + 3y + 4z = 19 => x = 19 - 3(2) - 4(3) = 1
Implementation in Matlab
function x = gauss(A, B)
% Input  - A is an NxN nonsingular matrix
%        - B is an Nx1 matrix
% Output - x is an Nx1 matrix with the solution to Ax = B

% Get the dimensions of A
N = size(A, 1);
% Initialize the x vector with zeros
x = zeros(N, 1);
% Construct the augmented matrix
Aug = [A B];

% Gauss Elimination
% pivot all rows
for p = 1:N
    % get the pivot
    % (include code here to handle the case when the pivot is 0)
    piv = Aug(p,p) % not putting a ";" on the end will print the value to the screen
    % Divide the pivotal row by the pivot
    for k = p:N+1
        Aug(p,k) = Aug(p,k)/piv;
    end
    % this for loop is equivalent to: Aug(p,p:N+1) = Aug(p,p:N+1)/piv
    % put 0's in the other elements of the pivotal column
    for k = p+1:N % for all remaining rows
        m = Aug(k,p);
        for i = p:N+1
            Aug(k,i) = Aug(k,i) - m*Aug(p,i);
        end
    end
end

% Back Substitution (walk the rows from last to first)
for k = N:-1:1
    % Accumulate all elements remaining in that row times x
    s = 0;
    for i = k+1:N
        s = s + Aug(k,i) * x(i,1);
    end
    x(k,1) = Aug(k,N+1) - s;
end
Some remarks about Gauss elimination
- If the pivot is 0, then switch rows
- To reduce arithmetic error, choose as pivot the element with the largest absolute value among the remaining elements in the pivotal column (partial pivoting)
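The two remarks above (row switching and partial pivoting) can be folded into the elimination loop; here is an illustrative Python sketch of the same algorithm as the MATLAB listing, with partial pivoting added (names and structure are mine, not the course's code):

```python
def gauss_solve(A, B):
    """Solve Ax = B by Gauss elimination with partial pivoting.
    A: n x n list of lists, B: length-n list. Returns x as a list."""
    n = len(B)
    # Build the augmented matrix [A | B] (work on a copy)
    aug = [list(map(float, row)) + [float(b)] for row, b in zip(A, B)]
    for p in range(n):
        # Partial pivoting: pick the largest |entry| in column p, rows p..n-1
        r = max(range(p, n), key=lambda k: abs(aug[k][p]))
        if aug[r][p] == 0.0:
            raise ValueError("matrix is singular")
        aug[p], aug[r] = aug[r], aug[p]       # interchange transformation
        # Eliminate column p from the rows below (replacement transformation)
        for k in range(p + 1, n):
            m = aug[k][p] / aug[p][p]
            for i in range(p, n + 1):
                aug[k][i] -= m * aug[p][i]
    # Back substitution
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(aug[k][i] * x[i] for i in range(k + 1, n))
        x[k] = (aug[k][n] - s) / aug[k][k]
    return x

# The example system: x + 3y + 4z = 19, 8x + 9y + 3z = 35, x + y + z = 6
sol = gauss_solve([[1, 3, 4], [8, 9, 3], [1, 1, 1]], [19, 35, 6])
print(sol)   # approximately [1.0, 2.0, 3.0]
```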
LU Factorization
It is useful when you want to solve the same system of linear equations with different B vectors.
- Ax_1 = B_1
- Ax_2 = B_2
- ...
- Ax_k = B_k
A matrix has a triangular factorization if it can be expressed as the product of a lower triangular (L) and upper triangular (U) matrix.
A = LU
Assuming A can be factorized as A = LU:
Ax = B => (LU)x = B => L(Ux) = B. Let y = Ux.
Now we have 2 systems of linear equations: Ly = B and Ux = y.
Size A = (n,n)
Size x = (n,1)
Size B = (n,1)
Size L = (n,n)
Size U = (n,n)
Size y = (n,1)
Once you have an LU factorization of A, then you can solve Ly = B using forward substitution to find y. Once you have y, then you can solve Ux = y using backward substitution to find x.
Three steps for solving a system of linear equations, using LU Factorization:
- Factor A into 2 matrices LU
- Solve Ly = B using forward substitution
- Solve Ux = y using backward substitution
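The three steps above can be sketched in Python. This is a minimal illustration (Doolittle factorization without pivoting, assuming no zero pivots arise; the function names are mine):

```python
def lu_factor(A):
    """Doolittle factorization A = LU (no pivoting).
    L has 1's on its diagonal. Returns (L, U) as lists of lists."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):               # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):           # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def forward_sub(L, b):
    """Step 2: solve Ly = b for lower triangular L."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    return y

def backward_sub(U, y):
    """Step 3: solve Ux = y for upper triangular U."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Same system as the Gauss Elimination example
A = [[1, 3, 4], [8, 9, 3], [1, 1, 1]]
L, U = lu_factor(A)                               # step 1: factor once
x = backward_sub(U, forward_sub(L, [19, 35, 6]))  # steps 2 and 3, per B vector
print(x)   # approximately [1.0, 2.0, 3.0]
```

Once L and U are stored, steps 2 and 3 can be repeated cheaply for each new B vector, which is the point of the factorization.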


Complexity of Gauss Elimination and LU Factorization
- Factor A into LU -> O(n^3)
- Gauss Elimination (no backward substitution) -> O(n^3)
- Back Substitution -> O(n^2)
- Forward Substitution -> O(n^2)
If you want to solve Ax = B for a single B, then use Gaussian Elimination because it is faster than LU factoring (even though both are O(n^3), Gaussian Elimination has a smaller constant).
If you want to solve multiple systems Ax_i = B_i, i = 1...m, then use LU factorization. The cost will be O(n^3) + m*O(n^2) (one factorization plus a forward and backward substitution per B vector), versus m*O(n^3) if you use Gaussian elimination each time.
Iterative methods to solve systems of linear equations.
- They are going to be useful when you have many variables (Ex. 100,000 vars) and A is sparse (a large % of the coefficients in A are 0's).
- This situation often happens when solving partial differential equations numerically.
- Also arises when computing a web page ranking based on the ranking of pages that point to it.
Jacobi Iteration
Assume the system:
3x + y = 5
x + 3y = 7
We can write the equations as:
x = (5-y)/3
y = (7-x)/3
This suggests the following iterative method:
x_{k+1} = (5 - y_k)/3
y_{k+1} = (7 - x_k)/3
This iterative method to solve linear equations is the Jacobi Iteration method.
Gauss Seidel Iteration
We can speed up the convergence by using the most recent computed values instead of the one from the previous iteration. This is called the Gauss Seidel Iteration method.
Jacobi:       x_{k+1} = (5 - y_k)/3    y_{k+1} = (7 - x_k)/3
Gauss Seidel: x_{k+1} = (5 - y_k)/3    y_{k+1} = (7 - x_{k+1})/3
Gauss Seidel Example
x_0 = 0
y_0 = 0
x_1 = (5 - 0)/3 = 1.667
y_1 = (7 - 1.667)/3 = 1.778
x_2 = (5 - 1.778)/3 = 1.074
y_2 = (7 - 1.074)/3 = 1.975
x_3 = (5 - 1.975)/3 = 1.008
y_3 = (7 - 1.008)/3 = 1.997
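The two iterations can be compared side by side in Python on the same 2x2 system (a demonstrative sketch; the step functions are mine):

```python
def jacobi_step(x, y):
    # Both updates use the values from the previous iteration
    return (5 - y) / 3, (7 - x) / 3

def gauss_seidel_step(x, y):
    # y is updated with the freshly computed x
    x_new = (5 - y) / 3
    return x_new, (7 - x_new) / 3

xj = yj = 0.0   # Jacobi iterates
xg = yg = 0.0   # Gauss-Seidel iterates
for k in range(10):
    xj, yj = jacobi_step(xj, yj)
    xg, yg = gauss_seidel_step(xg, yg)

print(xj, yj)   # Jacobi: approaching the solution (1, 2)
print(xg, yg)   # Gauss-Seidel: much closer to (1, 2) after the same 10 steps
```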
Convergence of Gauss Seidel is faster than Jacobi (most of the time).
The Jacobi and Gauss Seidel methods may not converge in some cases.
Ex. Interchange the equations of the example:
- x + 3y = 7   (solve for x: x = 7 - 3y)
- 3x + y = 5   (solve for y: y = 5 - 3x)
then (Gauss Seidel)...
x_{k+1} = 7 - 3y_k
y_{k+1} = 5 - 3x_{k+1}
x_0 = 0
y_0 = 0
x_1 = 7 - 3(0) = 7
y_1 = 5 - 3(7) = -16
x_2 = 7 - 3(-16) = 55
y_2 = 5 - 3(55) = -160
....
does not converge... each iteration takes us farther from the solution
Strictly Diagonally Dominant Matrix
A matrix where, in every row, the absolute value of the diagonal element is larger than the sum of the absolute values of the other elements in that row.
Jacobi Iteration will converge if A is a strictly diagonally dominant matrix. This criterion applies to Gauss Seidel as well.
Example:
- 3x + y = 5 => x_k = (5 - y_{k-1})/3
- x + 3y = 7 => y_k = (7 - x_{k-1})/3

Before using an iterative method, rearrange rows and columns (when possible) to make the matrix strictly diagonally dominant and ensure convergence.
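The dominance condition is easy to test programmatically; here is a small Python sketch (the function name is mine) checked against the two orderings of the 2x2 example:

```python
def is_strictly_diagonally_dominant(A):
    """True if |A[i][i]| > sum of |A[i][j]| for j != i, in every row."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

# Original ordering (3x + y = 5, x + 3y = 7): the iterations converged
ok = is_strictly_diagonally_dominant([[3, 1], [1, 3]])
print(ok)    # True
# Interchanged ordering: the iterations diverged
bad = is_strictly_diagonally_dominant([[1, 3], [3, 1]])
print(bad)   # False
```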
This Web page was created by John A. Cotiguala
and was last updated on July 10, 2003