How to compute the determinant of a matrix?
det(A) = det([a11]) = a11 when N = 1.
When N >= 2, expanding along some row i:
det(A) = sum over j = 1..N of aij (-1)^(i+j) det(Aij),
where Aij is the submatrix obtained by removing the ith row and jth column.
Example:
A = | 6  3  4 |
    | 7  8  1 |
    | -1 2  3 |
Expanding along the first row:
det(A) = 6(-1)^(1+1) det| 8 1 | + 3(-1)^(1+2) det| 7  1 | + 4(-1)^(1+3) det| 7  8 |
                        | 2 3 |                  | -1 3 |                  | -1 2 |
= 6(-1)^2 [8(-1)^2(3) + 1(-1)^3(2)] + 3(-1)^3 [7(-1)^2(3) + 1(-1)^3(-1)] + 4(-1)^4 [7(-1)^2(2) + 8(-1)^3(-1)]
= 6(24 - 2) + (-3)(21 + 1) + 4(14 + 8) = 154
The number of operations to obtain a determinant this way is O(N!).
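As an illustration of that cost, a direct recursive implementation of the cofactor expansion in MATLAB might look like the following (a minimal sketch; the function name cofactor_det is ours, not part of these notes):

function d = cofactor_det(A)
% Determinant by cofactor expansion along the first row.
% Each call makes N recursive calls on (N-1)x(N-1) submatrices, hence O(N!) work.
N = size(A, 1);
if N == 1
    d = A(1,1);
    return
end
d = 0;
for j = 1:N
    % A1j: the submatrix obtained by removing row 1 and column j
    sub = A(2:N, [1:j-1, j+1:N]);
    d = d + A(1,j) * (-1)^(1+j) * cofactor_det(sub);
end
end

For the 3x3 example above, cofactor_det([6 3 4; 7 8 1; -1 2 3]) returns 154.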
Upper Triangular System: Ax = B where all elements of A below the main diagonal are 0.
Lower Triangular System: Ax = B where all elements of A above the main diagonal are 0.
Solving Upper Triangular Systems with Back Substitution
Back substitution:
Example:
3) 3x + 4y + z = 2
2) 3y + 2z = 1
1) 5z = 10
from 1): z = 10/5 = 2
substitute into 2): 3y + 2(2) = 1 => y = -1
substitute into 3): 3x + 4(-1) + 2 = 2 => x = 4/3
Solving an upper triangular system of linear equations is O(N^2).
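A back-substitution routine for a general upper triangular system Ux = B might look like the sketch below (back_sub is our name; it divides by the diagonal, so it assumes every diagonal element of U is non-zero):

function x = back_sub(U, B)
% Solve Ux = B where U is an NxN upper triangular matrix with non-zero diagonal.
N = size(U, 1);
x = zeros(N, 1);
for k = N:-1:1
    % Subtract the terms that use the already-computed unknowns, then divide
    x(k) = (B(k) - U(k, k+1:N) * x(k+1:N)) / U(k, k);
end
end

For the example above, back_sub([3 4 1; 0 3 2; 0 0 5], [2; 1; 10]) returns [4/3; -1; 2].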
We can convert an arbitrary system of linear equations to upper triangular form and then use back substitution. This is what Gaussian elimination does.
Three elementary transformations that do not affect the solution of a linear system of equations
Interchange:
The order of the equations can be changed without affecting the solution
Scaling:
A linear equation can be multiplied by a non-zero constant without affecting the solution
Example
2(x + y = 3) => 2x + 2y = 6
Replacements:
A linear equation can be replaced by the sum of itself and a non-zero multiple of another equation in the system
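On an augmented matrix (call it Aug) these three transformations are just row operations; for example (the row indices and constants here are arbitrary illustrations):

Aug([1 3], :) = Aug([3 1], :);           % Interchange: swap rows 1 and 3
Aug(2, :) = 2 * Aug(2, :);               % Scaling: multiply row 2 by a non-zero constant
Aug(3, :) = Aug(3, :) - 5 * Aug(1, :);   % Replacement: add a multiple of row 1 to row 3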
Gaussian Elimination:
Convert a system of linear equations to upper triangular form using the three elementary transformations, then solve with back substitution.
Example:
1x + 3y + 4z = 19
8x + 9y + 3z = 35
x + y + z = 6
augmented matrix (including B vector)
| 1 3 4 19 |
| 8 9 3 35 |
| 1 1 1 6 |
Choose a pivot row and a pivot element in that row.
Divide the entire row by the pivot element (here the pivot is 1, so the row is unchanged).
| 1 3 4 19 | <- Pivot Row
| 8 9 3 35 |
| 1 1 1 6 |
Apply transformations to make the matrix upper triangular
First row multiplied by -1 and added to the third row
First row multiplied by -8 and added to second row
| 1 3 4 19 |
| 0 -15 -29 -117 | <- New pivot row and pivot element
| 0 -2 -3 -13 |
Divide pivot row by pivot element
| 1 3 4 19 |
| 0 1 29/15 117/15 |
| 0 -2 -3 -13 |
Multiply the second row by 2 and add to the third row
| 1 3 4 19 |
| 0 1 29/15 117/15 |
| 0 0 13/15 39/15 | <- New pivot row and pivot element
Divide the pivot row by the pivot element
| 1 3 4 19 |
| 0 1 29/15 117/15 |
| 0 0 1 3 |
Back Substitution
z = 3
y = -87/15 + 117/15 = 2
x = 19 - 3(2) - 4(3) = 1
Implementation in MATLAB (file gauss.m)
function x = gauss(A,B)
% Input:  A is an NxN matrix
%         B is an Nx1 vector
% Output: x is an Nx1 vector with the solution of Ax = B
% Get the dimensions of A
N = size(A, 1);
% Initialize the x vector with 0s
x = zeros(N, 1);
% Construct the augmented matrix
Aug = [A B];
% Gaussian Elimination
% Pivot all rows
for p = 1:N
    % Get the pivot (to simplify, this does not handle the case where the
    % pivot is 0; there you would have to swap rows)
    piv = Aug(p, p);
    % Divide the pivot row by the pivot
    for k = p:N+1
        Aug(p, k) = Aug(p, k) / piv;
    end
    % Short notation:
    % Aug(p, p:N+1) = Aug(p, p:N+1) / piv;
    % Put 0s in the pivot column for all remaining rows
    for k = p+1:N
        m = Aug(k, p);
        for i = p:N+1
            Aug(k, i) = Aug(k, i) - m * Aug(p, i);
        end
        % Short notation:
        % Aug(k, p:N+1) = Aug(k, p:N+1) - m * Aug(p, p:N+1);
    end
end
% Back Substitution
for k = N:-1:1
    % Accumulate all elements remaining in that row times the known x values
    s = 0;
    for i = k+1:N
        s = s + Aug(k, i) * x(i, 1);
    end
    % The diagonal is 1 after elimination, so no division is needed here
    x(k, 1) = Aug(k, N+1) - s;
end
end
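For example, on the system solved by hand above:

A = [1 3 4; 8 9 3; 1 1 1];
B = [19; 35; 6];
x = gauss(A, B)    % returns [1; 2; 3]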
Some remarks about Gaussian Elimination
If the pivot is 0 then switch rows
To reduce arithmetic errors, choose as the pivot the element with the largest absolute value among the remaining elements in the pivot column (partial pivoting)
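As a sketch of how that pivot choice could be added at the top of the pivot loop in gauss (an illustrative addition, not part of the listing above):

% At the start of each iteration over p: find the row (from p to N) whose
% entry in column p has the largest absolute value, and swap it into place
[~, r] = max(abs(Aug(p:N, p)));
r = r + p - 1;                   % convert to an absolute row index
Aug([p r], :) = Aug([r p], :);   % swap the chosen row into the pivot position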
LU Factorization
It is useful when you want to solve several systems of linear equations that share the same coefficient matrix A but have different B vectors:
Ax1 = B1
Ax2 = B2
...
Axk = Bk
A matrix has a triangular factorization if it can be expressed as the product of a lower triangular matrix (L) and an upper triangular matrix (U): A = LU.
Assume A can be factored to A=LU
Ax = B
LUx = B
L(Ux) = B
Let Y = Ux
LY = B
Now we have 2 triangular systems of linear equations.
Once you have an LU factorization of A, you can solve LY = B using forward substitution.
Once you have Y, you can solve UX = Y using backwards substitution to find x.
Three steps for solving a system of linear equations using LU factoring
§ Factor A into 2 matrices L and U
§ Solve LY=B using forward substitution
§ Solve UX=Y using backwards substitution
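Forward substitution mirrors back substitution but runs from the first row down; a minimal sketch (forward_sub is our name; it assumes every diagonal element of L is non-zero):

function y = forward_sub(L, B)
% Solve LY = B where L is an NxN lower triangular matrix with non-zero diagonal.
N = size(L, 1);
y = zeros(N, 1);
for k = 1:N
    % Subtract the terms that use the already-computed values, then divide
    y(k) = (B(k) - L(k, 1:k-1) * y(1:k-1)) / L(k, k);
end
end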
Example
6x1 + x2 - 4x3 = -1
5x1 + 3x2 + 2x3 = 21
x1 - 4x2 + 3x3 = 10
A= | 6 1 -4 |
| 5 3 2 |
| 1 -4 3 |
A= | 1 0 0 | | 6 1 -4 |
| 0 1 0 | | 5 3 2 |
| 0 0 1 | | 1 -4 3 |
Multiply the first row by 1/6
| 1/6 0 0 | | 1 1/6 -4/6 |
| 0 1 0 | | 5 3 2 |
| 0 0 1 | | 1 -4 3 |
Multiply first row by -5 and add to second row
Multiply first row by -1 and add to third row
| 1/6 0 0 | | 1 1/6 -4/6 |
| -5/6 1 0 | | 0 13/6 32/6 |
| -1/6 0 1 | | 0 -25/6 22/6 |
Multiply the second row by 6/13
| 1/6 0 0 | | 1 1/6 -4/6 |
| -5/13 6/13 0 | | 0 1 32/13 |
| -1/6 0 1 | | 0 -25/6 22/6 |
Multiply the second row by 25/6 and add to the third row
| 1/6 0 0 | | 1 1/6 -4/6 |
| -5/13 6/13 0 | | 0 1 32/13 |
| -150/78 25/13 1 | | 0 0 827/78 |
Third row multiplied by 78/827
| 1/6 0 0 | | 1 1/6 -4/6 |
| -5/13 6/13 0 | | 0 1 32/13 |
| -150/822 150/822 78/822 | | 0 0 1 |
L = | 1/6 0 0 |
| -5/13 6/13 0 |
| -150/822 150/822 78/822 |
U = | 1 1/6 -4/6 |
| 0 1 32/13 |
| 0 0 827/78 |
§ Solve LY = B using forward substitution
y1 = -18
y2 = 61/2
y3 = 135/4
§ Solve UX = Y using backwards substitution
x1 = 2069/156
x2 = -1367/26
x3 = 135/4
Complexity of Gaussian Elimination and LU factorization:
§ Factor A into LU -> O(N^3)
§ Gaussian elimination (no backwards substitution) -> O(N^3)
§ Forward and backwards substitution -> O(N^2)
§ If you want to solve Ax=B for a single B, use Gaussian elimination because it is faster than LU factoring (even though both are O(N^3), Gaussian elimination has a smaller constant).
§ If you want to solve multiple systems Axi=Bi for i=1..M, use LU factoring (the cost will be O(N^3) + M * O(N^2), instead of M * O(N^3) for repeated Gaussian elimination).
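In MATLAB this pattern can be written with the built-in lu function; note that lu returns a permuted factorization PA = LU, so its factors will generally not match a hand computation. Here Bs is assumed to hold the M right-hand-side vectors as its columns:

[L, U, P] = lu(A);             % factor once: O(N^3)
M = size(Bs, 2);               % number of right-hand sides (columns of Bs)
X = zeros(size(A, 1), M);
for i = 1:M
    y = L \ (P * Bs(:, i));    % forward substitution: O(N^2)
    X(:, i) = U \ y;           % back substitution: O(N^2)
end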
Iterative Methods to solve Systems of Linear Equations:
§ These methods are useful when you have a lot of variables (e.g. 100000) and A is sparse (a large percentage of coefficients are 0s)
§ This situation happens when solving a partial differential equation numerically
Jacobi Method:
Assume system
3x+y=5
x+3y=7
We can write the equations as
x=(5-y)/3
y=(7-x)/3
This suggests the following iterative method
xk+1 = (5-yk)/3
yk+1 = (7-xk)/3
Assume x0 = 0, y0 = 0
x1 = (5-0)/3 = 1.667 y1 = (7-0)/3 = 2.333
x2 = (5-2.333)/3 = .889 y2 = (7-1.667)/3 = 1.77
x3 = (5-1.77)/3 = 1.074 y3 = (7-.889)/3 = 2.037
x4 = (5-2.037)/3 = .987 y4 = (7-1.074)/3 = 1.975
This continues, with xn converging to 1 and yn converging to 2.
Suggested convergence criterion: stop when abs(xk - xk+1) + abs(yk - yk+1) < epsilon.
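A sketch of the Jacobi iteration for this 2x2 example, using the suggested stopping criterion:

x = 0; y = 0;                 % initial guess x0 = 0, y0 = 0
epsilon = 1e-6;
delta = inf;
while delta >= epsilon
    xn = (5 - y) / 3;         % both updates use values from the previous iteration
    yn = (7 - x) / 3;
    delta = abs(x - xn) + abs(y - yn);
    x = xn;
    y = yn;
end
% x and y converge to 1 and 2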
Gauss Seidel Iteration:
We can speed up the convergence by using the most recent computed values instead of the ones from the previous iteration
3x+y=5
x+3y=7
Jacobi:        xk+1 = (5-yk)/3,  yk+1 = (7-xk)/3
Gauss Seidel:  xk+1 = (5-yk)/3,  yk+1 = (7-xk+1)/3
Example(Gauss Seidel):
x0 = 0 y0 = 0
x1 = (5-0)/3 = 1.667 y1 = (7-1.667)/3 = 1.778
x2 = (5-1.778)/3 = 1.074 y2 = (7-1.074)/3 = 1.975
x3 = (5-1.975)/3 = 1.008 y3 = (7-1.008)/3 = 1.99
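The same loop with the Gauss-Seidel update, where the newly computed x is used immediately:

x = 0; y = 0;
epsilon = 1e-6;
delta = inf;
while delta >= epsilon
    xn = (5 - y) / 3;
    yn = (7 - xn) / 3;        % uses the x value computed in this same iteration
    delta = abs(x - xn) + abs(y - yn);
    x = xn;
    y = yn;
end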
· The Jacobi and Gauss Seidel methods may not converge in some cases
o Such as if you rearranged the above equations to
xk+1 = 7-3yk
yk+1 = 5-3xk
o Strictly diagonally dominant matrix
A matrix for which abs(a(k,k)) > sum of abs(a(k,j)) over j ~= k, for k = 1, 2, ..., N.
In other words, it is a matrix where the absolute value of each diagonal element is larger than the sum of the absolute values of the other elements in the same row.
Jacobi iteration will converge if A is a strictly diagonally dominant matrix. This criterion can be used for Gauss-Seidel in most cases.
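A quick way to test this condition on a matrix A in MATLAB (is_sdd is just an illustrative name):

% True if A is strictly diagonally dominant:
% abs(a(k,k)) > sum of abs(a(k,j)) over j ~= k, for every row k
is_sdd = all(2 * abs(diag(A)) > sum(abs(A), 2));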