
Matrix Computations

David Gleich

Purdue University

Fall 2017

Course number CS-51500

Tuesday and Thursday, 10:30-11:45am

Location Forney B124


Readings and topics

Main reference

GvL
Gene H. Golub and Charles F. Van Loan, Matrix Computations, 4th edition, Johns Hopkins University Press, 2013

Other references

Demmel
James Demmel, Applied Numerical Linear Algebra, SIAM, 1997

Trefethen
Lloyd N. Trefethen and David Bau III, Numerical Linear Algebra, SIAM, 1997

Saad
Yousef Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, SIAM, 2003

Lecture 29

Quick notes on how SVD computations work via eigenvalue computations. Then we looked at how the Gauss-Seidel method is a "bug" in the Jacobi method. We'll later see how another seemingly unrelated method is equivalent to Gauss-Seidel.
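
The sweep below is a minimal illustration of that "bug" (my own sketch, not the course notebook; the diagonally dominant test matrix is made up): both methods use the same update formula, but Gauss-Seidel overwrites the iterate in place, so later equations immediately see the new values.

```julia
using LinearAlgebra

# One Jacobi sweep: every update uses only the *old* iterate x.
function jacobi_sweep(A, b, x)
    n = length(b)
    xnew = similar(x)
    for i in 1:n
        s = b[i]
        for j in 1:n
            j == i && continue
            s -= A[i, j] * x[j]
        end
        xnew[i] = s / A[i, i]
    end
    return xnew
end

# The Gauss-Seidel "bug": overwrite x in place, so later rows
# immediately see the freshly updated entries.
function gauss_seidel_sweep!(A, b, x)
    n = length(b)
    for i in 1:n
        s = b[i]
        for j in 1:n
            j == i && continue
            s -= A[i, j] * x[j]   # x[j] may already hold the new value
        end
        x[i] = s / A[i, i]
    end
    return x
end

# Quick comparison on a diagonally dominant test matrix.
n = 100
A = randn(n, n) / n + 4I
b = randn(n)
xj = zeros(n); xg = zeros(n)
for k in 1:50
    xj .= jacobi_sweep(A, b, xj)
    gauss_seidel_sweep!(A, b, xg)
end
@show norm(A*xj - b), norm(A*xg - b)
```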

Notes
Lecture 29
Julia
Iterative Methods (html) (jl) (ipynb)
SVD Methods (html) (jl) (ipynb)

Lecture 28

More on eigenvalue algorithms.

Notes
Lecture 28
Reading
Trefethen and Bau - Lecture 28
Julia
Tridiagonal Reduction (html) (jl) (ipynb)
Sparse Eigenvalues (html) (jl) (ipynb)

Lecture 27

We discussed the multigrid preconditioner and then started back into eigenvalue algorithms, covering three ideas: deflation, the block power method (subspace iteration), and full subspace iteration. The final thing we saw was the QR method for eigenvalues. In the next class, we'll explain how this is equivalent to subspace iteration and how it can be optimized. We focused on algorithms and intuition instead of proofs (which are in the book).
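
A rough sketch of two of those ideas (my own illustration with a made-up symmetric test matrix, not the course notebook): orthogonal subspace iteration and the unshifted QR iteration.

```julia
using LinearAlgebra

# Subspace (orthogonal) iteration: the power method on a block of p vectors,
# re-orthogonalized every step so the columns don't all collapse onto one eigenvector.
function subspace_iteration(A, p; iters=200)
    Q = Matrix(qr(randn(size(A, 1), p)).Q)
    for _ in 1:iters
        Q = Matrix(qr(A * Q).Q)
    end
    return Q, Q' * A * Q            # eigenvalue estimates live in the small p-by-p block
end

# Unshifted QR iteration: factor, multiply back in reverse order, repeat.
# Every iterate is similar to A, and the iterates approach triangular form.
function qr_iteration(A; iters=500)
    Ak = copy(A)
    for _ in 1:iters
        F = qr(Ak)
        Ak = F.R * F.Q
    end
    return Ak
end

A = Symmetric(randn(8, 8))
Q, H = subspace_iteration(A, 3)
@show sort(eigvals(Symmetric(H)), by=abs, rev=true)
@show sort(eigvals(A), by=abs, rev=true)[1:3]              # the 3 largest-magnitude eigenvalues
@show sort(diag(qr_iteration(Matrix(A))), by=abs, rev=true)[1:3]
```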

Notes
Eigenvalue Algorithms (not complete)
Lecture 27
Reading
Trefethen and Bau - Lecture 28
Julia
Eigenvalue Algorithms (html) (jl) (ipynb)

Lecture 26

We discussed the types and examples of preconditioners.
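
As a tiny illustration of the simplest example, here is a Jacobi (diagonal) preconditioner applied to a badly scaled SPD matrix (my own sketch; the scaling and matrix are made up):

```julia
using LinearAlgebra

# Jacobi (diagonal) preconditioning: M = diag(A). Symmetrically preconditioning
# a badly scaled SPD matrix as M^{-1/2} A M^{-1/2} can drop the condition number
# by many orders of magnitude, which is what a Krylov method's convergence feels.
n = 200
S = randn(n, n) / (4 * sqrt(n))
B = I + S * S'                                      # well-conditioned SPD core
D = Diagonal(10.0 .^ range(-3, stop=3, length=n))   # bad row/column scaling
A = D * B * D                                       # badly scaled SPD matrix

Minvhalf = Diagonal(1 ./ sqrt.(diag(A)))            # M^{-1/2} for the Jacobi preconditioner
Ahat = Minvhalf * A * Minvhalf

@show cond(A)
@show cond(Ahat)
```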

Notes
Preconditioning A discussion on types and examples of preconditioners.
Reading
Saad Section 10.3
GvL, 11.5
Tref Lecture 40

Lecture 25

Orthogonal polynomials and CG

Notes
Orthogonal Polynomials A discussion on orthogonal polynomials and how to derive CG via them!
Lecture 24
Reading
Saad Section 6.7.2
GvL, 10.3.6
[A General Framework for Recursions for Krylov Space Solvers](ftp://ftp.sam.math.ethz.ch/pub/sam-reports/reports/reports2005/2005-09.pdf), M. Gutknecht.
From orthogonal polynomials to iteration schemes for linear systems: CG and CR revisited, B. Fisher
Orthogonal polynomials in Matlab, W. Gautschi.
Orthogonal polynomials, G. Meurant.
Julia
Orthogonal Polynomials (html) (jl) (ipynb)

Lecture 24

A lecture on efficient implementations of GMRES. We also mentioned the FOM (Full Orthogonalization Method), which is the analog of CG for non-symmetric systems.
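
For contrast with the efficient implementation discussed in lecture, here is a deliberately naive GMRES sketch (my own illustration, with made-up names and test problem): it solves the small Hessenberg least-squares problem once at the end rather than updating it with Givens rotations.

```julia
using LinearAlgebra

# Naive GMRES: run k Arnoldi steps, then solve the (k+1)-by-k least-squares
# problem  min ||β e1 - H y||  once, and form x = V y.
function gmres_naive(A, b, k)
    n = length(b)
    V = zeros(n, k+1)
    H = zeros(k+1, k)
    β = norm(b)
    V[:, 1] = b / β
    for j in 1:k
        w = A * V[:, j]
        for i in 1:j                         # orthogonalize against the previous basis
            H[i, j] = V[:, i]' * w
            w -= H[i, j] * V[:, i]
        end
        H[j+1, j] = norm(w)
        V[:, j+1] = w / H[j+1, j]
    end
    e1 = zeros(k+1); e1[1] = β
    y = H \ e1                               # small dense least-squares solve
    return V[:, 1:k] * y
end

A = I + 0.1 * randn(200, 200)
b = randn(200)
x = gmres_naive(A, b, 30)
@show norm(A*x - b) / norm(b)
```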

Reading
Efficient GMRES Notes
Templates for the solution of linear systems
Saad Section 6.4
Saad Section 6.5
Tref Lecture 35
Notes
Lecture 24
Slides
Lecture 24
Julia
Efficient GMRES (html) (jl) (ipynb)

Lecture 23

We continued working on the CG method and derived an optimized CG method from the Lanczos perspective. We then saw variations of the Krylov subspace methods for when we are given a vector that is very close to the solution.
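
For reference, the short-recurrence form of CG that the optimization arrives at looks roughly like this (a sketch with a made-up SPD test matrix, not the course's derivation):

```julia
using LinearAlgebra

# The standard conjugate gradient recurrence for SPD A.
function cg_simple(A, b; iters=length(b))
    x = zeros(length(b))
    r = copy(b)                    # residual b - A*x for x = 0
    p = copy(r)                    # search direction
    ρ = r' * r
    for _ in 1:iters
        q = A * p
        α = ρ / (p' * q)           # step length minimizing the A-norm error along p
        x += α * p
        r -= α * q
        ρnew = r' * r
        p = r + (ρnew / ρ) * p     # new direction, A-conjugate to the previous ones
        ρ = ρnew
    end
    return x
end

B = randn(300, 50)
A = B * B' + 0.5I                  # SPD test matrix
b = randn(300)
x = cg_simple(A, b)
@show norm(A*x - b) / norm(b)
```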

Reading
Golub and van Loan, section 10.3

Lecture 22

We finished up GMRES and derived the Lanczos algorithm, then we started with CG which is another Krylov subspace method. We started with deriving the procedure from the Lanczos perspective and then moved to optimizing the algorithm.
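
A minimal sketch of the Lanczos three-term recurrence (an illustration with a made-up test matrix, not the course notebook): for symmetric A, the Arnoldi orthogonalization collapses to two terms and the small matrix becomes tridiagonal.

```julia
using LinearAlgebra

# Lanczos: A * V[:,1:k] = V[:,1:k] * T + β_k * v_{k+1} * e_k'
function lanczos(A, b, k)
    n = length(b)
    V = zeros(n, k+1)
    α = zeros(k); β = zeros(k)
    V[:, 1] = b / norm(b)
    for j in 1:k
        w = A * V[:, j]
        α[j] = V[:, j]' * w
        w -= α[j] * V[:, j]
        j > 1 && (w -= β[j-1] * V[:, j-1])   # only two orthogonalizations are needed
        β[j] = norm(w)
        V[:, j+1] = w / β[j]
    end
    return V, SymTridiagonal(α, β[1:k-1]), β[k]
end

A = Symmetric(randn(100, 100))
b = randn(100)
V, T, βk = lanczos(A, b, 10)
ek = [zeros(9); 1.0]
@show norm(A * V[:, 1:10] - V[:, 1:10] * T - βk * V[:, 11] * ek')
```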

Reading
Conjugate Gradients
Julia
GMRES (html) (jl) (ipynb)

Lecture 21

We finished the introduction of Arnoldi and Lanczos methods to produce a basis for solving linear systems. We then introduced the Krylov subspace and showed the relationship between the Q factor in the Arnoldi decomposition and the Krylov basis. We also introduced GMRES which is our first iterative method that we will explore as part of this family of iterative solvers.
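
A compact sketch of the Arnoldi process and the relation it satisfies (my own illustration with a random test matrix, not the course notebook):

```julia
using LinearAlgebra

# Arnoldi builds an orthonormal basis V of the Krylov subspace
# span{b, A*b, ..., A^(k-1)*b} and a small upper-Hessenberg H with
# A * V[:,1:k] = V * H.
function arnoldi(A, b, k)
    n = length(b)
    V = zeros(n, k+1)
    H = zeros(k+1, k)
    V[:, 1] = b / norm(b)
    for j in 1:k
        w = A * V[:, j]
        for i in 1:j
            H[i, j] = V[:, i]' * w
            w -= H[i, j] * V[:, i]   # remove components along the earlier basis vectors
        end
        H[j+1, j] = norm(w)
        V[:, j+1] = w / H[j+1, j]
    end
    return V, H
end

A = randn(100, 100); b = randn(100)
V, H = arnoldi(A, b, 10)
@show norm(V' * V - I)               # the basis is orthonormal
@show norm(A * V[:, 1:10] - V * H)   # the Arnoldi relation holds
```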

Reading
Saad Section 6.5
Trefethen and Bau, Lecture 35

Lecture 20

We finished the discussion about Property A and introduced the generalized eigenvalue problem. We saw a demo that compares Jacobi to Gauss-Seidel. We then introduced the Arnoldi process.

Reading
Saad Section 6.3
Trefethen and Bau, Lecture 33
Julia
Gauss Seidel demo (html) (jl) (ipynb)
Jacobi (html) (jl) (ipynb)
SOR (html) (jl) (ipynb)

Lecture 19

We continued discussing PageRank and saw specifically when the method converges. We then recapped the stationary methods we have seen so far and introduced a couple more: the Richardson splitting and the SOR method. We then saw specific cases when these methods converge and introduced Property A.
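
A small sketch of those two updates (my own illustration; the diagonally dominant test matrix is made up): a Richardson step and one SOR sweep, where ω = 1 recovers Gauss-Seidel.

```julia
using LinearAlgebra

# Richardson iteration: x ← x + α (b - A x), i.e. the splitting M = (1/α) I.
richardson_step(A, b, x, α) = x + α * (b - A * x)

# One SOR sweep with relaxation parameter ω (ω = 1 is Gauss-Seidel).
function sor_sweep!(A, b, x, ω)
    n = length(b)
    for i in 1:n
        s = b[i]
        for j in 1:n
            j == i && continue
            s -= A[i, j] * x[j]
        end
        x[i] = (1 - ω) * x[i] + ω * s / A[i, i]   # blend the old value with the GS update
    end
    return x
end

n = 100
A = randn(n, n) / n + 4I
b = randn(n)
x = zeros(n)
for k in 1:100
    sor_sweep!(A, b, x, 1.2)
end
@show norm(A*x - b)
```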

Reading
Golub and van Loan, section 11.2
Golub and van Loan, section 11.3.1, 11.3.2
Golub and van Loan, theorem 7.2.1

Lecture 18

We continued our discussion of eigenvalues. We then introduced the QR iteration and the power method for finding eigenvalues, and discussed the convergence of the power method. Then, we introduced PageRank and saw how it is associated with a linear system.
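
Two small sketches of those ideas (my own illustrations; the matrices, α, and v below are made up): the power method with a Rayleigh-quotient eigenvalue estimate, and PageRank written as a linear system.

```julia
using LinearAlgebra

# Power method: repeated multiply-and-normalize converges to the dominant
# eigenvector when |λ1| > |λ2|.
function power_method(A; iters=1000)
    x = normalize(randn(size(A, 1)))
    λ = 0.0
    for _ in 1:iters
        y = A * x
        λ = x' * y                    # Rayleigh quotient estimate of the eigenvalue
        x = normalize(y)
    end
    return λ, x
end

A = Symmetric(randn(50, 50))
λ, _ = power_method(A)
@show abs(λ), maximum(abs, eigvals(A))   # these should agree

# PageRank as a linear system: (I - α P') x = (1 - α) v for a row-stochastic P.
P = rand(50, 50); P ./= sum(P, dims=2)
α, v = 0.85, fill(1/50, 50)
x = (I - α * P') \ ((1 - α) * v)
@show sum(x)                             # the PageRank vector sums to 1
```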

Reading
Golub and van Loan, section 7.3
Saad Chapter 4
Julia
Eigenvalues (html) (jl) (ipynb)

Lecture 17

We saw a convergence analysis of the stationary methods and how it relates to the spectral radius of a matrix, which led us to introduce eigenvalues and eigenvectors. We then defined eigenvalues and eigenvectors more formally via the roots of the characteristic polynomial. We studied types of eigenvalues and eigenvectors and went over some related facts about them.
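
A tiny demonstration of that connection (my own example matrix): the Jacobi iteration matrix for a small tridiagonal system has spectral radius below one, so the iteration converges.

```julia
using LinearAlgebra

# A stationary method x ← T x + c converges from any starting vector
# exactly when the spectral radius ρ(T) < 1; the error shrinks like ρ(T)^k.
A = [4.0 1 0; 1 4 1; 0 1 4]
b = [1.0, 2, 3]
D = Diagonal(diag(A))
T = I - inv(D) * A                # Jacobi iteration matrix
c = inv(D) * b
@show maximum(abs, eigvals(T))    # ρ(T) < 1, so Jacobi converges here

x = zeros(3)
for k in 1:50
    x .= T * x + c
end
@show norm(x - A \ b)
```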

Reading
Golub and van Loan, section 7.1,7.3
Eigenvalues
(http://math.stackexchange.com/questions/82467/eigenvectors-of-real-symmetric-matrices-are-orthogonal)

Lecture 16

We started the class by going over some of the exam questions on the midterm. Then, we discussed how to assess the quality of a solution to a linear system using the idea of a residual. We also introduced the Jacobi and Gauss-Seidel iterative methods to solve a linear system. Towards the end of class, we started deriving the convergence of the Jacobi method.

Reading
Golub and van Loan, section 10.1
Saad Chapter 5

Lecture 15

We started the second unit on sparse matrices and iterative methods. We covered an introduction to what iterative methods are in contrast to direct methods, what sparse matrices are and how we represent them, and how to do sparse matrix-vector products. At the end of the class, we went over the midterm questions.
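
As a concrete sketch of the last point (my own code, using the accessor functions that Julia's SparseMatrixCSC provides), here is a column-oriented sparse matrix-vector product:

```julia
using SparseArrays, LinearAlgebra

# y = A*x where A is stored in compressed sparse column (CSC) form:
# we walk the stored entries column by column and scatter into y.
function csc_matvec(A::SparseMatrixCSC, x::AbstractVector)
    y = zeros(promote_type(eltype(A), eltype(x)), size(A, 1))
    rows, vals = rowvals(A), nonzeros(A)
    for j in 1:size(A, 2)
        for k in nzrange(A, j)       # indices of the stored entries in column j
            y[rows[k]] += vals[k] * x[j]
        end
    end
    return y
end

A = sprandn(1000, 1000, 0.01)
x = randn(1000)
@show norm(csc_matvec(A, x) - A*x)   # matches the built-in product
```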

Reading
Direct methods for sparse linear systems (http://epubs.siam.org.ezproxy.lib.purdue.edu/doi/pdf/10.1137/1.9780898718881.ch2)
-- this is probably the best introduction to sparse matrix operations
Saad Chapter 3
Julia
Sparse Matrices (html) (jl) (ipynb)

Lecture 14

We had the midterm!

Lecture 13

Midterm review

Notes
Lecture 13
Slides
Lecture 13
Julia
Variance computations (html) (jl) (ipynb)

Lecture 12

We saw the Gram-Schmidt process for doing a QR factorization, and discussed why it is not the recommended procedure. We then briefly saw the geometry of the least-squares problem, before tackling the rank-deficient problem.
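
The contrast below sketches why (my own example; the ill-conditioned test matrix is made up): classical Gram-Schmidt projects the original column against all earlier vectors at once, while modified Gram-Schmidt updates a running vector, and the difference shows up as loss of orthogonality.

```julia
using LinearAlgebra

# Classical Gram-Schmidt: project the *original* column against all earlier Q columns.
function classical_gs(A)
    m, n = size(A)
    Q, R = zeros(m, n), zeros(n, n)
    for j in 1:n
        v = A[:, j]
        for i in 1:j-1
            R[i, j] = Q[:, i]' * A[:, j]
            v -= R[i, j] * Q[:, i]
        end
        R[j, j] = norm(v)
        Q[:, j] = v / R[j, j]
    end
    return Q, R
end

# Modified Gram-Schmidt: project the *current* running vector v instead.
function modified_gs(A)
    m, n = size(A)
    Q, R = zeros(m, n), zeros(n, n)
    for j in 1:n
        v = A[:, j]
        for i in 1:j-1
            R[i, j] = Q[:, i]' * v
            v -= R[i, j] * Q[:, i]
        end
        R[j, j] = norm(v)
        Q[:, j] = v / R[j, j]
    end
    return Q, R
end

A = randn(100, 10) * Diagonal(10.0 .^ (0:-1:-9))   # ill-conditioned columns
Qc, _ = classical_gs(A)
Qm, _ = modified_gs(A)
@show norm(Qc' * Qc - I), norm(Qm' * Qm - I)        # loss of orthogonality
```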

Readings
Trefethen and Bau, Lecture 8
Trefethen and Bau, Lecture 11
Golub and van Loan, section 5.5
Notes
Lecture 12

Lecture 11

We discussed the full-rank least-squares problem today in terms of the QR factorization and the normal equations, and why you should use the QR factorization over the normal equations. Then we saw how to compute a QR factorization with Givens rotations, as well as how to use these on structured matrices.
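
A bare-bones sketch of the Givens approach (my own code; the 2-by-2 rotations are formed explicitly rather than with a library routine): each rotation mixes two rows to zero one subdiagonal entry, which is why it pays off on structured matrices.

```julia
using LinearAlgebra

# QR by Givens rotations: zero the subdiagonal entries one at a time.
function givens_qr(A)
    m, n = size(A)
    R = float.(A)
    Q = Matrix{Float64}(I, m, m)
    for j in 1:n, i in m:-1:j+1
        a, b = R[i-1, j], R[i, j]
        r = hypot(a, b)
        r == 0 && continue
        c, s = a/r, b/r
        G = [c s; -s c]                    # rotation that zeros R[i, j]
        R[i-1:i, :] = G * R[i-1:i, :]
        Q[:, i-1:i] = Q[:, i-1:i] * G'     # accumulate Q = G1' G2' ...
    end
    return Q, R
end

A = randn(6, 4)
Q, R = givens_qr(A)
@show norm(Q*R - A), norm(tril(R, -1)), norm(Q' * Q - I)
```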

Readings
Golub and van Loan, section 5.1.8, 5.2.5, 5.3
Trefethen and Bau, Lecture 11
Notes
Lecture 11

Lecture 10

We moved into a discussion of numerical accuracy and stability. The first piece of our discussion was about the conditioning of a problem. A problem's conditioning tells us how sensitive the problem is to changes in the input; we wouldn't expect to accurately solve a highly sensitive problem on a computer. Each problem has a condition number, and one particular quantity arises so frequently that we call it the condition number of a matrix.

We first established what the condition number of a matrix is. Then we discussed the difference between forward and backward error analysis for a problem. These were both based on notes that aren't in the book, or largely come from non-assigned sources, so I've provided lecture notes below.
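
A quick numerical illustration of the conditioning idea (my own made-up example): a tiny relative change in b can change the computed solution by up to roughly κ(A) times as much.

```julia
using LinearAlgebra

# The condition number κ(A) bounds how much a relative perturbation of the
# data can be amplified in the solution of A x = b.
n = 50
A = randn(n, n) * Diagonal(10.0 .^ range(0, stop=-8, length=n)) * randn(n, n)
b = randn(n)
x = A \ b

Δb = 1e-10 * randn(n)                # tiny perturbation of the right-hand side
x̂ = A \ (b + Δb)

@show cond(A)
@show norm(Δb) / norm(b)             # relative change in the input
@show norm(x̂ - x) / norm(x)          # relative change in the output (up to κ(A) larger)
```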

Reading
Tref Lecture 13, 14, 17
Golub and van Loan, sections 2.7, 3.1, 3.3.2
Numerical stability
Condition number
Floating-point
Stability
Error analysis of LU
Class workshop for error analysis of LU
Notes
Lecture 10

Lecture 9

We briefly derived the Cholesky factorization. We did a detailed flop count for LU and a quick estimate to show why Cholesky takes half the flops of LU; and we saw how the LINPACK benchmark works.

In our Julia demos, we saw two reasons why linear systems give bad answers on the computer.
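
For reference, here is an unblocked Cholesky sketch (my own code with a made-up SPD test matrix); counting the work column by column is one way to see the factor-of-two saving over LU.

```julia
using LinearAlgebra

# Cholesky factorization A = L*L' for a symmetric positive definite A.
function cholesky_lower(A)
    n = size(A, 1)
    L = zeros(n, n)
    for j in 1:n
        L[j, j] = sqrt(A[j, j] - dot(L[j, 1:j-1], L[j, 1:j-1]))
        for i in j+1:n
            L[i, j] = (A[i, j] - dot(L[i, 1:j-1], L[j, 1:j-1])) / L[j, j]
        end
    end
    return L
end

B = randn(8, 8)
A = B * B' + 8I                      # SPD test matrix
L = cholesky_lower(A)
@show norm(L * L' - A)
```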

Readings
GvL 4.2
GvL 2.6, 2.7
Tref Lecture 12
Positive definite matrix
Notes
Lecture 9
Julia
Stability (html) (jl) (ipynb)

Lecture 8

We studied the LU factorization again, in detail this time. Then we saw some of the Julia code to compute the LU factorization without pivoting. To fix the stability issues we encountered with LU without pivoting, we saw how to reorder the rows of the matrix to ensure that the entries of the L matrix never get too large. This involved a quick aside on permutation matrices.
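
A small sketch of that idea (my own code, not the course notebook): partial pivoting swaps up the largest remaining entry in each column, so every multiplier stored in L has magnitude at most one.

```julia
using LinearAlgebra

# LU with partial pivoting: returns L, U, and the row permutation p
# with L*U = A[p, :].
function lu_partial_pivot(A)
    n = size(A, 1)
    U = float.(A)
    L = Matrix{Float64}(I, n, n)
    p = collect(1:n)
    for k in 1:n-1
        i = k - 1 + argmax(abs.(U[k:n, k]))   # row with the largest pivot candidate
        if i != k                             # swap rows of U, the built part of L, and p
            U[[k, i], :] = U[[i, k], :]
            L[[k, i], 1:k-1] = L[[i, k], 1:k-1]
            p[[k, i]] = p[[i, k]]
        end
        for r in k+1:n
            L[r, k] = U[r, k] / U[k, k]       # multiplier; |L[r, k]| ≤ 1 by the pivot choice
            U[r, :] -= L[r, k] * U[k, :]
        end
    end
    return L, U, p
end

A = randn(6, 6)
L, U, p = lu_partial_pivot(A)
@show norm(L*U - A[p, :]), maximum(abs, L)
```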

Readings
GvL 3.2, 3.4
GvL 4.2
Tref Lecture 20, 21
Notes
Lecture 8
Julia
LU Decompositions (html) (jl) (ipynb)
Wikipedia
LU Decomposition
Permutation matrix
How ordinary elimination became Gaussian elimination (also doi:10.1016/j.hm.2010.06.003)

Lecture 7

We worked through the derivation of the QR decomposition of a matrix. This involved Householder reflectors. We showed how to use them iteratively to compute the QR decomposition of an entire matrix.
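
A compact sketch of that process (my own code with a random test matrix): each Householder reflector H = I - 2vv' zeros a column below the diagonal, and applying the reflectors in sequence produces R while accumulating Q.

```julia
using LinearAlgebra

# QR via Householder reflectors.
function householder_qr(A)
    m, n = size(A)
    R = float.(A)
    Q = Matrix{Float64}(I, m, m)
    for k in 1:n
        x = R[k:m, k]
        v = copy(x)
        v[1] += sign(x[1]) * norm(x)               # choose the sign that avoids cancellation
        v /= norm(v)
        R[k:m, k:n] -= 2v * (v' * R[k:m, k:n])     # apply H = I - 2vv' on the left
        Q[:, k:m]   -= (Q[:, k:m] * v) * 2v'       # accumulate Q = H1 H2 ... Hn
    end
    return Q, R
end

A = randn(8, 5)
Q, R = householder_qr(A)
@show norm(Q*R - A), norm(Q' * Q - I), norm(tril(R, -1))
```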

Readings
GvL 3.2
GvL 5.1, 5.2
Tref Lecture 10
Julia
Householder Reflectors (html) (jl) (ipynb)
QR Algorithms (html) (jl) (ipynb)
Notes
Lecture 7

Lecture 6

We completed the proof that the SVD exists and started into solving simple systems of linear equations. We are trying to build to the QR and LU factorizations at this point.
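
Two of the "simple" solves in question, sketched in Julia (my own code): a diagonal solve and back-substitution for an upper-triangular system.

```julia
using LinearAlgebra

# Diagonal systems: just divide componentwise.
diag_solve(d, b) = b ./ d

# Upper-triangular systems: back-substitution from the last row upward.
function backsub(U, b)
    n = length(b)
    x = zeros(n)
    for i in n:-1:1
        x[i] = (b[i] - dot(U[i, i+1:n], x[i+1:n])) / U[i, i]
    end
    return x
end

U = triu(randn(5, 5)) + 5I
b = randn(5)
@show norm(U * backsub(U, b) - b)
@show norm(Diagonal(diag(U)) * diag_solve(diag(U), b) - b)
```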

Readings
GvL 5.1, 5.2
Tref Lecture 10
Julia
Functions and diagonal solves (html) (jl) (ipynb)
rank function in julia
rank.m The octave source code for the rank.m function that uses the SVD
Notes
Lecture 6

Lecture 5

We started with a demo of how to find similar images using cosine distance.
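
The essence of that demo, sketched with made-up data (not the actual course notebook):

```julia
using LinearAlgebra

# Cosine similarity: the cosine of the angle between two data vectors.
cosine_similarity(x, y) = dot(x, y) / (norm(x) * norm(y))

X = rand(256, 20)                      # 20 made-up "images", one per column
q = X[:, 1] + 0.05 * randn(256)        # a noisy query close to image 1
scores = [cosine_similarity(q, X[:, j]) for j in 1:20]
@show argmax(scores)                   # the most similar column should be 1
```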

We then gave a quick overview of orthogonal matrices and their relationships with vector norms. (There is a homework question about their relationship with matrix norms.) The bulk of the class was spent proving that the SVD of a matrix exists.

Readings
GvL 2.4, Tref Lecture 4.
Julia
Cosine Similarity (html) (jl) (ipynb)
Notes
Lecture 5

Lecture 4

Today, we saw vector norms and matrix norms, including induced p-norms and the Frobenius norm.
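
A small numerical companion to the definitions (my own sketch): the induced p-norm maximizes ‖Ax‖_p over ‖x‖_p = 1, so random unit vectors give a lower bound, while p = 1, 2, and ∞ have exact formulas.

```julia
using LinearAlgebra

# Crude lower bound on the induced p-norm by random search over unit vectors.
induced_norm_estimate(A, p; trials=10_000) =
    maximum(norm(A * normalize(randn(size(A, 2)), p), p) for _ in 1:trials)

A = randn(4, 4)
@show induced_norm_estimate(A, 2), opnorm(A, 2)      # estimate ≤ the exact 2-norm
@show opnorm(A, 1), maximum(sum(abs, A, dims=1))     # 1-norm = largest absolute column sum
@show opnorm(A, Inf), maximum(sum(abs, A, dims=2))   # ∞-norm = largest absolute row sum
@show norm(A), sqrt(sum(abs2, A))                    # Frobenius norm
```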

Readings
GvL Section 2.3, 2.4
Tref Lecture 3, 4
Notes on how to prove that norms are equivalent by Steven Johnson
Julia
Induced Matrix Norms (html) (jl) (ipynb)
Notes
Lecture 4

Lecture 3

We did a lot of work introducing Julia and LaTeX.

We began with a quick Julia demo of some fun geometric transformations you can do with matrix-vector and matrix-matrix products.
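
A toy version of that idea (made-up points, not the course notebook):

```julia
using LinearAlgebra

# Matrix-vector products as geometric transformations of 2-D points.
θ = π/6
R = [cos(θ) -sin(θ); sin(θ) cos(θ)]   # rotate by 30 degrees
S = [1.5 0.0; 0.0 0.5]                # stretch x, shrink y
points = randn(2, 100)                # 100 made-up points, one per column
transformed = S * (R * points)        # transform every point at once
@show norm(S * (R * points) - (S * R) * points)   # composing maps = a matrix-matrix product
```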

We then discussed invertible matrices and worked out that you need a full-rank matrix in order to have an inverse, and then discussed the case of square, full-rank matrices to find the general matrix inverse.

Readings
Inverses
Vector norms
Section 2.1.3 GvL
Section 2.2 GvL
Lecture 1, Trefethen and Bau
Julia
Julia Intro (html) (jl) (ipynb)
Geometry modification (html) (jl) (ipynb)
Notes
Lecture 3
Rank and linear systems

Lecture 2

Today, we saw the basic matrix notation and operations we'll use in class.

You should also see the notes below. We also saw a bunch of matrix structures that will arise: diagonal, triangular, symmetric, and orthogonal. We reviewed the relationship between a matrix and a vector space. Then we saw how the rank of a matrix reveals the dimensions of various spaces, what rank has to do with the inverse, and finally, we saw an introduction to the decomposition view of matrix computations.

Readings
Section 1.1--1.3 in GvL
Course notation (pdf)
Matrix structure (pdf)
Matrices and linear algebra (pdf)
Section 2.1 in GvL
The decompositional approach to matrix computations by G.W. Stewart
Notes
Lecture 2

Lecture 1

We introduced the class and reviewed the syllabus.

Readings
Syllabus
Handouts
First week survey
Slides
Lecture 1
Notes
Lecture 1