
Matrix computations &
Numerical linear algebra

David Gleich

Purdue University

Fall 2012

Course number CS-51500

Tuesday and Thursday, 10:30-11:45am

Location Lawson 1106


Readings and topics

Main reference

GvL
Golub & Van Loan, Matrix Computations, 3rd edition, Johns Hopkins University Press, 1996

Other references

Tref
Trefethen & Bau, Numerical Linear Algebra, SIAM 1997
Saad
Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, SIAM 2003

Lecture 30

Class wrap-up!

Slides
Final topic list

Lecture 29

Functions of matrices, generalized eigenvalues, and back to optimal SOR. One misunderstood point was the eigenvalues of block diagonal matrices: they are simply the union of the eigenvalues of the diagonal blocks.
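
A quick Matlab check of that point (an illustrative sketch, not one of the posted course files); the blocks A and B are arbitrary symmetric examples:

    % Eigenvalues of a block diagonal matrix are the union of the
    % eigenvalues of its diagonal blocks.
    A = randn(3);  A = A + A';            % arbitrary symmetric block
    B = randn(4);  B = B + B';            % (symmetric so the eigenvalues are real)
    C = blkdiag(A, B);                    % C = [A 0; 0 B]
    ev_blocks = sort([eig(A); eig(B)]);   % eigenvalues block by block
    ev_C = sort(eig(C));                  % eigenvalues of the full matrix
    norm(ev_blocks - ev_C)                % ~ 1e-15, i.e., the same spectrum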

Readings
GvL Chapter 11
GvL Section 8.7
Saad 4.2.4
Cheat sheet solution to worksheet
http://galton.uchicago.edu/~lekheng/courses/302/notes18.pdf

Lecture 28

Optimal omega for SOR
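
For reference, a minimal sketch (not the posted worksheet) of the classical result for consistently ordered matrices: if mu is the spectral radius of the Jacobi iteration matrix, the optimal relaxation parameter is omega* = 2/(1 + sqrt(1 - mu^2)). The 1-D Laplacian below is an assumed test case, not necessarily the one from class.

    % Optimal SOR parameter for the 1-D Laplacian, a consistently ordered matrix.
    n = 50;
    A = full(gallery('tridiag', n, -1, 2, -1));
    D = diag(diag(A));  E = -tril(A,-1);  F = -triu(A,1);   % splitting A = D - E - F

    mu = max(abs(eig(eye(n) - D\A)));     % spectral radius of the Jacobi iteration matrix
    omega_opt = 2/(1 + sqrt(1 - mu^2));   % Young's formula for the optimal omega

    % Compare the SOR spectral radius at a few omegas against the optimum.
    for omega = [1.0, 1.5, omega_opt, 1.95]
        T = (D - omega*E) \ ((1-omega)*D + omega*F);    % SOR iteration matrix
        fprintf('omega = %6.4f   rho = %8.6f\n', omega, max(abs(eig(T))));
    end
    fprintf('predicted rho at the optimal omega = %8.6f\n', omega_opt - 1);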

Reading
Saad 4.2.4
Worksheets
Optimal SOR worksheet

Lecture 27

Kronecker products and eigenvalues of the Laplacian.
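
A short illustration (not a course file) of the key identity: if A2 = kron(I,A1) + kron(A1,I), the eigenvalues of A2 are all pairwise sums of eigenvalues of A1, which is how the spectrum of the 2-D Laplacian follows from the 1-D one.

    % Eigenvalues of the 2-D Laplacian via Kronecker products.
    n = 8;
    A1 = full(gallery('tridiag', n, -1, 2, -1));     % 1-D Laplacian
    A2 = kron(eye(n), A1) + kron(A1, eye(n));        % 2-D Laplacian on an n-by-n grid

    lam1 = eig(A1);                                  % 1-D eigenvalues (real; A1 is symmetric)
    sums = lam1*ones(1,n) + ones(n,1)*lam1.';        % all pairwise sums lam_i + lam_j
    norm(sort(eig(A2)) - sort(sums(:)))              % ~ 1e-13: the spectra agree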

Readings
GvL 4.5.5
http://www.stat.uchicago.edu/~lekheng/courses/302/notes14.pdf

Lecture 26

Preconditioners and CG convergence
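
A minimal sketch of preconditioned CG using Matlab's built-in pcg with a simple Jacobi (diagonal) preconditioner; the Wathen test matrix is an assumed example, not one from class.

    % CG vs. diagonally (Jacobi) preconditioned CG.
    A = gallery('wathen', 20, 20);      % sparse SPD test matrix with a strongly varying diagonal
    n = size(A, 1);  b = ones(n, 1);
    tol = 1e-8;  maxit = 1000;

    [~, ~, ~, it_cg]  = pcg(A, b, tol, maxit);        % plain CG
    M = spdiags(diag(A), 0, n, n);                    % Jacobi preconditioner M = diag(A)
    [~, ~, ~, it_pcg] = pcg(A, b, tol, maxit, M);     % preconditioned CG
    fprintf('CG: %d iterations,  Jacobi-PCG: %d iterations\n', it_cg, it_pcg);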

Reading
GvL 10.3
GvL Page 530

Lecture 25

Missed class, sorry. We would have talked about reduction to tridiagonal form.

Reading
Trefethen

Lecture 24

Ahmed Sameh talked about GMRES.

Lecture 23

Orthogonal polynomials and CG

Reading
Saad Section 6.7.2
GvL, 10.3.6
A General Framework for Recursions for Krylov Space Solvers, M. Gutknecht.
From orthogonal polynomials to iteration schemes for linear systems: CG and CR revisited, B. Fischer
Orthogonal polynomials in Matlab, W. Gautschi.
Orthogonal polynomials, G. Meurant.

Lecture 22

We did a detailed study of the CG method from the Lanczos perspective.
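
For reference, a bare-bones CG loop (a sketch only, not the posted tutorial; cg_sketch is a made-up name, and the starting guess is zero):

    function x = cg_sketch(A, b, tol, maxit)
    % CG_SKETCH  Plain conjugate gradient iteration for symmetric positive definite A.
    x = zeros(size(b));  r = b;  p = r;  rho = r'*r;
    for k = 1:maxit
        Ap = A*p;
        alpha = rho / (p'*Ap);           % step length along the search direction
        x = x + alpha*p;                 % update the iterate
        r = r - alpha*Ap;                % update the residual
        rho_new = r'*r;
        if sqrt(rho_new) <= tol*norm(b), break; end
        beta = rho_new / rho;            % keeps the new direction A-conjugate
        p = r + beta*p;
        rho = rho_new;
    end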

Reading
Conjugate Gradients: a mini-tutorial on the three ways to derive the CG method (only one done at posting)

Lecture 21

We reviewed the GMRES method, Krylov subspaces, and introduced symmetric Lanczos methods.

Matlab
gmres.m

Lecture 20

We covered the homework and the basics of the Arnoldi method.
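
A sketch of the Arnoldi process (arnoldi_sketch is a made-up name, not a posted file); it builds an orthonormal basis Q of the Krylov subspace with A*Q(:,1:k) = Q(:,1:k+1)*H:

    function [Q, H] = arnoldi_sketch(A, b, k)
    % ARNOLDI_SKETCH  k steps of Arnoldi with starting vector b.
    n = length(b);
    Q = zeros(n, k+1);  H = zeros(k+1, k);
    Q(:,1) = b / norm(b);
    for j = 1:k
        v = A*Q(:,j);                    % expand the Krylov subspace
        for i = 1:j                      % modified Gram-Schmidt against earlier vectors
            H(i,j) = Q(:,i)'*v;
            v = v - H(i,j)*Q(:,i);
        end
        H(j+1,j) = norm(v);
        if H(j+1,j) == 0, break; end     % lucky breakdown: an invariant subspace was found
        Q(:,j+1) = v / H(j+1,j);
    end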

Reading
Saad Section 6.3
Tref Lecture 31

Lecture 19

We returned to linear systems and started studying their convergence properties in terms of eigenvalues. We saw three methods as splittings: Gauss-Seidel, SOR, and Richardson.
We showed that Gauss-Seidel converges in half as many iterations as Jacobi for matrices with property A, and mentioned quite a few other properties of Gauss-Seidel and SOR.
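
A quick numerical check of the "half as many iterations" claim (a sketch, not the posted linsys_sor.m): for a consistently ordered matrix such as the tridiagonal Laplacian, rho(Gauss-Seidel) = rho(Jacobi)^2, so one Gauss-Seidel step reduces the error asymptotically as much as two Jacobi steps.

    % Compare the Jacobi and Gauss-Seidel iteration matrices for a property-A matrix.
    n = 30;
    A = full(gallery('tridiag', n, -1, 2, -1));
    D = diag(diag(A));  E = -tril(A,-1);  F = -triu(A,1);   % splitting A = D - E - F

    TJ  = D \ (E + F);            % Jacobi iteration matrix
    TGS = (D - E) \ F;            % Gauss-Seidel iteration matrix
    rhoJ  = max(abs(eig(TJ)));
    rhoGS = max(abs(eig(TGS)));
    fprintf('rho(J) = %.6f, rho(J)^2 = %.6f, rho(GS) = %.6f\n', rhoJ, rhoJ^2, rhoGS);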

Reading
GvL Section 10.1
Saad Chapter 4
Matlab
linsys_sor.m

Lecture 18

We began by looking at an example of a sparse linear system: The PageRank method to rank nodes. Then we looked at the convergence of the power method, and I ended by mentioning the Gershgorin disk theorem.
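
A small sketch of the Gershgorin disk theorem (illustrative only, not a course file): every eigenvalue of A lies in at least one disk centered at a diagonal entry with radius equal to the off-diagonal absolute row sum.

    % Gershgorin disk check on a random matrix.
    A = randn(6);
    c = diag(A);                               % disk centers
    r = sum(abs(A), 2) - abs(c);               % disk radii: off-diagonal absolute row sums
    lam = eig(A);
    for k = 1:length(lam)
        in_some_disk = any(abs(lam(k) - c) <= r);
        fprintf('lambda = %8.4f %+8.4fi   lies in a disk: %d\n', ...
                real(lam(k)), imag(lam(k)), in_some_disk);
    end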

Reading
PageRank
GvL Section 7.3
Saad Chapter 4

Lecture 17

The question of the convergence of the Jacobi method relates to whether or not the eigenvalues of the iteration matrix are all less than one in magnitude. This was our introduction to eigenvalues. We then defined eigenvalues as solutions of the eigenvalue-eigenvector equation, related this to the characteristic polynomial, and studied types of eigenvalues and eigenvectors. We discussed the power method to compute them, and I alluded to the QR method.
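
A bare-bones power method sketch (not the posted eigen.m); the symmetric test matrix is an arbitrary choice so that the eigenvalues are real.

    % Power method: the iterates converge to the dominant eigenvector when one
    % eigenvalue strictly dominates in magnitude.
    n = 50;
    A = randn(n);  A = A + A';           % symmetric example
    x = randn(n, 1);  x = x / norm(x);
    for k = 1:300
        y = A*x;
        lambda = x'*y;                   % Rayleigh quotient estimate of the eigenvalue
        x = y / norm(y);
    end
    fprintf('|power estimate| = %.6f,  largest |eigenvalue| = %.6f\n', ...
            abs(lambda), max(abs(eig(A))));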

Reading
GvL Section 10.1
Eigenvalues
Matlab
eigen.m

Lecture 16

We introduced the idea of using a residual to check the convergence of an iterative method for a linear system. We briefly talked about direct methods for linear systems and how they differ from the iterative methods we’ll see in this class. Then I introduced the Jacobi and Gauss-Seidel methods. We ended class on the question of convergence of the Jacobi method.
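
A minimal Jacobi sketch with a relative-residual stopping test (an illustration only; the strictly diagonally dominant tridiagonal matrix is an assumed example):

    % Jacobi iteration, stopping when the relative residual is small.
    n = 50;
    A = full(gallery('tridiag', n, -1, 4, -1));   % strictly diagonally dominant
    b = ones(n, 1);
    d = diag(A);                                  % diagonal entries of A
    x = zeros(n, 1);
    for k = 1:1000
        r = b - A*x;                              % residual of the current iterate
        if norm(r) <= 1e-8*norm(b), break; end    % residual-based convergence check
        x = x + r ./ d;                           % Jacobi update: x = x + D^{-1}(b - A*x)
    end
    fprintf('Jacobi: %d iterations, relative residual %.2e\n', k, norm(b - A*x)/norm(b));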

Reading
GvL Section 10.1
Saad Chapter 4

Lecture 15

We reviewed the midterm, planned the rest of the class, and started on sparse matrices. In particular, we covered what a sparse matrix is, and how to do sparse matrix-vector products in coordinate storage format.
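
The posted spmatvec.m covers this; as an independent sketch, a matrix-vector product in coordinate (triplet) storage looks like the loop below, where irow, jcol, val are assumed triplet arrays and m is the number of rows.

    function y = coo_matvec(irow, jcol, val, x, m)
    % COO_MATVEC  y = A*x for A stored as triplets (irow(k), jcol(k), val(k)).
    y = zeros(m, 1);
    for k = 1:length(val)
        y(irow(k)) = y(irow(k)) + val(k) * x(jcol(k));   % scatter each nonzero's contribution
    end
    % Check: with A = sparse(irow, jcol, val, m, length(x)), norm(A*x - y) should be ~ 0.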

Matlab
spmatvec.m
sparse_matrices.m
Reading
Direct methods for sparse linear systems (Chapter 2) – this is probably the best introduction to sparse matrix operations
Saad Chapter 3

Lecture 14

This was the midterm.

Lecture 13

Today was a review of the homeworks and a review for the midterm.

Lecture 12

We had a guest lecture by Faisal Saied.

Lecture 11

We first looked at one example of an unstable algorithm that shows up surprisingly often.

We looked at least squares problems today, and saw how our linear system to rank sports teams was actually hiding a least squares problem.

Then we saw the normal equations and the QR factorization methods to solve least squares problems. We briefly discussed the stability of these approaches.
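
A small sketch contrasting the two approaches (the random test problem is an assumed example; see illcondls.m for an ill-conditioned one):

    % Two ways to solve min ||A*x - b||_2: normal equations vs. QR.
    m = 100;  n = 5;
    A = randn(m, n);  b = randn(m, 1);

    x_ne = (A'*A) \ (A'*b);      % normal equations: squares the condition number of A
    [Q, R] = qr(A, 0);           % economy-size QR factorization
    x_qr = R \ (Q'*b);           % backward stable for full-rank A

    norm(x_ne - x_qr)            % tiny here; the gap grows as A becomes ill conditioned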

Readings
GvL 5.3
Tref Lecture 11
Computing variance
Matlab
var1.m Two-pass algorithm to compute the variance
var2.m One-pass algorithm to compute the variance (unstable!)
var3.m One-pass algorithm to compute the variance (stable!)
leastsquares.m A quick example of curve fitting with least-squares
illcondls.m Generate an ill-conditioned least-squares problem

Lecture 10

Today, we discussed the difference between forward and backward error analysis for a problem. Both topics are based on notes that aren’t in the book or that largely come from non-assigned sources, so I’ve provided lecture notes below.
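
A small numerical illustration (not from the posted notes): for an approximate solution xhat of Ax = b, the normwise backward error is ||b - A*xhat|| / (||A|| ||xhat|| + ||b||), while the forward error measures the error in xhat itself; the condition number governs how much larger the forward error can be.

    % Forward vs. backward error for a linear solve.
    n = 50;
    A = randn(n) + n*eye(n);           % a reasonably well-conditioned example
    x_true = randn(n, 1);
    b = A*x_true;
    xhat = A\b;                        % computed solution

    fwd = norm(xhat - x_true) / norm(x_true);                  % forward (relative) error
    bwd = norm(b - A*xhat) / (norm(A)*norm(xhat) + norm(b));   % normwise backward error
    fprintf('forward %.2e, backward %.2e, cond(A) %.2e\n', fwd, bwd, cond(A));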

Readings
Stability
Backwards Error Analysis of LU
Tref Lecture 14, 17
Numerical stability

Lecture 9

We reviewed the Cholesky factorization, and saw how to compute the true square root of a matrix. We did a quick estimate to show why Cholesky takes half the flops of LU. Then we moved into a discussion of numerical accuracy. The first piece of our discussion was about the conditioning of a problem. A problem’s conditioning just tells us how sensitive the problem is to changes in the input. We wouldn’t expect to accurately solve, on a computer, a problem that is highly sensitive to changes in the input. Each problem has a condition number, and there is a number that arises so frequently that we call it the condition number of a matrix.

We also briefly reviewed floating point arithmetic.
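
A quick sketch of the distinction between the Cholesky factor and the true matrix square root (illustrative only; the SPD test matrix is arbitrary):

    % The Cholesky factor is not the same as the matrix square root.
    n = 5;
    B = randn(n);  A = B'*B + n*eye(n);   % a symmetric positive definite matrix

    R = chol(A);          % upper triangular, R'*R = A, but R is not symmetric
    X = sqrtm(A);         % symmetric, X*X = A: the true matrix square root
    norm(R'*R - A), norm(X*X - A), norm(X - X', 'fro')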

Readings
GvL 4.2
GvL 2.4, 2.7
Tref Lecture 12, 13
Condition number
Floating-point

Lecture 8

We reviewed the LU factorization again. Then we saw some of the Matlab code to compute the LU factorization without pivoting. To fix the stability issues we encountered with LU without pivoting, we saw how to reorder the rows of the matrix to ensure that elements of the L matrix never get too large. This involved a quick aside on permutation matrices. Then we ran Linpack on your phones, discussed the flop count for LU, and saw a brief intro to positive definite matrices. We barely started discussing the Cholesky factorization for solving a positive definite linear system.

Readings
GvL 3.2, 3.4
GvL 4.2
Matlab
lecture_lu.m
lusimple.m
lusimple_show_work.m
Wikipedia
LU Decomposition
Permutation matrix
Positive definite matrix

Lecture 7

We saw how to generate a linear system to score a few sports teams. Then we saw a recap of the QR factorization, and we got a brief survey of the LU factorization without pivoting.

Readings
GvL 3.2
Matlab
householder_qr.m
sports_linsys.m
Wikipedia
LU Decomposition

Lecture 6

Today, we wrote a Matlab function to solve a diagonal linear system, and compared it with Matlab’s built-in function using ‘tic’ and ‘toc’. Then we showed how to solve an upper-triangular system using a backsolve.

We then introduced the QR decomposition of a matrix. The key step here was finding out how to build the QR decomposition of a vector (the base case!).

Once we had that, we showed how to use it iteratively to compute the QR decomposition of an entire matrix.
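
A sketch of the backsolve step (backsolve_sketch is a made-up name, not one of the posted files):

    function x = backsolve_sketch(U, b)
    % BACKSOLVE_SKETCH  Solve U*x = b by back substitution for upper triangular U.
    n = length(b);
    x = zeros(n, 1);
    for i = n:-1:1
        x(i) = (b(i) - U(i, i+1:n)*x(i+1:n)) / U(i, i);   % use already-computed entries of x
    end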

Readings
GvL 5.1, 5.2
Tref Lecture 10
Matlab
diagsolve.m
diagsolve_time.m
qr_reflector.m

Lecture 5

We first saw how to build a really simple search engine using cosine distance, vector norms, and a matrix-vector product. Then, we revisited the proof that the SVD of a matrix exists, and showed that the largest singular value is the 2-norm of a matrix. We then moved on to showing how to solve linear systems with simple structure, such as diagonal and orthogonal matrices. We used these with an SVD to show how to solve a general linear system.
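
A minimal sketch of the solve-via-SVD idea (illustrative only, with an assumed random test problem):

    % Solving A*x = b with the SVD: if A = U*S*V', then x = V*(S \ (U'*b)).
    n = 6;
    A = randn(n);  b = randn(n, 1);
    [U, S, V] = svd(A);
    x = V * (S \ (U'*b));      % the orthogonal "solves" are just transposes; S is diagonal
    norm(A*x - b)              % ~ 1e-14 for a well-conditioned random A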

Readings
GvL 1.2, GvL 2.7.1, GvL 3.1
Matlab
cosine_similarity.m

Lecture 4

Today, we saw matrix norms, more about orthogonal matrices, and proved that the SVD of a matrix exists.

Readings
GvL Section 2.3, 2.5
Tref Lecture 3, 4
Matlab
cell_mode.m Example of cell-mode in Matlab
lecture_4.m Examples of some matrix norms.

Lecture 3

We began with a quick Matlab demo of some fun geometric transformations you can do with matrix-vector and matrix-matrix products.

We then discussed invertible matrices, worked out that a matrix must have full rank in order to have an inverse, and focused on the case of square, full-rank matrices to define the general matrix inverse.

Finally, we introduced vector norms.

Readings
Inverses
Vector norms
Section 2.1.3 GvL
Section 2.2 GvL
Matlab
geometry_modification.m An example of using matrix-vector and matrix-matrix product to rotate and shear a set of points that make up a figure.

Lecture 2

Today, we saw the basic matrix notation and operations we’ll use in class. We also saw a number of matrix structures that will arise: diagonal, triangular, symmetric, and orthogonal. We reviewed the relationship between a matrix and a vector space. Then we saw how the rank of a matrix reveals the dimensions of its associated subspaces, what rank has to do with the inverse, and finally, we saw an introduction to the decomposition view of matrix computations.

Readings
Section 1.1–1.3 in GvL
Course notation
Matrix structure
Matrices and linear algebra
Section 2.1 in GvL
The decompositional approach to matrix computations by G.W. Stewart
Matlab
lecture_2.m A summary of commands I typed in

Lecture 1

We introduced the class and reviewed the syllabus.

Readings
Syllabus