Class wrap-up!
Functions of matrices, generalized eigenvalues, and back to optimal SOR. One misunderstood point was about the eigenvalues of block-diagonal matrices.
Optimal omega for SOR
Kronecker products and eigenvalues of the Laplacian.
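As a small illustration (my own sketch, not code from class), the 2D Laplacian on an n-by-n grid can be assembled with kron, and its eigenvalues are exactly the pairwise sums of the 1D eigenvalues:

    % Sketch: 2D Laplacian via Kronecker products; n is an arbitrary grid size.
    n = 10;
    e = ones(n,1);
    T = spdiags([-e 2*e -e], -1:1, n, n);     % 1D Laplacian (tridiagonal)
    A = kron(speye(n), T) + kron(T, speye(n));
    % eig(A) consists of lambda_i(T) + lambda_j(T) over all pairs (i,j):
    lam = eig(full(T));
    pairSums = sort(kron(ones(n,1), lam) + kron(lam, ones(n,1)));
    norm(sort(eig(full(A))) - pairSums)       % should be near zero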
Preconditioners and CG convergence
Missed class, sorry. We would have talked about reduction to tridiagonal form.
Ahmed Sameh talked about GMRES
Orthogonal polynomials and CG
We did a detailed study of the CG method from the Lanczos perspective.
We reviewed the GMRES method, Krylov subspaces, and introduced symmetric Lanczos methods.
We covered the homework and the basics of the Arnoldi method.
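For reference, here is a bare-bones sketch of the Arnoldi process (my own illustration; the matrix A, starting vector b, and step count k are placeholders):

    % Sketch: k steps of Arnoldi with modified Gram-Schmidt.
    % Builds an orthonormal Krylov basis Q and upper Hessenberg H with A*Q(:,1:k) = Q*H.
    n = size(A,1);
    Q = zeros(n, k+1);  H = zeros(k+1, k);
    Q(:,1) = b / norm(b);
    for j = 1:k
        w = A*Q(:,j);
        for i = 1:j
            H(i,j) = Q(:,i)' * w;
            w = w - H(i,j)*Q(:,i);
        end
        H(j+1,j) = norm(w);
        Q(:,j+1) = w / H(j+1,j);
    end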
We returned to linear systems and started studying their convergence properties in terms of eigenvalues. We saw three methods as splittings: Gauss-Seidel, SOR, and Richardson.
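In code, the splitting view is a one-line update; here is a sketch with placeholder A, b, and a fixed iteration count, with the splittings we discussed noted in the comments:

    % Sketch: a stationary iteration from the splitting A = M - N.
    D = diag(diag(A));  L = tril(A,-1);
    % Richardson:   M = (1/alpha)*eye(n)
    % Gauss-Seidel: M = D + L
    % SOR:          M = D/omega + L
    M = D + L;                     % Gauss-Seidel, for example
    N = M - A;
    x = zeros(size(b));
    for k = 1:100
        x = M \ (N*x + b);
    end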
We showed that, for matrices with property A, Gauss-Seidel converges in about half as many iterations as Jacobi, and mentioned quite a few other properties of Gauss-Seidel and SOR.
We began by looking at an example of a sparse linear system: The PageRank method to rank nodes. Then we looked at the convergence of the power method, and I ended by mentioning the Gershgorin disk theorem.
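A sketch of the power iteration in the PageRank setting (the column-stochastic matrix P and the damping parameter are placeholders of mine, not the exact notation from class):

    % Sketch: power method for PageRank with a column-stochastic link matrix P.
    n = size(P,1);
    alpha = 0.85;                          % damping / teleportation parameter
    x = ones(n,1)/n;
    for k = 1:1000
        xnew = alpha*(P*x) + (1-alpha)/n;  % scalar expands to all entries
        if norm(xnew - x, 1) < 1e-10, break; end
        x = xnew;
    end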
The question of the convergence of the Jacobi method relates to whether or not the eigenvalues of the iteration matrix are all less than one in magnitude. This was our introduction to eigenvalues. We then defined eigenvalues as solutions of the eigenvalue-eigenvector equation, related this to the characteristic polynomial, and studied types of eigenvalues and eigenvectors. We discussed the power method to compute them, and I alluded to the QR method.
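In code, that convergence check for Jacobi looks like the following sketch (A is a placeholder):

    % Sketch: Jacobi converges exactly when the spectral radius of its
    % iteration matrix M = I - D\A is below one.
    D = diag(diag(A));
    M = eye(size(A)) - D\A;
    rho = max(abs(eig(M)))         % spectral radius; need rho < 1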
We introduced the idea of using a residual to check the convergence of a linear system solve. We briefly talked about direct methods for linear systems and how they differ from the iterative methods we’ll see in this class. Then I introduced the Jacobi and Gauss-Seidel methods. We ended class on the question of convergence of the Jacobi method.
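A minimal sketch of the Jacobi iteration with a residual-based stopping test (A, b, and the tolerance are placeholders of mine):

    % Sketch: Jacobi iteration, stopping on the relative residual.
    D = diag(diag(A));
    x = zeros(size(b));
    for k = 1:1000
        x = x + D \ (b - A*x);                        % Jacobi update
        if norm(b - A*x) / norm(b) < 1e-8, break; end
    end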
We reviewed the midterm, planned the rest of the class, and started on sparse matrices. In particular, we covered what a sparse matrix is, and how to do sparse matrix-vector products in coordinate storage format.
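A sketch of that kernel, assuming the nonzeros are stored as triplet arrays i, j, v (these names, and the row count nrows, are mine):

    % Sketch: y = A*x with A in coordinate (triplet) storage.
    y = zeros(nrows, 1);
    for k = 1:length(v)
        y(i(k)) = y(i(k)) + v(k) * x(j(k));
    end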
This was the midterm.
Today was a review of the homeworks and a review for the midterm.
We had a guest lecture by Faisal Saied.
We first looked at one example of an unstable algorithm that shows up surprisingly often.
We looked at least squares problems today, and saw how our linear system to rank sports teams was actually hiding a least squares problem.
Then we saw the normal equations and the QR factorization methods to solve least squares problems. We briefly discussed the stability of these approaches.
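As a sketch (with a placeholder tall, full-column-rank A and right-hand side b), the two approaches are:

    % Sketch: two ways to solve min ||A*x - b||_2.
    x_ne = (A'*A) \ (A'*b);    % normal equations; squares the condition number
    [Q,R] = qr(A, 0);          % thin QR factorization
    x_qr = R \ (Q'*b);         % generally the more stable route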
Today, we discussed the difference between forward and backward error analysis for a problem. Both topics were based on notes that aren’t in the book or that largely come from non-assigned sources, so I’ve provided lecture notes below.
We reviewed the Cholesky factorization, and saw how to compute the true square root of a matrix. We did a quick estimate to show why Cholesky takes half the flops of LU. Then we moved into a discussion of numerical accuracy. The first piece of our discussion was about the conditioning of a problem. A problem’s conditioning just tells us how sensitive the problem is to changes in the input. We wouldn’t expect to solve a problem accurately on a computer if it is highly sensitive to changes in the input. Each problem has a condition number, and one number arises so frequently that we call it the condition number of a matrix.
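A few lines that summarize the computational pieces (A below is a placeholder symmetric positive definite matrix):

    % Sketch: Cholesky factor vs. the symmetric square root, and the condition number.
    R = chol(A);               % upper triangular with R'*R = A (a factor, not the symmetric square root)
    X = sqrtm(A);              % the "true" square root: X is symmetric and X*X = A
    s = svd(A);
    kappa = s(1)/s(end);       % 2-norm condition number, the same value cond(A) returns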
We also briefly reviewed floating point arithmetic.
We reviewed the LU factorization again. Then we saw some of the Matlab code to compute the LU factorization without pivoting. To fix the stability issues we encountered with LU without pivoting, we saw how to reorder the rows of the matrix to ensure that the elements of the L matrix never get too large. This involved a quick aside on permutation matrices. Then we ran Linpack on your phones, discussed the flop count for LU, and saw a brief intro to positive definite matrices. We barely started discussing the Cholesky factorization for solving a positive definite linear system.
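A sketch of how the row reordering can look in code (my own illustration of LU with partial pivoting, not the code we wrote in class):

    % Sketch: LU with partial pivoting, producing P*A = L*U.
    function [L,U,P] = lu_pivot(A)
    n = size(A,1);
    P = eye(n);  L = eye(n);  U = A;
    for k = 1:n-1
        [~,m] = max(abs(U(k:n,k)));  m = m + k - 1;  % largest pivot in column k
        U([k m],:) = U([m k],:);                     % swap rows of U
        P([k m],:) = P([m k],:);                     % record the permutation
        L([k m],1:k-1) = L([m k],1:k-1);             % keep earlier multipliers consistent
        for i = k+1:n
            L(i,k) = U(i,k) / U(k,k);                % multipliers stay <= 1 in magnitude
            U(i,:) = U(i,:) - L(i,k)*U(k,:);
        end
    end
    end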
We saw how to generate a linear system to score a few sports teams. Then we saw a recap of the QR factorization, and we got a brief survey of the LU factorization without pivoting.
Today, we wrote a Matlab function to solve a diagonal linear system, and compared it with Matlab’s built-in function using ‘tic’ and ‘toc’. Then we showed how to solve an upper-triangular system using a backsolve.
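A sketch of the backsolve (an upper-triangular U and right-hand side b are placeholders):

    % Sketch: solve U*x = b by back-substitution.
    n = length(b);
    x = zeros(n,1);
    for i = n:-1:1
        x(i) = (b(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
    end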
We then introduced the QR decomposition of a matrix. The key step here was finding out how to build the QR decomposition of a vector (the base case!).
Once we had that, we showed how to use it iteratively to compute the QR decomposition of an entire matrix.
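One common way to realize that build-up is with Householder reflectors; here is a sketch under that assumption (not necessarily the exact construction from class):

    % Sketch: QR via Householder reflectors, one column at a time, so A = Q*R.
    function [Q,R] = house_qr(A)
    [m,n] = size(A);
    Q = eye(m);  R = A;
    for k = 1:min(m-1,n)
        x = R(k:m,k);
        s = sign(x(1));  if s == 0, s = 1; end    % sign choice avoids cancellation
        v = x;  v(1) = v(1) + s*norm(x);          % reflector direction
        if norm(v) > 0
            v = v / norm(v);
            R(k:m,k:n) = R(k:m,k:n) - 2*v*(v'*R(k:m,k:n));   % apply H = I - 2vv' on the left
            Q(:,k:m)   = Q(:,k:m)   - 2*(Q(:,k:m)*v)*v';     % accumulate Q
        end
    end
    end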
We first saw how to build a really simple search engine using cosine distance, vector norms, and a matrix-vector product. Then, we revisited the proof that the SVD of a matrix exists, and showed that the largest singular value is the 2-norm of a matrix. We then moved on to showing how to solve simple structured linear systems, such as those with diagonal or orthogonal matrices. We used these with an SVD to show how to solve a general linear system.
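Two quick sketches of those pieces (D as a term-by-document matrix, q a query vector, and A, b for the linear system are all placeholders of mine):

    % Sketch: cosine-similarity scores of a query q against the columns of D.
    scores = (D'*q) ./ (sqrt(sum(D.^2,1))' * norm(q));
    % Sketch: solve A*x = b through the SVD A = U*S*V'
    % (two orthogonal solves and one diagonal solve).
    [U,S,V] = svd(A);
    x = V * (S \ (U'*b));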
Today, we saw matrix norms, more about orthogonal matrices, and proved that the SVD of a matrix exists.
We began with a quick Matlab demo of some fun geometric transformations you can do with matrix-vector and matrix-matrix products.
We then discussed invertible matrices, worked out that a matrix needs to be full rank in order to have an inverse, and then looked at the case of square, full-rank matrices to find the general matrix inverse.
Finally, we introduced vector norms.
Today, we saw the basic matrix notation and operations we’ll use in class. We also saw a bunch of matrix structure that will arise: diagonal, triangular, symmetric, and orthogonal. We reviewed the relationship between a matrix and a vector space. Then we saw how the rank of a matrix reveals the dimensions of various spaces, what rank has to do with the inverse, and finally, we saw an introduction to the decomposition view of matrix computations.
We introduced the class and reviewed the syllabus.