Readings and topics
Main reference
 GvL
 Gene H. Golub and Charles F. Van Loan, Matrix Computations, 4th edition, Johns Hopkins University Press.
Other references
 Demmel
 James Demmel, Applied Numerical Linear Algebra, SIAM, 1997.
 Trefethen
 Lloyd N. Trefethen and David Bau, III, Numerical Linear Algebra, SIAM, 1997.
 Saad
 Yousef Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, SIAM, 2003.
Lecture 30
We finished our discussion of bipartite matrices, took a brief tour of randomized
algorithms (see the Julia code), and reviewed for the final (see the slides!)
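For a flavor of the randomized-algorithms part, here is a minimal sketch of one such method, a randomized range-finder for low-rank approximation. This is my own illustration, not the notebook's code: sketch A with a random Gaussian matrix, orthogonalize, and check how much of A the subch captures.

    using LinearAlgebra, Random
    # Randomized range-finder: approximate the dominant subspace of A by
    # multiplying with a random Gaussian matrix and orthogonalizing the result.
    Random.seed!(1)
    n, k = 500, 10
    A = randn(n, k) * randn(k, n)     # a rank-k matrix whose range we want
    Y = A * randn(n, k + 5)           # sketch with a little oversampling
    Q = Matrix(qr(Y).Q)               # orthonormal basis for the sketched range
    @show norm(A - Q * (Q' * A))      # ~0 when Q captures range(A)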
 Notes
 Lecture 30
 Reading
 Final review
 Julia
 Randomized Matrix Methods (html) (jl) (ipynb)
Lecture 29
Algorithms for the SVD and Bipartite Matrices (really, matrices with Property A
that are consistently ordered).
 Notes
 Lecture 29
 Reading
 Singular Value Algorithms (not complete)
 Julia
 Bipartite Matrices (html) (jl) (ipynb)
Lecture 28
The equivalence between QR and subspace iteration, tridiagonal reduction,
and a brief introduction to sparse eigenvalue algorithms.
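As a quick illustration of the tridiagonal-reduction step, here is a sketch using Julia's built-in hessenberg (not the lecture's code): reducing a symmetric matrix to Hessenberg form produces a tridiagonal matrix with the same eigenvalues.

    using LinearAlgebra
    # Orthogonal reduction Q' * A * Q of a symmetric A gives a tridiagonal T
    # with the same eigenvalues; dense eigensolvers then work on T, not A.
    A = Symmetric(randn(8, 8))
    F = hessenberg(A)                 # for symmetric input, F.H is tridiagonal
    T = F.H
    @show norm(eigvals(T) - eigvals(A))   # ~0: eigenvalues are preserved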
 Notes
 Lecture 28
 Reading
 Trefethen and Bau, Lecture 28
 Julia
 Tridiagonal Reduction (html) (jl) (ipynb)
 Sparse Eigenvalues (html) (jl) (ipynb)
Lecture 27
We discussed the multigrid preconditioner and then started back into
eigenvalue algorithms, where we learned about three things: deflation;
the block power method and subspace iteration; and full subspace iteration.
The final thing we saw was the QR method for eigenvalues. In the
next class, we'll explain how this is equivalent to subspace iteration
and how it can be optimized. We focused on algorithms and intuition
instead of proofs (which are in the book).
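As a rough sketch of the subspace-iteration idea (my own minimal version, not the lecture's code): repeatedly multiply a block of vectors by A and re-orthogonalize, and the Ritz values from the block approximate the dominant eigenvalues.

    using LinearAlgebra
    # Block power method / subspace iteration: the QR step keeps the block
    # well conditioned while it converges to the dominant invariant subspace.
    function subspace_iteration(A, k; iters = 200)
        Q = Matrix(qr(randn(size(A, 1), k)).Q)
        for _ in 1:iters
            Q = Matrix(qr(A * Q).Q)
        end
        return eigvals(Symmetric(Q' * A * Q)), Q   # Ritz values and basis
    end
    A = Symmetric(randn(100, 100))
    vals, Q = subspace_iteration(A, 3)
    @show vals                        # approximates the 3 largest-magnitude eigenvalues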
 Notes
 Eigenvalue Algorithms (not complete)
 Lecture 27
 Reading
 Trefethen and Bau, Lecture 28
 Julia
 Eigenvalue Algorithms (html) (jl) (ipynb)
Lecture 26
We discussed the types and examples of preconditioners.
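To make one example concrete, here is a minimal sketch (my own, not from the lecture) of the simplest preconditioner, M = diag(A); preconditioned Richardson with this M is exactly the Jacobi method.

    using LinearAlgebra
    # Simplest example of a preconditioner: M = diag(A), the Jacobi preconditioner.
    # Preconditioned Richardson, x <- x + M \ (b - A*x), is then the Jacobi method.
    function jacobi_richardson(A, b; iters = 100)
        M = Diagonal(A)               # the preconditioner: cheap to apply and invert
        x = zeros(length(b))
        for _ in 1:iters
            x += M \ (b - A * x)
        end
        return x
    end
    n = 200
    B = 0.1 * randn(n, n)
    A = B + B' + Diagonal(50.0 .+ (1:n))  # symmetric, strongly diagonally dominant
    b = randn(n)
    x = jacobi_richardson(A, b)
    @show norm(A * x - b) / norm(b)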
 Notes
 Lecture 26
 Reading
 Preconditioning: a discussion of types and examples of preconditioners.
 Saad Section 10.3
 GvL 11.5
 Tref Lecture 40
Lecture 25
This was an old lecture on how to implement GMRES efficiently, and also CG via
orthogonal polynomials.
At the moment, I can't post the video, so I'm just posting the reading notes.
The most important are the Efficient GMRES notes and the Julia
file, followed by the Orthogonal Polynomials file.
 Reading
 Efficient GMRES Notes
 Orthogonal Polynomials: a discussion of orthogonal polynomials and how to derive CG from them!
 Templates for the solution of linear systems
 Saad Section 6.4
 Saad Section 6.5
 Tref Lecture 35
 Julia
 Efficient GMRES (html) (jl) (ipynb)
 Orthogonal Polynomials (html) (jl) (ipynb)
Lecture 24
We finished our derivation of the efficient CG method based on updating the Cholesky factorization.
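For reference, here is a minimal CG implementation in its standard two-term-recurrence form. This is the textbook version, not the lecture's derivation; the lecture arrives at the same method by updating the Cholesky factorization.

    using LinearAlgebra
    # Standard conjugate gradient for SPD A (textbook two-term recurrences).
    function cg(A, b; iters = 2 * length(b))
        x = zeros(length(b))
        r = copy(b)
        p = copy(r)
        rho = dot(r, r)
        for _ in 1:iters
            q = A * p
            alpha = rho / dot(p, q)
            x += alpha * p
            r -= alpha * q
            rho_new = dot(r, r)
            rho_new < 1e-28 && break
            p = r + (rho_new / rho) * p
            rho = rho_new
        end
        return x
    end
    G = randn(50, 50)
    A = G' * G + I                    # an SPD test matrix
    b = randn(50)
    x = cg(A, b)
    @show norm(A * x - b) / norm(b)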
 Notes
 Lecture 24
 Reading
 Conjugate Gradient
Lecture 23
We saw that the Lanczos process follows from the Arnoldi process on a symmetric
matrix. We also saw that the Arnoldi vectors span the Krylov subspace. Then
we briefly introduced CG.
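A small sketch of that connection (my own code, not the notebook's): run Arnoldi on a symmetric matrix and observe that the projected matrix H comes out tridiagonal, which is exactly the Lanczos situation.

    using LinearAlgebra
    # Arnoldi builds an orthonormal basis Q of the Krylov subspace and the
    # small projected matrix H. For symmetric A, H is tridiagonal: Lanczos.
    function arnoldi(A, b, k)
        n = length(b)
        Q = zeros(n, k + 1)
        H = zeros(k + 1, k)
        Q[:, 1] = b / norm(b)
        for j in 1:k
            w = A * Q[:, j]
            for i in 1:j              # orthogonalize against all previous vectors
                H[i, j] = dot(Q[:, i], w)
                w -= H[i, j] * Q[:, i]
            end
            H[j+1, j] = norm(w)
            Q[:, j+1] = w / H[j+1, j]
        end
        return Q, H
    end
    A = Symmetric(randn(100, 100))
    Q, H = arnoldi(A, randn(100), 10)
    @show norm(triu(H[1:10, 1:10], 2))    # ~0: H is tridiagonal when A = A'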
 Notes
 Lecture 23
 Julia
 Arnoldi and Lanczos (html) (jl) (ipynb)
 A simple CG (html) (jl) (ipynb)
Lecture 22
We showed that the Krylov subspace must contain the solution if it stops
growing. Then we also stated the Arnoldi process.
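A quick numerical illustration of why the Arnoldi process orthogonalizes (a sketch in the spirit of the notebook linked below, not its actual code): the raw Krylov basis [b, Ab, A^2 b, ...] becomes numerically dependent almost immediately.

    using LinearAlgebra
    # The raw Krylov basis [b, A*b, A^2*b, ...]: each column turns toward the
    # dominant eigenvector, so the basis is numerically dependent very quickly.
    function krylov_basis(A, b, k)
        K = zeros(length(b), k)
        K[:, 1] = b
        for j in 2:k
            K[:, j] = A * K[:, j-1]
        end
        return K
    end
    A = randn(100, 100)
    K = krylov_basis(A, randn(100), 12)
    @show cond(K)                     # enormous, even though the subspace is fine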
 Notes
 Lecture 22
 Julia
 Simple Krylov Subspaces are Ill-conditioned (html) (jl) (ipynb)
 Reading
 Krylov methods
 Conjugate Gradient
 Saad Section 6.3, 6.5
 Trefethen and Bau, Lecture 33, 35
 GvL Algorithm 10.1.1
Lecture 21
We started working on Krylov methods after seeing some examples of matrix
functions for graphs.
 Notes
 Lecture 21
 Julia
 Linear Discriminant Analysis (html) (jl) (ipynb)
 Reading
 Krylov methods
 Saad Section 6.5
 Trefethen and Bau, Lecture 35
Lecture 20
We discussed more advanced versions of some of the problems we saw in class:
sequences of linear systems, rank-1 updates, functions of a
matrix, and generalized eigenvalues.
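For the rank-1 update, the key identity is the Sherman-Morrison formula. Here is a minimal sketch (my own example) of using it to reuse one factorization of A for the updated system:

    using LinearAlgebra
    # Sherman-Morrison: (A + u*v')^{-1} b = y - z * (v'y) / (1 + v'z),
    # with y = A \ b and z = A \ u, so one factorization handles the update.
    n = 100
    A = randn(n, n); u = randn(n); v = randn(n); b = randn(n)
    F = lu(A)                         # factor once, reuse for both solves
    y = F \ b
    z = F \ u
    x = y - z * (dot(v, y) / (1 + dot(v, z)))
    @show norm((A + u * v') * x - b)  # ~0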
 Notes
 Lecture 20
 Reading
 Sherman-Morrison-Woodbury Formula
 GvL 9.1, Functions of a matrix
 GvL 8.7, Generalized eigenvalue decomposition
 GvL 2.1.5, Update formula
Lecture 19
We talked about cases where I've encountered floating point issues: in solving
quadratic equations, in the Hurwitz zeta function, and in the PageRank
linear system. Then we dove into the outline of the proof that Gaussian
elimination with partial pivoting is backwards stable.
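The quadratic-equation case is the classic one. A minimal sketch of the cancellation and the standard fix (not necessarily the exact example from class):

    # Cancellation in the quadratic formula for x^2 + b*x + c with b > 0:
    # the small root (-b + sqrt(b^2 - 4c))/2 subtracts nearly equal numbers.
    # Computing the large root first and using r1*r2 = c avoids it.
    b, c = 1e8, 1.0
    r1 = (-b - sqrt(b^2 - 4c)) / 2        # large root: safe, no cancellation
    r2_naive = (-b + sqrt(b^2 - 4c)) / 2  # small root: catastrophic cancellation
    r2_stable = c / r1                    # from the product of the roots
    @show r2_naive r2_stable              # the stable answer is near -1e-8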
 Reading
 Error analysis of LU
 PageRank (in Analysis notes)
 Notes
 Lecture 19
 Julia
 Accurate PageRank versions (html) (jl) (ipynb)
Lecture 18
We investigated how computers store floating point numbers in the IEEE
format.
This led to an understanding of the guarantees they make about
basic arithmetic operations. We used this to understand why some implementations
of codes are better than others, which gives rise to the idea of a backwards
stable algorithm.
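A few one-line probes of those guarantees in Julia (a sketch, not the notebook's contents):

    # IEEE doubles: each basic operation returns the exactly rounded result,
    # fl(x op y) = (x op y)(1 + delta) with |delta| <= eps()/2. A few probes:
    @show eps()                      # 2.0^-52, the gap between 1.0 and the next float
    @show 0.1 + 0.2 == 0.3           # false: none of these are exactly representable
    @show (1.0 + eps() / 2) == 1.0   # true: the sum rounds back to 1.0
    @show nextfloat(1.0) - 1.0       # the spacing just above 1.0 equals eps()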
 Notes
 Lecture 18
 Julia
 Floating Point systems (html) (jl) (ipynb)
 Variance computations (html) (jl) (ipynb)
 Reading
 Tref Lecture 13, 14, 17
 Golub and van Loan, sections 2.7, 3.1, 3.3.2
 Numerical stability
 Floating-point
 Backwards Stability (in Analysis notes)
Lecture 17
We saw a lot today! The goal of the class was to establish kappa(A)
as a fundamental quantity. This involved showing:
1. the SVD of a matrix exists;
2. ||A||_2 is orthogonally invariant;
3. ||A||_2 = sigma_max, which follows from orthogonal invariance;
4. ||A^{-1}||_2 = 1/sigma_min, which again follows from orthogonal invariance;
5. the SVD is the eigenvalue decomposition for sym. pos. def. matrices;
6. ||A||_2 = lambda_max and ||A^{-1}||_2 = 1/lambda_min for sym. pos. def. matrices.
Then we showed that Richardson and Steepest Descent converge at
rates that depend on kappa(A), the 2-norm condition number.
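These facts are easy to sanity-check numerically (my own sketch; opnorm is Julia's matrix 2-norm):

    using LinearAlgebra
    # Numerical checks of the facts above.
    A = randn(50, 50)
    U, S, V = svd(A)
    @show opnorm(A) ≈ maximum(S)             # ||A||_2 = sigma_max
    @show opnorm(inv(A)) ≈ 1 / minimum(S)    # ||A^{-1}||_2 = 1/sigma_min
    @show cond(A) ≈ maximum(S) / minimum(S)  # kappa(A) in the 2-norm
    Q, _ = qr(randn(50, 50))
    @show opnorm(Matrix(Q) * A) ≈ opnorm(A)  # orthogonal invariance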
 Notes
 Lecture 17
 Reading
 GvL 2.4 (and other parts of Chapter 2)
 GvL 11.3.2.
Lecture 16
The first piece of our
discussion was about the conditioning of a problem. A problem's
conditioning tells us how sensitive the problem is to changes
in the input; we wouldn't expect to accurately solve a problem
that is highly sensitive to changes in the input on a computer.
Each problem has its own condition number, but one quantity
arises so frequently that we call it the condition number of a
matrix, and we established what that is.
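The Hilbert matrix in the notebook below is the standard example of terrible conditioning; a one-line check (my own, with an assumed size of 13):

    using LinearAlgebra
    # Condition number of the 13x13 Hilbert matrix: around 1e18, so a Float64
    # solve can lose essentially every digit of accuracy.
    H = [1 / (i + j - 1) for i in 1:13, j in 1:13]
    @show cond(H)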
 Notes
 Lecture 16
 Julia
 The Hilbert Linear System (that Julia can't solve) (html) (jl) (ipynb)
 Reading
 Tref Lecture 13, 14, 17
 Golub and van Loan, sections 2.7, 3.1, 3.3.2
 Numerical stability
 Condition number
 Floating-point
Lecture 15
We went over Householder for QR, and then went through the second half of
the class. Then we counted flops for the LU factorization code we had.
(Beware: this count is not optimal.)
 Notes
 Lecture 15
 Julia
 A linear system that Julia can't solve (html) (jl) (ipynb)
Note that if you perturb this system by eps(1.0)*randn(60,60), then Julia can solve it!
 The Hilbert Linear System (that Julia can't solve) (html) (jl) (ipynb)
 Reading
 Analyzing matrix computations
 GvL 3.2.8
 Tref Lecture 20
 Smoothed analysis of Gaussian elimination explains why Gaussian elimination is not always as bad in practice as we might expect from that example.
 GvL 3.2
 GvL 5.1, 5.2
 Tref Lecture 10
Lecture 14
We had the midterm!
Lecture 13
We saw how to derive the QR factorization of a matrix as what arises from
Gram-Schmidt orthogonalization. We also saw Givens rotations for computing it.
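A minimal sketch of the Givens approach (my own code, built on LinearAlgebra's givens): each rotation zeroes one subdiagonal entry, and sweeping up every column leaves R.

    using LinearAlgebra
    # Each Givens rotation zeroes one entry; sweeping up every column gives R.
    # givens(f, g, i1, i2) returns the rotation G with (G*x)[i2] == 0.
    function givens_qr!(A)
        m, n = size(A)
        for j in 1:n, i in m:-1:(j + 1)
            G, _ = givens(A[i-1, j], A[i, j], i - 1, i)
            lmul!(G, A)               # applies G in place to rows i-1 and i
        end
        return A                      # now upper triangular: this is R
    end
    A = randn(6, 4)
    R = givens_qr!(copy(A))
    @show norm(tril(R, -1))           # ~0: everything below the diagonal is gone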
 Notes
 Lecture 13
 Julia
 QR for Least Squares (html) (jl) (ipynb)
 Reading
 QR and Least Squares
 Midterm Review
 Trefethen and Bau, Lecture 8
 Trefethen and Bau, Lecture 11
 Golub and van Loan, section 5.5
Lecture 12
We saw how pivoting in variable elimination ensures that it always works,
and also how to do variable elimination for least squares.
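The standard 2-by-2 example of why pivoting matters (a sketch, not necessarily the one from class): with a tiny pivot the multiplier is huge and rounding destroys the answer, while swapping rows first is harmless.

    using LinearAlgebra
    # Elimination without pivoting on a tiny pivot: the multiplier is 1e17 and
    # rounding wipes out the answer. Partial pivoting (a row swap) fixes it.
    A = [1e-17 1.0; 1.0 1.0]
    b = [1.0, 2.0]
    m = A[2, 1] / A[1, 1]             # huge multiplier: 1e17
    u22 = A[2, 2] - m * A[1, 2]       # 1 - 1e17 rounds to -1e17
    y2 = b[2] - m * b[1]
    x2 = y2 / u22
    x1 = (b[1] - A[1, 2] * x2) / A[1, 1]
    @show [x1, x2]                    # no pivoting: [0.0, 1.0], badly wrong
    @show A \ b                       # pivoted LU: approximately [1.0, 1.0]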
 Notes
 Lecture 12
 Julia
 Pivoting in LU (html) (jl) (ipynb)
 Elimination in Least Squares (html) (jl) (ipynb)
 Reading
 GvL 3.2, 3.4
 GvL 4.2
 Tref Lecture 20, 21
 Golub and van Loan, section 5.1.8, 5.2.5, 5.3
 Trefethen and Bau, Lecture 11
Lecture 11
We saw variable elimination and how it gives rise to the LU factorization
of a matrix.
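A minimal sketch of that construction (my own code, no pivoting): record each elimination multiplier in L while reducing A to U, and L*U reproduces A.

    using LinearAlgebra
    # Record the multiplier used to zero each entry in L while reducing A to U;
    # then A = L * U. (No pivoting here; that is the subject of Lecture 12.)
    function lu_by_elimination(A)
        n = size(A, 1)
        L = Matrix(1.0I, n, n)
        U = float.(A)
        for j in 1:n-1, i in j+1:n
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
        end
        return L, U
    end
    A = randn(5, 5)
    L, U = lu_by_elimination(A)
    @show norm(L * U - A)             # ~0
    @show norm(tril(U, -1))           # ~0: U is upper triangular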
 Notes
 Lecture 11
 Julia
 Variable Elimination (html) (jl) (ipynb)
 Reading
 Elimination
 GvL 3.2, 3.4
 GvL 4.2
 Tref Lecture 20, 21
Lecture 10
We saw that Gauss-Seidel is equivalent to steepest descent on symmetric matrices
with unit diagonal, and we covered some intro material on eigenvalue algorithms.
 Notes
 Lecture 10
 Reading
 Gauss-Seidel and Jacobi
 Eigenvalues and the Power Method
 Golub and van Loan, section 7.3
 Saad Chapter 4
 Julia
 Eigenvectors for Phase Retrieval (html) (jl) (ipynb)
 Eigenvalue Algorithms (html) (jl) (ipynb)
Lecture 9
We saw the Jacobi and Gauss-Seidel iterations today.
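A minimal sketch of both sweeps (my own code, not the notebook's): Jacobi updates from the old iterate only, while Gauss-Seidel uses each newly computed entry immediately.

    using LinearAlgebra
    # One Jacobi sweep uses only the old iterate; one Gauss-Seidel sweep uses
    # each new entry as soon as it is computed.
    function jacobi_sweep(A, b, x)
        n = length(b)
        y = similar(x)
        for i in 1:n
            s = sum(A[i, j] * x[j] for j in 1:n if j != i)
            y[i] = (b[i] - s) / A[i, i]
        end
        return y
    end
    function gauss_seidel_sweep!(A, b, x)
        n = length(b)
        for i in 1:n
            s = sum(A[i, j] * x[j] for j in 1:n if j != i)  # x[j] is new for j < i
            x[i] = (b[i] - s) / A[i, i]
        end
        return x
    end
    B = 0.1 * rand(50, 50)
    A = B + B' + 10.0I                # diagonally dominant: both iterations converge
    b = rand(50)
    x = zeros(50)
    for _ in 1:100
        global x = jacobi_sweep(A, b, x)
    end
    @show norm(A * x - b)
    xg = zeros(50)
    for _ in 1:100
        gauss_seidel_sweep!(A, b, xg) # in-place sweep, no reassignment needed
    end
    @show norm(A * xg - b)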
 Notes
 Lecture 9
 Readings
 Gauss-Seidel and Jacobi
 Golub and van Loan, section 10.1
 Saad Chapter 5
 Julia
 Jacobi and Gauss-Seidel (html) (jl) (ipynb)
Lecture 8
We finished our work on the steepest descent algorithm and moved into the
coordinate descent algorithm.
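For reference, a minimal steepest descent for the SPD quadratic (my own sketch, not the notebook's code): step along the residual with the exact line-search step length.

    using LinearAlgebra
    # Steepest descent on f(x) = x'*A*x/2 - b'*x for SPD A: the negative
    # gradient is the residual r = b - A*x, and the exact step is r'r / r'Ar.
    function steepest_descent(A, b; iters = 500)
        x = zeros(length(b))
        for _ in 1:iters
            r = b - A * x
            x += (dot(r, r) / dot(r, A * r)) * r
        end
        return x
    end
    G = randn(40, 40)
    A = G' * G + 5.0I                 # SPD with a moderate condition number
    b = randn(40)
    x = steepest_descent(A, b)
    @show norm(A * x - b) / norm(b)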
 Notes
 Lecture 8
 Readings
 Steepest descent
 Julia
 Steepest descent and Ax=b (html) (jl) (ipynb)
Lecture 7
We did a bit on norms, then went back to study the quadratic function
that will give us another way to solve linear systems.
 Notes
 Lecture 7
 Reading
 Aside on norms
 GvL Section 2.3, 2.4
 Tref Lecture 3, 4
 Notes on how to prove that norms are equivalent by Steven Johnson
 Julia
 Quadratic equations and Ax=b (html) (jl) (ipynb)
 Norms (not covered in class) (html) (jl) (ipynb)
Lecture 6
We saw a number of derivations of the same type of algorithm called
Richardson's method. We also saw how to check the solution of a linear
system of equations.
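A minimal sketch of Richardson's method (my own code; the step size omega is chosen by hand here, which requires knowing roughly where the eigenvalues sit):

    using LinearAlgebra
    # Richardson's method: x <- x + omega * (b - A*x). It converges when the
    # eigenvalues of I - omega*A lie inside the unit circle.
    function richardson(A, b, omega; iters = 1000)
        x = zeros(length(b))
        for _ in 1:iters
            x += omega * (b - A * x)
        end
        return x
    end
    G = randn(40, 40)
    A = G' * G / 40 + I               # SPD; eigenvalues roughly in [1, 5]
    b = randn(40)
    x = richardson(A, b, 1 / 3)       # omega ~ 2 / (lambda_min + lambda_max)
    @show norm(A * x - b) / norm(b)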
 Notes
 Lecture 6
 Reading
 Simple Matrix Algorithms
 Other reading: https://en.wikipedia.org/wiki/Modified_Richardson_iteration
 Julia
 Richardson's method (html) (jl) (ipynb)
Lecture 5
We saw CSC and CSR formats as well as the Neumann series method to build
the "inverse" of a matrix (but you shouldn't do that), and took a first
stab at an algorithm to solve a linear system of equations!
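A minimal sketch of the Neumann series idea on a sparse matrix (my own code): if ||G|| < 1, the partial sums of I + G + G^2 + ... applied to b solve (I - G)x = b using only matrix-vector products.

    using LinearAlgebra, SparseArrays
    # Neumann series: if ||G|| < 1 then (I - G)^{-1} = I + G + G^2 + ..., so
    # we can solve (I - G)*x = b by the fixed-point update x <- b + G*x.
    function neumann_solve(G, b; terms = 200)
        x = copy(b)
        for _ in 1:terms
            x = b + G * x             # partial sums of the series applied to b
        end
        return x
    end
    G = sprand(1000, 1000, 0.005)
    G = 0.5 * G / opnorm(Matrix(G))   # scale so the series converges
    b = rand(1000)
    x = neumann_solve(G, b)
    @show norm((I - G) * x - b)       # ~0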
 Notes
 Lecture 5
 Reading
 Sparse Matrix Operations
 Simple Matrix Algorithms
 Julia
 Operations with sparse matrices in Julia (html) (jl) (ipynb)
Lecture 4
We looked at structure in matrices: symmetric positive (semi-)definite,
orthogonal, M-matrix, triangular, Hessenberg.
Then we saw how you could model the game of Candyland as a matrix problem.
We used this as an example to study how to actually use sparse matrix operations
in Julia.
 Notes
 Lecture 4
 Reading
 Structure in matrices
 Sparse Matrix Operations
 Matrix structure (pdf)
 Julia
 Operations with sparse matrices in Julia (html) (jl) (ipynb)
Lecture 3
We looked at structure in matrices: Hankel, Toeplitz, sparse.
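A tiny sketch of the first two structures (my own example): a Toeplitz matrix is constant along diagonals (the entry depends only on i - j), and a Hankel matrix is constant along anti-diagonals (i + j).

    # Structure by formula: 2n-1 numbers determine an n-by-n Toeplitz or Hankel matrix.
    c = randn(9)
    T = [c[i - j + 5] for i in 1:5, j in 1:5]   # Toeplitz: depends on i - j
    H = [c[i + j - 1] for i in 1:5, j in 1:5]   # Hankel: depends on i + j
    @show T[1, 1] == T[2, 2] == T[3, 3]   # constant along a diagonal
    @show H[1, 3] == H[2, 2] == H[3, 1]   # constant along an anti-diagonal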
 Notes
 Lecture 3
 Reading
 Structure in matrices
 Matrix structure (pdf)
 Julia
 Examples of Matrices and Least Squares (html) (jl) (ipynb)
 General reading on sparse matrices
 Direct methods for sparse linear systems (http://epubs.siam.org.ezproxy.lib.purdue.edu/doi/pdf/10.1137/1.9780898718881.ch2); this is probably the best introduction to sparse matrix operations
 Saad Chapter 3
Lecture 2
We started introducing our notation and discussed what a matrix is, along
with the three fundamental problems of linear algebra.
 Notes
 Lecture 2
 Reading
 What is a matrix
 Course notation (pdf)
 Matrix structure (pdf)
 Matrices and linear algebra (pdf)
 Julia
 Examples of Matrices and Least Squares (html) (jl) (ipynb)
Lecture 1
We introduced the class and reviewed the syllabus.
 Readings
 Syllabus
 Handouts
 First week survey
 Slides
 Lecture 1