[Image: a large network]

Network & Matrix Computations

David Gleich

Purdue University

Fall 2011

Course number CS 59000-NMC

Tuesdays and Thursdays, 10:30am-11:45am

CIVL 2123


Readings and topics

References

Graph algorithms
Graph Algorithms in the Language of Linear Algebra, SIAM, 2011, edited by Jeremy Kepner and John Gilbert.
Direct methods
Direct Methods for Sparse Linear Systems, SIAM, 2006, by Timothy Davis.
Markov chains
Non-negative Matrices and Markov Chains, Springer, by E. Seneta.
Network analysis
Network Analysis: Methodological Foundations, Springer, 2005, edited by Ulrik Brandes and Thomas Erlebach.

Lecture 23

Student presentations.

Lecture 22

Alex Pothen lectured about matchings and the Dulmage-Mendelsohn permutation.

Lecture 21

Jen Neville lectured about Kronecker product graphs and their lack of variability.

Lecture 20

Today, we saw the SDP relaxation for MaxCut, without the rounding proof (sorry), and local algorithms for PageRank.

Reading
Goemans and Williamson, Improved Approximation Algorithms for Maximum Cut …
Andersen, Chung, and Lang, Local graph partitioning using PageRank vectors
Bonchi, Esfandiar, Gleich, et al.
Matlab
http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/maxcut_example.m
http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/minbis_sdp_example.m
http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/ppr_push_heap.m
http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/ppr_push_queue.m
http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/localpr.m

Lecture 19

We didn’t get to the proof of the Cheeger inequality last time. That’s what we saw today.
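
For reference, one common statement of the inequality (the notation here is assumed, not taken from the lecture notes): if λ₂ is the second-smallest eigenvalue of the normalized Laplacian of the graph and φ(G) is its minimum conductance, then

  \[ \frac{\lambda_2}{2} \;\le\; \phi(G) \;\le\; \sqrt{2\lambda_2}. \]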

Reading
Fan Chung, Four proofs of the Cheeger inequality and graph partitioning algorithms
Fan Chung’s Spectral Graph Theory Course notes
Yuan Yao’s notes

Lecture 18

In this lecture, we’ll see how to relax the minimum bisection problem to yield the Fiedler partitioning problem. Then we’ll see the analog with a Markov chain, which gives rise to the minimum conductance cut problem, and the Cheeger inequality.
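
Here is a minimal Matlab sketch of the relaxation in action on a hypothetical toy graph (two triangles joined by one edge; all names are illustrative): compute the Fiedler vector of the Laplacian and split on its sign.

  A = sparse([1 1 2 3 4 4 5],[2 3 3 4 5 6 6],1,6,6);
  A = A + A';                    % undirected toy graph: triangles {1,2,3} and {4,5,6}
  d = full(sum(A,2));
  L = diag(d) - full(A);         % combinatorial Laplacian L = D - A
  [V,D] = eig(L);
  [~,p] = sort(diag(D));
  f = V(:,p(2));                 % Fiedler vector: eigenvector of 2nd-smallest eigenvalue
  cut = f >= 0                   % sign pattern suggests the bisection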

Reading
Dan Spielman’s Spectral Graph Theory Course Lectures 6 and 7
Fan Chung’s Spectral Graph Theory Course notes

Lecture 17

The goals for this lecture are:

  1. To introduce spectral graph theory and provide some framework for the results.
  2. To familiarize everyone with the Laplacian and incidence matrices of a graph and their relationship (see the sketch after this list).
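
A minimal sketch of that relationship (illustrative toy graph): if B is the edge-by-vertex incidence matrix with one +1 and one -1 per edge, then L = B'*B, and the quadratic form x'*L*x sums (x_i - x_j)^2 over the edges.

  edges = [1 2; 2 3; 3 4];        % a path graph on 4 vertices
  m = size(edges,1); n = 4;
  B = sparse([1:m 1:m]', [edges(:,1); edges(:,2)], ...
             [ones(m,1); -ones(m,1)], m, n);
  L = B'*B;                       % equals diag(degrees) - adjacency
  x = randn(n,1);
  [x'*L*x, sum((x(edges(:,1)) - x(edges(:,2))).^2)]   % the two agree
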
Reading
Network analysis Chapter 14

Lecture 16

The goals for this lecture are:

  1. Understand that PageRank is an analytic function of the damping parameter α and what this means.
  2. Work through the result that PageRank has a unique limit as α → 1.
  3. Understand how the strong component structure of the web impacts computing PageRank and the choice of α.
  4. See how to compute Personalized PageRank more efficiently (a power-method sketch follows this list).
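
As a companion to these goals, here is a minimal power-method sketch for the basic PageRank iteration x <- α*P'*x + (1-α)*v, on a hypothetical four-node graph with no dangling nodes (all names illustrative):

  A = sparse([1 1 2 3 3 4],[2 3 3 1 4 1],1,4,4);   % toy directed graph
  d = full(sum(A,2));                 % out-degrees (all positive here)
  P = spdiags(1./d,0,4,4)*A;          % row-stochastic transition matrix
  alpha = 0.85;  v = ones(4,1)/4;     % damping parameter and teleportation
  x = v;
  for k = 1:100
      x = alpha*(P'*x) + (1-alpha)*v; % one power step
  end
  x                                   % PageRank vector; entries sum to 1
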
Reading
PageRank as a function of the damping parameter
Models and algorithms for PageRank sensitivity (my thesis)
Jordan canonical form of the Google matrix
Fast matrix computations for pair-wise and column-wise Katz scores and commute times

Lecture 15

PageRank models redux and dangling nodes
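
A hedged sketch of one standard dangling-node fix (the lecture may have compared several models): give each zero-out-degree row the teleportation distribution.

  A = sparse([1 1 2],[2 3 3],1,3,3);   % node 3 is dangling (no out-links)
  d = full(sum(A,2));
  P = spdiags(1./max(d,1),0,3,3)*A;    % normalize the non-dangling rows
  P(d == 0,:) = 1/3;                   % dangling rows teleport uniformly
  full(P)                              % every row now sums to 1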

Lecture 14

PageRank algorithms

Lecture 13

Introduction to PageRank models

Lecture 12

We began studying centrality measures and saw the definition of a structure index. This motivated a quick review of permutation matrices, how they apply to graphs, and the graph isomorphism problem. Next, we looked at the degree centrality, the eccentricity centrality, and the closeness or transmission number centrality. We stated Brandes’ betweenness-centrality measure and defined the Katz centrality (see the sketch below).
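
A minimal sketch of the Katz centrality as a linear system (hypothetical small graph; the attenuation α must be below 1/λ_max for the defining series to converge):

  % Katz: x = sum_{k>=1} alpha^k A^k 1 = (I - alpha*A) \ (alpha*A*1)
  A = sparse([1 1 2 3],[2 3 3 4],1,4,4);  A = A + A';
  lmax = max(abs(eig(full(A))));       % spectral radius of A
  alpha = 0.85/lmax;                   % attenuation below 1/lambda_max
  n = size(A,1);
  x = (speye(n) - alpha*A) \ (alpha*(A*ones(n,1)))   % Katz scores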

Reading
Network analysis: Chapter 3
Network analysis: Chapters 4 and 5 are also a good reference
Leo Katz, A New Status Index Derived from Sociometric Analysis, Psychometrika, 1953.

Lecture 11

We finished up reversibility of Markov chains and saw how reversible chains can be used in random sampling algorithms.
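
A quick numerical sketch of reversibility (names illustrative): the random walk on an undirected graph satisfies detailed balance, π_i P_ij = π_j P_ji, with π_i proportional to the degree d_i.

  A = sparse([1 1 2 3],[2 3 3 4],1,4,4);  A = A + A';
  d = full(sum(A,2));
  P = spdiags(1./d,0,4,4)*A;           % random-walk transition matrix
  pv = d/sum(d);                       % stationary distribution d_i/(2m)
  Pi = spdiags(pv,0,4,4);
  norm(full(Pi*P - P'*Pi),'fro')       % ~0, so detailed balance holds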

Reading
http://www.math.ucsd.edu/~williams/courses/m28908/scullardMath289_Reversibility.pdf

Lecture 10

More on Perron-Frobenius and a reward function over the states of a Markov chain.
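
A tiny sketch of one such computation (hypothetical two-state chain, assumed ergodic): the long-run average reward is π'r for the stationary distribution π.

  P = [0.9 0.1; 0.5 0.5];              % transition matrix; rows sum to 1
  r = [1; 10];                         % reward earned in each state
  [V,D] = eig(P');                     % left eigenvector for eigenvalue 1
  [~,i] = max(real(diag(D)));
  pv = V(:,i)/sum(V(:,i));             % stationary distribution [5/6; 1/6]
  avg_reward = pv'*r                   % long-run average reward = 2.5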

Reading
Markov chains: Chapter 1

Lecture 9

We finished the state space classification, then went into the Perron-Frobenius theorem and the convergence of the probability distribution vector over states, i.e., of powers of the transition matrix.
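
A two-line demo of that convergence on a hypothetical chain: the distribution x_k' = x_0'*P^k approaches the stationary distribution.

  P = [0.9 0.1; 0.5 0.5];
  x = [1 0];                           % start in state 1 with certainty
  for k = 1:50, x = x*P; end           % repeated multiplication by P
  x                                    % approximately [5/6 1/6]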

Reading
Markov chains: Chapter 1

Lecture 8

We first saw how to compute a few graph properties purely with the adjacency matrix, including the number of paths of length k between nodes and the number of triangles in a graph.

We also began looking at Markov chains and began the state space classification.
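
A minimal sketch of both computations (hypothetical small graph; strictly speaking A^k counts walks of length k, and each triangle shows up six times on the diagonal of A^3):

  A = sparse([1 1 2 3 3 4],[2 3 3 4 5 5],1,5,5);  A = A + A';
  k = 3;
  Wk = A^k;                            % Wk(i,j) = number of length-k walks i to j
  ntri = full(trace(A^3))/6            % number of triangles (2 for this graph)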

Reading
Fast Counting of Triangles in Large Real Networks: Algorithms and Laws, Charalampos E. Tsourakakis (ICDM 2008)
Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix, Haim Avron and Sivan Toledo, JACM 2011.

Lecture 7

In this lecture, we began with semirings and a few examples of them. We saw how to write matrix-vector products over semirings, and how the Bellman-Ford algorithm is really a matrix-vector iteration over the min-plus semiring.
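
A minimal sketch of that idea on a hypothetical three-node graph: over the min-plus semiring, "addition" is min and "multiplication" is +, so one Bellman-Ford pass is d(i) = min over j of W(i,j) + d(j).

  W = [0 Inf Inf; 1 0 Inf; 4 2 0];     % W(i,j): weight of edge j -> i; Inf = no edge
  d = [0; Inf; Inf];                   % distances from node 1
  for iter = 1:size(W,1)-1
      d = min(W + repmat(d',size(W,1),1), [], 2);  % the semiring matrix-vector product
  end
  d                                    % shortest-path distances [0; 1; 3]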

Reading
Graph algorithms (Chapter 4)

Lecture 6

We started this lecture by seeing how to use Matlab’s pcg function to solve a linear system, and then saw how to use eigs to compute eigenvalues and eigenvectors of a sparse matrix. Following these demos, we saw how to implement a breadth-first search with only matrix-vector products.
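
A minimal sketch of BFS by matrix-vector products on a hypothetical five-node graph (A is the adjacency matrix, s the start vertex; names illustrative):

  A = sparse([1 1 2 3 3 4],[2 3 3 4 5 5],1,5,5);  A = A + A';
  n = size(A,1);  s = 1;
  level = -ones(n,1);  level(s) = 0;   % -1 marks unvisited vertices
  x = sparse(s,1,1,n,1);               % indicator of the current frontier
  d = 0;
  while nnz(x) > 0
      d = d + 1;
      y = A*double(x);                 % one matvec reaches all neighbors
      x = y ~= 0 & level < 0;          % keep only unvisited vertices
      level(x) = d;                    % record their BFS level
  end
  level                                % BFS distance from s to each vertex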

Reading
Graph algorithms (Chapter 4)

Lecture 5

We finished up iterative methods by looking at the Richardson, Jacobi, and Gauss-Seidel methods. We then took a whirlwind tour of Krylov subspace methods for linear systems and eigenvalue problems.
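
A minimal Jacobi sketch on a hypothetical diagonally dominant test matrix (Richardson and Gauss-Seidel differ only in the splitting of A):

  n = 50;  e = ones(n,1);
  A = spdiags([-e 4*e -e],-1:1,n,n);   % tridiagonal test matrix
  b = e;
  D = spdiags(full(diag(A)),0,n,n);    % diagonal part of A
  R = A - D;                           % off-diagonal remainder
  x = zeros(n,1);
  for k = 1:100
      x = D \ (b - R*x);               % one Jacobi sweep
  end
  norm(b - A*x)                        % residual is tiny after 100 sweeps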

Lecture 4

This lecture began with sparse matrix-vector products and how they can be used to search for tag overlap. We then moved on to iterative methods for linear systems and saw how the residual gives a good bound on the error of an approximate solution.
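
A small numerical illustration of that bound (hypothetical SPD test matrix from gallery): for any approximate solution xhat, norm(x - xhat) <= norm(inv(A))*norm(r), where r = b - A*xhat.

  A = gallery('poisson',10);           % 100-by-100 SPD test matrix
  xtrue = randn(size(A,1),1);
  b = A*xtrue;
  xhat = xtrue + 1e-6*randn(size(xtrue));   % a slightly wrong "solution"
  r = b - A*xhat;                      % residual
  err = norm(xtrue - xhat);
  bound = norm(r)/min(svd(full(A)));   % norm(inv(A)) = 1/sigma_min(A)
  [err bound]                          % err is at most bound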

Reading
Graph algorithms (Chapter 13)
Direct methods (Chapter 2)

Lecture 3

In this lecture, we reviewed eigenvalue problems and the spectral decomposition. We also saw how to store a sparse matrix as a triplet list or in compressed sparse row arrays.
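
A minimal sketch of triplet storage in Matlab, whose internal sparse format is compressed sparse column, the column analog of CSR (names illustrative):

  i = [1 2 3 3];  j = [2 1 1 2];  v = [5 5 2 7];   % the triplet list
  A = sparse(i,j,v,3,3);               % assemble; duplicate (i,j) entries are summed
  [ii,jj,vv] = find(A);                % recover the triplets from the matrix
  full(A)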

Lecture 2

This lecture was a review of matrix norms and linear systems.

Lecture 1

The lecture was an introduction to the course with the intro slides. We also went over the syllabus and took a background quiz. Additionally, we saw how to do spectral clustering via a quick Matlab demo.