# Network & Matrix Computations

### CIVL 2123

$\newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} \newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} 
\newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}}$

## References

- **Graph algorithms**: *Graph Algorithms in the Language of Linear Algebra*, SIAM, 2011. Edited by Jeremy Kepner and John Gilbert.
- **Direct methods**: *Direct Methods for Sparse Linear Systems*, SIAM, 2006. By Timothy A. Davis.
- **Markov chains**: *Non-negative Matrices and Markov Chains*, Springer. By E. Seneta.
- **Network analysis**: *Network Analysis: Methodological Foundations*, Springer, 2005. Edited by Ulrik Brandes and Thomas Erlebach.

## Lecture 23

Student presentations.

## Lecture 22

Alex Pothen lectured about matchings and the Dulmage-Mendelsohn permutation.

## Lecture 21

Jen Neville lectured about Kronecker product graphs and their lack of variability.

## Lecture 20

Today, we saw the SDP approximation for MaxCut, without the rounding proof (sorry), and we saw local algorithms for PageRank.

- Goemans and Williamson, Improved Approximation Algorithms for Maximum Cut …
- Andersen, Chung, and Lang, Local graph partitioning
- Bonchi, Esfandiar, Gleich, et al.

Matlab

- http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/maxcut_example.m
- http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/minbis_sdp_example.m
- http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/ppr_push_heap.m
- http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/ppr_push_queue.m
- http://www.cs.purdue.edu/homes/dgleich/courses/nmcomp/matlab/localpr.m
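
The push procedure behind the `ppr_push` codes can be sketched in a few lines. Below is a hedged Python version (the linked demos are Matlab) of a Gauss-Southwell-style push solver for $(\eye - \alpha \mP^T) \vx = (1-\alpha) \ve_s$; the queue-based sweep and the tolerance rule here are one common convention, not necessarily the one in the linked files, and every node is assumed to have an out-link.

```python
from collections import deque

def ppr_push(adj, seed, alpha=0.85, eps=1e-10):
    """Gauss-Southwell 'push' sketch for (I - alpha*P^T) x = (1-alpha)*e_seed.

    adj: dict node -> list of out-neighbors; every node needs an out-link here.
    x holds the approximate PageRank values; r holds the residual, and pushing
    node u moves r[u] into x[u] and spreads alpha*r[u] to u's out-neighbors.
    """
    x = {u: 0.0 for u in adj}
    r = {u: 0.0 for u in adj}
    r[seed] = 1.0 - alpha
    queue = deque([seed])
    inqueue = {seed}
    while queue:
        u = queue.popleft()
        inqueue.discard(u)
        ru = r[u]
        if ru <= eps:
            continue
        x[u] += ru          # accept the residual mass at u into the solution
        r[u] = 0.0
        share = alpha * ru / len(adj[u])
        for v in adj[u]:    # spread alpha*ru to the out-neighbors
            r[v] += share
            if r[v] > eps and v not in inqueue:
                queue.append(v)
                inqueue.add(v)
    return x
```

Each push removes at least $(1-\alpha)\,\eps$ of residual mass, which is why the sweep terminates and why the work is local: only nodes near the seed ever enter the queue.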

## Lecture 19

We didn’t get to the proof of the Cheeger inequality last time. That’s what we saw today.

- Fan Chung, Four proofs of the Cheeger inequality and graph partitioning algorithms
- Fan Chung's Spectral Graph Theory course notes
- Yuan Yao's notes

## Lecture 18

In this lecture, we’ll see how to relax the minimum bisection problem to yield the Fiedler partitioning problem. Then we’ll see the analog with a Markov chain, which gives rise to the minimum conductance cut problem, and the Cheeger inequality.

- Dan Spielman’s Spectral Graph Theory course, Lectures 6 and 7
- Fan Chung’s Spectral Graph Theory course notes
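
As a rough illustration of the relaxation, here is a Python sketch of spectral bisection: it approximates the Fiedler vector by power iteration on $\sigma \eye - \mL$ (with the all-ones eigenvector projected out each step) and splits the vertices by sign. The shift $\sigma = 2 \max_u d_u$ and the fixed iteration count are ad hoc choices for small examples, not a production method.

```python
import random

def fiedler_partition(adj, iters=2000):
    """Approximate spectral bisection via power iteration for the Fiedler vector.

    adj: dict node -> list of neighbors (undirected graph). Iterating with
    sigma*I - L, after projecting out the all-ones vector, makes the dominant
    remaining eigenvector the Laplacian eigenvector for lambda_2.
    """
    nodes = sorted(adj)
    n = len(nodes)
    idx = {u: i for i, u in enumerate(nodes)}
    deg = [len(adj[u]) for u in nodes]
    sigma = 2.0 * max(deg)           # >= lambda_max(L), so sigma*I - L is PSD
    rng = random.Random(0)
    x = [rng.random() for _ in range(n)]
    for _ in range(iters):
        mean = sum(x) / n
        x = [xi - mean for xi in x]  # project out the constant eigenvector
        # y = (sigma*I - L) x = (sigma - deg_i) * x_i + (A x)_i
        y = [(sigma - deg[i]) * x[i] for i in range(n)]
        for u in nodes:
            for v in adj[u]:
                y[idx[u]] += x[idx[v]]
        norm = sum(yi * yi for yi in y) ** 0.5
        x = [yi / norm for yi in y]
    return {u: x[idx[u]] >= 0 for u in nodes}  # sign of the entry picks a side
```

On two triangles joined by a single edge, the sign pattern of the computed vector separates the two triangles, which is the behavior the relaxation is after.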

## Lecture 17

The goals for this lecture are:

1. To introduce spectral graph theory and provide some framework for the results.
2. Familiarize everyone with the Laplacian and incidence matrices of a graph, and their relationship.

- Network analysis: Chapter 14
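
The relationship in goal 2 is the identity $\mL = \mB^T \mB$, where $\mB$ is an (arbitrarily) oriented edge-node incidence matrix and $\mL = \mD - \mA$ is the Laplacian. A small self-contained Python check of that identity:

```python
def laplacian_two_ways(nodes, edges):
    """Check the identity L = B^T B for a graph given as node indices + edges.

    B is the oriented incidence matrix: one row per edge with +1 at one
    endpoint and -1 at the other (the orientation is arbitrary).
    L = D - A is built directly for comparison.
    """
    n = len(nodes)
    B = []
    for (u, v) in edges:
        row = [0] * n
        row[u], row[v] = 1, -1
        B.append(row)
    # B^T B by explicit summation over edges
    BtB = [[sum(B[e][i] * B[e][j] for e in range(len(edges)))
            for j in range(n)] for i in range(n)]
    # L = D - A directly: degrees on the diagonal, -1 per edge off-diagonal
    L = [[0] * n for _ in range(n)]
    for (u, v) in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return BtB, L
```

Because the orientation signs cancel in $\mB^T \mB$, the result is the same whichever way each edge is oriented.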

## Lecture 16

The goals for this lecture are:

1. Understand that PageRank is an analytic function of $\alpha$ and what this means.
2. Work through the result that PageRank has a unique limit as $\alpha \to 1$.
3. Understand how the strong component structure of the web impacts computing PageRank and the choice of $\alpha$.
4. See how to compute Personalized PageRank more efficiently.

- PageRank as a function of the damping parameter
- Models and algorithms for PageRank sensitivity (my thesis)
- Jordan canonical form of the Google matrix
- Fast matrix computations for pair-wise and column-wise Katz scores and commute times

## Lecture 15

PageRank models redux and dangling nodes

## Lecture 14

PageRank algorithms
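
The workhorse algorithm is the power (Richardson) iteration $\vx \leftarrow \alpha \mP^T \vx + (1-\alpha)\vv$. A minimal Python sketch, with dangling-node mass redistributed uniformly; that dangling fix is one of several models (see Lecture 15), and the uniform teleportation vector is an assumption of this sketch.

```python
def pagerank(adj, alpha=0.85, tol=1e-10):
    """Power iteration x <- alpha*P^T x + (1-alpha)*v for PageRank.

    adj: dict node -> list of out-links. Dangling nodes (no out-links)
    redistribute their mass uniformly; v is the uniform vector here.
    Iterates until the 1-norm change drops below tol.
    """
    n = len(adj)
    x = {u: 1.0 / n for u in adj}
    while True:
        dangling = sum(x[u] for u in adj if not adj[u])
        # teleportation plus the dangling-node correction
        nxt = {u: (1 - alpha) / n + alpha * dangling / n for u in adj}
        for u in adj:
            if adj[u]:
                share = alpha * x[u] / len(adj[u])
                for v in adj[u]:
                    nxt[v] += share
        err = sum(abs(nxt[u] - x[u]) for u in adj)
        x = nxt
        if err < tol:
            return x
```

The iteration contracts the 1-norm error by a factor of $\alpha$ per step, so roughly $\log(\mathrm{tol}) / \log(\alpha)$ iterations suffice regardless of the graph.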

## Lecture 13

Introduction to PageRank models

## Lecture 12

We began studying centrality measures and saw the definition of a structure index. This motivated a quick review of permutation matrices, how they apply to graphs, and the graph isomorphism problem. Next, we looked at degree centrality, eccentricity centrality, and closeness (or transmission number) centrality. We stated Brandes’ betweenness centrality measure and defined the Katz centrality.

- Network analysis: Chapter 3 (Chapters 4 and 5 are also a good reference)
- Leo Katz, A New Status Index Derived from Sociometric Analysis. Psychometrika, 1953.
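
Katz’s index can be computed from the Neumann series $\vc = \sum_{k \ge 0} \alpha^k \mA^k \ones = (\eye - \alpha \mA)^{-1} \ones$, i.e., the fixed point of $\vc = \ones + \alpha \mA \vc$ (one common normalization; Katz’s original omits the $k = 0$ term). A Python sketch; the damping must satisfy $\alpha < 1/\lambda_{\max}(\mA)$ for the series to converge.

```python
def katz_centrality(adj, alpha=0.1, iters=200):
    """Katz centrality via the fixed-point iteration c <- 1 + alpha * A c.

    adj: dict node -> list of neighbors. Converges to (I - alpha*A)^{-1} 1
    when alpha < 1/lambda_max(A); the iteration contracts geometrically.
    """
    c = {u: 1.0 for u in adj}
    for _ in range(iters):
        c = {u: 1.0 + alpha * sum(c[v] for v in adj[u]) for u in adj}
    return c
```

On a star graph the center scores highest, matching the intuition that Katz counts (damped) walks of all lengths ending at a node.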

## Lecture 11

We finished up with reversibility of Markov chains and saw how they can be used in random sampling algorithms.

- http://www.math.ucsd.edu/~williams/courses/m28908/scullardMath289_Reversibility.pdf
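
One standard construction is the Metropolis chain: from state $i$, propose a neighboring state $j$ and accept with probability $\min(1, \pi_j/\pi_i)$; the resulting chain is reversible with stationary distribution $\vpi$. A Python sketch on a path of states (the $\pm 1$ proposal and the $1/2$ proposal probability are choices of this sketch), where reversibility shows up as detailed balance $\pi_i P_{ij} = \pi_j P_{ji}$.

```python
def metropolis_chain(pi):
    """Build a Metropolis transition matrix on states 0..n-1 targeting pi.

    Proposals move to i-1 or i+1 with probability 1/2 each; a proposal to j
    is accepted with probability min(1, pi[j]/pi[i]), and rejected moves
    (plus out-of-range proposals) stay put, filling the diagonal.
    """
    n = len(pi)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                P[i][j] = 0.5 * min(1.0, pi[j] / pi[i])
        P[i][i] = 1.0 - sum(P[i])   # leftover probability: stay at i
    return P
```

Detailed balance holds by construction, so the chain samples from $\vpi$ without ever needing its normalization constant.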

## Lecture 10

More on Perron-Frobenius and a reward function over the states of a Markov chain.

- Markov chains: Chapter 1

## Lecture 9

We finished the state space classification, then went into the Perron-Frobenius theorem and the convergence of the probability distribution vector over the states, i.e., via powers of the transition matrix.

- Markov chains: Chapter 1
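
The convergence statement can be seen numerically: for an irreducible, aperiodic chain, any starting distribution pushed through powers of $\mP$ approaches the same stationary vector. A tiny Python illustration with a 2-state chain:

```python
def distribution_powers(P, v, k):
    """Evolve a row distribution v through k steps of transition matrix P.

    Computes v P^k by repeated vector-matrix products; for an irreducible
    aperiodic P, the result approaches the stationary distribution.
    """
    n = len(P)
    for _ in range(k):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v
```

Two different starting distributions end up at (numerically) the same vector, since the subdominant eigenvalue of $\mP$ has modulus below 1 and its component decays geometrically.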

## Lecture 8

We first saw how to compute a few graph properties purely with the adjacency matrix, including the number of length-$k$ walks between nodes and the number of triangles in a graph.

We also began looking at Markov chains and began the state space classification.

- Fast Counting of Triangles in Large Real Networks: Algorithms and Laws. Charalampos E. Tsourakakis, ICDM 2008.
- Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. Haim Avron and Sivan Toledo, JACM 2011.
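
The adjacency-matrix identity behind the triangle count is that $(\mA^k)_{uv}$ counts length-$k$ walks from $u$ to $v$, so $\trace(\mA^3)$ counts closed 3-walks and the triangle count is $\trace(\mA^3)/6$ (each triangle is a closed walk from 3 starting vertices in 2 directions). A direct Python sketch that enumerates the closed walks rather than forming $\mA^3$:

```python
def count_triangles(adj):
    """Triangles = trace(A^3) / 6, computed by enumerating closed 3-walks.

    adj: dict node -> list of neighbors (undirected, no self-loops).
    Each triangle {u, v, w} is seen 6 times: 3 start vertices x 2 directions.
    """
    closed = 0
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if u in adj[w]:       # the walk u -> v -> w -> u closes
                    closed += 1
    return closed // 6
```

For a large sparse graph one would exploit sparsity or the randomized trace estimators from the Avron-Toledo paper instead of this cubic enumeration.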

## Lecture 7

In this lecture, we began with semirings and a few examples of them. We saw how to write matrix-vector products over semirings, and how the Bellman-Ford algorithm is really a matrix-with-a-semiring algorithm.

- Graph algorithms: Chapter 4
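
Concretely, over the $(\min, +)$ semiring a "matrix-vector product" is $y_i = \min_j (A_{ij} + x_j)$, and $n-1$ such products with the weight matrix give Bellman-Ford. A Python sketch with $A_{ii} = 0$, missing edges at $+\infty$, symmetric weights, and no negative cycles assumed:

```python
INF = float('inf')

def minplus_matvec(A, x):
    """Matrix-vector product over the (min, +) semiring:
    'plus' becomes min and 'times' becomes +."""
    n = len(A)
    return [min(A[i][j] + x[j] for j in range(n)) for i in range(n)]

def bellman_ford(A, source):
    """Bellman-Ford as n-1 repeated min-plus products.

    A: weight matrix with A[i][i] = 0 and A[i][j] = w(i,j) or INF.
    Starting from the 'semiring indicator' of the source, each product
    relaxes all edges once, so n-1 products give shortest distances.
    """
    n = len(A)
    d = [INF] * n
    d[source] = 0.0
    for _ in range(n - 1):
        d = minplus_matvec(A, d)
    return d
```

The point of the semiring view is that the same matvec-based code skeleton computes shortest paths, reachability (boolean semiring), or ordinary walk counts, just by swapping the two operations.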

## Lecture 6

We started this lecture by seeing how to use Matlab’s pcg function to solve a linear system, and then saw how to use eigs to compute eigenvalues and eigenvectors of a sparse matrix. Following these demos, we saw how to implement the breadth-first search routine with only matrix-vector products.

- Graph algorithms: Chapter 4
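
The BFS-as-matvec idea: the frontier is a 0/1 vector, and one product with the adjacency matrix over the boolean semiring maps it to the union of its neighbor sets; masking out already-visited nodes gives the next level. A Python sketch using sets in place of sparse 0/1 vectors:

```python
def bfs_levels(adj, source):
    """BFS written as repeated boolean-semiring matrix-vector products.

    adj: dict node -> list of neighbors. The frontier set plays the role of
    a sparse 0/1 vector; one 'product' with A yields the next frontier,
    restricted to nodes without an assigned level yet.
    """
    level = {source: 0}
    frontier = {source}
    d = 1
    while frontier:
        # y = A x over the boolean semiring, masked by unvisited nodes
        nxt = {v for u in frontier for v in adj[u] if v not in level}
        for v in nxt:
            level[v] = d
        frontier = nxt
        d += 1
    return level
```

This is exactly the structure the Matlab demo exploits: each BFS level is one sparse matvec, so the whole search is a short loop of matrix operations.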

## Lecture 5

We finished up iterative methods by looking at the Richardson, Jacobi, and Gauss-Seidel methods. We then took a whirlwind tour of Krylov subspace methods for linear systems and eigenvalue problems.
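
As a reminder of the stationary methods, Jacobi splits $\mA = \mD + (\mA - \mD)$ and iterates $\vx \leftarrow \mD^{-1}(\vb - (\mA - \mD)\vx)$. A minimal Python sketch with a fixed iteration count (convergence holds, e.g., when $\mA$ is strictly diagonally dominant):

```python
def jacobi(A, b, iters=200):
    """Jacobi iteration x <- D^{-1} (b - (A - D) x) for a dense matrix A.

    Each sweep updates every component using only values from the previous
    iterate (unlike Gauss-Seidel, which uses fresh values immediately).
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Richardson and Gauss-Seidel differ only in the splitting: Richardson uses $\mA = \eye + (\mA - \eye)$ (up to a scaling), and Gauss-Seidel uses the lower triangle instead of just the diagonal.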

## Lecture 4

This lecture began with sparse matrix-vector products and how they can be used to search for tag overlaps. We then moved on to iterative methods for linear systems and saw how the residual gives a good bound on the error in an approximate solution.

- Graph algorithms: Chapter 13
- Direct methods: Chapter 2
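
The residual-error relationship: if $\vr = \vb - \mA \hat{\vx}$, then the error $\ve = \vx - \hat{\vx}$ satisfies $\ve = \mA^{-1} \vr$, so $\normof{\ve} \le \normof{\mA^{-1}} \normof{\vr}$. A tiny Python check of that inequality in the infinity norm; the $2 \times 2$ matrix and the perturbed "approximate solution" are made up for illustration.

```python
def residual_error_demo():
    """Verify ||e||_inf <= ||A^{-1}||_inf * ||r||_inf on a hand-built example.

    A^{-1} is written out exactly (det A = 5), x_hat is a small perturbation
    of the true solution, and we return (error norm, computed bound).
    """
    A = [[2.0, 1.0], [1.0, 3.0]]
    Ainv = [[0.6, -0.2], [-0.2, 0.4]]     # A^{-1} = (1/5) [[3, -1], [-1, 2]]
    x_true = [1.0, 2.0]
    b = [A[0][0] * x_true[0] + A[0][1] * x_true[1],
         A[1][0] * x_true[0] + A[1][1] * x_true[1]]
    x_hat = [1.01, 1.98]                  # a perturbed approximate solution
    r = [b[i] - sum(A[i][j] * x_hat[j] for j in range(2)) for i in range(2)]
    e = [x_true[i] - x_hat[i] for i in range(2)]
    inf_norm = lambda v: max(abs(t) for t in v)
    Ainv_norm = max(sum(abs(t) for t in row) for row in Ainv)  # induced inf-norm
    return inf_norm(e), Ainv_norm * inf_norm(r)
```

The practical upshot is that a small residual certifies a small error only when $\normof{\mA^{-1}}$ (equivalently, the conditioning of $\mA$) is under control.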

## Lecture 3

In this lecture, we reviewed eigenvalue problems and the spectral decomposition. We also saw how to store a sparse matrix as a triplet list or in compressed sparse row arrays.
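
The triplet-to-CSR conversion goes: count the entries in each row, take a cumulative sum to get the row pointer array, then scatter each entry into its row's segment. A Python sketch (duplicate entries are assumed absent rather than summed):

```python
def triplets_to_csr(n, triplets):
    """Convert a (row, col, val) triplet list into CSR arrays.

    Returns (rowptr, colind, values): the entries of row i live in the
    slice rowptr[i]:rowptr[i+1] of colind/values.
    """
    counts = [0] * n
    for (i, _, _) in triplets:
        counts[i] += 1
    rowptr = [0] * (n + 1)
    for i in range(n):                 # cumulative sum of the row counts
        rowptr[i + 1] = rowptr[i] + counts[i]
    colind = [0] * len(triplets)
    values = [0.0] * len(triplets)
    nxt = list(rowptr[:n])             # next free slot within each row
    for (i, j, v) in triplets:
        p = nxt[i]
        colind[p], values[p] = j, v
        nxt[i] = p + 1
    return rowptr, colind, values
```

CSR costs one integer per row plus one integer and one value per nonzero, and makes row slicing and matrix-vector products cheap, which is why it is the default sparse format in most libraries.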

## Lecture 2

This lecture was a review of matrix norms and linear systems.

## Lecture 1

The lecture was an introduction to the course with the intro slides. We also went over the syllabus and took a background quiz. Additionally, we saw how to do spectral clustering via a quick Matlab demo.