Student presentations.
Alex Pothen lectured about matchings and the Dulmage-Mendelsohn permutation.
Jen Neville lectured about Kronecker product graphs and their lack of variability.
Today, we saw the SDP approximation for MaxCut, without the rounding proof (sorry), and we saw local algorithms for PageRank.
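For reference, the MaxCut SDP is usually written as follows (standard Goemans-Williamson form; the lecture's normalization may differ): with graph Laplacian $L$,
\[ \max_{X} \; \tfrac{1}{4} \langle L, X \rangle \quad \text{s.t.} \quad X_{ii} = 1 \text{ for all } i, \quad X \succeq 0, \]
obtained by relaxing $X = x x^T$ with $x \in \{-1,+1\}^n$. The rounding step we skipped cuts a factorization of $X$ with a random hyperplane, which gives the $\approx 0.878$ approximation guarantee.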
We didn’t get to the proof of the Cheeger inequality last time. That’s what we saw today.
In this lecture, we'll see how to relax the minimum bisection problem to yield the Fiedler partitioning problem. Then we'll see the analog with a Markov chain, which gives rise to the minimum conductance cut problem, and the Cheeger inequality.
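For reference, the relaxation can be summarized as follows (a sketch in standard notation, which may differ slightly from the notes): with graph Laplacian $L$ and all-ones vector $e$, minimum bisection and its spectral relaxation are
\[ \min_{x \in \{-1,+1\}^n,\; e^T x = 0} \tfrac{1}{4} x^T L x \quad \longrightarrow \quad \min_{x \in \mathbb{R}^n,\; e^T x = 0,\; x^T x = n} \tfrac{1}{4} x^T L x \;=\; \tfrac{n}{4} \lambda_2(L), \]
and the minimizer of the relaxed problem is the Fiedler vector, the eigenvector for $\lambda_2(L)$. The Cheeger inequality then relates the analogous eigenvalue $\lambda_2$ of the normalized Laplacian to the minimum conductance: $\lambda_2/2 \le \phi(G) \le \sqrt{2\lambda_2}$.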
The goals for this lecture are:
- PageRank models redux and dangling nodes
- PageRank algorithms
- Introduction to PageRank models
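Putting the models together, here is a minimal Matlab sketch of the PageRank computation as one linear system, assuming A is an n-by-n adjacency matrix, alpha in (0,1) is the teleportation parameter, and v is a preference vector summing to 1 (all names illustrative); dangling nodes are handled by replacing their empty rows with v:

    n = size(A,1);
    d = sum(A,2);                                 % out-degrees
    dangling = (d == 0);                          % nodes with no out-links
    d(dangling) = 1;                              % avoid division by zero
    P = spdiags(1./d, 0, n, n) * A;               % row-stochastic transition matrix
    P(dangling,:) = repmat(v', nnz(dangling), 1); % dangling rows teleport to v
    x = (speye(n) - alpha*P') \ ((1-alpha)*v);    % PageRank linear system
    x = x / sum(x);                               % normalize to a probability vector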
We began studying centrality measures. We saw the definition of a structural index, which motivated a quick review of permutation matrices, how they apply to graphs, and the graph isomorphism problem. Next, we looked at degree centrality, eccentricity centrality, and closeness (or transmission-number) centrality. We stated the betweenness centrality measure, along with Brandes' algorithm for computing it, and defined the Katz centrality.
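As a small illustration of that last definition (a sketch, not the lecture's code), Katz centrality for an undirected graph with adjacency matrix A reduces to one sparse linear solve, provided the attenuation parameter sits below 1/lambda_max(A):

    n = size(A,1);
    lmax = eigs(A, 1);               % largest-magnitude eigenvalue of A
    alpha = 0.85 / lmax;             % attenuation below 1/lambda_max (illustrative choice)
    % Katz scores: x = sum_{k>=1} alpha^k A^k 1, summed in closed form
    x = (speye(n) - alpha*A) \ (alpha*(A*ones(n,1)));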
We finished up with reversibility of Markov chains and saw how reversible chains can be used in random sampling algorithms.
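For reference, a chain with stationary distribution $\pi$ is reversible when it satisfies detailed balance,
\[ \pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all states } i, j, \]
which is the property that Metropolis-style sampling algorithms are designed to enforce for a target distribution.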
More on Perron-Frobenius and a reward function over the states of a Markov chain.
We finished the state space classification, went into the Perron-Frobenius theorem, and saw the convergence of the probability distribution vector over states, i.e., the behavior of powers of the transition matrix.
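A minimal Matlab sketch of that convergence, assuming an irreducible, aperiodic chain with row-stochastic transition matrix P (names illustrative):

    n = size(P,1);
    x = zeros(n,1); x(1) = 1;        % start deterministically in state 1
    for k = 1:500                    % x holds the distribution after k steps
        x = P' * x;                  % one step of the chain: x_k' = x_0' * P^k
    end
    % x now approximates the stationary distribution pi, with pi' * P = pi'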
We first saw how to compute a few graph properties purely with the adjacency matrix, including the number of walks of length k between nodes and the number of triangles in a graph.
We also began looking at Markov chains and began the state space classification.
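A small Matlab illustration of the adjacency-matrix computations above (my example, not the lecture demo):

    % (A^k)(i,j) counts the walks of length k from node i to node j
    k = 3;
    walks = A^k;
    % trace(A^3) counts each triangle 6 times (3 starting nodes, 2 directions)
    ntri = trace(A^3) / 6;           % triangles in an undirected simple graph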
In this lecture, we began with semirings and a few examples of them. We saw how to write matrix-vector products over semirings, and we saw how the Bellman-Ford algorithm is really a matrix algorithm over a semiring.
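A Matlab sketch of that view (a sketch, not the lecture's code), assuming W is an n-by-n weight matrix with Inf for missing edges and 0 on the diagonal, and s is the source node:

    % Bellman-Ford as matrix-vector products over the min-plus semiring,
    % where "plus" is min and "times" is +.
    n = size(W,1);
    d = inf(n,1); d(s) = 0;
    for it = 1:n-1
        % one min-plus matvec: dnew(j) = min_i ( d(i) + W(i,j) )
        d = min(d, min(d + W, [], 1)');  % implicit expansion (bsxfun on old Matlab)
    end
    % d(j) is now the shortest-path distance from s to j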
We started this lecture by seeing how to use Matlab's pcg function to solve a linear system, and then saw how to use eigs to compute eigenvalues and eigenvectors of a sparse matrix. Following these demos, we saw how to implement a breadth-first search routine with only matrix-vector products.
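A minimal sketch of that breadth-first search, assuming a sparse symmetric adjacency matrix A and source node s (variable names mine):

    n = size(A,1);
    dist = -ones(n,1);               % -1 marks unvisited nodes
    f = false(n,1); f(s) = true;     % current frontier indicator
    dist(s) = 0; level = 0;
    while any(f)
        level = level + 1;
        f = (A' * double(f) > 0) & (dist < 0);  % unvisited neighbors of the frontier
        dist(f) = level;
    end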
We finished up iterative methods by looking at the Richardson, Jacobi, and Gauss-Seidel methods. We then took a whirlwind tour of Krylov subspace methods for linear systems and eigenvalue problems.
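A Matlab sketch of the Jacobi method from that tour, under the usual assumption that A has a nonzero diagonal (maxit and tol are illustrative parameters):

    % Jacobi: split A = D + R with D = diag(A), iterate x <- D \ (b - R*x)
    D = diag(diag(A));
    R = A - D;
    x = zeros(size(b));
    for it = 1:maxit
        x = D \ (b - R*x);
        if norm(b - A*x) <= tol*norm(b), break; end  % stop on a small residual
    end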
This lecture began with sparse matrix-vector products and how they can be used to search for tag overlap. We then moved on to iterative methods for linear systems and saw how the residual gives a bound on the error of an approximate solution.
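One standard way to state that bound (notation may differ from the lecture): if $r = b - A\tilde{x}$ is the residual of an approximate solution $\tilde{x}$, then
\[ \frac{\lVert \tilde{x} - x \rVert}{\lVert x \rVert} \le \kappa(A)\, \frac{\lVert r \rVert}{\lVert b \rVert}, \]
so a small relative residual certifies a small relative error whenever the condition number $\kappa(A)$ is modest.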
In this lecture, we reviewed eigenvalue problems and the spectral decomposition. We also saw how to store a sparse matrix as a triplet list or in compressed sparse row arrays.
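As a tiny illustration (my example, not the lecture's): consider the 3-by-3 matrix with nonzeros A(1,1)=4, A(1,3)=1, and A(3,2)=2. Its triplet list is (1,1,4), (1,3,1), (3,2,2), and its compressed sparse row arrays (1-based) are

    rowptr = [1 3 3 4]    % row i occupies entries rowptr(i) : rowptr(i+1)-1
    colind = [1 3 2]
    val    = [4 1 2]

In Matlab, the triplet form maps directly onto the sparse constructor: A = sparse([1 1 3], [1 3 2], [4 1 2], 3, 3).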
This lecture was a review of matrix norms and linear systems.
The lecture was an introduction to the course with the intro slides. We also went over the syllabus and took a background quiz. Additionally, we saw how to do spectral clustering via a quick Matlab demo.