$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 7

Please answer the following questions in complete sentences in a typed, clearly prepared manuscript, and submit the solution by the due date on Blackboard (Monday, October 30th, 2017, around 4-5 am).

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Prove or disprove

People seem to like these questions!

1. The eigenvalues of an $n \times n$ real-valued matrix are always real.

2. The solution to $(\mA^T \mA + \gamma \eye) \vx = \vb$ is unique for any $\gamma > 0$.

3. Every $n \times n$ matrix can be written as the sum of two non-singular matrices.

4. An $n \times n$ real-valued matrix has an orthogonal set of eigenvectors.

5. An $n \times n$ matrix $\mA$ and its transpose $\mA^T$ have the same set of eigenvalues.
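A quick numerical experiment can guide intuition before writing a proof or hunting for a counterexample. The sketch below (in current Julia syntax, with arbitrarily chosen test matrices) probes statements 1 and 5; the experiments suggest answers but do not replace proofs:

```julia
using LinearAlgebra

# Statement 1: a real 90-degree rotation matrix has purely imaginary eigenvalues.
R = [0.0 -1.0; 1.0 0.0]
println(eigvals(R))            # complex eigenvalues of a real-valued matrix

# Statement 5: A and A' share a characteristic polynomial, so the spectra match.
A = [1.0 2.0; 3.0 4.0]
same = sort(eigvals(A)) ≈ sort(eigvals(Matrix(A')))
println(same)
```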

## Problem 2: The power method, and beyond!

We'll show that the humble power method is rather more interesting than what we saw in class! It's a flexible starting point that serves as the basis for almost all eigenvalue algorithms.

1. (Warm up) Consider the power method as a Julia code:

        # assumes the matrix A and a starting vector x are already defined
        maxit = 1000
        lam = 0
        for i=1:maxit
            x = A*x
            x = x./norm(x)
            lam = x'*A*x
        end


Let $\vx\itn{i}$ be the value of x at the start of the $i$th iteration of this loop, and let $\lambda\itn{i}$ be the value of lam at the start of the $i$th iteration. Suppose that $\vx$ and $\lambda$ are the true eigenvector, eigenvalue pair to which we are converging. If $\normof{\vx - \vx\itn{i}} = \eps$, show that $|\lambda - \lambda\itn{i}|$ is $O(\eps^2)$ (where the constant may depend on the matrix $\mA$).
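One way to sanity-check the claimed $O(\eps^2)$ behavior numerically is to perturb an exact eigenvector by a known amount $\eps$ and watch the error in the Rayleigh quotient. This sketch (current Julia syntax, arbitrary symmetric test matrix) is illustrative only and is not part of the proof:

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0]          # small symmetric test matrix
lams, X = eigen(A)
x = X[:, end]                   # exact eigenvector for the largest eigenvalue
λ = lams[end]

errs = Float64[]
for eps in (1e-2, 1e-4)
    xp = x + eps * X[:, 1]      # perturb in an orthogonal eigendirection
    xp = xp / norm(xp)
    push!(errs, abs(λ - xp' * A * xp))
end
println(errs)                   # the eigenvalue error drops like eps^2
```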

2. (Warm up) Show that if $\mA$ is nonsingular and $\vx$ is an eigenvector of $\mA$ with eigenvalue $\lambda$, then $\vx$ is an eigenvector of $\mA^{-1}$. What is the eigenvalue?
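A one-line numerical check of this claim (the nonsingular test matrix is an arbitrary choice) compares the spectrum of $\mA^{-1}$ to the reciprocals of the spectrum of $\mA$:

```julia
using LinearAlgebra

A = [4.0 1.0; 2.0 3.0]                              # nonsingular test matrix
match = sort(eigvals(inv(A))) ≈ sort(1 ./ eigvals(A))
println(match)                                      # reciprocal eigenvalues
```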

3. Let $\mA$ have a complete set of distinct eigenvalues $\lambda_i \not= \lambda_j$ for all $i \not= j$, with associated eigenvectors $\vx_i$. For simplicity, suppose that all the eigenvalues are real. Suppose we run the following Julia code:

        maxit = 1000
        F = lufact(A)
        for i=1:maxit
            x = F\x
            x = x./norm(x)
            lam = x'*A*x
        end


Does this converge? If so, what does it converge to, and how fast does it converge?
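To experiment with this question, here is a runnable version of the loop in current Julia syntax (where `lu` replaces the older `lufact`), on an arbitrary symmetric test matrix; comparing the printed value against `eigvals(A)` hints at which eigenpair this iteration targets:

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0]
F = lu(A)                      # lufact(A) in 2017-era Julia
x = [1.0, 1.0]
lam = 0.0
for i in 1:100
    global x, lam
    x = F \ x                  # apply A^{-1} via the LU factorization
    x = x / norm(x)
    lam = x' * A * x
end
println(lam)                   # compare against eigvals(A)
```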

4. Suppose you were in a parallel linear algebra class, taught by another professor, and you were answering the following question:

"Make the power method converge faster to any eigenvalue, eigenvector pair! Dense linear algebra is cheap. Work in pairs."

Your friend says that you should look at this algorithm:

        # assumes x and an initial eigenvalue estimate lam are already defined
        maxit = 1000
        for i=1:maxit
            x = (A - lam*eye(size(A,1)))\x
            x = x./norm(x)
            lam = x'*A*x
        end


This algorithm is called "Rayleigh quotient iteration" and the standard result is that, for symmetric matrices, $\vx\itn{k}$ converges to an eigenvector cubically. That is, if $\normof{\vx\itn{k} - \vx} = \eps$, then $\normof{\vx\itn{k+1} - \vx} = O(\eps^3)$. Look up this result in a textbook, or on the web, and explain the proof in your own words.

5. Implement this method. Do you observe cubic convergence when you try to find an eigenvalue, eigenvector pair of a symmetric matrix?
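A minimal sketch of Rayleigh quotient iteration in current Julia syntax (arbitrary symmetric test matrix; the early-exit guard avoids solving with an exactly singular shift once converged). Measuring the cubic rate requires recording the error at each step, which is left as part of the exercise:

```julia
using LinearAlgebra

A = [2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0]   # symmetric test matrix
x = normalize(ones(3))
lam = x' * A * x                               # initial Rayleigh quotient
for i in 1:20
    global x, lam
    norm(A * x - lam * x) < 1e-12 && break     # converged; shift is singular
    x = (A - lam * I) \ x                      # solve with the current shift
    x = x / norm(x)
    lam = x' * A * x
end
println(lam)                                   # an eigenvalue of A
```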

## Problem 3: PageRank and the power method

In class, I derived PageRank as the solution of the linear system
$$(\eye - \alpha \mP) \vx = (1-\alpha) \vv,$$
where $\mP$ is a column-stochastic matrix, $0 \le \alpha < 1$, and $\vv$ is a non-negative vector whose elements sum to 1.

1. (Warm up) Use eigenvalue properties of $(\eye - \alpha \mP)$ to show that it is a non-singular matrix. (Hint: recall that $\rho(\mP) \le \normof{\mP}$ for any norm. Is there a norm where we know $\normof{\mP}$ has a known value?)
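The hint can be explored numerically: for a column-stochastic $\mP$, the matrix 1-norm (maximum absolute column sum) equals 1, so $\rho(\alpha\mP) \le \alpha < 1$ and no eigenvalue of $\alpha\mP$ equals 1. A sketch with an arbitrary small stochastic matrix:

```julia
using LinearAlgebra

# Columns of P sum to 1 (column-stochastic); an arbitrary 3-node example.
P = [0.5 0.2 0.0;
     0.5 0.3 1.0;
     0.0 0.5 0.0]
alpha = 0.85
println(opnorm(P, 1))                          # max column sum = 1
println(maximum(abs.(eigvals(alpha * P))))     # spectral radius ≤ alpha < 1
```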

2. The standard definition of PageRank is as the eigenvector corresponding to the largest eigenvalue of the matrix
$$\mM = \alpha \mP + (1-\alpha) \vv \ones^T,$$
normalized to have sum one (instead of the standard 2-norm scaling). Consider the power method, without normalization, applied to $\mM$ starting from the vector $\vv$:
$$\vx\itn{k+1} = \mM \vx\itn{k}, \qquad \vx\itn{0} = \vv.$$
Show that the iterates from this method are always non-negative and sum to 1.

3. Use this fact to simplify the iteration and show that the power method will converge to the solution of the linear system $(\eye - \alpha \mP) \vx = (1-\alpha) \vv$. You should get the same iteration that I discussed in class.
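The simplified iteration from this part can be sketched as follows (current Julia syntax; the small column-stochastic matrix is an arbitrary example). Each iterate stays non-negative with sum 1, and the limit solves the linear system:

```julia
using LinearAlgebra

P = [0.5 0.2 0.0;
     0.5 0.3 1.0;
     0.0 0.5 0.0]                  # column-stochastic example
alpha = 0.85
n = size(P, 1)
v = ones(n) / n
x = copy(v)
for i in 1:1000
    global x
    x = alpha * (P * x) + (1 - alpha) * v             # simplified power-method step
end
println(sum(x))                                       # stays equal to 1
println(norm((I - alpha * P) * x - (1 - alpha) * v))  # residual ≈ 0
```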

4. Download the data purdue_web.mat from http://www.cs.purdue.edu/homes/dgleich/cs515-2017/homeworks/purdue_web.mat and load it:

        using MAT
        file = matopen("purdue_web.mat")
        P = read(file, "P")
        url = read(file, "url")
        close(file)


It contains the matrix $\mP$ for Purdue's website from 2001. The url variable contains the URL for each row/column of the matrix. Use url[1] to show the first URL.

5. Use the power method to compute the PageRank vector $\vx$ for $\mP$ using $\alpha = 0.85$ and $v_i = 1/n$, where $n$ is the size of the matrix. In Julia:

        n = size(P,1)
        v = ones(n)./n
        alpha = 0.85

What is x[1]? Use the following code to sort the vector and output the top 27 entries:

        p = sortperm(x,rev=true);
        url[p[1:27]]


Report the output from this code.