
# Homework 7

Please answer the following questions in complete sentences in a typed, clearly prepared manuscript, and submit your solution on Blackboard by the due date (Monday, October 30, 2017, around 4-5 am).

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Prove or disprove

People seem to like these questions!

1. The eigenvalues of an $n \times n$ real-valued matrix are always real.

2. The solution to $(\mA^T \mA + \gamma I) \vx = \vb$ is unique for any $\gamma > 0$.

3. Every $n \times n$ matrix can be written as the sum of two non-singular matrices.

4. An $n \times n$ real-valued matrix has an orthogonal set of eigenvectors.

5. An $n \times n$ matrix $\mA$ and its transpose $\mA^T$ have the same set of eigenvalues.
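Before deciding which of these to prove and which to disprove, it can help to experiment numerically. Below is a sanity-check sketch in Python/NumPy (the course code is Julia, but the same experiments are one-liners there); the matrix size and random seed are arbitrary illustrative choices, and a numeric check is of course not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# Statement 5: A and A^T have the same characteristic polynomial, since
# det(A - lam*I) = det((A - lam*I)^T) = det(A^T - lam*I).
ev_A = np.sort_complex(np.linalg.eigvals(A))
ev_At = np.sort_complex(np.linalg.eigvals(A.T))
print(np.allclose(ev_A, ev_At))

# Statement 1: try a real 2x2 rotation matrix and inspect its eigenvalues.
t = 0.3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.linalg.eigvals(R))
```

Experiments like this can only suggest an answer; the write-up still needs an argument or an explicit counterexample.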

## Problem 2: The power method, and beyond!

We'll show that the humble power method is really rather more interesting than what we saw in class! It's a flexible starting point that serves as the basis for almost all practical eigenvalue algorithms.

1. (Warm up) Consider the power method written as Julia code:

    maxit = 1000
    lam = 0
    # assumes the matrix A and a starting vector x are already defined
    for i = 1:maxit
        x = A*x
        x = x./norm(x)
        lam = x'*A*x
    end


Let $\vx\itn{i}$ be the value of x at the start of the $i$th iteration of this loop, and let $\lambda\itn{i}$ be the value of lam at the start of the $i$th iteration. Suppose that $\vx$ and $\lambda$ are the true eigenvector, eigenvalue pair that we are converging to. If $\normof{\vx - \vx\itn{i}} = \eps$, show that $|\lambda - \lambda\itn{i}|$ is $O(\eps^2)$ (where the constant may depend on the matrix $\mA$).

2. (Warm up) Show that if $\vx$ is an eigenvector of $\mA$ with eigenvalue $\lambda$, then $\vx$ is an eigenvector of $\mA^{-1}$. What is the eigenvalue?

3. Let $\mA$ have a complete set of eigenvectors $\vx_i$ with distinct eigenvalues, $\lambda_i \not= \lambda_j$ for all $i \not= j$. For the sake of simplicity, let's suppose that all the eigenvalues are real. Suppose we run the following Julia code:

    maxit = 1000
    F = lufact(A)
    # assumes a starting vector x is already defined
    for i = 1:maxit
        x = F\x
        x = x./norm(x)
        lam = x'*A*x
    end


Does this converge? If so, what does it converge to, and how fast does it converge?

4. Suppose you were in a parallel linear algebra class, taught by another professor, and you were answering the following question:

"Make the power method converge faster to any eigenvalue, eigenvector pair! Dense linear algebra is cheap. Work in pairs."

Your friend says that you should look at this algorithm:

    maxit = 1000
    # assumes a starting vector x and an initial shift lam are already defined
    for i = 1:maxit
        x = (A - lam*eye(size(A,1)))\x
        x = x./norm(x)
        lam = x'*A*x
    end


This algorithm is called "Rayleigh quotient iteration," and the standard result is that, for a symmetric matrix, $\vx\itn{k}$ converges to an eigenvector cubically. That is, if $\normof{\vx\itn{k} - \vx} = \eps$, then $\normof{\vx\itn{k+1} - \vx} = O(\eps^3)$. Look up this result in a textbook, or on the web, and explain the proof in your own words.

5. Implement this method. Do you observe cubic convergence when you try to find an eigenvalue, eigenvector pair of a symmetric matrix?
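As a starting point for implementing the method, here is a minimal Rayleigh quotient iteration sketch in Python/NumPy rather than the course's Julia; the function name, test matrix, and tolerance are my own illustrative choices.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x, maxit=50, tol=1e-12):
    """One possible sketch: shift by the current Rayleigh quotient,
    solve the shifted system, normalize, and repeat."""
    x = x / np.linalg.norm(x)
    lam = x @ A @ x
    for _ in range(maxit):
        try:
            x = np.linalg.solve(A - lam * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break  # the shift hit an eigenvalue exactly; stop here
        x = x / np.linalg.norm(x)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return lam, x

rng = np.random.default_rng(1)
B = rng.standard_normal((10, 10))
A = (B + B.T) / 2                        # a random symmetric test matrix
lam, x = rayleigh_quotient_iteration(A, rng.standard_normal(10))
print(np.linalg.norm(A @ x - lam * x))   # residual should be tiny
```

To observe the convergence rate, print $\normof{\vx\itn{k} - \vx}$ at each step and check how the exponent of the error changes from one iteration to the next.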

## Problem 3: PageRank and the power method

In class, I derived PageRank as the solution of the linear system $(\eye - \alpha \mP) \vx = (1-\alpha) \vv$, where $\mP$ is a column-stochastic matrix, $0 \le \alpha < 1$, and $\vv$ is a non-negative vector whose elements sum to 1.

1. (Warm up) Use eigenvalue properties of $(\eye - \alpha \mP)$ to show that it is a non-singular matrix. (Hint: recall that $\rho(\mP) \le \normof{\mP}$ for any norm. Is there a norm for which $\normof{\mP}$ has a known value?)

2. The standard definition of PageRank is as the eigenvector associated with the largest eigenvalue of the matrix $\mM = \alpha \mP + (1-\alpha) \vv \ve^T$, normalized to have sum one (instead of the standard 2-norm scaling), where $\ve$ is the vector of all ones. Consider the power method, without normalization, applied to $\mM$ starting from the vector $\vv$: that is, $\vx\itn{k+1} = \mM \vx\itn{k}$ with $\vx\itn{0} = \vv$. Show that the iterates from this method are always non-negative and sum to 1.

3. Use this fact to simplify the iteration and show that the power method will converge to the solution of the linear system $(\eye - \alpha \mP) \vx = (1-\alpha) \vv$. You should get the same iteration that I discussed in class.

4. Download the data purdue_web.mat from http://www.cs.purdue.edu/homes/dgleich/cs515-2017/homeworks/purdue_web.mat and load it:

    using MAT
    file = matopen("purdue_web.mat")
    P = read(file, "P")      # the link matrix
    url = read(file, "url")  # the URL strings
    close(file)


It contains the matrix $\mP$ for Purdue's website from 2001. The url variable contains the URL for each row/column of the matrix. Use url[1] to show the first url.

5. Use the power method to compute the PageRank vector $\vx$ for $\mP$ using $\alpha = 0.85$ and $v_i = 1/n$, where $n$ is the size of the matrix. In Julia:
    n = size(P,1)
    v = ones(n)./n
    alpha = 0.85


What is x[1]? Use the following code to sort the vector and output the top 27 entries:

    p = sortperm(x,rev=true);
    url[p[1:27]]


Report the output from this code.
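Before running on the full web graph, it can help to sanity check the simplified iteration on a tiny example. Here is a Python/NumPy sketch on a made-up 3-node column-stochastic matrix; the matrix, $\alpha$, and iteration count are illustrative choices only, and the real computation should be done in Julia on the matrix from purdue_web.mat.

```python
import numpy as np

alpha = 0.85
# a made-up 3-node web graph; each column sums to 1 (column-stochastic)
P = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
n = P.shape[0]
v = np.ones(n) / n                      # teleportation vector; sums to 1

x = v.copy()
for _ in range(1000):
    # no renormalization is needed: each iterate stays non-negative
    # and sums to 1 because P is column-stochastic
    x = alpha * (P @ x) + (1 - alpha) * v

print(x)
# x should (approximately) solve the linear system (I - alpha*P) x = (1-alpha) v
print(np.linalg.norm((np.eye(n) - alpha * P) @ x - (1 - alpha) * v))
```

If the residual on the small example is near machine precision, the same loop (ported to Julia with the sparse $\mP$) is what the problem asks you to run.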