$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 10

Please answer the following questions in complete sentences in a clearly prepared manuscript and submit the solution by the due date on Blackboard (around Sunday, November 18, 2018).

Remember that this is a graduate class. There may be elements of the problem statements that require you to fill in appropriate assumptions. You are also responsible for determining what evidence to include. An answer alone is rarely sufficient, but an overly verbose description is not required either. Use your judgement to focus your discussion on the most interesting pieces. The answer to "should I include 'something' in my solution?" will almost always be: Yes, if you think it helps support your answer.

## Problem 0: Homework checklist

- Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

- Make sure you have included your source code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Experimenting with Lanczos-based methods.

1. Implement a Lanczos-based MINRES code that explicitly builds $\mV_k, \mT_k$ and then finds the minimum residual vector within the Krylov subspace.

2. Compare the first 25 residuals from the Lanczos-based CG code we wrote in class that explicitly builds $\mV_k, \mT_k$ with a standard implementation of CG from http://www.cs.purdue.edu/homes/dgleich/cs515-2018/homeworks/cg.jl for the linear system

   ```julia
   n = 100
   on = ones(Int64,n)
   A = spdiagm((-2*on[1:end-1],4*on,-2*on[1:end-1]),(-1,0,1))
   b = ones(n)
   ```


as well as the MINRES code you developed above.

3. Using the cg.jl function, look at how many iterations are required for CG to converge to a tolerance of $10^{-8}$ for the matrix in the last part. Determine how this scales with $n$.
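
As a starting point, here is a minimal sketch of the Lanczos-then-MINRES idea. It is written in Python/NumPy for illustration (the course code is in Julia, and the names `lanczos` and `minres_via_lanczos` are illustrative, not part of cg.jl): build $\mV_{k+1}$ and the $(k+1)\times k$ tridiagonal $\mT$ explicitly, then solve the small least-squares problem $\min_y \|\beta_1 \ve_1 - \mT y\|$.

```python
import numpy as np

def lanczos(A, b, k):
    # k steps of Lanczos: A V_k = V_{k+1} T, with T of size (k+1) x k
    n = b.size
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w = w - T[j, j - 1] * V[:, j - 1]  # subtract beta_j * v_{j-1}
        alpha = V[:, j] @ w
        w = w - alpha * V[:, j]
        beta = np.linalg.norm(w)
        T[j, j] = alpha
        T[j + 1, j] = beta
        if j + 1 < k:
            T[j, j + 1] = beta                 # symmetric tridiagonal part
        if beta > 0:
            V[:, j + 1] = w / beta
    return V, T

def minres_via_lanczos(A, b, k):
    # minimum-residual iterate over the Krylov subspace:
    # ||b - A V_k y|| = ||beta_1 e_1 - T y|| when V_{k+1} is orthonormal
    V, T = lanczos(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    y, *_ = np.linalg.lstsq(T, rhs, rcond=None)
    return V[:, :k] @ y

# demo on a small version of the homework's tridiagonal system
n = 20
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-2.0 * np.ones(n - 1), 1)
     + np.diag(-2.0 * np.ones(n - 1), -1))
b = np.ones(n)
x = minres_via_lanczos(A, b, n)
print(np.linalg.norm(b - A @ x))
```

The same structure ports directly to Julia; comparing against CG then amounts to recording $\normof{\vb - \mA \vx_k}$ at each step for both methods.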

## Problem 2: Orthogonality of Lanczos

Let $\lambda_1 = 0.1$ and $\lambda_n = 100$, $\rho = 0.9$, $n=30$.
Consider the $n$-by-$n$ diagonal matrix with diagonal elements
$$\lambda_i = \lambda_1 + \frac{i-1}{n-1} (\lambda_n - \lambda_1) \rho^{\,n-i}, \qquad i = 1, \ldots, n.$$
(This is called the Strakoš matrix.)

1. Use the Lanczos method starting from a random vector $\vv_1$ and from the vector $\vv_1 = \ve/\sqrt{n}$, and then plot the quantity $\log(\normof{\mV_k^T \mV_k - \mI}+10^{-20})$ for $k=1$ to $30$. Describe what you *should* find and what you actually find. Do your results depend on the starting vector?

2. Plot $\log(|\vv_1^T \vv_k| + 10^{-20})$ for $k=1$ to $30$. Also plot $\log(|\vv_{k-2}^T \vv_k| + 10^{-20})$ for $k=3$ to $30$.

3. What is $\beta_{31}$? What should it be?

4. Plot $\log(\normof{\mA \mV_k - \mV_{k+1} \mT_{k+1}} + 10^{-20})$ for $k=1$ to $60$.
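
A sketch of the experimental setup in Python/NumPy (an illustrative translation; the diagonal formula below is the standard Strakoš construction with the parameters given above, and plain Lanczos without reorthogonalization is used so that the orthogonality loss is visible):

```python
import numpy as np

def strakos_diag(n=30, lam1=0.1, lamn=100.0, rho=0.9):
    # standard Strakos diagonal: lam1 + (i-1)/(n-1) * (lamn - lam1) * rho^(n-i)
    i = np.arange(1, n + 1)
    return lam1 + (i - 1) / (n - 1) * (lamn - lam1) * rho ** (n - i)

def lanczos_basis(d, v1, k):
    # plain Lanczos (no reorthogonalization) on A = diag(d); returns n x k basis
    n = d.size
    V = np.zeros((n, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    beta, v_prev = 0.0, np.zeros(n)
    for j in range(k - 1):
        w = d * V[:, j] - beta * v_prev      # diagonal A: matvec is d .* v
        alpha = V[:, j] @ w
        w = w - alpha * V[:, j]
        beta = np.linalg.norm(w)
        v_prev = V[:, j]
        V[:, j + 1] = w / beta
    return V

n = 30
d = strakos_diag(n)
v1 = np.ones(n) / np.sqrt(n)
losses = {}
for k in (5, 15, 30):
    V = lanczos_basis(d, v1, k)
    losses[k] = np.linalg.norm(V.T @ V - np.eye(k))
    print(k, np.log10(losses[k] + 1e-20))
```

Plotting $\log(\normof{\mV_k^T \mV_k - \mI} + 10^{-20})$ over all $k$ (rather than the three values printed here) gives the curves the problem asks about.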

## Problem 3: Krylov methods

Let $\mA = \mI + \ve \ve^T$ be a square matrix (where $\ve$ is the column vector of all ones of appropriate dimension). Suppose we use MINRES to solve the system $\mA \vx = \vb$. In exact arithmetic, how many steps will MINRES take to converge to the exact solution? Be sure to explain *how you determine what the answer is*. Note, your result should be self-contained and not utilize the theory discussed in class -- this is good practice for the final; if you do use a theorem to justify your answer, you may lose a few points.
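
One way to sanity-check your reasoning numerically (a sketch only; it does not replace the requested explanation, and the size $n=50$ and random right-hand side are arbitrary choices): the $k$-th MINRES iterate minimizes the residual over $\mathcal{K}_k(\mA, \vb) = \operatorname{span}\{\vb, \mA\vb, \ldots, \mA^{k-1}\vb\}$, and that minimum can be computed directly with a small least-squares solve.

```python
import numpy as np

n = 50
A = np.eye(n) + np.ones((n, n))   # I + e e^T
rng = np.random.default_rng(0)
b = rng.standard_normal(n)

# best (minimum-residual) iterate over K_k(A, b) for k = 1, 2, 3
res = []
for k in range(1, 4):
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    res.append(np.linalg.norm(b - A @ K @ y))
print(["%.2e" % r for r in res])
```

Watching where the residual drops to roundoff level suggests the answer; the write-up should then explain *why* from the structure of $\mA$.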

## Problem 4: Solving non-symmetric systems.

Note that there are a few ways to turn a non-symmetric linear system into a symmetric linear system. The first is to use the normal equations $\mA^T \mA \vx = \mA^T \vb$. The second is to use a set of augmented equations:
$$\bmat{\mat{0} & \mA \\ \mA^T & \mat{0}} \bmat{\vy \\ \vx} = \bmat{\vb \\ \vec{0}}.$$

1. Show that the solution of the augmented system of equations exists for any square, full-rank non-symmetric matrix $\mA$.

2. Use CG and MINRES to solve the Candyland linear system from the beginning of class using these approaches. How many matrix-vector products do they take, compared with your Neumann series-based solver, to converge to a 2-norm relative residual of $10^{-8}$?
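
The two symmetrizations can be set up as in the following Python/NumPy sketch. The Candyland matrix and the Neumann-series solver from class are not reproduced here; a small random non-symmetric system stands in for them (an assumption), and dense solves stand in for CG/MINRES just to verify that both constructions recover the solution of $\mA \vx = \vb$. One common augmented form is $\sbmat{\mat{0} & \mA \\ \mA^T & \mat{0}} \sbmat{\vy \\ \vx} = \sbmat{\vb \\ \vec{0}}$, which is symmetric indefinite (so suitable for MINRES), while the normal equations are symmetric positive definite (so suitable for CG).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, non-symmetric
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

# Symmetrization 1: normal equations A^T A x = A^T b (SPD -> CG applies).
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Symmetrization 2: augmented system [[0, A], [A^T, 0]] [y; x] = [b; 0]
# (symmetric indefinite -> MINRES applies); first block row is A x = b.
M = np.block([[np.zeros((n, n)), A], [A.T, np.zeros((n, n))]])
rhs = np.concatenate([b, np.zeros(n)])
z = np.linalg.solve(M, rhs)
x_aug = z[n:]

print(np.linalg.norm(x_ne - x_true), np.linalg.norm(x_aug - x_true))
```

For the actual problem, replace the dense solves with your CG/MINRES codes applied to $\mA^T\mA$ and to $\mM$, and count matrix-vector products with $\mA$ and $\mA^T$ (each application of $\mM$ or of $\mA^T\mA$ costs two).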