$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 6

Please answer the following questions in complete sentences in a typed, clearly prepared manuscript, and submit the solution on Blackboard by the due date (Monday, October 23rd, 2017, around 4-5 am).

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Direct methods for tridiagonal systems

In class, we said that a sparse matrix is one with enough zeros that we can take advantage of them. In this problem, we'll show how to take advantage of tridiagonal structure in a matrix.

Let $\mA$ be a symmetric, tridiagonal, positive definite matrix:

1. Let $\mA_{n-1}$ be the $(n-1) \times (n-1)$ leading principal submatrix of $\mA$. Suppose you are given the Cholesky factorization $\mA_{n-1} = \mL_{n-1} \mL_{n-1}^T$. Determine the structure of $\mL_{n-1}$ (hint: $\mL_{n-1} \mL_{n-1}^T$ must be tridiagonal!).

2. Now, show how to compute the Cholesky factorization of $\mA = \mL \mL^T$ from $\mL_{n-1}$ in a constant number of operations.

3. Use the results of the first two parts to write down an algorithm that computes the Cholesky factorization of $\mA$ in a linear amount of work (starting from scratch!).
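A quick way to check the structure claimed in part 1 numerically: factor a small symmetric, tridiagonal, positive definite matrix (the 1-D Laplacian below is just a convenient test case, not part of the problem) and confirm that the Cholesky factor is lower bidiagonal.

```julia
using LinearAlgebra

# A small symmetric, tridiagonal, positive definite test matrix:
# the 1-D Laplacian, with 2 on the diagonal and -1 off the diagonal.
n = 6
A = SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1))

# Factor it densely and inspect the structure of L.
L = cholesky(Matrix(A)).L

# L should be lower bidiagonal: zero outside the main diagonal
# and the first subdiagonal.
for i in 1:n, j in 1:n
    if j < i - 1 || j > i
        @assert abs(L[i, j]) < 1e-12
    end
end
```

If the loop completes without error, the factor has the expected bidiagonal pattern.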

## Problem 2: Sparse matrices in Julia

If you wish to use another language, please see me.

In this problem, we'll return to our favorite linear system, Poisson's equation. (I told you that we'd see a lot of this system.)

We'll assume that the boundary conditions are zero everywhere. Consequently, we'll use the form of the system where we have removed all rows that are set to zero. Some of you did this already, so you may have an easier time on this question.
For the $N=4$ case, this gives the standard five-point discretization
$$\bmat{\mT & -\eye & \\ -\eye & \mT & -\eye \\ & -\eye & \mT} \bmat{U(1,1) \\ U(2,1) \\ U(3,1) \\ U(1,2) \\ \vdots \\ U(3,3)} = \bmat{F(1,1) \\ F(2,1) \\ F(3,1) \\ F(1,2) \\ \vdots \\ F(3,3)}, \qquad \mT = \bmat{4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4},$$
where $U(i,j) = u(x_i,y_j)$ and $F(i,j) = h^2 f(x_i,y_j)$.

1. You can change an entry of a sparse matrix in Julia with `A[i,j] = v`. Write a Julia routine to create the matrix above for a general $N$ by changing each element one at a time.

```
A = spzeros((N-1)*(N-1),(N-1)*(N-1)); # this creates an all-zero sparse matrix
<fill in the rest>
```
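As a minimal, self-contained illustration of entrywise assignment into a sparse matrix (a toy 3-by-3 example, not the Poisson matrix itself):

```julia
using SparseArrays

A = spzeros(3, 3)   # all-zero 3x3 sparse matrix
A[1, 1] = 4.0       # assigning a value inserts the entry
A[1, 2] = -1.0
A[2, 1] = -1.0
```

Each insertion into the compressed (CSC) storage may shift existing entries, so building a large matrix one entry at a time can be slow; the timing comparison in this problem makes that concrete.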

2. As shown in class, another way to create a sparse matrix is to use the coordinate form. Create the arrays `I`, `J`, and `V` for the matrix above. Hint: there are $(N-1)^2 + 4(N-1)(N-2)$ non-zeros. Use this information to initialize the arrays `I`, `J`, and `V`. Write a routine to create the matrix above for a general $N$ using the coordinate form.

```
nz = (N-1)^2 + 4*(N-1)*(N-2);
I = zeros(Int, nz); # the index arrays must hold integers
J = zeros(Int, nz);
V = zeros(nz);
<fill in the rest>
A = sparse(I, J, V, (N-1)*(N-1), (N-1)*(N-1));
```
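For reference, a toy example of the coordinate (triplet) form, assembling a 2-by-2 matrix from its nonzero entries; duplicate $(i,j)$ pairs, if present, are summed by `sparse`:

```julia
using SparseArrays

I = [1, 1, 2]               # row indices
J = [1, 2, 2]               # column indices
V = [4.0, -1.0, 4.0]        # values
A = sparse(I, J, V, 2, 2)   # assemble the 2x2 sparse matrix
```

Because the triplet arrays are filled first and compressed once, this route avoids the repeated data movement of entrywise insertion.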

3. Which way is faster? Try creating matrices with $N = 10, 50, 100, 500$.

4. Let $f(x,y) = 1$. Is solving the linear system using backslash faster or slower with a sparse matrix? Does it depend on the value of $N$? Investigate and report.

5. Again let $f(x,y) = 1$. Implement the Jacobi method for this problem. You should start from the iterate $\vx\itn{0} = \vb$, i.e., the guess equal to the right-hand side. How many iterations does it take to converge to a relative residual of $10^{-4}$? Use the same values of $N$ as in part 3.

6. Does the number of iterations of your Jacobi solver depend on the right-hand side? Try comparing $f(x,y) = 1$, $f(x,y) = -1$, $f(x,y) = -(x-0.5)^2 - (y-0.5)^2$, and $f(x,y) = \sin(100x)\cos(100y)$. Use the same values of $N$ as in part 3.
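A minimal sketch of a Jacobi iteration for a generic sparse system $\mA \vx = \vb$, assuming the diagonal of $\mA$ is nonzero; the function name `jacobi`, the stopping rule, and the 1-D Laplacian test system are illustrative choices, not part of the assignment:

```julia
using SparseArrays, LinearAlgebra

# Jacobi: x <- D^{-1} (b - (A - D) x), where D = diag(A).
# Stops when the relative residual ||b - A x|| / ||b|| drops below tol.
function jacobi(A, b; tol = 1e-4, maxit = 100_000)
    d = Vector(diag(A))        # diagonal of A as a dense vector
    x = copy(b)                # start from x^(0) = b
    for k in 1:maxit
        x = (b - A * x + d .* x) ./ d
        if norm(b - A * x) / norm(b) < tol
            return x, k        # converged after k iterations
        end
    end
    return x, maxit            # did not converge within maxit
end

# Example: a 1-D Laplacian, which is diagonally dominant, so Jacobi converges.
A = spdiagm(0 => fill(2.0, 10), 1 => fill(-1.0, 9), -1 => fill(-1.0, 9))
b = ones(10)
x, iters = jacobi(A, b)
```

Counting iterations this way lets you compare the different right-hand sides directly.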

## Problem 3: Fun with sparse matrices

In Julia, calling `y = A*x` on a sparse matrix `A` calls a special function to perform this multiplication faster. To generate a sparse random matrix in Julia, you can call `A = sprand(M,N,d)`, where `M` and `N` are the dimensions of `A` and `d` is the density of `A`.

1. Generate a random sparse matrix `A` and convert it to a dense matrix `B`. Use a dense vector `x` of suitable dimensions and compute `A*x` and `B*x`. Vary `M`, `N`, and `d` and report performance changes.

2. Generate a random square sparse matrix `A` and a dense vector `x` of suitable dimension, and compute `A*x` and `A'*x`. Is one faster than the other? Vary the dimensions and the density and report your results. Provide an analysis of your results.

3. Write a sparse matrix-vector routine where both the matrix and the vector are sparse. You can generate a sparse vector in Julia using `x = sprand(N,d)`, where `N` is the dimension and `d` is the density. Test your method and report performance results.

```
function spmatvec(A::SparseMatrixCSC{T,Int64}, x::SparseVector{F,Int64}) where {T,F}
    y = <fill in>
    for j = <fill in>
        for k = A.colptr[j]:(A.colptr[j+1]-1)
            <fill in>
        end
    end
    return y
end
```
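For timing `A*x` against `A'*x`, one possible sketch (the size and density below are arbitrary examples; `A'` is a lazy adjoint wrapper, and both products dispatch to specialized sparse kernels):

```julia
using SparseArrays, LinearAlgebra

n, d = 2000, 0.01
A = sprand(n, n, d)         # random n-by-n sparse matrix, density d
x = rand(n)

A * x; A' * x               # warm up, so compilation is not timed
t1 = @elapsed A * x         # scatters down each column of A
t2 = @elapsed A' * x        # accumulates one dot product per column
println("A*x: $t1 s,  A'*x: $t2 s")
```

The access patterns differ: `A*x` writes to scattered positions of the output, while `A'*x` reads scattered positions of `x` but writes the output sequentially, which is one starting point for the analysis asked for in part 2.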