$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 4

Please answer the following questions in complete sentences in a typed, clearly prepared manuscript and submit the solution by the due date on Blackboard (early morning on Monday, September 25th, 2017).

• Update 1 (Problem 1.2, corrected the syntax!)
• Fixed the $f(x_i, x_j)$ comment.
• Updated the pseudo-code for building the matrix to use row indexing to mirror the problem description

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Backsolve

For this problem, you must use Julia.

1. Implement backsolve and forward solve as functions in Julia. Show and document your code.
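For reference, here is a minimal sketch of the two triangular solves (the names `backsolve` and `forwardsolve` and the row-oriented loop order are my choices; any correct variant is fine):

```julia
# Solve R*x = b where R is upper triangular (back substitution).
function backsolve(R, b)
    n = length(b)
    x = copy(float(b))
    for i = n:-1:1
        for j = i+1:n
            x[i] -= R[i,j]*x[j]
        end
        x[i] /= R[i,i]   # breaks down if R has a zero diagonal entry
    end
    return x
end

# Solve L*x = b where L is lower triangular (forward substitution).
function forwardsolve(L, b)
    n = length(b)
    x = copy(float(b))
    for i = 1:n
        for j = 1:i-1
            x[i] -= L[i,j]*x[j]
        end
        x[i] /= L[i,i]
    end
    return x
end
```

A quick sanity check is to compare `backsolve(R, b)` against `R \ b` on a small random upper-triangular system.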

2. Construct a random upper-triangular linear system (for your choice of dimension `n`) via:

A = triu(randn(n,n));
b = randn(n);


Compare the performance of your backsolve to Julia's backslash operator on this system.

3. Write a routine to solve a linear system of equations using the LU factorization in Julia both with and without pivoting.
(Use lufact(A, Val{false}) for no pivoting in LU in Julia.)

4. Use your backsolve code, along with Julia's LU factorization, to implement your own linear solver. Present a paragraph or two (and a figure or two) comparing its speed and accuracy to Julia's own solver, focusing on the effect of pivoting.
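A sketch of how the pieces fit together, in the Julia 0.6 syntax this assignment uses (`lufact` and `F[:L]`-style indexing; in Julia 1.0 and later these became `lu` and `F.L`), and assuming `backsolve` and `forwardsolve` routines from part 1:

```julia
# Solve A*x = b via an unpivoted LU factorization: A = L*U,
# so x is found from L*y = b followed by U*x = y.
function lusolve_nopivot(A, b)
    F = lufact(A, Val{false})      # no pivoting
    y = forwardsolve(F[:L], b)     # L*y = b
    return backsolve(F[:U], y)     # U*x = y
end

# With partial pivoting, P*A = L*U, so permute b by F[:p] first:
# solve L*y = b[p], then U*x = y.
function lusolve_pivot(A, b)
    F = lufact(A)
    y = forwardsolve(F[:L], b[F[:p]])
    return backsolve(F[:U], y)
end
```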

## Problem 2: Poisson's equation

In this problem, we'll meet one of the most common matrices studied in numerical linear algebra: the $2d$-Laplacian. We arrive at this matrix by discretizing a partial differential equation. Poisson's equation is:
$$\Delta u(x,y) = f(x,y)$$
where $u(x,y)$ is a continuous function defined over the unit-plane (i.e. $0 \le x \le 1, 0 \le y \le 1$), $f(x,y)$ is a continuous function defined over the same region, and $\Delta$ is the Laplacian operator:
$$\Delta u(x,y) = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}.$$

Given a function $f$, we want to find a function $u$ that satisfies this equation. There are many approaches to solve this problem theoretically and numerically. We'll take a numerical approach here.

Suppose we discretize the function $u$ at regular points $x_0, \ldots, x_n$ and $y_0, \ldots, y_n$ where $x_i = y_i = i/n$, with grid spacing $h = 1/n$, so that we have:
$$\Delta u(x_i, y_j) \approx \frac{u(x_{i-1},y_j) + u(x_{i+1},y_j) + u(x_i,y_{j-1}) + u(x_i,y_{j+1}) - 4 u(x_i,y_j)}{h^2}.$$
What we've done here is use the approximation:
$$\frac{\partial^2 u}{\partial x^2}(x,y) \approx \frac{u(x-h,y) - 2u(x,y) + u(x+h,y)}{h^2}$$
for both partial terms.
We need this equation to hold at each point $x_i, y_j$. But note that there are some issues with this equation at the boundary values (where $x=0$ or $1$, or where $y=0$ or $1$).
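The centered-difference approximation used here can be checked with a Taylor expansion (a standard argument, sketched in one variable):

```latex
u(x \pm h) = u(x) \pm h\, u'(x) + \tfrac{h^2}{2} u''(x) \pm \tfrac{h^3}{6} u'''(x) + O(h^4),
\qquad\text{so}\qquad
\frac{u(x-h) - 2u(x) + u(x+h)}{h^2} = u''(x) + O(h^2).
```

The odd-order terms cancel when the two expansions are added, which is why the error is $O(h^2)$ rather than $O(h)$.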
For this problem, we'll make it very simple and set $u(x,y) = 0$ on the boundary (that is, wherever $x = 0$ or $1$, or $y = 0$ or $1$).

Now, we'll do what we always do! Turn this into some type of matrix equation!

Let $\mU$ be an $(n+1) \times (n+1)$ matrix that we'll index from zero instead of one:
$$\mU = \bmat{U_{0,0} & U_{0,1} & \cdots & U_{0,n} \\ U_{1,0} & U_{1,1} & \cdots & U_{1,n} \\ \vdots & \vdots & & \vdots \\ U_{n,0} & U_{n,1} & \cdots & U_{n,n}}$$
where $U_{i,j} = u(x_i, y_j)$. At this point, we are nearly done. What we are going to do is turn Poisson's equation into a linear system. This will be somewhat like how we turned image resampling into a matrix-vector equation in the first homework.

There was a typo here before with row-vs-column indexing. Either is okay for this problem. Just be consistent and mention what you are using.

In order to write $\mU$ as a vector, we'll keep the convention from last time: let $\vu$ be the vector of the elements of $\mU$, stacked in a fixed order (say, row by row), so $\vu$ has $(n+1)^2$ entries. Note that our approximation to $\Delta u$ just involved a linear combination of the elements of $\vu$. This means we have a linear system
$$\mA \vu = \vf$$
where the rows of $\mA$ and $\vf$ correspond to equations of the form:
$$U_{i-1,j} + U_{i+1,j} + U_{i,j-1} + U_{i,j+1} - 4 U_{i,j} = h^2 f(x_i, y_j).$$

1. Let $n=3$. Write down the $16 \times 16$ linear equation for $\vu$ including all the boundary conditions. Note that you can encode the boundary conditions by adding a row of $\mA$ that encodes the equation $u_i = 0$.

2. Write a Julia or Matlab/Python code to construct the matrix $\mA$ and vector $\vf$ when $n=10$ and $f(x,y) = 1$. Here's some pseudo-code to help out:

function laplacian(n::Integer, f::Function)
    N = (n+1)^2
    A = zeros(N,N)
    fvec = zeros(N)
    # 2017-09-21 added transpose to mirror the row-indexing in the problem
    # but see note above, you can do either, just be consistent.
    G = reshape(1:N, n+1, n+1)' # index map, like we saw before
    h = 1.0/n
    for i=0:n
        for j=0:n
            row = G[i+1,j+1]
            if i==0 || j == 0 || i == n || j == n
                # we are on a boundary
                fvec[row] = 0.0
                # fill in A[row,:]
            else
                fvec[row] = f(i*h, j*h)*h^2
                # fill in A[row,:]
            end
        end
    end
    return A, fvec
end
A, f = laplacian(10, (x,y) -> 1.0)

3. Solve for $\vu$ using Julia's or Matlab's backslash solver, and show the result using the mesh function (Matlab) or surface function (Plots.jl in Julia).
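A sketch of this step in Julia, assuming the `laplacian` function above has been completed (the `A[row,:]` stencil entries filled in) and that Plots.jl is installed:

```julia
using Plots

# Solve the discretized Poisson system and view the solution on the grid.
n = 10
A, fvec = laplacian(n, (x,y) -> 1.0)
u = A \ fvec                  # solve for the stacked solution vector
U = reshape(u, n+1, n+1)'     # undo the row-by-row stacking
surface(0:1/n:1, 0:1/n:1, U)  # surface plot of u over the unit square
```

The transpose in the `reshape` mirrors the transposed index map `G` in the pseudo-code; if you chose column indexing instead, drop both transposes.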

## Problem 3: The Schur Complement

Suppose we wish to solve
$$\mM \vx = \vb$$
and further suppose that you KNOW some of the values of $\vx$.

Let us permute and partition $\mM$ to be a block system:
$$\bmat{\mM_{11} & \mM_{12} \\ \mM_{21} & \mM_{22}} \bmat{\vx_1 \\ \vx_2} = \bmat{\vb_1 \\ \vb_2}$$
where $\vx_1$ is what you know.

1. Show how to solve for $\vx_2$ given $\vx_1$. Under what conditions is this possible?

2. Now, suppose that you have a very kind, but very confused dog that happened to eat your flash memory stick holding the piece of $\vx_1$ that you knew. However, you had saved your computed $\vx_2$ on your Purdue account, and so you have a backup. (This means you can assume that computing $\vx_2$ from $\vx_1$ is possible for this problem if you determined it wasn't always possible above.) Can you get $\vx_1$ back?

3. Combine these two parts to derive a single linear system to compute $\vx_1$ without computing $\vx_2$. The system you'll derive is called the Schur complement.
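A useful starting point for all three parts is to expand the block system into its two block rows:

```latex
\mM_{11} \vx_1 + \mM_{12} \vx_2 = \vb_1,
\qquad
\mM_{21} \vx_1 + \mM_{22} \vx_2 = \vb_2.
```

Each part of the problem amounts to solving one (or both) of these block equations for the unknown piece, keeping track of which blocks must be invertible for the step to make sense.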

## Problem 4: Ranking with Linear Systems

Watch the video on sports linear systems available on Blackboard (this was recorded in 2014). There are no questions about it, but this material is now fair game for the Midterm. Your answer to this homework problem should be a question about the material presented in the lecture.