$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 5

Please answer the following questions in complete sentences in a typed, clearly prepared manuscript, and submit your solution on Blackboard by the due date (Monday, October 2nd, 2017, around 4-5 am).

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: The Cholesky Factorization

Throughout this problem, assume that all matrices are symmetric. You may assume this in your code as well.

1. Implement a Julia function for the Cholesky factorization.
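To fix ideas, here is a minimal unpivoted sketch of the kind of function this part asks for (the name `mychol` is our own, and it assumes the input is symmetric positive definite):

```julia
# A minimal sketch of an unpivoted Cholesky factorization.
# Assumes A is symmetric positive definite; returns lower-triangular L
# with A = L*L'.
function mychol(A)
    n = size(A, 1)
    L = zeros(n, n)
    for j = 1:n
        # diagonal entry: pivot minus the squared entries already computed
        s = A[j, j] - sum(L[j, 1:j-1].^2)
        L[j, j] = sqrt(s)
        # fill in the rest of column j
        for i = j+1:n
            L[i, j] = (A[i, j] - sum(L[i, 1:j-1] .* L[j, 1:j-1])) / L[j, j]
        end
    end
    return L
end
```

This is only one of several equivalent orderings of the loops; any variant that produces $\mL$ with $\mA = \mL \mL^T$ is acceptable.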

2. Research the Cholesky factorization and answer the following question, either providing a proof or a reference. Is the Cholesky factorization unique?

3. Compare your Cholesky factorization code to Julia's `chol` function on the following matrix.

```julia
n = 10
Q,R = qr(randn(n,n))
D = abs.(randn(n))
A = triu(Q*diagm(D)*Q')
A = A + triu(A,1)'
```
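For reference, here is one way to run such a comparison in Julia ≥ 1.0 syntax, where `chol` has become `cholesky` (this is our own sketch; the small shift added to `D` keeps the demo comfortably positive definite and is not part of the assignment):

```julia
using LinearAlgebra

n = 10
Q, _ = qr(randn(n, n))
D = abs.(randn(n)) .+ 0.1            # shift away from zero for this demo
A = Matrix(Q) * Diagonal(D) * Matrix(Q)'
A = (A + A') / 2                     # enforce exact symmetry
U = cholesky(A).U                    # upper-triangular factor with A = U'U
println(maximum(abs.(U' * U .- A)))  # should be on the order of machine epsilon
```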

4. Your professor boldly asserted that using the Cholesky factorization is faster than using the LU factorization. Make sure he's still right: show the performance of Julia's LU and Cholesky factorizations for matrices with sizes varying from 10 to 500.
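One possible timing harness is sketched below (our own; in 2017-era Julia the calls were `lufact` and `cholfact`, while in Julia ≥ 1.0 they are `lu` and `cholesky`):

```julia
using LinearAlgebra

for n in (10, 50, 100, 250, 500)
    B = randn(n, n)
    A = Symmetric(B' * B + n * I)   # symmetric positive definite test matrix
    lu(A); cholesky(A)              # warm up compilation before timing
    t_lu = @elapsed lu(A)
    t_chol = @elapsed cholesky(A)
    println("n = $n: LU $(round(t_lu; sigdigits=3)) s, Cholesky $(round(t_chol; sigdigits=3)) s")
end
```

A single `@elapsed` per size is noisy; averaging several trials (or using a benchmarking package) gives a cleaner picture.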

5. Your professor also boldly asserted that we preserve positive definiteness after one step of the Cholesky factorization. Recall that this step was: partition
$$\mA = \bmat{\alpha & \vb^T \\ \vb & \mC}.$$
So we set $\gamma = \sqrt{\alpha}$ and $\vf = \vb/\gamma$. Then, we recursively compute the Cholesky factorization of
$$\mC - \vf \vf^T = \mC - \vb \vb^T / \alpha.$$
This only works if $\mC - \vb \vb^T/\alpha$ is positive definite. Prove that this is true. (Hint: you can do it from the definition of positive definiteness: $\vx^T \mA \vx > 0$ for all $\vx \ne \vec{0}$.)

6. Briefly explain why step 5 justifies that a Cholesky factorization exists for any positive definite matrix. (This is really like a one or two sentence answer.)

7. One of the most useful aspects of the Cholesky factorization is as a way to check if a matrix is positive definite or negative definite. A matrix $\mA$ is negative definite if $\vx^T \mA \vx < 0$ for all $\vx \ne \vec{0}$. Change your Julia implementation to report if a matrix is not positive definite. (Hint: this relates to why a Cholesky factorization always exists for a positive definite matrix in the previous problem.) Use this to test if the matrix for Poisson's equation from Homework 4, Problem 7 is positive or negative definite.
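The shape of such a check can be sketched as follows (our own function name and layout; the key point is that the factorization breaks down exactly when a pivot fails to be positive):

```julia
# Sketch: attempt a Cholesky factorization in place; a nonpositive pivot
# means the matrix is not positive definite. (To test for negative
# definiteness, apply the same check to -A.)
function ischolposdef(A)
    n = size(A, 1)
    C = Matrix{Float64}(A)
    for j = 1:n
        if C[j, j] <= 0
            return false            # pivot not positive: not positive definite
        end
        C[j, j] = sqrt(C[j, j])
        C[j+1:n, j] ./= C[j, j]
        # rank-1 update of the trailing block
        C[j+1:n, j+1:n] .-= C[j+1:n, j] * C[j+1:n, j]'
    end
    return true
end
```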

## Problem 2: Stability analysis

1. Suppose that $x_1, \ldots, x_n$ is a sequence of samples from a random variable. The sample variance of $x_1, \ldots, x_n$ is:
$$s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2, \qquad \text{where } \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i.$$
Compute the condition number of the sample variance.
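As a reminder (the standard definition, not restated in the assignment), the relative condition number of a differentiable map $f$ at $\vx$ is
$$\kappa_f(\vx) = \frac{\normof{J_f(\vx)}\,\normof{\vx}}{\normof{f(\vx)}},$$
where $J_f$ is the Jacobian; for a scalar function of a scalar this reduces to $\kappa_f(x) = \left| \frac{x f'(x)}{f(x)} \right|$.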

2. This is a real problem that I ran into! I was using a program written by someone else to compute a particular statistical estimate. The details don't really matter, but at one step, the code needed to compute the Hurwitz zeta function:
$$\zeta(s, q) = \sum_{k=0}^{\infty} \frac{1}{(k+q)^s},$$
where $s$ ranged from 1 to 7 and $q$ ranged from 1 to 500. They were using Matlab, which doesn't provide a function for the Hurwitz zeta function, but does provide a function for the Riemann zeta function:
$$\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^s}.$$
We will use Julia (which also has the `zeta` function) to continue this analysis; the final result doesn't differ much from Matlab's. Note that the only difference between these two functions is that the first term in the Riemann zeta function is $1/(1^s)$ and the first term in the Hurwitz zeta function is $1/(q^s)$. To account for this difference, the code I was using did the following:

```matlab
function h = hzeta(s,q)
  z = zeta(s);
  h = z - sum((1:(q-1)).^(-s));
end
```


In Julia,

```julia
using SpecialFunctions
function hzeta(s,q)
    z = zeta(s)
    h = z - sum(Float64.(1:(q-1)).^(-s))
    return h
end
```
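To see why this subtraction can be delicate, here is a small experiment of our own for $s = 2$, where $\zeta(2) = \pi^2/6$ is known in closed form (so `SpecialFunctions` is not needed):

```julia
# Subtraction-based Hurwitz zeta for s = 2 versus a direct partial sum.
# For q = 500 the subtracted version computes a small answer (~2e-3) as the
# difference of two numbers near 1.64, so some relative accuracy is lost.
hzeta2_sub(q) = pi^2/6 - sum(k -> 1.0/k^2, 1:q-1)

# Direct truncated Hurwitz series sum_{k=0}^{N} 1/(k+q)^2 as a reference.
hzeta2_direct(q, N=10^7) = sum(k -> 1.0/(k + q)^2, 0:N)

q = 500
println(hzeta2_sub(q))
println(hzeta2_direct(q))
```

The two agree to several digits here; the analysis in this problem is about quantifying how many digits the subtraction can cost as $s$ and $q$ vary.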


## Problem 3: Backwards stability

Only 1 part! Use the properties of floating point arithmetic to show that computing $\normof{\vx}_2$ is backwards stable.
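A reminder that may help (the standard floating point model, added here for convenience): each elementary operation satisfies
$$\mathrm{fl}(x \odot y) = (x \odot y)(1 + \delta), \quad |\delta| \le \eps_{\text{mach}}, \quad \odot \in \{+, -, \times, /\},$$
and similarly $\mathrm{fl}(\sqrt{x}) = \sqrt{x}\,(1 + \delta)$ with $|\delta| \le \eps_{\text{mach}}$.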