$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 4

Please answer the following questions in complete sentences in a typed manuscript and submit the solution on Blackboard by noon on March 7th. (These will be back before the midterm.)

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Gautschi Exercise 3.13

1. Construct a trapezoidal-like formula which is exact for $f(x) = \cos x$ and $f(x) = \sin x$.

2. Does this formula integrate constants exactly?

3. (Ignore part (b) if you are looking at the book.)

## Problem 2: Gautschi Exercise 3.36

The Gaussian quadrature rule for the Chebyshev weight function $w(x) = (1-x^2)^{-1/2}$ is known to be
$$\int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}} \, dx \approx \frac{\pi}{n} \sum_{k=1}^{n} f(x_k), \quad \text{where} \quad x_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad k = 1, \ldots, n.$$
Use this fact to show that the unit disk has area $\pi$.

## Problem 3: Monte Carlo, Gaussian-Legendre, and Clenshaw-Curtis Quadrature

In this problem, we'll study how different quadrature methods converge on a variety of problems. For a technical paper on this idea, see Trefethen, *Is Gauss Quadrature Better than Clenshaw–Curtis?*, SIAM Review, 2008. Specifically, we'll be studying and reproducing Fig. 2 from that paper.

The functions are all defined on the region $[-1,1]$ and are:

• $f_1(x) = x^{20}$
• $f_2(x) = e^{-x^2}$
• $f_3(x) = e^{-1/(x^2)}$
• $f_4(x) = e^x$
• $f_5(x) = 1/(1+16x^2)$
• $f_6(x) = |x|^3$

• Monte Carlo quadrature. Monte Carlo quadrature is a randomized method. We simply sample $n$ points in $[a,b]$ uniformly at random and then take the average of all the function values, scaled by the interval length $b-a$. For instance, the following Julia code evaluates a Monte Carlo approximation of $\int_a^b f(x)\,dx$:

```julia
using Statistics

function montecarlo(f::Function, n::Int, a::Float64, b::Float64)
    @assert a < b
    xi = (b - a) .* rand(n) .+ a     # n uniform samples in [a,b]
    return (b - a) * mean(f.(xi))    # interval length times the sample mean
end
```
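
As a usage sketch of this idea, one can repeat the estimate over many independent trials and compute the sample variance; the routine below is a self-contained variant for $[-1,1]$ (the names `mcint` and `trials` are ours):

```julia
using Statistics

# Self-contained Monte Carlo estimate of the integral of f over [-1,1]:
# interval length (2) times the sample mean of f at uniform random points.
mcint(f, n) = 2 * mean(f.(2 .* rand(n) .- 1))

# 500 independent trials of an n = 100 point estimate for f(x) = x^20;
# the exact integral is 2/21.
trials = [mcint(x -> x^20, 100) for _ in 1:500]
println(mean(trials), "  ", var(trials))
```

The mean of the trials lands near $2/21 \approx 0.095$, and the variance of a single estimate shrinks like $1/n$ as the number of samples grows.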

• Clenshaw-Curtis quadrature. This quadrature uses Chebyshev points of the second kind, instead of the uniformly spaced points common in Newton-Cotes quadrature, to build an interpolatory quadrature formula. It just so happens that there is an incredibly elegant method to compute the weights associated with this quadrature based on the fast Fourier transform. See Trefethen's paper above for a 6-line Matlab code that implements Clenshaw-Curtis quadrature.
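
Trefethen's 6-line code uses the FFT; as a reference point, the standard closed-form expression for the weights can also be evaluated directly in $O(n^2)$ time. The sketch below implements that direct formula (the routine name is ours, and this is not the FFT version from the paper):

```julia
# Clenshaw-Curtis nodes and weights on [-1,1] via the direct O(n^2)
# closed-form weight expression (not Trefethen's FFT version).
function clenshawcurtis(n::Int)
    x = [cos(j * pi / n) for j in 0:n]       # n+1 Chebyshev points, 2nd kind
    w = zeros(n + 1)
    for j in 0:n
        c = (j == 0 || j == n) ? 1.0 : 2.0   # endpoint weights are halved
        s = 0.0
        for k in 1:div(n, 2)
            b = (2k == n) ? 1.0 : 2.0        # last term halved for even n
            s += b * cos(2k * j * pi / n) / (4k^2 - 1)
        end
        w[j+1] = (c / n) * (1 - s)
    end
    return x, w
end
```

For $n = 2$ this reproduces Simpson's rule on $[-1,1]$: nodes $1, 0, -1$ with weights $1/3, 4/3, 1/3$.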

• Gauss-Legendre quadrature. In Gauss-Legendre quadrature, we pick the quadrature nodes and weights together, which gives even more accuracy. To find these nodes and weights, we must evaluate the eigenvalues, and one component of each eigenvector, of the Jacobi matrix associated with the Legendre orthogonal polynomials. The Jacobi matrix for these polynomials is easy:
$$\mJ = \bmat{0 & \beta_1 & & \\ \beta_1 & 0 & \ddots & \\ & \ddots & \ddots & \beta_{n-1} \\ & & \beta_{n-1} & 0}, \qquad \beta_k = \frac{k}{\sqrt{4k^2-1}}.$$
The size of the matrix should be $n \times n$ if you want an $n$-point formula. To get the nodes, we just look at the eigenvalues of the matrix. To get the weights, we take the first component of each normalized eigenvector, square it, and multiply by $\mu_0 = \int_{-1}^{1} dx = 2$. (Hint: see Trefethen's paper for a simple Matlab code.)
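
For the Legendre weight, the Jacobi matrix is symmetric tridiagonal with zero diagonal, off-diagonal entries $\beta_k = k/\sqrt{4k^2-1}$, and $\mu_0 = 2$, so the Golub-Welsch computation is only a few lines of Julia (the routine name is ours):

```julia
using LinearAlgebra

# Golub-Welsch sketch: n-point Gauss-Legendre nodes and weights on [-1,1].
function gaussquad(n::Int)
    beta = [k / sqrt(4k^2 - 1) for k in 1:n-1]  # Legendre recurrence coefficients
    J = SymTridiagonal(zeros(n), beta)          # the Jacobi matrix
    E = eigen(J)
    x = E.values                                # nodes = eigenvalues
    w = 2 .* E.vectors[1, :].^2                 # weights = mu_0 * (1st component)^2
    return x, w
end
```

For $n = 2$ this gives the familiar two-point rule: nodes $\pm 1/\sqrt{3}$ with weights $1, 1$.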

• By whatever method you want, determine the exact values of these 6 integrals. (Hint: Wolfram Alpha is okay!)

• Write a program to create the Jacobi matrix for Gauss-Legendre quadrature and show the eigenvalues of the matrix for $n=11$.

• Implement a computer program for Clenshaw-Curtis quadrature. (Hint: you may copy Trefethen's routines, with attribution, but you must explain the steps. A Julia implementation is advisable too.)

• Implement a computer program for Gauss-Legendre quadrature.

• Note that the Monte Carlo method is a randomized algorithm, so the result will change if you run it again. Each run is called a trial or a sample. Use your Monte Carlo implementation to study how much the values of a Monte Carlo integration can vary from one trial to the next. Compute the variance over many trials for $n=100$ and $n=1000$ samples for each of the above functions.

• Prepare 6 figures like those in Trefethen's paper for the 6 functions, except use Monte Carlo integration instead of Newton-Cotes.

• Empirically determine how computing the Gaussian quadrature nodes and weights scales as a function of $n$, the number of points, in terms of CPU time. (Hint: consider $n$ between $100$ and $1000$.) You should be able to justify the scaling you find in terms of the runtime of a known algorithm.

• There are some recent developments in fast quadrature methods that produce the nodes and weights much more quickly. In Julia, these are implemented in the FastGaussQuadrature package via the gausslegendre function; in Matlab, the function legpts.m from Chebfun implements them (an old, simple version is available at http://www.mathworks.com/matlabcentral/fileexchange/23972-chebfun-v4-old-version--please-download-current-version-instead/content/chebfun/legpts.m). Determine the simplest equation that describes the empirical scaling of these routines to find the quadrature points and weights. (Hint: consider $n$ between $1000$ and $100000$.)
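
For either scaling experiment, a minimal timing sketch is enough to get started; the one below times the symmetric tridiagonal eigenvalue route from Problem 3 (the sizes and loop structure are our choices, and for serious measurements you should run each size once beforehand so Julia's compilation time is not included):

```julia
using LinearAlgebra

# Time the eigen-decomposition route to Gauss-Legendre nodes and weights
# for a few matrix sizes; plot log(time) vs. log(n) to read off the scaling.
times = Float64[]
for n in (100, 200, 400, 800)
    beta = [k / sqrt(4k^2 - 1) for k in 1:n-1]
    t = @elapsed eigen(SymTridiagonal(zeros(n), beta))
    push!(times, t)
    println("n = ", n, "  elapsed = ", t, " s")
end
```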