$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 3

Please answer the following questions in complete sentences in a typed manuscript and submit the solution on Blackboard by noon on February 29th.

## Problem 0: Homework checklist

• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.

• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.

## Problem 1: Gautschi Exercise 2.26

Just a little bit of algebra. Show that the elementary Lagrange interpolation polynomials $\ell_i(x)$ are invariant with respect to any linear transformation of the independent variable. (What this is asking you to do is to show that if $y(x) = ax+b$, and we write the interpolating polynomials in terms of $y$, then we get the same polynomials in terms of $x$.)

## Problem 2: Gautschi Exercise 2.35

The error in linear interpolation of $f$ at $x_0, x_1$ is known to be
$$f(x) - p_1(f; x) = \frac{(x - x_0)(x - x_1)}{2} f''(\xi(x)), \quad x_0 < \xi(x) < x_1,$$
if $f \in C^2[x_0, x_1]$. Determine $\xi(x)$ explicitly in the case of $f(x) = \frac{1}{x}$ and $x_0 = 1, x_1 = 2$. Find the maximum and minimum of $\xi(x)$ on $1 \le x \le 2$.

## Problem 3: Gautschi Machine Exercise 2.7

1. Modify the computation of the divided differences for Newton interpolation given in class compute_newton_coeffs https://www.cs.purdue.edu/homes/dgleich/cs514-2016/julia/Lecture-11-Hermite-interpolation.html in order to avoid storing the $n$-by-$n$ matrix of coefficients and only store $n$ coefficients. Modify the Newton interpolation program to use the single coefficients instead of the table (very easy).

2. Use your routine to interpolate the Runge function $f(t) = 1/(1+t^2)$ from $[-5, 5]$ using $2, 4, 6,$ and $8$ equally spaced points on $[-5,5]$ including the endpoints. (i.e. for two points, use $x_0 = -5, x_1 = 5$.) Plot the interpolants vs. the true function.
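For part 1, the divided-difference table can be overwritten in a single length-$n$ vector by sweeping each pass from the bottom up. Here is a minimal sketch in Python (the course code linked above is Julia; the function names `newton_coeffs` and `newton_eval` are my own, not those from the lecture notebook):

```python
def newton_coeffs(x, f):
    # Overwrite a single length-n vector in place: after pass k,
    # c[i] holds the divided difference f[x_{i-k}, ..., x_i].
    c = list(f)
    for k in range(1, len(x)):
        for i in range(len(x) - 1, k - 1, -1):
            c[i] = (c[i] - c[i-1]) / (x[i] - x[i-k])
    return c  # c[i] = f[x_0, ..., x_i]

def newton_eval(x, c, t):
    # Horner-like evaluation of the Newton form at t.
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (t - x[i]) + c[i]
    return p
```

Sweeping $i$ downward is what lets each pass reuse the storage: entry $i$ is updated before entry $i-1$, which is still needed in its old role.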

## Problem 4: Gautschi Machine Exercise 2.8b

The book outlines most of the solution.

1. Write a program for computing the natural spline interpolant on an arbitrary partition $a = x_1 < x_2 < \ldots < x_{n-1} < x_n = b$ of $[a,b]$.

2. Test your program on $f(x) = e^{-x}$ using 11 uniformly spaced points from $0$ to $1$ including the endpoints (linspace(0.,1.,11)) and use $1000$ uniformly spaced points to find the maximum error at any point (uniform approximation).

3. Test your program on $f(x) = x^{5/2}$ using 11 uniformly spaced points from $0$ to $1$ including the endpoints (linspace(0.,1.,11)) and use $1000$ uniformly spaced points to find the maximum error at any point (uniform approximation).

4. Test your program on $f(x) = x^{5/2}$ using the 11 points $x_i = \left( \frac{i-1}{n-1} \right)^2$ for $i=1,\ldots,11$ with $n = 11$, and use $1000$ uniformly spaced points to find the maximum error at any point (uniform approximation).

5. Test your program on $f(x) = \frac{1}{1+x^2}$ using 25 uniformly spaced points from $-5$ to $5$ and use $10000$ uniformly spaced points to find the maximum error at any point (uniform approximation).
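As a starting point for part 1, here is a hedged sketch in Python of the standard tridiagonal construction for the natural cubic spline (the names `natural_spline` and `eval_spline` are hypothetical, and the course's own code is in Julia):

```python
import bisect

def natural_spline(x, y):
    # Natural cubic spline through (x[j], y[j]): on [x[j], x[j+1]],
    # S(t) = a[j] + b[j]*dt + c[j]*dt^2 + d[j]*dt^3 with dt = t - x[j].
    n = len(x) - 1
    h = [x[i+1] - x[i] for i in range(n)]
    a = list(y)
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3*(a[i+1] - a[i])/h[i] - 3*(a[i] - a[i-1])/h[i-1]
    # Thomas algorithm on the tridiagonal system for the second-derivative
    # coefficients; the natural boundary conditions force c[0] = c[n] = 0.
    l = [1.0] * (n + 1); mu = [0.0] * (n + 1); z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2*(x[i+1] - x[i-1]) - h[i-1]*mu[i-1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i-1]*z[i-1]) / l[i]
    c = [0.0] * (n + 1); b = [0.0] * n; d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j]*c[j+1]
        b[j] = (a[j+1] - a[j])/h[j] - h[j]*(c[j+1] + 2*c[j])/3
        d[j] = (c[j+1] - c[j]) / (3*h[j])
    return a, b, c, d

def eval_spline(x, coeffs, t):
    a, b, c, d = coeffs
    # Locate the interval [x[j], x[j+1]] containing t.
    j = min(max(bisect.bisect_right(x, t) - 1, 0), len(x) - 2)
    dt = t - x[j]
    return a[j] + dt*(b[j] + dt*(c[j] + dt*d[j]))
```

A quick sanity check: on data sampled from a linear function, all the $c_j$ and $d_j$ vanish and the spline reproduces the line exactly.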

## Problem 5: Derivatives via interpolants, two ways

In this problem, we are going to compare two strategies for approximating the derivative of a function $f(x)$ when given access to a computer program to evaluate $f(x)$. That is, we want to write a computer program $p(x)$ that returns an approximate value of $f'(x)$.

**Strategy 1.** We use a central difference formula to approximate the derivative:
$$p(x) = \frac{f(x+h) - f(x-h)}{2h} \approx f'(x).$$

**Strategy 2.** We first compute a polynomial interpolant to $f$ at $n+1$ data points $x_0, \ldots, x_n$, then evaluate the derivative of the polynomial interpolant:
$$p(x) = \sum_{i=0}^{n} f(x_i) \ell_i'(x),$$
where $\ell_i'(x)$ is the derivative of the elementary Lagrange polynomial.

Both strategies have a parameter: $h$ for the central difference and $n$ for the interpolation. For strategy 2, we'll have to work out the derivative of the elementary Lagrange polynomial $\ell_i(x)$.

The goal of this problem is to compare the strategies to see how they differ on the functions:

• $f(x) = e^x, 0 \le x \le 5$
• $f(x) = 1/(1+x^2), -5 \le x \le 5$
• $f(x) = e^{3x} \sin (200 x^2)/(1+20x^2), 0 \le x \le 1$.

We'll evaluate the error by taking 10000 uniformly spaced points within these intervals and comparing the true derivatives to the approximations by looking at the point of maximum difference (this is called uniform approximation).
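That measurement can be sketched in a few lines of Python (the name `uniform_error` and its argument list are my own invention):

```python
def uniform_error(approx, exact, lo, hi, m=10000):
    # max |approx(t_i) - exact(t_i)| over m uniformly spaced points in [lo, hi]
    pts = [lo + (hi - lo) * i / (m - 1) for i in range(m)]
    return max(abs(approx(t) - exact(t)) for t in pts)
```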

1. Write down the derivatives of these three functions and plot their exact derivatives.

2. Implement a computer code for strategy 1. Evaluate the error for $10^{-10} \le h \le 0.1$. Show a plot of how the error changes for these values of $h$. Show three or four plots that illustrate different error regimes (think high, middle, and low error, and maybe an extra point).

3. Write down an expression for the derivative of the Lagrange polynomials. (Hint: see the paper by Berrut and Trefethen from the readings!)

4. Implement a computer code for strategy 2. Evaluate the error for $5 \le n \le 250$. Show a plot of how the error changes for these values of $n$. Show three or four plots that illustrate different error regimes (think high, middle, and low error, and maybe an extra point).

5. Discuss advantages and disadvantages of these approaches.
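For part 2, strategy 1 itself is a one-liner; the interesting part is the sweep over $h$. A hedged Python sketch (function name mine):

```python
import math

def central_diff(f, x, h):
    # Strategy 1: symmetric difference quotient; truncation error is O(h^2),
    # but rounding error grows like machine-epsilon/h as h shrinks.
    return (f(x + h) - f(x - h)) / (2 * h)

# Sweep h logarithmically over roughly [1e-10, 0.1] at a single test point;
# the full assignment measures the uniform error over the whole interval.
hs = [10.0 ** (-e) for e in range(1, 11)]
errs = [abs(central_diff(math.exp, 1.0, h) - math.exp(1.0)) for h in hs]
```

On a log-log plot the error typically traces a V shape: it falls like $h^2$ until cancellation in $f(x+h) - f(x-h)$ takes over, then rises again for very small $h$.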

## Optional extra problems (not graded)

• Gautschi 2.53
• Gautschi 2.55
• Gautschi Machine Exercise 2.8c