$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Numerical Analysis

## Main reference

Gautschi
Walter Gautschi. Numerical Analysis, 2nd edition. Birkhäuser/Springer, 2012.

## Other references

ATAP
Lloyd N. Trefethen. Approximation Theory and Approximation Practice. SIAM.
First course
Uri M. Ascher and Chen Greif. A First Course in Numerical Methods. SIAM.

## Lecture 30

Spectral methods and a review of the class.

Everything below
Trefethen, Chapter 6, 7
Final Review Slides
Julia
Spectral differentiation (html) (ipynb)
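To give a flavor of spectral differentiation, here is a small Python sketch (the class demos use Julia) of the Chebyshev differentiation matrix from Trefethen's Chapter 6; the diagonal is filled in via negative row sums, a standard accuracy trick. Applied to a polynomial of degree at most N, the result is exact up to rounding.

```python
import math

def cheb(N):
    """Chebyshev points x_j = cos(j*pi/N) and the (N+1)x(N+1)
    spectral differentiation matrix D (cf. Trefethen, Spectral Methods in Matlab, Ch. 6)."""
    x = [math.cos(j * math.pi / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    # Diagonal via negative row sums, so D exactly annihilates constants.
    for i in range(N + 1):
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D

x, D = cheb(8)
n = len(x)
f = [xi ** 3 for xi in x]                                 # f(x) = x^3 on [-1, 1]
df = [sum(D[i][j] * f[j] for j in range(n)) for i in range(n)]
err = max(abs(df[i] - 3 * x[i] ** 2) for i in range(n))   # exact derivative is 3x^2
```

Since f is a cubic and N = 8, `err` is at the level of rounding error rather than truncation error.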

## Lecture 29

A-stable methods for ODEs, and the connection between A-stability and eigenvalues.

Section 5.9
Trefethen, Chapter 6, 7
Julia
Stiff ODEs (html) (ipynb)
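The stiffness phenomenon is easy to reproduce. On y' = -100y with h = 0.1, forward Euler multiplies by 1 + hλ = -9 each step and explodes, while backward Euler (which is A-stable) multiplies by 1/(1 - hλ) = 1/11 and decays like the true solution. A minimal Python sketch (the class demo is in Julia):

```python
lam, h, steps = -100.0, 0.1, 20
y_fwd = y_bwd = 1.0
for _ in range(steps):
    y_fwd = (1 + h * lam) * y_fwd     # forward Euler: amplification factor -9
    y_bwd = y_bwd / (1 - h * lam)     # backward Euler: amplification factor 1/11
# |y_fwd| = 9^20 has blown up; |y_bwd| = 11^-20 decays, matching the exact solution
```

The step size that would make forward Euler stable here is h < 2/100, far smaller than accuracy alone requires; that gap is what "stiff" means in practice.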

## Lecture 28

Global error analysis and step length control

Section 5.7, 5.8, 5.9
ODE Error
Julia
ODEs and Error Analysis (html) (ipynb)
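One common step-length-control idea (a sketch, not necessarily the exact scheme of Section 5.8) is step doubling: take one Euler step of size h and two of size h/2, use their difference as a local error estimate, and size the next step to meet a tolerance. For a first-order method the estimate behaves like Ch², so the new step uses a square root:

```python
import math

def euler_step(f, t, y, h):
    """One forward Euler step for y' = f(t, y)."""
    return y + h * f(t, y)

f = lambda t, y: y                     # test problem y' = y
t0, y0, h, tol = 0.0, 1.0, 0.1, 1e-4
y_big = euler_step(f, t0, y0, h)       # one step of size h
y_half = euler_step(f, t0, y0, h / 2)  # two steps of size h/2
y_small = euler_step(f, t0 + h / 2, y_half, h / 2)
est = abs(y_small - y_big)             # ~ C h^2 local error estimate
h_new = h * math.sqrt(tol / est)       # choose h so the estimate meets tol
```

Here y_big = 1.1 and y_small = 1.05² = 1.1025, so the estimate is h²/4 = 0.0025 and the controller shrinks the step to 0.02.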

## Lecture 27

One-step methods for ODEs and an intro to error analysis.

Section 5.3, 5.4, 5.5, 5.6
ODE Theory

## Lecture 26

Forward and Backward Euler and systems of equations

Section 5.6.1
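For systems, the qualitative difference between the two Euler methods shows up nicely on the harmonic oscillator u' = v, v' = -u, whose true solutions conserve u² + v². A Python sketch (illustrative; the 2x2 backward Euler solve is written out by hand):

```python
h, n = 0.1, 100
u_f, v_f = 1.0, 0.0          # forward Euler state
u_b, v_b = 1.0, 0.0          # backward Euler state
for _ in range(n):
    # forward Euler: explicit update y_{n+1} = (I + h A) y_n, A = [[0,1],[-1,0]]
    u_f, v_f = u_f + h * v_f, v_f - h * u_f
    # backward Euler: solve (I - h A) y_{n+1} = y_n; det(I - hA) = 1 + h^2
    d = 1 + h * h
    u_b, v_b = (u_b + h * v_b) / d, (v_b - h * u_b) / d
E_f = u_f ** 2 + v_f ** 2    # grows by (1 + h^2) every step
E_b = u_b ** 2 + v_b ** 2    # shrinks by 1/(1 + h^2) every step
```

Forward Euler spirals outward and backward Euler spirals inward; neither conserves the invariant, but in opposite directions.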

## Lecture 25

Review of Chapter 4 and introduction to ODEs

Chapter 4
Gautschi Section 5.1

## Lecture 24

Remote lecture on the convergence of the secant method, fixed-point iteration for nonlinear equations, and Newton's method for nonlinear equations.

Section 4.4, 4.9
Julia
Secant Convergence Rates (html) (ipynb)
2d Bisection (html) (ipynb)
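The secant iteration itself is two lines; its errors shrink superlinearly with order equal to the golden ratio, (1+√5)/2 ≈ 1.618. A Python sketch on f(x) = x² - 2 (the class notebook explores the convergence rates in Julia):

```python
f = lambda x: x * x - 2          # root at sqrt(2)
x0, x1 = 1.0, 2.0
for _ in range(6):
    # secant update: replace f'(x1) in Newton's method by a difference quotient
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
root = x1
```

Six updates already drive the error below 10⁻⁹; each step roughly multiplies the exponent of the error by 1.618, rather than doubling it as Newton's method does.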

## Lecture 23

Newton's method and fixed points (Guest lecture by Ahmed Sameh)

Section 4.6, 4.7

## Lecture 22

Rates of convergence, method of false position, secant method.

Section 4.2, 4.4
Julia
How quickly sequences converge (html) (ipynb)

## Lecture 21

Intro to nonlinear equation solving, example problems and the method of bisection.

Section 4.1, 4.3
Bisecting roots in Julia. Some interesting details that arise with bisection in floating point.
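A minimal bisection sketch in Python (the linked notebook does this more carefully in Julia, including the floating point subtleties): keep an interval with a sign change and halve it until it is shorter than a tolerance.

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: requires f(a) * f(b) < 0; halves [a, b] keeping a sign change."""
    fa = f(a)
    while b - a > tol:
        m = a + (b - a) / 2        # midpoint form that avoids overflow in (a+b)/2
        fm = f(m)
        if fa * fm <= 0:
            b = m                  # root lies in [a, m]
        else:
            a, fa = m, fm          # root lies in [m, b]
    return a + (b - a) / 2

root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each iteration gains exactly one bit of the answer, so about 40 iterations reduce an interval of length 1 to 10⁻¹².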

## Lecture 20

Midterm review

Section 3.2.4, 3.2.7
Midterm

## Lecture 17

Review for the midterm!

Everything below
Midterm Review Slides

## Lecture 16

We reviewed the relationship between the zeros of orthogonal polynomials, quadrature, and eigenvalues. Then we discussed quadrature via the method of undetermined coefficients, which is a handy way to compute quadrature weights!

Integration and Quadrature, Chapter 12, Spectral Methods in Matlab, Trefethen
Section 3.2.2, 3.2.3, 3.2.5
Julia
Orthogonal polynomials, eigenvalues, and quadrature (html) (ipynb)
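The method of undetermined coefficients can be carried out in exact arithmetic with Python's `fractions` module: demand that the rule with nodes 0, 1/2, 1 integrate 1, x, x² exactly on [0, 1], solve the resulting 3x3 linear system, and out come Simpson's weights. (A sketch; the small Gaussian elimination here is written by hand.)

```python
from fractions import Fraction as F

nodes = [F(0), F(1, 2), F(1)]
# Row k enforces exactness on x^k: sum_j w_j x_j^k = integral of x^k on [0,1] = 1/(k+1)
A = [[x ** k for x in nodes] for k in range(3)]
b = [F(1, k + 1) for k in range(3)]

# Gaussian elimination with partial pivoting, in exact rational arithmetic
n = 3
for col in range(n):
    p = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[p] = A[p], A[col]
    b[col], b[p] = b[p], b[col]
    for r in range(col + 1, n):
        m = A[r][col] / A[col][col]
        A[r] = [a - m * c for a, c in zip(A[r], A[col])]
        b[r] = b[r] - m * b[col]
w = [F(0)] * n
for r in range(n - 1, -1, -1):
    w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
# w == [1/6, 2/3, 1/6]: exactly Simpson's rule on [0, 1]
```

Because the rule has three nodes, exactness on quadratics comes for free with exactness on cubics too (Simpson's bonus degree), but only the three moment equations are needed to pin the weights down.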

## Lecture 15

Interpolatory quadrature via integrating polynomials exactly, and getting better accuracy via the node polynomial. We ended on how orthogonal polynomials enter into quadrature.

Section 3.2.2

## Lecture 14

We introduced numerical integration with the trapezoidal and Simpson's rules and started thinking about more advanced methods.

Gautschi Section 3.2 (Trapezoidal and Simpson's rule)
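The composite versions of both rules fit in a few lines; on ∫₀¹ eˣ dx = e - 1 with ten subintervals, the O(h²) trapezoidal error is around 10⁻³ while the O(h⁴) Simpson error is around 10⁻⁶. A Python sketch for comparison:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n subintervals: O(h^2) error."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even: O(h^4) error."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes, weight 4
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes, weight 2
    return h * s / 3

exact = math.e - 1
err_t = abs(trapezoid(math.exp, 0, 1, 10) - exact)
err_s = abs(simpson(math.exp, 0, 1, 10) - exact)
```

Halving h cuts the trapezoidal error by about 4 and the Simpson error by about 16, which is the quickest way to check an implementation's order.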

## Lecture 13

We saw a number of ways to approximate derivatives, starting from polynomial interpolation and then going through Taylor series and finite difference methods, before reviewing Gautschi's perspective in the textbook. There is some Julia code to illustrate some non-obvious effects here.

Gautschi Section 3.1
Julia
Approximating derivatives (html) (ipynb)
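The basic contrast is between the forward difference, with O(h) truncation error, and the central difference, with O(h²). A Python sketch on sin at x = 1 (the Julia notebook also shows the rounding-error effects that appear for very small h):

```python
import math

x, h = 1.0, 1e-4
fwd = (math.sin(x + h) - math.sin(x)) / h              # forward difference: O(h)
ctr = (math.sin(x + h) - math.sin(x - h)) / (2 * h)    # central difference: O(h^2)
err_fwd = abs(fwd - math.cos(x))   # ~ (h/2)|f''(x)| ~ 4e-5
err_ctr = abs(ctr - math.cos(x))   # ~ (h^2/6)|f'''(x)| ~ 1e-9
```

Shrinking h further does not help forever: once the truncation error falls below the rounding error of size roughly eps/h, the total error starts growing again.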

## Lecture 12

Today was all about splines and piecewise interpolants. We saw how piecewise interpolants can be made arbitrarily accurate and how easy it is to get a good approximation of the underlying function.

Gautschi Section 2.3
Chapter 11. This has a good treatment of piecewise interpolants from the basis function perspective.
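The "arbitrarily accurate" claim is easy to test numerically for the simplest piecewise interpolant, the piecewise linear one: its maximum error is O(h²), so doubling the number of pieces roughly quarters the error. A Python sketch on sin over [0, π] (helper names are mine):

```python
import math

def pw_linear(xs, ys, t):
    """Evaluate the piecewise linear interpolant of (xs, ys) at t in [xs[0], xs[-1]]."""
    for i in range(len(xs) - 1):
        if xs[i] <= t <= xs[i + 1]:
            w = (t - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - w) * ys[i] + w * ys[i + 1]

def max_err(n):
    """Max error of the n-piece interpolant of sin, sampled on a fine grid."""
    xs = [math.pi * i / n for i in range(n + 1)]
    ys = [math.sin(x) for x in xs]
    ts = [math.pi * j / 1000 for j in range(1000)]
    return max(abs(pw_linear(xs, ys, t) - math.sin(t)) for t in ts)

e10, e20 = max_err(10), max_err(20)   # error ~ h^2/8 * max|f''|: halving h quarters it
```

Cubic splines do even better, O(h⁴), while also matching first and second derivatives across the breakpoints.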

## Lecture 11

We reviewed Newton interpolation and used it to derive Hermite interpolation by studying what it would mean to take divided differences at the same point.

Gautschi Section 2.2
Julia
Hermite interpolation (html) (ipynb) (wrong name, it's really Newton)
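The divided-difference table and the nested (Horner-like) evaluation of the Newton form both fit in a short sketch; here in Python, with the table built in place, one column at a time:

```python
def newton_coeffs(xs, ys):
    """In-place divided differences: returns c[k] = f[x_0, ..., x_k]."""
    c = list(ys)
    n = len(xs)
    for k in range(1, n):
        # update column k from the bottom up so lower-order entries survive
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(xs, c, t):
    """Evaluate sum_k c[k] * prod_{j<k} (t - x_j) by nested multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - xs[k]) + c[k]
    return p

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]          # data sampled from f(x) = x^3
c = newton_coeffs(xs, ys)
val = newton_eval(xs, c, 2.5)      # the cubic interpolant reproduces 2.5^3
```

Hermite interpolation falls out of the same table by repeating a node and replacing the undefined divided difference f[x, x] by the derivative f'(x), which is exactly the limit the lecture examined.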

## Lecture 10

Guest lecture by Youhan Fang (our TA!) on Barycentric interpolation and Newton interpolation.

Gautschi Section 2.2
Berrut and Trefethen, Barycentric Lagrange Interpolation
Julia
Evaluating Polynomial Interpolants (html) (ipynb)
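The second (true) barycentric formula from Berrut and Trefethen evaluates the interpolant in O(n) per point once the weights w_j = 1/∏_{k≠j}(x_j − x_k) are computed. A Python sketch (the notebook does this in Julia):

```python
import math

def bary_weights(xs):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    return [1.0 / math.prod(xj - xk for k, xk in enumerate(xs) if k != j)
            for j, xj in enumerate(xs)]

def bary_eval(xs, ys, w, t):
    """Second barycentric formula; exact (by early return) at the nodes."""
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, w):
        if t == xj:
            return yj
        s = wj / (t - xj)
        num += s * yj
        den += s
    return num / den

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]
w = bary_weights(xs)
val = bary_eval(xs, ys, w, 2.5)    # interpolant of x^3 reproduces 2.5^3
```

The common scale of the weights cancels between numerator and denominator, which is what makes the formula numerically stable for good node sets like Chebyshev points.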

## Lecture 9

Guest lecture by Ahmed Sameh on Lagrange interpolation, the error formula, and Chebyshev nodes.

Gautschi Section 2.2

## Lecture 8

We covered the normal equations, the Weierstrass approximation theorem, and orthogonal functions.

Gautschi Section 2.1
Chapter 6

## Lecture 7

We saw how to formalize the best approximation problem in terms of function norms. We got through integrals with a discrete measure, then did a quick demo in Julia.

Gautschi Section 2.0
Trefethen, Numerically computing with functions
Chapter 1-3
chebfun
ApproxFun
Julia
Approximating functions with Julia (html) (ipynb)

## Lecture 6

We did more Julia, derived the condition number of a matrix, and saw some issues with the variance computation, then started into Chapter 2 on functions.

Gautschi Section 2.0
Sample variance computations
Julia
Julia In-class
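The variance issue from class is reproducible in a few lines: the textbook one-pass formula E[x²] − mean² subtracts two nearly equal huge numbers and loses everything, while the two-pass formula that subtracts the mean first is stable. A Python sketch:

```python
data = [1e9, 1e9 + 1, 1e9 + 2]     # true population variance is 2/3
n = len(data)
mean = sum(data) / n               # 1e9 + 1, computed exactly here

# One-pass formula: sums of squares near 1e18 cancel catastrophically.
naive = sum(x * x for x in data) / n - mean * mean
# Two-pass formula: deviations -1, 0, 1 are tiny and exact.
twopass = sum((x - mean) ** 2 for x in data) / n
```

With this data the naive answer is off by far more than the variance itself, while the two-pass answer is correct to full precision; shifting the data before squaring is the whole fix.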

## Lecture 5

Today we completed our study of the overall floating point error computation using the condition number of an algorithm. We saw a few examples too, and then had a demo of Julia.

Gautschi Section 1.4, 1.5
Julia
Julia Intro

## Lecture 4

We introduced condition numbers for a function and studied two types: a sharp estimate based on the condition number of all the constituent gradients and a weaker condition number based on the norm of the Jacobian.

Gautschi Section 1.3
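For a scalar function the condition number specializes to κ(x) = |x f'(x) / f(x)|, and a one-line computation already shows the classic trouble spot: f(x) = x − 1 is perfectly smooth, but near x = 1 its condition number blows up, which is cancellation seen through the conditioning lens. A Python sketch:

```python
def rel_cond(f, fprime, x):
    """Relative condition number kappa(x) = |x * f'(x) / f(x)|."""
    return abs(x * fprime(x) / f(x))

f = lambda x: x - 1.0
fp = lambda x: 1.0
k_far = rel_cond(f, fp, 2.0)       # = 2: harmless
k_near = rel_cond(f, fp, 1.0001)   # ~ 1e4: tiny input errors amplified 10000x
```

A relative input error of 10⁻¹⁶ near x = 1.0001 thus becomes a relative output error near 10⁻¹², no matter how carefully the subtraction is performed.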

## Lecture 3

We got through the analysis of the fundamental floating point operations with respect to input errors.

Gautschi Section 1.2
Pentium FDIV Bug
Fun reading that mentions cancellation as a plot point
Once dead by Richard Phillips (See intro to Chapter 55, page 208)
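Cancellation is easy to demonstrate directly: for small x, the expression 1 − cos(x) loses all its digits, while the algebraically identical 2 sin²(x/2) keeps full accuracy. A Python sketch:

```python
import math

x = 1e-8
naive = 1.0 - math.cos(x)            # cos(1e-8) rounds to 1.0: total cancellation
stable = 2.0 * math.sin(x / 2) ** 2  # algebraically equal, no subtraction of near-equals
# The true value is about x^2 / 2 = 5e-17; only the stable form recovers it.
```

The fix is typical of how cancellation is handled in practice: rewrite the formula so the subtraction of nearly equal quantities happens symbolically, before any rounding.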

## Lecture 2

We reviewed sources for numerical errors in computers and got through the format of IEEE Floating Point Arithmetic and how the numbers are stored.

Gautschi Section 1.1
Constructions of the real numbers
Decimal arithmetic
Integers and floating point numbers in Julia. Most of the things described here for Float64 will also work in Matlab, except Julia has some nicer functions.
IEEE Floating point
Michael Overton, Numerical Computing with IEEE Floating Point Arithmetic - a great textbook on IEEE arithmetic!
Julia
IEEE Floats in Julia
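The IEEE double storage format (1 sign bit, 11 exponent bits with bias 1023, 52 fraction bits with an implicit leading 1 for normal numbers) can be inspected directly; here a Python sketch using the standard `struct` module, since the notebook does the Julia version:

```python
import struct

def float64_fields(x):
    """Split an IEEE 754 double into its (sign, biased exponent, fraction) bit fields."""
    bits = int.from_bytes(struct.pack(">d", x), "big")   # 64 raw bits, big-endian
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF      # 11 bits; bias is 1023
    fraction = bits & ((1 << 52) - 1)    # 52 bits; implicit leading 1 for normals
    return sign, exponent, fraction

s, e, f = float64_fields(1.0)   # 1.0 = (+1) * 1.0 * 2^(1023 - 1023)
```

For 1.0 the fields are (0, 1023, 0); for -2.0 they are (1, 1024, 0), since 2 = 1.0 × 2¹ and the stored exponent is 1 + 1023.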

## Lecture 1

We introduced the class and reviewed the syllabus.