We will use a package called ApproxFun in this tutorial. ApproxFun is a Julia package based on the MATLAB package Chebfun, which was written by Lloyd N. Trefethen (Nick Trefethen) and Zachary Battles in early 2004, greatly extended starting around 2007, and is currently under active development. The idea is to use good function bases to represent functions numerically on the computer, instead of symbolically.

In [1]:
Pkg.add("ApproxFun") # Add the package we'll need
INFO: Nothing to be done
INFO: METADATA is out-of-date — you may not have the latest version of ApproxFun
INFO: Use `Pkg.update()` to get the latest versions of your packages
In [2]:
using ApproxFun # use it!
using Plots
In [4]:
# We can use ApproxFun to build up functions
x = Fun(identity,[-1.,1.]) 
plot(x)
Out[4]:

What is x here? x is a representation of the identity function $f(x) = x$ on the interval $[-1,1]$. It is represented in a basis of Chebyshev polynomials. You don't need to know much about that at this point (we'll cover it in class), but this is a particularly useful basis. What matters is that we can compute with $f$ really easily!
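
For example, we can look at those Chebyshev coefficients directly (assuming the coefficients accessor in your ApproxFun version):

In [ ]:
coefficients(x) # Chebyshev coefficients of the identity: [0.0, 1.0], since x = T_1(x)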

In [5]:
plot(sin(100*x))
Out[5]:
In [7]:
length(sin(100*x))
Out[7]:
156
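
That 156 is the number of Chebyshev coefficients ApproxFun decided it needs to resolve $\sin(100x)$ to roughly machine precision. Smoother, lower-frequency functions need far fewer; as a quick check (assuming length of a Fun reports the coefficient count, as above):

In [ ]:
length(sin(10*x)) # a lower-frequency function, so far fewer coefficients than sin(100*x)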

Not only can we manipulate x, we can also find roots and derivatives!

In [8]:
f = sin(10*x)^2
g = cos(x)
h = f+g-1

r = roots(h)
rp = roots(h') # compute the zeros of the derivative of h

plot(h)
scatter!(r,h(r);color=:green)
scatter!(rp,h(rp);color=:red)
Out[8]:
In [9]:
f = Fun(x -> exp(-x), [1, 5.])
@show sum(f) # sum of a Fun integrates it over its domain, here [1,5]
@show exp(-1) - exp(-5.); # the exact value of the integral
sum(f) = 0.36114149417235686
exp(-1) - exp(-5.0) = 0.36114149417235686
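
So sum of a Fun is its definite integral, here $\int_1^5 e^{-x}\,dx = e^{-1} - e^{-5}$, which is why the two numbers agree. As a sketch (assuming cumsum of a Fun returns the antiderivative that vanishes at the left endpoint; check the ApproxFun docs for your version), we can also build the indefinite integral as a Fun:

In [ ]:
F = cumsum(f)  # antiderivative of f as a Fun, normalized so that F(1) == 0
F(5) - F(1)    # should reproduce sum(f) == exp(-1) - exp(-5)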
In [10]:
g = Fun(x -> exp(x), [1, 5.])
plot(f*g) # everything is a numerical approximation
Out[10]:
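
Since $f(x)g(x) = e^{-x}e^{x} = 1$ for every $x$, the plot should be the constant function 1; the tiny deviations you may see are at the level of roundoff and illustrate that every Fun is a floating-point approximation.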
In [11]:
@show norm(f-g,1)
@show norm(f-g,2)
@show norm(f-g,Inf);
norm(f - g,1) = 145.33373577994536
norm(f - g,2) = 104.88854091031845
norm(f - g,Inf) = 148.40642115557748
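
These are the continuous norms on $[1,5]$: $\|f-g\|_1 = \int_1^5 |f-g|\,dx$, $\|f-g\|_2 = \left(\int_1^5 (f-g)^2\,dx\right)^{1/2}$, and $\|f-g\|_\infty = \max_{x\in[1,5]} |f(x)-g(x)|$. The last one is attained at $x=5$, where $e^{5} - e^{-5} \approx 148.41$, matching the value above.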
In [16]:
plot(abs(g-f))
Out[16]:
In [17]:
x = Fun(identity,[-1,1])
f = sign(x)
plot(f, ylims=(-1.5,1.5))
Out[17]:
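
Since $\mathrm{sign}(x)$ has a jump at 0, a single global Chebyshev expansion cannot represent it; ApproxFun handles this by splitting the domain at the discontinuity and representing the result piecewise.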
In [19]:
@show norm(f-0,1);
@show norm(f-0,2);
@show norm(f-0,Inf);
norm(f - 0,1) = 2.0
norm(f - 0,2) = 1.4142135623730951
norm(f - 0,Inf) = 1.0
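
These match the exact values: $\int_{-1}^{1} |\mathrm{sign}(x)|\,dx = 2$, $\left(\int_{-1}^{1} \mathrm{sign}(x)^2\,dx\right)^{1/2} = \sqrt{2} \approx 1.414$, and $\max_x |\mathrm{sign}(x)| = 1$.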
In [20]:
# ApproxFun can be useful to generate inner-product matrices and function bases
B = [(0.*x + 1.);x;x^2;x^3] # the monomials 1, x, x^2, x^3 as Funs
plot(B)
Out[20]:
In [21]:
# we can orthogonalize B with the Gram-Schmidt process
Pi = B # start with the basis (note: Pi aliases B, so B is overwritten in place)
for j=2:length(B)
    for k=1:j-1
        # subtract from Pi[j] its projection onto the already-orthogonalized Pi[k]
        Pi[j] = Pi[j] - sum(Pi[j]*Pi[k])/sum(Pi[k]*Pi[k])*Pi[k]
    end
end
display(plot(Pi))
sum(Pi[1]*Pi[3]) # inner product of Pi[1] and Pi[3]; should be numerically zero
Out[21]:
2.7755575615628914e-17
In [ ]:
Pi[3]
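
Up to scaling, these orthogonalized functions are the Legendre polynomials: Gram-Schmidt with the $L^2$ inner product on $[-1,1]$ applied to $1, x, x^2, x^3$ produces $1$, $x$, $x^2 - \tfrac{1}{3}$, and $x^3 - \tfrac{3}{5}x$, which are multiples of $P_0,\dots,P_3$. So Pi[3] is (numerically) the polynomial $x^2 - \tfrac{1}{3}$.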