CS314 - Week 1 - June 14 through June 18

by Jason Lewicki jlewicki@purdue.edu

Monday June 14, 2010

Book:

Numerical Methods - Using MATLAB 3rd/4th edition

Website:

www.cs.purdue.edu/homes/cs314

Goal:

-To learn the potential limitations of the computer in solving mathematical problems.
-To learn how to use the computer to solve/approximate non-linear and linear equations, integrations, differentiations, interpolations, and differential equations

Mailing List:

Add yourself to the mailing list. Log into a CS machine (lore) and type "mailer add me to cs314-students"

Grade Distribution:

25% midterm
25% final
50% projects and homework

Syllabus:

-floating point/fixed point representation of numbers
-Solutions of non-linear equations of the form f(x)=0
-How to solve linear equations Ax=B
-Interpolation
-Curve fitting
-Numerical differentiation
-Numerical integration
-Numerical Optimization
-Solutions of differential equations

Binary:

Computers use binary numbers.
1011 = 1x2^3 + 0x2^2 + 1x2^1 + 1x2^0 = 8 + 0 + 2 + 1 = 11

Decimal:

537.5 = 5x10^2 + 3x10^1 + 7x10^0 + 5x10^-1 = 537.5

Binary->Decimal

110.11 = 1x2^2 + 1x2^1 + 0x2^0 + 1x2^-1 + 1x2^-2 = 4 + 2 + 0 + 1/2 + 1/4 = 6.75

---------------------------------

Tuesday June 15, 2010

How to go from base10 to base2?

-Separate the integer part (>=1) from the fraction
-The integer part has the form n = b0 + (b1 x 2^1) + (b2 x 2^2) + ... + (bk x 2^k)
-We wish to find the digits bi: n/2 = b0/2 + (b1 x 2^0) + ... + (bk x 2^(k-1))
-The remainder of each division determines whether bi is a 0 or a 1

Example:

23 (base10) -> base2:
23/2 = 11 r=1
11/2 = 5 r=1
5/2 = 2 r=1
2/2 = 1 r=0
1/2 = 0 r=1
Therefore 23 = 10111
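
The repeated-division procedure above can be sketched in Python (the course uses MATLAB, but the algorithm is identical); the function name int_to_binary is just an illustrative choice:

```python
def int_to_binary(n):
    """Convert a non-negative integer to a binary string by
    repeatedly dividing by 2; each remainder is the next bit,
    least significant first."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder determines the bit
        n //= 2                  # integer-divide by 2 and repeat
    return "".join(reversed(bits))

print(int_to_binary(23))  # the 23 -> 10111 example above
```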

Example:

F = b_-1 x 2^-1 + b_-2 x 2^-2 + b_-3 x 2^-3 + ... + b_-n x 2^-n
2xF = b_-1 + b_-2 x 2^-1 + b_-3 x 2^-2 + ... + b_-n x 2^(-n+1)
if 2xF >= 1 then b_-1 = 1
else b_-1 = 0

Example:

.23 -> binary
.23x2 = .46 < 1 so .0
.46x2 = .92 < 1 so .00
.92x2 = 1.84 >= 1 so .001
.84x2 = 1.68 >= 1 so .0011
.68x2 = 1.36 >= 1 so .00111
.36x2 = .72 < 1 so .001110
This is an odd fraction in binary, sort of like 1/3 in base10
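
The doubling procedure can be sketched the same way (frac_to_binary is an illustrative name; the sketch truncates at a fixed number of bits, since fractions like .23 never terminate in binary):

```python
def frac_to_binary(f, n_bits):
    """Convert a fraction 0 <= f < 1 to n_bits binary digits:
    double f; if the result is >= 1 the next bit is 1 (keep only
    the fractional part), otherwise the next bit is 0."""
    bits = []
    for _ in range(n_bits):
        f *= 2
        if f >= 1:
            bits.append("1")
            f -= 1  # keep only the fractional part
        else:
            bits.append("0")
    return "." + "".join(bits)

print(frac_to_binary(0.23, 6))  # matches the worked example: .001110
```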

Example:

5.33 -> binary

part1:


5/2 = 2 r=1
2/2 = 1 r=0
1/2 = 0 r=1
So the integer part is 101

part2:

.33x2 = .66 < 1 so .0
.66x2 = 1.32 >= 1 so .01
.32x2 = .64 < 1 so .010
.64x2 = 1.28 >= 1 so .0101
.28x2 = .56 < 1 so .01010
So,
101.01010

Floating Point Number Representation:

How to represent numbers with integer and fraction part in memory NNNN.FF

Two ways:


1) Fixed Point
2) Floating Point

Fixed Point:

The number of bits for the integer and fraction parts is fixed.
Example format NNNN.FF: 4 bits for the integer, 2 bits for the fraction, plus 1 sign bit (0 positive, 1 negative)

Largest Number:

01111.11 = 2^3 + 2^2 + 2^1 + 2^0 + 2^-1 + 2^-2 = 15.75

Smallest Number:

11111.11 = -15.75

Smallest difference between 2 consecutive numbers:

 0000.01
-0000.00
--------
 0000.01

2^-2 = 1/2^2 = 1/4 = .25
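
Under the sign/4-integer-bit/2-fraction-bit layout assumed above, every representable value is a multiple of 2^-2; a quick Python sketch of the range and resolution:

```python
# Fixed point with 4 integer bits, 2 fraction bits, and a sign bit
# (the layout from the notes): values are multiples of 2**-2 = 0.25.
INT_BITS = 4
FRAC_BITS = 2

resolution = 2 ** -FRAC_BITS            # smallest gap between values
largest = 2 ** INT_BITS - resolution    # 1111.11 in binary
smallest = -largest                     # sign-magnitude negative limit

print(resolution, largest, smallest)    # 0.25 15.75 -15.75
```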

Problem:

Small range and limited precision

Advantage:

Fast arithmetic operations

Floating Point:

The binary point moves around to keep the desired precision
-use scientific notation in binary
-numbers are represented as: x = +/- 1.q x 2^(+/-e); 0 cannot be represented
-q stands for the mantissa; with the mantissa normalized as 0.q..., 0.5 <= q < 1
-in floating point we allocate a # of bits for the mantissa and a # of bits for the exponent

Addition:

1.01 x 2^3 ->  0.0101 x 2^5
1.10 x 2^5 -> +1.10   x 2^5 = 1.1101 x 2^5
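
The alignment step can be checked numerically; all the values in this example are exact in binary, so ordinary floats reproduce it:

```python
# 1.01 (binary) = 1.25, exponent 3;  1.10 (binary) = 1.5, exponent 5
m1, e1 = 1.25, 3
m2, e2 = 1.5, 5

# Align the smaller exponent to the larger: 1.01 x 2^3 -> 0.0101 x 2^5
m1_aligned = m1 * 2 ** (e1 - e2)      # 0.3125 = 0.0101 in binary
total = (m1_aligned + m2) * 2 ** e2   # (0.0101 + 1.10) x 2^5

print(total, 1.8125 * 2 ** 5)  # both 58.0; 1.1101 in binary = 1.8125
```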

---------------------------------

Wednesday June 16, 2010

Homework:

On website, due Wednesday June 23, 2010

Errors in computation of numbers:

Absolute Error = |p - p*|, where p is the true value and p* is its approximation
Relative Error = |p - p*| / |p|, p != 0
The number of bits used to represent a number are limited. So we need to "chop off" or "round" a number.
pi = 3.1415926
Chop off = 3.141
Round = 3.142

Absolute Error:

Chop-off = ~0.59x10^-3
Round-off = ~0.41x10^-3

Relative Error:

Chop-off = ~1.9x10^-4
Round-off = ~1.3x10^-4
Rounding is better.
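
The chop-off/round-off comparison can be reproduced directly (using the 7-digit value of pi from the notes as the "true" value):

```python
p = 3.1415926            # value being approximated
chop, rnd = 3.141, 3.142

abs_err_chop = abs(p - chop)            # ~0.59x10^-3
abs_err_round = abs(p - rnd)            # ~0.41x10^-3
rel_err_chop = abs_err_chop / abs(p)    # ~1.9x10^-4
rel_err_round = abs_err_round / abs(p)  # ~1.3x10^-4

print(abs_err_round < abs_err_chop)  # rounding wins on both measures
```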

Propagation of Errors:

Sum:

Assume p and q are approximated by p* and q*
p* = p + e_p
q* = q + e_q
p* + q* = (p + q) + (e_p + e_q)
The new error is the sum of the initial errors

Product:

p*q* = pq + p(e_q) + q(e_p) + (e_p)(e_q)
The new errors are a result of the amplification of the initial errors

Solution of non-linear equations:

f(x)=0; we will use iteration to solve the equation, starting with a value P0 and a function Pk = g(Pk-1)
P0
P1 = g(P0)
P2 = g(P1)
...
Pk = g(Pk-1)
We stop when |Pk - Pk-1| < epsilon, for some small epsilon

Fixed Point:

-A fixed point of g(x) is a real number P such that P = g(P)
-Geometrically, the fixed points of a function y=g(x) are the points of intersection between the curve y=g(x) and the line y=x

Example:

2 ways to solve:

1st way:

x^2 - 2x + 1 = 0
x^2 + 1 = 2x
(x^2 + 1)/2 = x
xk = ((xk-1)^2 + 1)/2

x0 = 0
x1 = (0^2 + 1)/2 = 1/2 = .5
x2 = (.5^2 + 1)/2 = .625
x3 = (.625^2 + 1)/2 = .695
x4 = (.695^2 + 1)/2 = .74
x5 = .77
x6 = .79
x7 = .82
x8 = .83
xn -> 1
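
The iteration above can be sketched generically (fixed_point_iterate is an illustrative name); note that convergence here is very slow because g'(1) = 1, right at the edge of the convergence condition:

```python
def fixed_point_iterate(g, x0, eps=1e-6, max_iter=100000):
    """Iterate x_k = g(x_{k-1}) until successive values differ
    by less than eps (or the iteration budget runs out)."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    return x

# g(x) = (x^2 + 1)/2 for x^2 - 2x + 1 = 0; the root is x = 1.
root = fixed_point_iterate(lambda x: (x * x + 1) / 2, 0.0)
print(root)  # creeps slowly toward 1
```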

2nd way:

x^2 - 2x + 1 = 0
x = (-b +/- sqrt(b^2 - 4ac))/(2a)
x = (-(-2) +/- sqrt(4 - 4))/2
x = 2/2 = 1

---------------------------------

Thursday June 17, 2010

Fixed Point Theorem:

If |g'(x)| <= k < 1 for all x in [a,b], and g(x) is in [a,b],
then the iteration Pn = g(Pn-1) will converge to a unique fixed point P in [a,b]. In this case, P is said to be an attractive fixed point.

However, if | g’(x) | > 1, then the iteration will not converge.

Different types of convergence/divergence:

Monotone convergence:

0 < g'(P) < 1

Oscillating Convergence:

-1 < g'(P) < 0

Monotone Divergence:

1 < g'(P)

Oscillating Divergence:

g'(P) < -1

Bisection Method:

-Used to find a solution of f(x)=0; it is similar to binary search
-# of iterations ~ log2(n/E), where n = b-a is the length of the starting interval [a,b] and E is the desired accuracy
-In the bisection method you have to start with a range [a,b] such that f(a) < 0 and f(b) > 0, or the opposite.
-Since f(x) is continuous, there is an x in [a,b] such that f(x)=0

Step 1:

-get the middle point c = (a+b)/2
if f(a) and f(c) have opposite signs, a zero lies in [a,c]; set b=c
if f(b) and f(c) have opposite signs, a zero lies in [c,b]; set a=c
if f(c) = 0, then stop. Also stop when |f(c)| < E, where E is some small value

Go back to Step 1:
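
The steps above can be sketched in Python (the function name bisect is an illustrative choice; it stops on the |f(c)| < E test from the notes):

```python
import math

def bisect(f, a, b, eps=1e-6):
    """Bisection: requires f(a) and f(b) with opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while True:
        c = (a + b) / 2           # Step 1: middle point
        fc = f(c)
        if abs(fc) < eps:         # stop when |f(c)| < E
            return c
        if f(a) * fc < 0:         # zero lies in [a, c]
            b = c
        else:                     # zero lies in [c, b]
            a = c

# sin(x) = 1/2 with x in degrees; the root in [0, 50] is 30
root = bisect(lambda x: math.sin(math.radians(x)) - 0.5, 0, 50)
print(root)  # close to 30
```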

Example:

sin(x) = 1/2 , x is in degrees
sin(x)-1/2 = 0 = f(x)
a=0, b=50

a    b    c     f(c)
0    50   25    sin(25)-.5 = -0.077
25   50   37.5  ...
...

c -> 30

---------------------------------

Friday June 18, 2010

How many iterations are needed?
At each step the range |a-b| is reduced by half.
Therefore at step n: |an - bn| = |a-b|/2^n

If we want an approximation with error < E:
|an - bn| = |a-b|/2^n < E
|a-b|/E < 2^n
log(|a-b|/E) < log(2^n)
log(|a-b|/E) < n x log(2)
(1/log(2)) x log(|a-b|/E) < n
n > (1/log(2)) x log(|a-b|/E) = log2(|a-b|/E)
Given some E, you will need at least this many iterations

Example:

If we need an approximation with E = 0.01:
log(|0-50|/0.01)/log(2) < n
12.3 < n
Therefore: at least 13 iterations.
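
The bound can be evaluated directly (math.log2 computes the base-2 logarithm):

```python
import math

# iterations needed to shrink [0, 50] below E = 0.01:
# n > log2(|a - b| / E)
n_min = math.log2(abs(0 - 50) / 0.01)  # log2(5000), about 12.29
iterations = math.ceil(n_min)          # round up to a whole iteration

print(n_min, iterations)
```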

False Position Method:

-created because the bisection method was too slow
-It also needs a range [a,b] containing a zero
-A line (chord) is drawn between (a, f(a)) and (b, f(b))
-A c is chosen to be the intersection of the line with the x axis
-As in bisection:

Step 1:

if f(a) and f(c) have opposite signs, a zero lies in [a,c]; set b=c
if f(b) and f(c) have opposite signs, a zero lies in [c,b]; set a=c
if f(c) = 0, then stop. Also stop when |f(c)| < E, where E is some small value

Go back to Step 1:

slope m = (f(a)-f(b))/(a-b)
also, using the point (c, 0): m = (f(b)-0)/(b-c)
so (f(a)-f(b))/(a-b) = f(b)/(b-c)
c = b - f(b) x [(a-b)/(f(a)-f(b))]
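
A Python sketch of the method using the chord formula just derived (false_position is an illustrative name):

```python
import math

def false_position(f, a, b, eps=1e-6, max_iter=100):
    """False position: c is where the chord through (a, f(a))
    and (b, f(b)) crosses the x axis."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = b - fb * (a - b) / (fa - fb)   # chord/x-axis intersection
        fc = f(c)
        if abs(fc) < eps:                  # stop when |f(c)| < E
            return c
        if fa * fc < 0:                    # zero lies in [a, c]
            b, fb = c, fc
        else:                              # zero lies in [c, b]
            a, fa = c, fc
    return c

# same example: sin(x) = 1/2 with x in degrees, root near 30
root = false_position(lambda x: math.sin(math.radians(x)) - 0.5, 0, 50)
print(root)
```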

Example:

sin(x) = 1/2
sin(x) - 1/2 = 0 = f(x)

i  a  b       f(a)           f(b)                c                                        f(c)
0  0  50      sin(0)-.5=-.5  sin(50)-.5=.266     50-.266x[(0-50)/(-.5-.266)]=32.546       sin(32.546)-.5=.03
1  0  32.546  sin(0)-.5=-.5  sin(32.546)-.5=.03  32.546-.03x[(0-32.546)/(-.5-.03)]=30.70  sin(30.70)-.5=.01
2  0  30.70   ...            ...                 ...                                      ...

Stop when |f(c)| < E

Check for Convergence:

1) Horizontal Convergence - Stop when |f(x)| < E
2) Vertical Convergence - Stop when |x - P| < D, where P is the root, f(P)=0
-we usually do not know P, so we approximate it with the value from the previous iteration:
|Pn - Pn-1| < D
3) Both Vertical/Horizontal Convergence - Stop when both |Pn - Pn-1| < D and |f(x)| < E

Troublesome Functions:

-If the graph y=f(x) is steep near the root (P,0), then the root finding is called well conditioned.
-The solution is easy to obtain with good precision
-If the graph y=f(x) is shallow near (P,0), then the root finding is ill conditioned, i.e. the computed root may only have a few significant digits.

Newton-Raphson Method:

If f(x), f'(x), and f''(x) are continuous, then we can use this information to find the solutions of f(x)=0
...
Next week's notes should have the rest of the Newton-Raphson Method.