Numerical Methods Week #1

Monday, June 14, 2004

 

Goals:

·         Learn how to use the computer to solve and approximate solutions of linear/non-linear equations, and to perform numerical integration and differentiation, interpolation, and the solution of differential equations

·         Understand the limitations as well as the potential of the computer for solving mathematical problems

 

Webpage:       http://www.cs.purdue.edu/homes/cs314

 

Office Hours: Walk in for short questions or make an appointment. 
E-mail recommended.

 

Textbook:       Numerical Methods Using MATLAB (3rd edition) by John H. Mathews and Kurtis D. Fink (Prentice Hall)

 

Grade Distribution:

 

                        Midterm, Final                        50%

                        Projects and Homework         50%

 

Mailing List:

 

·         Log in to a CS machine (Lore)

·         If you don’t have a CS account, send e-mail to grr@cs.purdue.edu

·         Type “mailer add me to cs314”

·         Send questions to cs314-ta@cs.purdue.edu. Don’t post them to the list.

·         Answers will be posted to the list depending on how important they are to everyone

 

Newsgroups:

 

·         Add yourself to the Purdue newsgroup for the CS314 class

(The professor won’t be reading it, but the TAs will be.)

 

Syllabus:

           

·         Representation of Floating Point Numbers and Error Analysis

·         Solution of Nonlinear Equations

·         The Solution of Linear Systems AX=B

·         Interpolation and Polynomial Approximation

·         Curve Fitting

·         Numerical Differentiation

·         Numerical Integration

·         Numerical Optimization

·         Solution of Differential Equations

 

Lecture Notes:

 

·         Volunteers can take notes for one week; this counts for up to 2% extra credit on the final grade, depending on the quality of the class notes

·         Submit notes as an HTML file, with diagrams in JPG/GIF format

·         E-mail it to zouh@cs.purdue.edu

·         Do not e-mail them to the professor

·         Sign-up will be tomorrow, Tuesday, June 15, 2004.

 

Class Time: MTWTF 1:00 PM – 2:00 PM at UNIV 303

 

Binary Numbers:

 

·         Computers use binary numbers

·         Easier to implement circuits with 2 states – “On” and “Off”

·         Integer Representation:

o       b0 * 2^0 + b1 * 2^1 + b2 * 2^2 + …

 

Tuesday, June 15, 2004

 

NOTE: Homework 1 is available on the webpage. It is due electronically on Tuesday, June 22, 2004 at 11:59 PM.

 

Decimal-Binary Conversion:

Separate integer part from the fraction: N.F

N = b0 + (b1 * 2^1) + (b2 * 2^2) + ... + (bk * 2^k)

 

We divide by 2 in order to find each bi (0 or 1):

N/2 = b0/2 + (b1 * 2^0) + ... + (bk * 2^(k-1))

The fractional part b0/2 comes from the remainder of the division, so the remainder determines whether b0 is 0 or 1. The integer part of the quotient, (b1 * 2^0) + ... + (bk * 2^(k-1)), becomes the new N.

The above process is repeated until the integer part is 0.

 

Example: 57₁₀

57/2 = 28, remainder = 1

28/2 = 14, remainder = 0

14/2 = 7, remainder = 0

7/2 = 3, remainder = 1

3/2 = 1, remainder = 1

1/2 = 0, remainder = 1

Reading the remainders from bottom to top, the answer is 111001₂.
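A minimal sketch of this repeated-division algorithm in Python (the course uses MATLAB, but no code appears in these notes, so the language and the function name dec2bin_int are our own):

    def dec2bin_int(n):
        # Repeatedly divide by 2; the remainders are the bits b0, b1, b2, ...
        # (least significant first). Reverse them to get the binary string.
        if n == 0:
            return "0"
        bits = []
        while n > 0:
            bits.append(str(n % 2))   # remainder = next bit
            n //= 2                   # quotient becomes the new N
        return "".join(reversed(bits))

    print(dec2bin_int(57))   # prints 111001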

0.1₂ = 1 * 2^-1 = ½ = 0.5₁₀

0.01₂ = 0 * 2^-1 + 1 * 2^-2 = ¼ = 0.25₁₀

(1/3)₁₀ = 0 * 2^-1 + 1 * 2^-2 + 0 * 2^-3 + 1 * 2^-4 + … = 0.0101010101…₂
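The fraction part F can be converted by the mirror-image process: repeatedly multiply by 2 and peel off the integer part, which is the next bit after the binary point. (The notes imply but do not spell out this counterpart algorithm.) A Python sketch, with the helper name frac2bin our own:

    def frac2bin(f, max_bits=16):
        # Each multiplication by 2 shifts the binary point right by one;
        # the integer part (0 or 1) is the next bit. Some fractions,
        # such as 1/3, never terminate, so we cap the number of bits.
        bits = []
        for _ in range(max_bits):
            f *= 2
            bit = int(f)
            bits.append(str(bit))
            f -= bit
            if f == 0:
                break
        return "0." + "".join(bits)

    print(frac2bin(0.25))   # 0.01
    print(frac2bin(1/3))    # 0.0101010101010101 (repeats, as shown above)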

 

Representation of Numbers in Machine

The number of bits for the integer part and for the fraction part is fixed (fixed point representation).

Example: NNNN.FF₂ has 4 bits for the integer part and 2 bits for the fraction. In this case, the largest number is 1111.11₂ = 15.75₁₀ and the smallest number is 0000.00₂ = 0₁₀.

Advantages:

·         Fast arithmetic (uses integer arithmetic)

·         Used in signal processors and signal processing programs that require real-time operation. Examples: MP3 players, MP3 encoders/decoders, MPEG, etc.

Disadvantages:

·         Small range of numbers

·         Bad precision

·         Have to be careful about potential overflow

Floating Point:

The decimal point moves around to keep the desired precision. Numbers are represented in the computer using scientific notation:

x = ±Q * 2^N

where Q is called the mantissa and N is called the exponent. Q and N are each represented with a fixed number of bits, and the precision of the number depends on the number of bits used to represent Q.

To prevent multiple representations of the same number, Q is normalized so that there is only one representation. Example: requiring 0.5 ≤ Q < 1 (except for Q = 0).
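Python's math.frexp exposes exactly this normalization: it returns a mantissa q with 0.5 ≤ |q| < 1 (or q = 0) and an integer exponent n such that x = q * 2^n. A quick illustration:

    import math

    for x in [0.5, 3.0, 57.0, 0.1]:
        q, n = math.frexp(x)          # x == q * 2**n, with 0.5 <= |q| < 1
        print(f"{x} = {q} * 2^{n}")
    # 3.0 = 0.75 * 2^2, 57.0 = 0.890625 * 2^6, 0.1 = 0.8 * 2^-3, ...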

 

Representation of Real Numbers

4 bytes (32 bits)

24 bits mantissa (1 bit sign)

8 bits exponent (1 bit sign)

Range is 2^-128 to 2^127

Precision is approximately 6 digits

8 bytes (64 bits)

53 bits mantissa (1 bit sign)

11 bits exponent (1 bit sign)

Range is about 2^-1024 to 2^1023

Precision is approximately 15 digits
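Python's own floats are 8-byte doubles; to see the 4-byte precision we can round-trip a value through the struct module's IEEE single format (a sketch; modern IEEE 754 singles store a biased exponent rather than a separate exponent sign bit, but the precision behaves as described):

    import struct

    x = 1/3
    x32 = struct.unpack('f', struct.pack('f', x))[0]   # squeeze through 4 bytes
    print(f"64-bit: {x:.17f}")     # 0.33333333333333331  (~15-16 good digits)
    print(f"32-bit: {x32:.17f}")   # 0.33333334326744080  (~6-7 good digits)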

Errors

Truncation vs. Rounding

Example: represent π = 3.14159265… using three digits for the fraction. Chopping (truncation) gives 3.141; rounding gives 3.142.

Absolute Error: |true value - approximation|

          Chop-off = 0.59 x 10^-3

          Round-off = 0.41 x 10^-3

Relative Error: |true value - approximation| / |true value|

          Chop-off = 1.9 x 10^-4

          Round-off = 1.3 x 10^-4

Conclusion: Round-off is better because it produces a smaller error.
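These error values can be verified directly; a minimal Python check of the π example above:

    import math

    x = math.pi           # 3.14159265...
    chopped = 3.141       # truncated to three fraction digits
    rounded = 3.142       # rounded to three fraction digits

    for name, approx in [("chop-off", chopped), ("round-off", rounded)]:
        abs_err = abs(x - approx)
        rel_err = abs_err / abs(x)
        print(f"{name}: absolute = {abs_err:.2e}, relative = {rel_err:.2e}")
    # chop-off:  absolute = 5.93e-04, relative = 1.89e-04
    # round-off: absolute = 4.07e-04, relative = 1.30e-04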

Propagation Of Error

 

An algorithm is stable if a small initial error stays small, and unstable if a small initial error grows large.

 

Solution of Nonlinear Equations (f(x) = 0)

·         Fixed point: a point p such that p = g(p)

·         Fixed point iteration: the sequence of values produced by p_n = g(p_(n-1)) for n = 1, 2, 3, …

·                                 Example:

x^2 - 2x + 1 = 0

x^2 + 1 = 2x  =>  x = (x^2 + 1)/2

x_(k+1) = (x_k^2 + 1)/2

If |g'(x)| ≤ k < 1 for x in [a,b], and g(x) stays in [a,b], then the iteration p_n = g(p_(n-1)) will converge to a unique fixed point p. In this case p is said to be an attractive fixed point.

If |g'(x)| > 1 near the fixed point, then the iteration p_n = g(p_(n-1)) will not converge to it.

Convergence

g(x) = (x^2 + 1)/2

g(0) = 1/2
g(1) = 1
g'(x) = x

For x in [0, 1), |g'(x)| = |x| < 1, so the iteration converges; the fixed point is p = 1 (note that g'(1) = 1, so convergence near the fixed point is slow).
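A minimal Python sketch of the iteration (the helper name fixed_point is our own), showing the slow approach to p = 1:

    def fixed_point(g, p, n):
        # Iterate p = g(p) a fixed number of times, starting from p.
        for _ in range(n):
            p = g(p)
        return p

    g = lambda x: (x**2 + 1) / 2
    print(fixed_point(g, 0.0, 10))     # ~0.861
    print(fixed_point(g, 0.0, 1000))   # ~0.998, creeping toward p = 1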

 

Bisection Method:

Used to find a solution for f(x) = 0

Start with two values a, b (an interval) where a < b and such that f(a) and f(b) have opposite signs, i.e., f(a) * f(b) < 0.

Similar to binary search.

Method: compute the midpoint c = (a + b)/2. If f(c) has the same sign as f(a), set a = c; otherwise set b = c. Repeat with the halved interval.

Example:

sin(x) = ½ (x is in degrees)

f(x) = sin(x) – ½ = 0

We start with a = 0, b = 50

a          b          c          f(c)
0          50         25         -0.077
25         50         37.5       0.1087
25         37.5       31.25      0.018
25         31.25      28.125     -0.02

 

c approaches 30 (and indeed sin(30°) = ½).

At each iteration, the interval is reduced by half. If we want an approximation error < E, we need n iterations such that log2(|b - a| / E) ≤ n.

Example:
f(x) = sin(x) - ½ = 0, E = 0.01: log2(50 / 0.01) ≈ 12.3, so n ≥ 13.
The solution will be at most 0.01 units from the approximation obtained if at least 13 iterations are performed.
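A sketch of the method in Python, applied to the example above (the helper name bisect is our own; math.radians is used because Python's sin expects radians while the example works in degrees):

    import math

    def bisect(f, a, b, tol):
        # Assumes f(a) and f(b) have opposite signs. Halve the bracketing
        # interval until its half-width is below the tolerance.
        while (b - a) / 2 > tol:
            c = (a + b) / 2
            if f(a) * f(c) <= 0:   # sign change in [a, c]
                b = c
            else:                  # sign change in [c, b]
                a = c
        return (a + b) / 2

    f = lambda x: math.sin(math.radians(x)) - 0.5
    print(bisect(f, 0, 50, 0.01))   # ~30.0, within 0.01 of the root 30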

 

False Position (Regula-Falsi)

The slope of the secant line through (a, f(a)) and (b, f(b)):

m = (f(a) - f(b)) / (a - b)        (1)

The same line crosses the x-axis at (c, 0), so also:

m = (f(b) - 0) / (b - c)           (2)

Setting (1) equal to (2):

f(b) / (b - c) = (f(a) - f(b)) / (a - b)

b - c = f(b) * (a - b) / (f(a) - f(b))

c = b - f(b) * (a - b) / (f(a) - f(b))

Example:

sin(x) = ½  => f(x) = sin(x) – ½ = 0

Let a = 0, b = 50

When a = 0, b = 50:

f(a) = -0.5, f(b) = 0.266, c = 32.64, f(c) = 0.039

When a = 0, b = 32.64:

f(a) = -0.5, f(b) = 0.039, c = 30.26, f(c) = 0.004

When a = 0, b = 30.26:

f(a) = -0.5, f(b) = 0.004, c = 30.03, f(c) = 0.0004

c approaches 30, while f(c) approaches zero.

The secant-line approximation through two points may be bad, so convergence can be slow (one endpoint may never move).

Like bisection, the method needs an interval [a, b] where the solution is found.
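A sketch of the same search with the midpoint replaced by the secant-line intercept derived above (the helper name false_position is our own):

    import math

    def false_position(f, a, b, n):
        # Assumes f(a) and f(b) have opposite signs. Each step uses the
        # x-intercept of the line through (a, f(a)) and (b, f(b)).
        for _ in range(n):
            c = b - f(b) * (a - b) / (f(a) - f(b))
            if f(a) * f(c) <= 0:
                b = c
            else:
                a = c
        return c

    f = lambda x: math.sin(math.radians(x)) - 0.5
    print(false_position(f, 0, 50, 5))   # ~30.0; note a = 0 never moves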

 

Newton-Raphson

Use the tangent line at the current approximation to find the next one:

p_n = p_(n-1) - f(p_(n-1)) / f'(p_(n-1))

 

Convergence

We expect f(x) at the approximation to be small: |f(x)| < E for some small tolerance E.

Ideally we would stop when |x - p| < S, but we do not know the true root p, so this test is not possible.

Instead, it is approximated by testing when |p_n - p_(n-1)| < S.

Stop when |p_n - p_(n-1)| < S and |f(p_n)| < E.
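A Python sketch of Newton-Raphson combining both stopping tests (the notes give no code, so the names are our own; the derivative includes the π/180 factor because x is in degrees):

    import math

    def newton(f, df, p, S, E, max_iter=50):
        # Stop when successive approximations are close together AND the
        # residual |f(p)| is small, as described above.
        for _ in range(max_iter):
            p_new = p - f(p) / df(p)   # x-intercept of the tangent line
            if abs(p_new - p) < S and abs(f(p_new)) < E:
                return p_new
            p = p_new
        return p

    f = lambda x: math.sin(math.radians(x)) - 0.5
    df = lambda x: math.cos(math.radians(x)) * math.pi / 180
    print(newton(f, df, 40.0, 1e-8, 1e-8))   # ~30.0 in a handful of steps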

Troublesome Functions

            If the graph of y = f(x) is steep (close to vertical) near the root (p, 0), then root finding is well conditioned: the intersection with the x-axis is well marked, and the solution is easy to find accurately. If the graph of y = f(x) is shallow near (p, 0), then root finding is ill conditioned: the intersection is not well marked, and the computed root may be accurate to only a few significant digits.
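A small experiment (our own construction, not from the lecture) makes the conditioning difference concrete: near a steep root only the root itself makes |f| tiny, while near a shallow root many nearby points do:

    # Both functions have their root at x = 1: f1 is steep there,
    # f2 is shallow (a triple root).
    f1 = lambda x: 10 * (x - 1)
    f2 = lambda x: (x - 1) ** 3

    eps = 1e-9
    xs = [1 + k * 1e-5 for k in range(-1000, 1001)]
    print(sum(1 for x in xs if abs(f1(x)) < eps))   # 1: only the root itself
    print(sum(1 for x in xs if abs(f2(x)) < eps))   # ~199: all look like roots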