6-14-04


Office hours

  •   Usually available after class

  •   Better idea is to email and set an appointment

  •   www.cs.purdue.edu/homes/cs314


    Goal

  •   Learn how to use the computer to solve or approximate: linear and non-linear equations, integration, differentiation, interpolation, and differential equations
  •  Learn some of the limitations of the computer in solving math problems

  • Textbook:
    "Numerical Methods Using Matlab" (3rd Edition) by Mathews and Fink


    Mailing List

  •   Be certain to add yourself as soon as possible
  •   Log in to a CS machine (e.g., Lore)
  •   If you don't have a CS account, send email to grr@cs.purdue.edu
  •   Type "mailer add me to cs314"
  •   Don't post questions to the list; send them to cs314-ta@cs.purdue.edu instead (answers will be posted to the mailing list if they are important to everyone)
  •   There will be a newsgroup, though the professor won't be reading it

    Grade

  •   25% midterm
  •   25% final
  •  50% projects and homework

    Content

  •   Floating/fixed point representation of numbers
  •   Solutions of non-linear equations (f(x) = 0)
  •   Solutions of linear systems (Ax = b), with application to large sparse matrices
  •   Interpolation
  •   Curve fitting
  •   Numerical differentiation and integration
  •   Numerical optimization
  •   Solving differential equations

    Lecture Notes

  •   Volunteer to take notes for 5 days
  •   Format notes in HTML (JPG images)
  •   Send notes to grr@cs.purdue.edu as an attachment to be uploaded
  •   You may receive up to 2% extra credit at the end of the semester
  •   First day to sign up will be Tuesday, 6-15-04

    ------------------------------------------------------------------------------

    Binary Numbers

  •   Computers use binary numbers due to electronic simplicity
  •   Integer representation (each place handling a particular power of 2)
  •   b0*(2^0) + b1*(2^1) + ...
  •   A point separates the integer and fractional parts (a number is written generally as N.F)

    6-15-04

     

    HW1 is on the webpage (due Tuesday, 6-22-04 at 11:59PM)


    ------------------------------------------------------------------------------


    Conversion from decimal to binary

  •   Given N.F in decimal
  •   Convert N, convert F
  •   N = b0*2^0 + b1*2^1 + ...
  •   Convert N by repeated division by 2; each remainder becomes the next binary digit
  •   Example: 57 = 111001
  •   Convert the fraction F by repeated multiplication by 2; each integer part becomes the next binary digit (see the sketch below)
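
    A minimal Python sketch of both conversions (the course uses MATLAB,
    but the procedure is identical; the function names here are just
    illustrative):

        def int_to_binary(n):
            # Repeated division by 2; each remainder is the next bit,
            # least significant first.
            if n == 0:
                return "0"
            bits = []
            while n > 0:
                bits.append(str(n % 2))
                n //= 2
            return "".join(reversed(bits))

        def frac_to_binary(f, places=8):
            # Repeated multiplication by 2; each integer part is the
            # next bit, most significant first.
            bits = []
            for _ in range(places):
                f *= 2
                bits.append(str(int(f)))
                f -= int(f)
            return "".join(bits)

        print(int_to_binary(57))      # 111001, as in the example above
        print(frac_to_binary(0.625))  # 10100000, since 0.625 = 1/2 + 1/8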

    Fixed vs. floating point notation

  •   Fixed point notation

         * Example layout: 1 sign bit, 4 integer bits, 2 fraction bits
         * Advantages
              + Fast operations
              + Frequently used in signal processing
         * Disadvantages
              + Small range of numbers
              + Bad precision
              + Have to be careful about potential overflow
  •   Floating point notation

         * x = ±q * 2^(±n)
         * Can lead to multiple representations for the same value
         * To prevent multiple representations, q and n are normalized so that the representation is unique (e.g., requiring 0.5 ≤ q < 1, except for q = 0); see the sketch below
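
    Python's math.frexp performs exactly this normalization, returning
    (q, n) with x = q * 2^n and 0.5 ≤ |q| < 1 for any nonzero x:

        import math

        for x in [6.0, 0.15625, -40.0]:
            q, n = math.frexp(x)
            print(f"{x} = {q} * 2**{n}")
        # 6.0 = 0.75 * 2**3
        # 0.15625 = 0.625 * 2**-2
        # -40.0 = -0.625 * 2**6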

    6-16-04

    Floats

  •   4 bytes
  •   24 bits mantissa (including sign)
  •   8 bits exponent (including sign)


    Doubles
  •   8 bytes
  •   53 bits mantissa (including sign)
  •   11 bits exponent (including sign)


    Float range
  •   Exponent from 2^-128 to 2^127
  •   Precision is approximately 1.2*10^-7


    Double range
  •   Exponent from 2^-1024 to 2^1023
  •   Precision is approximately 15 digits
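
    These limits can be checked directly; a small Python sketch (struct
    is used here only to force a value through single precision):

        import struct
        import sys

        # Double precision: about 15-16 significant decimal digits.
        print(sys.float_info.epsilon)   # ~2.22e-16, gap between 1.0 and the next double
        print(sys.float_info.dig)       # 15 decimal digits

        # Single precision: a round trip through a 4-byte float keeps only
        # about 7 digits, consistent with the ~1.2*10^-7 precision above.
        x32 = struct.unpack('f', struct.pack('f', 0.1))[0]
        print(x32)                      # 0.10000000149011612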


    Errors in computation
  •   p = quantity
  •   p* = approximation of p
  •   Absolute error = |p - p*|
  •   Relative error = |p - p*|/|p|
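
    For example, approximating p = pi by p* = 3.14 (a quick Python check):

        import math

        p, p_star = math.pi, 3.14
        print(abs(p - p_star))           # absolute error, ~0.00159
        print(abs(p - p_star) / abs(p))  # relative error, ~0.000507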


    Truncation vs. Rounding

    Propagation of Error
  •   p and q are approximated by p* and q*, with errors of e[p] and e[q]
  •   p* = p + e[p]
  •   q* = q + e[q]
  •   p* + q* = p + q + (e[p] + e[q])
  •   p*q* = pq + pe[q] + qe[p] + e[p]e[q]
  •   Error is added during addition, amplified by p and q in multiplication
  •   An algorithm is stable if a small initial error stays small
  •   An algorithm is unstable if a small initial error gets large
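
    A numerical check of these formulas (the values are made up for
    illustration):

        p, q = 100.0, 200.0
        ep, eq = 0.1, -0.05                  # errors e[p], e[q]
        p_star, q_star = p + ep, q + eq

        # Addition: the errors simply add.
        print((p_star + q_star) - (p + q))   # ~0.05 = e[p] + e[q]

        # Multiplication: the errors are amplified by p and q.
        print(p_star * q_star - p * q)       # ~14.995
        print(p * eq + q * ep + ep * eq)     # 14.995, matching the formula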


    Solution of Nonlinear Equations (f(x) = 0)
  •   Iteration is used to solve this kind of equation
  •   Start with a value P[0] and a function P[k] = G(P[k-1])
  •   G is the iteration function, and is obtained from f


  •   Fixed point = a point p such that p = G(p)
  •   Fixed point iteration = the sequence of values produced by p[n] = G(p[n-1]) for n = 1, 2, 3, ...

    Example:
    --------
    x^2 - 2x + 1 = 0
    x^2 + 1 = 2x
    (x^2 + 1)/2 = x
    x[k] = (x[k-1]^2 + 1)/2
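
    A minimal Python sketch of this iteration (function and variable
    names are illustrative):

        def fixed_point(g, p0, steps=10):
            # Iterate p[k] = g(p[k-1]), printing each iterate.
            p = p0
            for k in range(1, steps + 1):
                p = g(p)
                print(k, p)
            return p

        g = lambda x: (x**2 + 1) / 2   # from the example above
        fixed_point(g, 0.5)            # iterates climb slowly toward p = 1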

    Fixed Point Theorem
  •   Consider the equation x = G(x)
  •   If G maps [a, b] into [a, b] and |G'(x)| < 1 for all x in [a, b], then the iteration P[n] = G(P[n-1]) converges to a unique fixed point p in [a, b] (referred to as an "attractive fixed point")

    6-17-04

    If |g'(p)| < 1 => convergence
    If |g'(p)| > 1 => divergence

    Fixed points are the intersections of y = x and y = g(x); the iteration is repeated until the iterates fall within some error bound around the point

    Theta = the angle between the graph of g(x) and the line y = x

    If theta < 45°, the iteration converges

    Oscillating convergence occurs when -1 < g'(p) < 0
    Monotone convergence occurs when 0 < g'(p) < 1
    Monotone divergence occurs when g'(p) > 1
    Oscillating divergence occurs when g'(p) < -1

    Example:
    x^2 - 2x + 1 = 0
    Rewrite f(x) = 0 in the form x = g(x)
    g(x) = (x^2 + 1)/2
    g(0) = 1/2
    g(1) = 1
    g'(x) = x
    |g'(x)| < 1 for x in [0, 1) => convergence (slow here, since g'(1) = 1)

    Rearranging f(x) = 0 into the form x = g(x) will not always produce a convergent iteration

    Bisection Method

  •   Used to find a solution for f(x) = 0
  •   It is similar to binary search
  •   Start with two values a and b such that a < b and f(a), f(b) have opposite signs, so that the interval [a, b] brackets a root (the graph crosses the x-axis between a and b)
  •   Method:
       1.) Determine c = (a + b)/2
       2.) If f(c) * f(b) < 0, the solution is between c and b:
           substitute c for a and iterate
           Else if f(c) * f(b) > 0, the solution is between a and c:
           substitute c for b and iterate
           If f(c) * f(b) = 0, the solution has been found


  • Example:
    sin(x) - 1/2 = 0, with x in degrees
    f(x) = sin(x) - 1/2 = 0
    Start with a = 0, b = 50
    c = 25
    f(c) = -0.077
    f(a) = -0.5
    f(b) = 0.266

    6-18-04

     

    Bisection Method (cont.)
  •   At each step, the interval |b - a| is reduced by half
  •   If we want an approximation within some error e, we need n iterations, where n ≥ log2(|b - a| / e)


  • Example:
    sin(x) - 1/2 = 0, e = .01
    n > 12
    If at least 13 iterations are performed, the solution will be at most .01 units from the approximation obtained
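
    A minimal Python sketch of the bisection method, using the iteration
    count derived above (names are illustrative):

        import math

        def bisection(f, a, b, e=0.01):
            # Requires that f(a) and f(b) have opposite signs.
            assert f(a) * f(b) < 0, "f(a) and f(b) must bracket a root"
            n = math.ceil(math.log2(abs(b - a) / e))  # iterations for error < e
            for _ in range(n):
                c = (a + b) / 2
                if f(c) * f(b) < 0:
                    a = c      # solution between c and b
                else:
                    b = c      # solution between a and c (or f(c) = 0)
            return (a + b) / 2

        f = lambda x: math.sin(math.radians(x)) - 0.5
        print(bisection(f, 0, 50))   # ~30.0 degrees, after 13 halvings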

    False Position Method
  •   Also called Regula Falsi
  •   Converges faster to the solution than bisection
  •   Also needs an interval [a, b] containing the solution
  •   A secant line is drawn between (a, f(a)) and (b, f(b))
  •   c is chosen to be the intersection of this line and the x-axis
  •    If f(a)*f(c) < 0, the solution is between a and c, so substitute c for b
     Else if f(a)*f(c) > 0, substitute c for a
     If f(a)*f(c) = 0, the solution has been found
  •   To compute the value of c:
         1.) m[1] = (f(a) - f(b))/(a - b)
         2.) m[2] = (f(b) - 0)/(b - c)
         3.) m[1] = m[2]
         (f(a) - f(b))/(a - b) = (f(b) - 0)/(b - c)
         c = b - f(b)((a - b)/(f(a) - f(b)))


  • Example:
    sin(x) = 1/2
    f(x) = sin(x) - 1/2 = 0
    a = 0, b = 50, f(a) = -0.5, f(b) = 0.266, c = 32.64, f(c) = 0.039
    a = 0, b = 32.64, f(a) = -0.5, f(b) = 0.039, c = 30.25, f(c) = 0.004
    a = 0, b = 30.25, f(a) = -0.5, f(b) = 0.004, c = 30.02, f(c) = 0.0004
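
    A minimal Python sketch of false position (names are illustrative):

        import math

        def false_position(f, a, b, steps=3):
            # Intersect the secant through (a, f(a)) and (b, f(b)) with the
            # x-axis, then keep the subinterval that still brackets the root.
            c = a
            for _ in range(steps):
                c = b - f(b) * (a - b) / (f(a) - f(b))  # formula derived above
                if f(a) * f(c) < 0:
                    b = c       # solution between a and c
                elif f(a) * f(c) > 0:
                    a = c       # solution between c and b
                else:
                    break       # f(c) = 0: exact solution found
                print(c, f(c))
            return c

        f = lambda x: math.sin(math.radians(x)) - 0.5
        false_position(f, 0, 50)   # c = 32.64, 30.25, 30.02, ... -> 30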

    False Position (cont.)
  •   The function is approximated by the straight line through the points (a, f(a)) and (b, f(b))
  •   Disadvantages
         * Needs an interval [a, b] containing the solution
         * The line through two points may approximate the function poorly


    Newton-Raphson
  •   Uses a single initial guess instead of an interval
  •   Uses the derivative: x[n] = x[n-1] - f(x[n-1])/f'(x[n-1]) (a sketch appears after the convergence tests below)


    Checking convergence
  •   Horizontal convergence = stop iterations when |f(x[n])| < E
  •   Vertical convergence = stop iterations when |x[n] - p| < E, where p is the solution (though this requires knowing the solution ahead of time)
  •   Both vertical and horizontal convergence = stop iterations when |x[n] - x[n-1]| < D and |f(x[n])| < E (used in the sketch below)
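
    A minimal Python sketch of Newton-Raphson with the combined stopping
    test (the tolerances D and E and all names are illustrative):

        import math

        def newton(f, df, x0, D=1e-10, E=1e-10, max_iter=50):
            # Stop when both |x[n] - x[n-1]| < D and |f(x[n])| < E.
            x = x0
            for _ in range(max_iter):
                x_new = x - f(x) / df(x)      # Newton update
                if abs(x_new - x) < D and abs(f(x_new)) < E:
                    return x_new
                x = x_new
            return x

        f = lambda x: math.sin(math.radians(x)) - 0.5
        df = lambda x: math.cos(math.radians(x)) * math.pi / 180  # chain rule (x in degrees)
        print(newton(f, df, 40.0))   # ~30.0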


    Troublesome functions
  •   For some functions the solution is easier to find than for others
  •   Well-conditioned functions
         * It is easier to find the solution
         * The function crosses the x-axis steeply (close to vertical)
         * The intersection with the x-axis is sharply defined
  •   Ill-conditioned functions
         * The function stays close to the x-axis near the intersection
         * The intersection is not sharply defined