6-21-04


Newton-Raphson
  •   Used to find a solution of f(x) = 0
  •   Makes use of the derivative f'(x)
  •   Uses a starting point x[0] together with f(x[0]) and f'(x[0]) to approximate f(x) by its tangent line
  •   Method
         1.) The tangent line through (x[0], f(x[0])) crosses the x-axis at x[1], so its slope is m = (f(x[0]) - 0)/(x[0] - x[1])
         2.) That slope is also m = f'(x[0])
         3.) Equating the two and solving for x[1] gives x[1] = x[0] - f(x[0])/f'(x[0])
  •   Can arrive at the same iteration equation via the Taylor polynomial
  •   f(x) = f(x[0]) + f'(x[0])(x - x[0]) + e, where e is the error of using a line instead of the complete polynomial; setting f(x) = 0 and dropping e gives the iteration above
  •   If 3 terms of the expansion were used instead of 2, the resulting numerical method would converge faster but would require the second derivative (and would fit a quadratic instead of a line)


  • Example of Newton-Raphson
    sin(x) - 1/2 = 0
    f'(x) = cos(x)
    x[0] = 0
    x[1] = x[0] - f(x[0])/f'(x[0])
    i    x[i]      f(x[i])     f'(x[i])    x[i+1]
    0    0         -.5         1           .5
    1    .5        -.0206      .8776       .5234
    2    .5234     -.000172    .8661       .5236
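The iteration above can be sketched in Python; the helper name, tolerance, and iteration cap below are illustrative choices, not part of the notes.

```python
import math

def newton_raphson(f, fprime, x0, tol=1e-6, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / fprime(x)
        if abs(f(x)) < tol:
            break
    return x

# The example from the notes: sin(x) - 1/2 = 0, starting at x[0] = 0
root = newton_raphson(lambda x: math.sin(x) - 0.5, math.cos, 0.0)
```

The result agrees with the table: it converges to arcsin(1/2) ≈ .5236 in a few steps.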

    Problems with Newton-Raphson
  •   Need derivative f'(x)
  •   Can obtain the derivative numerically:
    f'(x) ≈ (f(x + E) - f(x))/E for some very small E (a forward-difference approximation)
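A sketch of this variant, replacing f'(x) with the forward-difference quotient; the step size E and tolerance are illustrative assumptions.

```python
import math

def newton_numeric(f, x0, eps=1e-7, tol=1e-6, max_iter=50):
    """Newton-Raphson with f'(x) estimated by a forward difference."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        deriv = (f(x + eps) - fx) / eps   # no closed-form derivative needed
        x = x - fx / deriv
    return x

root = newton_numeric(lambda x: math.sin(x) - 0.5, 0.0)
```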


  • When Newton-Raphson may not give a good approximation/converge
  •   If the function is not well approximated by a line near the current iterate
  •   If the starting point is not close enough to the root for the tangent-line approximation to be decent


6-22-04


    Order of Convergence
  •   Some methods are faster than others at converging
  •   p is the zero (solution), E[n] = p - p[n] is the error for iteration n
         * If there exist constants A != 0 and R > 0 such that lim[n->∞] |E[n+1]|/|E[n]|^R = A, then R is the order of convergence
         * If R = 1, the convergence is linear
         * If R = 2, the convergence is quadratic
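The definition can be checked numerically: the snippet below (an illustrative sketch, not from the notes) runs a few Newton-Raphson steps on sin(x) - 1/2 and estimates R from the errors, using the fact that if E[n+1] ≈ A·|E[n]|^R then R ≈ log(E[n+1]/E[n]) / log(E[n]/E[n-1]).

```python
import math

p = math.asin(0.5)                 # exact root of sin(x) - 1/2
f = lambda x: math.sin(x) - 0.5

x = 0.0
errs = []
for _ in range(3):                 # three Newton-Raphson steps
    x = x - f(x) / math.cos(x)
    errs.append(abs(p - x))

# Estimate the order of convergence from consecutive error ratios
R = math.log(errs[2] / errs[1]) / math.log(errs[1] / errs[0])
```

The estimate comes out close to 2, matching Newton-Raphson's quadratic convergence.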


  • Secant Method
  •   The Newton-Raphson method requires two function evaluations at each iteration (f(x) and f'(x))
  •   This method requires only one evaluation of f(x)
  •   Order of convergence at a simple root is ~1.618 (the golden ratio)
  •   Uses two initial points p[0], p[1] close to the root


6-23-04


    Secant method (started last time)

  •   Approximates the Newton-Raphson method as x[1] -> x[0]
  •   x[2] = x[1] - (f(x[1]))((x[1] - x[0])/(f(x[1]) - f(x[0])))
  •   As x[1] -> x[0], the difference quotient (f(x[1]) - f(x[0]))/(x[1] - x[0]) tends to f'(x[0]), so the update becomes x[0] - f(x[0])/f'(x[0]), which is the N-R step

  • Example:
    f(x) = sin(x) - 1/2 = 0
    x[0] = 0, x[1] = 1
    x[n+1] = x[n] - f(x[n])((x[n] - x[n-1])/(f(x[n]) - f(x[n-1])))
    n    x[n]      f(x[n])    x[n+1]
    0    0         -.5        -
    1    1         .3415      .594
    2    .594      .0599      .50798
    3    .50798    -.0135     .5238
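The iteration above as a Python sketch; the helper name and stopping rule (quit when successive iterates are within tol) are illustrative assumptions.

```python
import math

def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """x[n+1] = x[n] - f(x[n]) * (x[n] - x[n-1]) / (f(x[n]) - f(x[n-1]))"""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2           # shift the window of two points forward
    return x1

# The example from the notes: sin(x) - 1/2 = 0 with x[0] = 0, x[1] = 1
root = secant(lambda x: math.sin(x) - 0.5, 0.0, 1.0)
```

The first iterates match the table (.594, .50798, .5238, ...), converging to ≈ .5236.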


    Summary

  •   Bisection
  •   False Position (Regula Falsi)
  •   Newton-Raphson
  •   Secant Method

    ------------------------------------------------------------------------------


    Solving linear systems

  •   Ax = B, where A is an nxn matrix and both x and B are nx1 column vectors
  •   Each linear equation represents a hyperplane in n-dimensional space
  •   The solution represents the intersection of the planes
  •   Not all systems of linear equations have solutions
  •   Some systems have infinitely many solutions

  • Upper triangular systems of linear equations

  •   Elements are indexed row-then-column: a[i][j] is the entry in row i, column j
  •   Upper triangular if a[i][j] = 0 when i > j
  •   Easy to solve: obtain x[n] from the last equation, then substitute into progressively earlier equations, a process called "back substitution"
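Back substitution as a Python sketch (the function name and the small 2x2 example are illustrative, not from the notes):

```python
def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, starting from the last equation."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Subtract the already-known terms, then divide by the diagonal
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# 2x + y = 5 and 3y = 6: the last equation gives y = 2, then x = 1.5
x = back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0])
```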

  • Gaussian Elimination

  •   Method for solving systems of linear equations
  •   Transforms an arbitrary system into upper triangular form, then solves it by using back substitution
  •   The transformation uses the following (without affecting the solution):
        * Interchanging equations
        * Multiplying an equation by a non-zero constant
        * Adding a multiple of one equation to another


6-24-04

    Example:
    1x + 3y + 4z = 19
    8x + 9y + 3z = 35
    x + y + z = 6

    [1  3  4  19,
     8  9  3  35,
     1  1  1  6 ]

    First row is the pivot row, first element in that row the pivotal element

    Subtract 8 times row 1 from row 2:

    [1  3  4  19,
     0  -15  -29  -117,
     1  1  1  6 ]

    Multiply row 2 by -1:

    [1  3  4  19,
     0  15  29  117,
     1  1  1  6 ]

    Subtract row 1 from row 3:

    [1  3  4  19,
     0  15  29  117,
     0  -2  -3  -13 ]

    Add 2/15 of row 2 to row 3 to reach upper triangular form:

    [1  3  4  19,
     0  15  29  117,
     0  0  13/15  39/15 ]

    Back substitution: z = (39/15)/(13/15) = 3, then 15y + 29(3) = 117 gives y = 2, then x + 3(2) + 4(3) = 19 gives x = 1

    ----------------------------------------------------------------------------------------------------

    Implementing Gaussian Elimination using Matlab

    function x = gauss(A, b)
    % Input  - A is an nxn nonsingular matrix
    %        - b is an nx1 matrix
    % Output - x is an nx1 matrix that is the solution to Ax = b
    [N, N] = size(A);
    x = zeros(N, 1);
    Aug = [A b];
    for p = 1 : N
       % Normalize the pivot row so the pivot becomes 1
       piv = Aug(p, p);
       for k = p : N+1
          Aug(p, k) = Aug(p, k)/piv;
       end
       % Eliminate the entries below the pivot
       for k = p+1 : N
          m = Aug(k, p);
          for i = p : N+1
             Aug(k, i) = Aug(k, i) - m*Aug(p, i);
          end
       end
    end
    % Back substitution (the diagonal of Aug is now all ones)
    for k = N : -1 : 1
       s = 0;
       for i = k+1 : N
          s = s + Aug(k, i)*x(i, 1);
       end
       x(k, 1) = Aug(k, N+1) - s;
    end

    6-25-04


    If a pivot is 0, interchange rows to prevent division by zero; to reduce rounding error, choose as pivot the row whose entry in the pivot column has the largest absolute value (partial pivoting)
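Elimination with this pivoting rule can be sketched in Python; the function name is an illustrative assumption, and the example reuses the 3x3 system worked above.

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented matrix [A | b]
    for p in range(n):
        # Pick the row with the largest absolute value in the pivot column
        q = max(range(p, n), key=lambda r: abs(M[r][p]))
        M[p], M[q] = M[q], M[p]                     # row interchange
        for k in range(p + 1, n):
            m = M[k][p] / M[p][p]
            for j in range(p, n + 1):
                M[k][j] -= m * M[p][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                  # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The worked example: solution is x = 1, y = 2, z = 3
x = gauss_solve([[1.0, 3.0, 4.0], [8.0, 9.0, 3.0], [1.0, 1.0, 1.0]],
                [19.0, 35.0, 6.0])
```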

    Triangular Factorization
  •   A matrix has a triangular factorization if it can be expressed as the product of a lower triangular matrix and an upper triangular matrix
  •   Useful when you want to solve multiple systems of linear equations that have the same A (remember, they're of the form Ax = B)
  •   Replace A with LU (lower and upper triangular matrices respectively): Ax = B becomes LUx = B; let Y = Ux, solve LY = B for Y by forward substitution, then solve Ux = Y by back substitution
  •   To split A into L and U, perform Gaussian Elimination on A to obtain U, recording in L (which starts as the identity matrix) the multiplier used at each elimination step
  •   When only a single system needs to be solved, Gaussian Elimination is better (as it requires less work)
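The factor-then-solve procedure can be sketched in Python. This version omits pivoting, so it assumes every pivot is non-zero; the function names are illustrative, and the example reuses the same 3x3 system.

```python
def lu_decompose(A):
    """Doolittle LU without pivoting: L has a unit diagonal and stores the
    elimination multipliers; U is the upper triangular result."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for p in range(n):
        for k in range(p + 1, n):
            m = U[k][p] / U[p][p]
            L[k][p] = m                    # record the multiplier in L
            for j in range(p, n):
                U[k][j] -= m * U[p][j]
    return L, U

def lu_solve(L, U, b):
    """Solve LY = b (forward substitution), then Ux = Y (back substitution)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

L, U = lu_decompose([[1.0, 3.0, 4.0], [8.0, 9.0, 3.0], [1.0, 1.0, 1.0]])
x = lu_solve(L, U, [19.0, 35.0, 6.0])
```

Once L and U are computed, any new right-hand side B costs only the two substitution passes, which is why factorization pays off when several systems share the same A.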