
Computational methods in optimization

David Gleich

Purdue University

Spring 2012

Course number CS 59000-OPT

Tuesday and Thursday, 3:00-4:15pm

Lawson B134


Homework 3

Please answer the following questions in complete sentences in a typed manuscript and submit the solution to me in class on February 7th, 2012.

Problem 1: Checking Hessians and matrix calculus

In class, we mentioned that it was vital to double-check gradient calculations, and we then used the gradientcheck.m function to do this. Of course, if we want to utilize Newton’s method to solve a problem, then we need to check the Hessian matrix as well.

  1. Implement hessiancheck.m to verify that the Hessian is correct. You may reuse the function gradientcheck.m. (One possible approach is sketched after this list.)

  2. Determine and verify the Hessian of where is not necessarily symmetric or positive definite. Show the result of hessiancheck for a set of test matrices and test points. Be adversarial in your choice. Analyze any erratic behavior you find.

  3. Determine and verify the Hessian of . Show the result of hessiancheck for a set of test matrices and test points. Be adversarial in your choice. Analyze any erratic behavior you find.

  4. Determine and verify the Hessian of . Show the result of hessiancheck for a set of test matrices and test points. Be adversarial in your choice. Analyze any erratic behavior you find.
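For part 1, here is a minimal sketch of one way hessiancheck.m could work. The handle-based interface below (an analytic gradient grad, an analytic Hessian hess, and a test point x0) is an illustrative choice and need not match the interface of gradientcheck.m; the idea is to build a forward finite-difference approximation of the Hessian one column at a time from the analytic gradient and report the largest entrywise discrepancy.

    % hessiancheck.m -- illustrative sketch only; the interface is an assumption.
    % Compare an analytic Hessian hess(x0) against a forward finite-difference
    % approximation built column-by-column from the analytic gradient grad(x).
    function [maxerr, Hfd] = hessiancheck(grad, hess, x0)
    n = numel(x0);
    h = sqrt(eps);                 % step length: square root of machine precision
    g0 = grad(x0);
    Hfd = zeros(n);
    for i = 1:n
        e = zeros(n, 1); e(i) = 1;
        Hfd(:, i) = (grad(x0 + h*e) - g0) / h;   % i-th column of the Hessian
    end
    Hfd = (Hfd + Hfd') / 2;        % symmetrize the finite-difference estimate
    maxerr = max(max(abs(hess(x0) - Hfd)));
    end

For example, with grad = @(x) 4*x.^3 and hess = @(x) diag(12*x.^2) (the derivatives of sum(x.^4), used purely as a test case), hessiancheck(grad, hess, randn(5,1)) should report an error of roughly 1e-7.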

Problem 2: Accuracy of finite differences

In class, your professor mentioned that using a step size equal to the square root of machine precision (sqrt(eps) in Matlab, about 1.5e-8 in double precision) was a reasonable compromise when computing finite differences. The tension is between the error in the function evaluation and the error in the finite difference approximation.

  1. For the function , show (i.e., use matlab to compute) how the error in a forward finite difference approximation of the gradient varies when , as the step length changes from to . What do you observe? (See the sketch after these questions for one way to set this up.)

  2. Does this change if you switch to a centered finite difference formula?
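Here is a sketch of how such an experiment can be set up in Matlab. The function, evaluation point, and range of step lengths below (exp at x = 1, steps from 1e-1 down to 1e-16) are stand-in choices for illustration only; substitute the ones specified in the problem.

    % Sketch: error of a forward-difference approximation to f'(x) as the
    % step length varies.  The function and point are illustrative stand-ins.
    f = @(x) exp(x);
    x = 1;
    fprime = exp(1);                    % exact derivative for comparison
    hs = 10.^(-1:-1:-16);               % step lengths 1e-1 down to 1e-16
    err = zeros(size(hs));
    for k = 1:numel(hs)
        h = hs(k);
        err(k) = abs((f(x + h) - f(x))/h - fprime);
    end
    loglog(hs, err, 'o-');
    xlabel('step length'); ylabel('absolute error in f''(x)');

The truncation error shrinks in proportion to the step length while the rounding error grows roughly like eps divided by the step length, which is why the total error is typically smallest near sqrt(eps).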

Problem 3: Steepest descent

(Nocedal and Wright, Exercise 3.6) Let’s conclude with a quick problem to show that steepest descent can converge very rapidly! Consider the steepest descent method with exact line search applied to the convex quadratic function f(x) = (1/2) x^T Q x - b^T x, where Q is symmetric and positive definite. Suppose that we know the starting point x0 is such that x0 - x* is parallel to an eigenvector of Q, where x* is the minimizer. Show that the method will converge in a single iteration.
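Before writing out the argument, it can be reassuring to check the claim numerically. The sketch below is only a sanity check, not a substitute for the proof, and every specific choice in it (the size, Q, b, and which eigenvector is used) is arbitrary: it takes a single exact-line-search steepest descent step from a starting point chosen so that x0 - x* points along an eigenvector of Q.

    % Numerical sanity check (not a proof).  All specific choices are arbitrary.
    n = 5;
    A = randn(n);
    Q = A'*A + n*eye(n);                % a symmetric positive definite Q
    b = randn(n, 1);
    xstar = Q \ b;                      % minimizer of f(x) = 0.5*x'*Q*x - b'*x
    [V, ~] = eig(Q);
    x0 = xstar + 3*V(:, 2);             % x0 - x* is parallel to an eigenvector of Q
    g = Q*x0 - b;                       % gradient at x0
    alpha = (g'*g) / (g'*Q*g);          % exact line search step for a quadratic
    x1 = x0 - alpha*g;
    norm(x1 - xstar)                    % should be on the order of machine precision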