Overview

Descent methods are a way to solve unconstrained optimization problems, and they are the topic of this post. In this tutorial we will talk about some properties of unconstrained optimization problems, solve an example optimization problem using gradient descent with backtracking line search, and solve the same problem with Newton’s method. This post is based on chapter 9 of Stephen Boyd and Lieven Vandenberghe’s book Convex Optimization. I have implemented each of the algorithms and methods discussed here in Python, and the code can be found here.

Convexity and Unconstrained Minimization

We are interested in solving the problem \begin{align} \min_x f(x) \end{align} where $f: \mathbb{R}^n \to \mathbb{R}$ is convex and twice continuously differentiable. If we remember from calculus, whenever we want to maximize or minimize a function we find a point $x^*$ such that: \begin{align} \nabla f(x^*) = 0\end{align} However, this is not always easy to do analytically. So we search for a method which can approximate this optimal solution accurately and quickly. This leads us to a class of algorithms known as descent methods.
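
As a simple illustration (an example of my own, not from the book), take $f(x) = \|x - a\|_2^2$ for some fixed vector $a$. Setting the gradient to zero solves the problem immediately: \begin{align} \nabla f(x) = 2(x - a) = 0 \quad \Rightarrow \quad x^* = a \end{align} For most functions of interest no such closed-form solution exists, which is what motivates the iterative methods below.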

Descent Methods

The idea behind descent methods is to produce a minimizing sequence \begin{align}x^{(k+1)} & = x^{(k)} + \eta^{(k)}\Delta x^{(k)}\end{align} This is basically saying that at each time step we want to move a small step in the direction of $\Delta x^{(k)}$. Here $k$ denotes the iteration number and $\eta^{(k)} > 0$ is the step size. We are interested in descent methods, a class of methods such that whenever $x^{(k)}$ is not optimal we have: \begin{align}f(x^{(k+1)}) < f(x^{(k)})\end{align} Most descent algorithms come in this flavor, and the differences arise in how particular algorithms address three questions: what direction should we move in? How large of a step should we take? When do we stop moving?
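
To make this structure concrete, here is a minimal Python sketch of the generic descent loop. The callables `direction` and `step_size` stand in for whichever choices a particular method makes; these names, the tolerance, and the iteration cap are illustrative and not taken from the linked implementation.

```python
import numpy as np

def descent(grad, x0, direction, step_size, tol=1e-6, max_iter=10_000):
    """Generic descent loop: x^(k+1) = x^(k) + eta^(k) * dx^(k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        if np.linalg.norm(grad(x)) <= tol:  # stop when the gradient is (nearly) zero
            break
        dx = direction(x)                   # which direction to move in
        eta = step_size(x, dx)              # how large a step to take
        x = x + eta * dx                    # the descent update
    return x, k
```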

Stepsize

There are a number of ways to determine the step size, and there is no perfect answer. The simplest thing to do is to choose some small constant and set the step size equal to it; in machine learning this is known as the learning rate and is most often determined using cross-validation. Here we will discuss two slightly better methods for choosing how large of a step to take. The first is known as exact line search. This method chooses $\eta$ so as to minimize the original function along the ray $\{x + s \Delta x : s \geq 0\}$. It should be noted that this method requires solving an additional optimization problem: \begin{align} \eta = \text{arg}\min_{s \geq 0} f(x + s \Delta x) \end{align} What this implies is that this method should only be used when the cost of computing the search direction is very large. The next commonly used method is known as backtracking line search. This method starts with a step size of 1 and gradually reduces it by a factor of $\beta \in (0, 1)$ until the sufficient-decrease condition $f(x + \eta \Delta x) \leq f(x) + \alpha \eta \nabla f(x)^T \Delta x$ holds, where $\alpha \in (0, 0.5)$.
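
As a concrete sketch, here is backtracking line search in Python. The parameters `alpha` and `beta` follow the convention in Boyd and Vandenberghe; the default values below are illustrative, not the ones used in my experiments.

```python
import numpy as np

def backtracking_line_search(f, grad, x, dx, alpha=0.3, beta=0.8):
    """Shrink the step size by a factor of beta until the sufficient-decrease
    condition f(x + eta*dx) <= f(x) + alpha*eta*grad(x)^T dx holds."""
    eta = 1.0
    fx = f(x)
    slope = grad(x) @ dx   # directional derivative along dx (negative for a descent direction)
    while f(x + eta * dx) > fx + alpha * eta * slope:
        eta *= beta
    return eta
```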

Where to Step

Now that we have some methods for deciding how big of a step to take, we need to figure out in which direction to take that step. The simplest thing to do is to take a step in the direction of steepest descent; in other words, the direction should be that of the negative gradient. This is exactly what vanilla gradient descent does! The algorithm proceeds as follows (a minimal Python sketch is given after the list):

  1. Set $\Delta x := -\nabla f(x)$
  2. Do some form of line search to choose a step size $\eta$
  3. Update our optimal point: $x := x + \eta \Delta x$
  4. Repeat until we are sufficiently close to optimal, e.g. until $\|\nabla f(x)\|_2 \leq \epsilon$
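
The sketch below reuses the `backtracking_line_search` helper from the previous section; the tolerance and iteration cap are illustrative defaults rather than the values used in my experiments.

```python
import numpy as np

def gradient_descent(f, grad, x0, tol=1e-6, max_iter=100_000):
    """Vanilla gradient descent with backtracking line search."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]                      # iterates, useful for plotting
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:       # sufficiently close to optimal
            break
        dx = -g                            # steepest-descent direction
        eta = backtracking_line_search(f, grad, x, dx)
        x = x + eta * dx                   # update the iterate
        path.append(x.copy())
    return x, np.array(path)
```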

To see what this looks like, let’s examine gradient descent with backtracking line search on a simple convex quadratic. You can visualize this function as the 3D analogue of a parabola (a paraboloid).

Each red dot represents a different iterate $x^{(k)}$, and as you can see they pretty quickly zig-zag to the optimal point of this function. In this particular experiment it took 60 iterations from the chosen starting point. This seems good, and it is, but this is an extremely well behaved function. What happens when the function is a little bit less well behaved? To demonstrate some of the shortcomings of this algorithm, we look at its performance on a particular instance of the Rosenbrock function: \begin{align} f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2 \end{align}
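
For reference, this Rosenbrock instance, its gradient, and its Hessian (which Newton’s method will need later) are straightforward to write down:

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def rosenbrock_hess(x):
    return np.array([
        [1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])
```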

The optimal point of this function is at $x^* = (1, 1)$. Here we start at a point which is very close to optimal, but it takes us 7230 iterations to reach the optimum! This is because the Rosenbrock function has a very steep descent into a long trough which leads to the optimum. As you can see, when the gradient (and hence the step $\Delta x$) is very small it can lead to long convergence times. In these instances pure gradient descent does not perform very well. (You can tell whether things like this might happen by looking at the condition number of your function; I will do a post on convergence and condition numbers at a later date.)

Newton’s Method to the Rescue!

One way to solve this problem is to use additional information about the function. That is exactly the idea behind Newton’s method. Here we use second-order information about the function to improve convergence. Namely, we add information about the local curvature of the function in the form of the Hessian $\nabla^2 f(x)$. The algorithm proceeds as follows (a minimal Python sketch is given after the list):

  1. Set the Newton step $\Delta x_{nt} := -\nabla^2 f(x)^{-1} \nabla f(x)$
  2. Find the Newton decrement $\lambda^2 := \nabla f(x)^T \nabla^2 f(x)^{-1} \nabla f(x)$
  3. If $\lambda^2 / 2 \leq \epsilon$, return $x$
  4. Do some form of line search to choose a step size $\eta$
  5. Update our optimal point: $x := x + \eta \Delta x_{nt}$
  6. Repeat until (3) returns $x$
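
The sketch below mirrors these steps in Python, again reusing the backtracking line search from earlier; the tolerance is an illustrative default.

```python
import numpy as np

def newtons_method(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Damped Newton's method with backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        H = hess(x)
        dx_nt = -np.linalg.solve(H, g)   # Newton step: -hess(x)^{-1} grad(x)
        lambda_sq = -g @ dx_nt           # Newton decrement squared
        if lambda_sq / 2.0 <= tol:       # stopping criterion from step (3)
            break
        eta = backtracking_line_search(f, grad, x, dx_nt)
        x = x + eta * dx_nt              # update the iterate
    return x, k
```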

Running Newton’s method on the Rosenbrock function is shown below:

Now it only takes 51 steps to reach the optimal point. With this additional information we achieve roughly two orders of magnitude fewer steps! However, it is important to note that this method requires us to compute the Hessian of the function in question, and in many real-world applications this is either too expensive or not possible. If that is the case for your particular application, you can try a quasi-Newton method, where the Hessian is not computed directly but estimated.
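
As an illustration (not part of the linked implementation), SciPy’s `scipy.optimize.minimize` exposes the BFGS quasi-Newton method, which builds up an approximation to the Hessian from successive gradient evaluations. Here it is applied to the Rosenbrock helpers defined above, from an arbitrary starting point:

```python
import numpy as np
from scipy.optimize import minimize

# BFGS approximates the Hessian from gradients instead of computing it directly.
result = minimize(rosenbrock, x0=np.array([-1.5, 2.0]),
                  jac=rosenbrock_grad, method="BFGS")
print(result.x, result.nit)  # estimated optimum and number of iterations
```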