Interactive Educational Modules in
Scientific Computing

Conjugate Gradient Optimization Method

This module demonstrates the conjugate gradient method for minimizing a nonlinear function of two variables. Starting from the negative gradient direction at a given initial point, the method performs a one-dimensional minimization along each of a sequence of conjugate directions until it converges to a minimum of the objective function.
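
In outline, the iteration can be written as follows (a sketch using the Fletcher-Reeves choice of beta, one common variant; see Algorithm 6.6 in the reference below for the precise formulation):

$$
\begin{aligned}
s_0 &= -\nabla f(x_0),\\
\alpha_k &= \arg\min_{\alpha}\, f(x_k + \alpha\, s_k),\\
x_{k+1} &= x_k + \alpha_k\, s_k,\\
\beta_{k+1} &= \frac{\nabla f(x_{k+1})^{T}\, \nabla f(x_{k+1})}{\nabla f(x_k)^{T}\, \nabla f(x_k)},\\
s_{k+1} &= -\nabla f(x_{k+1}) + \beta_{k+1}\, s_k.
\end{aligned}
$$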

The user selects a problem either by choosing a preset example or by typing in a desired objective function f(x, y). Contours of the objective function are drawn on the plot. An initial guess (x, y) can be selected either by typing it in or by clicking on the plot.

The steps of the conjugate gradient method are then carried out sequentially by repeatedly clicking on NEXT or on the currently highlighted step. The current point (x, y) is indicated by a bullet on the plot, and all values are also shown numerically in the table below. At each iteration, a search direction is computed, the minimum of f along that direction is taken as the next approximate solution, and the process is repeated, as in the code sketch below. If the starting guess is close enough to the true minimum, the conjugate gradient method converges to it, typically at a superlinear rate.
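
For readers who want to experiment outside the module, here is a minimal sketch of the same iteration in Python (not the module's own code): it assumes a Fletcher-Reeves direction update and delegates the one-dimensional minimization to scipy.optimize.minimize_scalar. The function and gradient shown are an illustrative quadratic, and all names (conjugate_gradient, f, grad) are hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=100):
        """Nonlinear conjugate gradient (Fletcher-Reeves) with an exact line search."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        s = -g                                   # initial direction: negative gradient
        for _ in range(max_iter):
            # One-dimensional minimization of f along the current direction s.
            alpha = minimize_scalar(lambda a: f(x + a * s)).x
            x = x + alpha * s                    # next approximate solution
            g_new = grad(x)
            if np.linalg.norm(g_new) < tol:      # stop when the gradient is (nearly) zero
                break
            beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves update
            s = -g_new + beta * s                # new conjugate direction
            g = g_new
        return x

    # Illustrative quadratic objective (any smooth f with a gradient would do):
    f = lambda x: 0.5 * x[0]**2 + 2.5 * x[1]**2
    grad = lambda x: np.array([x[0], 5.0 * x[1]])
    print(conjugate_gradient(f, grad, [5.0, 1.0]))   # converges to [0, 0]

For a quadratic objective in n dimensions, exact line searches make the directions exactly conjugate, so the method terminates in at most n iterations (two, in this example); for a general nonlinear f it is applied iteratively, as in the module.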

Reference: Michael T. Heath, Scientific Computing: An Introductory Survey, 2nd edition, McGraw-Hill, New York, 2002. See Section 6.5.6, especially Algorithm 6.6 and Example 6.14.

Developers: Jeffrey Naisbitt and Michael Heath