Scientific Computing

This module demonstrates the conjugate gradient method for minimizing a nonlinear function in two dimensions. Beginning with the negative gradient direction at a given starting point, a one-dimensional minimization is performed along each of a sequence of conjugate directions until convergence to the overall minimum of the objective function.

The user selects a problem either by choosing a preset example or
typing in a desired objective function *f*(*x*, *y*) and a starting
point (*x*, *y*). At each iteration, the minimum of *f* along the
current conjugate direction is taken as the next approximate
solution, and the process is then repeated. If the starting guess is
close enough to the true minimum, then the conjugate gradient method
converges to it, typically with a superlinear convergence rate.

**Reference:** Michael T. Heath, *Scientific Computing,
An Introductory Survey*, 2nd edition, McGraw-Hill, New York,
2002. See Section 6.5.6, especially Algorithm 6.6 and Example 6.14.

**Developers:** Jeffrey Naisbitt and Michael Heath