# Gauss-Newton Method

This module demonstrates the Gauss-Newton method for nonlinear least squares. Given an approximate solution, the residual function is linearized about the current point using its Jacobian matrix, yielding a linear least squares problem whose solution gives the step to the next approximate solution. This process is repeated until convergence.

The user selects a problem either by choosing a preset example or by typing in a desired function f of parameters x, y. Contours of the least squares residual are drawn on the plot. An initial guess (x, y) can be selected either by typing it in or by clicking on the plot. The steps of the Gauss-Newton method are then carried out sequentially by repeatedly clicking on NEXT or on the currently highlighted step. The current point (x, y) is indicated by a bullet on the plot, and all values are also shown numerically in the table below. At each iteration, the Gauss-Newton step s is computed by solving the linear least squares problem J s ≅ −r, where r is the vector of residuals and J is its Jacobian matrix. The approximate solution is updated by adding the computed step, and the process is then repeated. If the starting guess is close enough to the true minimum, the Gauss-Newton method usually converges to it, typically with a linear convergence rate.
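The iteration described above can be sketched in a few lines of NumPy. The exponential-fitting problem at the bottom is an illustrative choice for testing, not necessarily one of the module's preset examples, and the function and tolerance names are this sketch's own:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton iteration for nonlinear least squares.

    At each step, solve the linear least squares problem J s ~= -r
    for the step s, then update the approximate solution x <- x + s.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)                 # current residual vector
        J = jacobian(x)                 # Jacobian of the residual at x
        s, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol * (1 + np.linalg.norm(x)):
            break
    return x

# Illustrative zero-residual problem: fit model a * exp(b * t) to data
# generated with a = 2, b = -0.5, starting from the guess (1, 0).
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(-0.5 * t)

res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])

p = gauss_newton(res, jac, [1.0, 0.0])   # converges to (2.0, -0.5)
```

Solving J s ≅ −r via `np.linalg.lstsq` avoids explicitly forming the normal equations JᵀJ s = −Jᵀr, which would square the condition number of J.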

Reference: Michael T. Heath, *Scientific Computing: An Introductory Survey*, 2nd edition, McGraw-Hill, New York, 2002. See Section 6.6.1, especially Example 6.15.

Developers: Jeffrey Naisbitt and Michael Heath