# Power Iteration and Inverse Iteration

This module demonstrates power iteration and inverse iteration for computing an eigenvector corresponding to the largest or smallest eigenvalue, respectively, of a matrix. An arbitrary nonzero starting vector is multiplied repeatedly by the matrix (or by its inverse). To avoid overflow or underflow, the resulting new vector is rescaled at each iteration to have norm 1.

The user first chooses a 2 × 2 symmetric matrix, either by typing in its entries or by selecting a random or preset example. Normalized eigenvectors of the matrix are shown by arrows on the plot, with a dark arrow corresponding to the larger eigenvalue (in magnitude) and a light arrow corresponding to the smaller eigenvalue. The user then selects a starting vector by clicking on the plot, and selects either power iteration or inverse iteration.

The successive steps of the chosen algorithm are then carried out sequentially by repeatedly clicking on NEXT or on the currently highlighted step. If power iteration is selected, then the starting vector x is multiplied by the matrix A and the result is normalized to obtain the next approximate eigenvector. If inverse iteration is selected, then the linear system A y = x is solved for y and the result is normalized to obtain the next approximate eigenvector. In either case, the operation can be repeated to obtain successively better eigenvector approximations.

The effect of each iteration is to tilt the starting vector closer to the eigenvector corresponding to the largest (for power iteration) or smallest (for inverse iteration) eigenvalue in modulus. The corresponding approximate eigenvalue is also displayed, as well as both of the “exact” eigenvalues.
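The two iterations described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the module's own code: the example matrix, starting vector, iteration count, and use of the Rayleigh quotient for the eigenvalue estimate are all choices made here for demonstration.

```python
import numpy as np

def power_iteration(A, x, steps=20):
    """Repeatedly multiply by A, rescaling to norm 1 each step.
    Converges to the eigenvector of the largest-magnitude eigenvalue."""
    for _ in range(steps):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x, x @ A @ x  # Rayleigh quotient as eigenvalue estimate

def inverse_iteration(A, x, steps=20):
    """Solve A y = x each step instead of multiplying by A.
    Converges to the eigenvector of the smallest-magnitude eigenvalue."""
    for _ in range(steps):
        y = np.linalg.solve(A, x)
        x = y / np.linalg.norm(y)
    return x, x @ A @ x

# Example 2 x 2 symmetric matrix with eigenvalues 4 and 2
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
x0 = np.array([1.0, 0.0])  # arbitrary nonzero starting vector

v_max, lam_max = power_iteration(A, x0)
v_min, lam_min = inverse_iteration(A, x0)
print(lam_max)  # approaches 4, the larger eigenvalue
print(lam_min)  # approaches 2, the smaller eigenvalue
```

For this matrix the eigenvectors are (1, 1)/√2 and (1, −1)/√2; each call tilts the starting vector toward one of them, mirroring the arrows in the plot.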

Reference: Michael T. Heath, Scientific Computing, An Introductory Survey, 2nd edition, McGraw-Hill, New York, 2002. See Sections 4.5.1 and 4.5.2, especially Figure 4.3 and Examples 4.10-4.12.

Developers: Nicholas Exner, Michael Heath, and Jeffrey Naisbitt