Eigenvalues and eigenvectors of matrices in two dimensions

An eigenvector of a linear transformation A is a vector v other than the zero vector such that Av is a multiple of v. In other words

Av = cv

for some constant c, which is called the eigenvalue corresponding to v. If c is positive then v and Av have the same direction, and if c is negative then they have opposite directions. In other words, the directions of eigenvectors are those fixed, up to sign, by the transformation A.
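
For example (this particular matrix is our own illustration, not one from the applet below), the diagonal matrix [[2 0][0 -1]] stretches the first coordinate direction and reverses the second, and a few lines of Python confirm the definition:

    A = [[2, 0],
         [0, -1]]

    def apply_matrix(A, v):
        # Multiply the 2x2 matrix A by the vector v.
        return [A[0][0]*v[0] + A[0][1]*v[1],
                A[1][0]*v[0] + A[1][1]*v[1]]

    print(apply_matrix(A, [1, 0]))   # [2, 0]  = 2*[1, 0]:   eigenvalue 2, same direction
    print(apply_matrix(A, [0, 1]))   # [0, -1] = -1*[0, 1]:  eigenvalue -1, direction reversed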

In the applet just below, the red and blue arrows represent two vectors u and v, and the lighter ones represent Au and Av, where A is set at the bottom. You can change u and v by dragging with the mouse. You can also change A. You can easily see when a vector w and its image Aw have the same direction, and estimate the eigenvectors and eigenvalues of A.

There are essentially three types of behaviour possible: (1) two fixed directions, (2) one fixed direction, (3) no fixed direction. These are illustrated by the cases where A is [[1 1][1 2]], [[1 1][0 1]], or [[1 -1][1 1]]. These three types of behaviour characterize skew scale changes, generalized shears, and skew rotations. The term skew here means that the axes are not necessarily perpendicular. A generalized shear is one that changes scale as it shears.
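
If you want to check these three cases numerically, a standard routine such as numpy.linalg.eig (our choice of tool here; the applet does not use it) reports the eigenvalues directly:

    import numpy as np

    # The three example matrices from the text.
    cases = {
        "two fixed directions": np.array([[1, 1], [1, 2]]),
        "one fixed direction":  np.array([[1, 1], [0, 1]]),
        "no fixed direction":   np.array([[1, -1], [1, 1]]),
    }

    for name, A in cases.items():
        print(name, "->", np.linalg.eig(A)[0])
    # Prints two distinct real eigenvalues, the repeated eigenvalue 1 (with only
    # one independent eigenvector), and the complex pair 1+i, 1-i, respectively.

The complex pair in the last case is exactly what "no fixed direction" means: [[1 -1][1 1]] is a rotation by 45 degrees combined with scaling, and a rotation moves every direction.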

How can we keep track of the direction of a vector, as opposed to the vector itself? By normalizing it. In other words, if v is any vector then there is exactly one vector with the same direction and length 1. We can calculate it by dividing v by its length |v|. If we combine the two ideas, we see that v is an eigenvector of A if one of these two possibilities occurs:

  • Av/|Av| = v/|v|
  • Av/|Av| = -v/|v|
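
In code, this two-sided test reads as follows; the function names normalize and is_eigenvector are our own, and the tolerance tol stands in for "the accuracy we are working with":

    import math

    def normalize(v):
        # Divide v by its length |v| to get the unit vector in the same direction.
        length = math.hypot(v[0], v[1])
        return [v[0] / length, v[1] / length]

    def is_eigenvector(A, v, tol=1e-9):
        # Compare the normalized directions of v and Av, up to sign.
        u = normalize(v)
        w = normalize([A[0][0]*v[0] + A[0][1]*v[1],
                       A[1][0]*v[0] + A[1][1]*v[1]])
        same     = abs(w[0] - u[0]) < tol and abs(w[1] - u[1]) < tol
        opposite = abs(w[0] + u[0]) < tol and abs(w[1] + u[1]) < tol
        return same or opposite

    A = [[1, 1], [1, 2]]
    print(is_eigenvector(A, [1, (1 + math.sqrt(5)) / 2]))   # True
    print(is_eigenvector(A, [1, 0]))                        # False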

We learned one technique for finding the fixed points of functions from solving Kepler's equation: if we want to find x such that f(x) = x, we pick some initial value for x and iterate.

  • Pick a starting value x0.
  • Keep setting xn+1 = f(xn) until the value we get for x is close enough to what we want (assuming the process converges).
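
As a concrete instance of this recipe, Kepler's equation E = M + e sin E can be solved by exactly this iteration; the values M = 1.0 and e = 0.3 below are sample numbers of our own:

    import math

    # Fixed-point iteration for Kepler's equation E = M + e*sin(E).
    M, e = 1.0, 0.3
    E = M                            # starting value x0
    for _ in range(100):
        E_next = M + e * math.sin(E)
        if abs(E_next - E) < 1e-12:  # close enough: stop
            break
        E = E_next
    print(E)                         # E - e*sin(E) is now (almost exactly) M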

In the present circumstances, we modify this as follows:

  • Pick an initial vector v0.
  • Keep setting vn+1 = Avn/|Avn| until one of two things happens: (1) the successive vn's are the same (up to the accuracy we are working with), or (2) successive vn's are the negatives of each other.
  • At this point, v = vn will be a normalized eigenvector (to within working accuracy), and the ratio of Av to v will be the eigenvalue.
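
Here is a minimal sketch of this procedure, classically known as the power method, for 2x2 matrices in plain Python; the function name and the starting vector are our own choices:

    import math

    def power_iteration(A, v0=(1.0, 0.0), tol=1e-10, max_steps=1000):
        # Iterate v -> Av/|Av| until successive v's agree up to sign.
        length = math.hypot(v0[0], v0[1])
        v = [v0[0] / length, v0[1] / length]
        for _ in range(max_steps):
            Av = [A[0][0]*v[0] + A[0][1]*v[1],
                  A[1][0]*v[0] + A[1][1]*v[1]]
            length = math.hypot(Av[0], Av[1])
            w = [Av[0] / length, Av[1] / length]
            same     = abs(w[0] - v[0]) < tol and abs(w[1] - v[1]) < tol
            opposite = abs(w[0] + v[0]) < tol and abs(w[1] + v[1]) < tol
            if same or opposite:
                # |Av| gives the size of the eigenvalue; the sign is negative
                # exactly when successive iterates flipped direction.
                return w, (-length if opposite else length)
            v = w
        raise RuntimeError("no convergence (the eigenvalues may be complex)")

    v, c = power_iteration([[1, 1], [1, 2]])
    print(v, c)    # c is approximately (3 + sqrt(5))/2, about 2.618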

The first applet shows the effect of iteration, preserving the sequence of iterates.

The second shows directly how the normalized iterates converge.

In the applet below, we have a text-based variation of the iteration process. Apply calculates Av from v, Normalize replaces v by Av/|Av|, and Reset starts all over again. You can change A. The process is now a bit different: we apply the usual iteration to the first column, but normalization also replaces the second column by a vector perpendicular to v. The second matrix is that of A with respect to this orthogonal basis.
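
A sketch of that change of basis, assuming v has already been normalized to length 1 (the function names here are ours, not the applet's):

    import math

    def matmul(X, Y):
        # Product of two 2x2 matrices.
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0],
                 X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0],
                 X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

    def matrix_in_orthogonal_basis(A, v):
        # v is assumed to be a unit vector; v_perp = (-v[1], v[0]) is
        # perpendicular to it.  Q has columns v and v_perp, and since Q is
        # orthogonal, the matrix of A in this basis is Q^T A Q.
        Q  = [[v[0], -v[1]],
              [v[1],  v[0]]]
        Qt = [[Q[0][0], Q[1][0]],
              [Q[0][1], Q[1][1]]]
        return matmul(Qt, matmul(A, Q))

    A = [[1, 1], [1, 2]]
    v = [1.0, (1 + math.sqrt(5)) / 2]        # an eigenvector of A
    length = math.hypot(v[0], v[1])
    v = [v[0] / length, v[1] / length]
    print(matrix_in_orthogonal_basis(A, v))  # lower-left entry is (almost) 0

When the eigenvalues are real, the lower-left entry of this second matrix tends to 0 as the iterates converge to an eigenvector, and the diagonal entries tend to the eigenvalues.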