Just as the coordinate matrix of a vector depends on the basis chosen for the associated vector space, the matrix of a linear operator on a vector space depends on the basis chosen for that space. It is often favorable to work with as simple a matrix as possible, for instance, a diagonal matrix. The concepts of eigenvalues and eigenvectors help us select a basis for a vector space on which a linear operator is defined so that the linear operator is represented by a diagonal matrix.

As described in Example 2 of the article First Encounter with Eigenvalues and Eigenvectors, the matrix has two eigenspaces E_{1} and E_{2}, whose bases are and , respectively. The matrix A can be viewed as the matrix of a linear operator T : defined by , where the standard basis is chosen for . In other words, A is the *standard matrix* for T. If we “change” the basis to a “new” basis whose members are the linearly independent eigenvectors that span the eigenspaces of A, i.e. , then the matrix of T with respect to the “new” basis will be the diagonal matrix . This article focuses on how to find that diagonal matrix and provides the theorem justifying the procedure.

**The Diagonalization Theorem**

An n×n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP^{-1}, with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are eigenvalues of A that correspond, respectively, to the eigenvectors in P.
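The factorization in the theorem is easy to check numerically. The sketch below uses a hypothetical 2×2 matrix chosen for illustration (it is not a matrix from the source article); its eigenvalues happen to be 4 and -5.

```python
import numpy as np

# Hypothetical 2x2 matrix, chosen so its eigenvalues are 4 and -5.
A = np.array([[13.0, -9.0],
              [18.0, -14.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# The theorem: A = P D P^{-1} when the columns of P are
# n linearly independent eigenvectors of A.
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```

Note that the order of the eigenvalues on the diagonal of D matches the order of the eigenvector columns of P, exactly as the theorem states.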

**Example 1**

Is the matrix diagonalizable?

**Answer**

As described in Example 3 of the article First Encounter with Eigenvalues and Eigenvectors, A has only two linearly independent eigenvectors, while the matrix is of order 3. By the Diagonalization Theorem above, we conclude that A is not diagonalizable.
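This kind of diagnosis can be made numerically by counting independent eigenvectors. Since the matrix of Example 1 is not reproduced here, the sketch below uses a hypothetical 3×3 stand-in that likewise has only two linearly independent eigenvectors.

```python
import numpy as np

# Hypothetical 3x3 matrix with a repeated eigenvalue (1, 1, 2) and only
# two independent eigenvectors -- a stand-in, not the matrix of Example 1.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

eigvals, V = np.linalg.eig(A)

# Count independent eigenvectors via the rank of the eigenvector matrix.
independent = np.linalg.matrix_rank(V)

# Fewer independent eigenvectors than n means A is not diagonalizable.
assert independent < A.shape[0]
```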

**Example 2**

Consider the matrix . Find the matrix P such that P^{-1}AP is a diagonal matrix. Let the diagonal matrix be D. Verify that the diagonal entries of D are eigenvalues of A, conforming with the Diagonalization Theorem.

**Answer**

In Example 2 of the article First Encounter with Eigenvalues and Eigenvectors, it is shown that the basis vectors of the eigenspaces of A are and . According to the theorem, P must be as follows.

.

The matrix P will diagonalize A.

To verify that D = P^{-1}AP is a diagonal matrix, first determine P^{-1}. It is a simple matter to show that . It follows that:

.

The diagonal entries of D are 4 and -5. They are the eigenvalues of A. (See Example 2 of the article mentioned above.)
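The "simple matter" of inverting P rests on the 2×2 inverse formula: for [[a, b], [c, d]] with ad − bc ≠ 0, the inverse is (1/(ad − bc))[[d, −b], [−c, a]]. A sketch of that formula, using a hypothetical eigenvector matrix P since the original one is not reproduced in this text:

```python
import numpy as np

# Hypothetical 2x2 eigenvector matrix (not the P of Example 2).
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# 2x2 inverse formula: inv([[a, b], [c, d]]) = (1/(ad - bc)) [[d, -b], [-c, a]]
a, b = P[0]
c, d = P[1]
det = a * d - b * c
P_inv = (1.0 / det) * np.array([[ d, -b],
                                [-c,  a]])

# The formula agrees with NumPy's general-purpose inverse.
assert np.allclose(P_inv @ P, np.eye(2))
```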

**Example 3**

Find a matrix P that diagonalizes . Then determine the diagonal matrix produced by P.

**Answer**

Since (A – λI) is a triangular matrix, its determinant is the product of its diagonal entries. Thus, the characteristic equation of A is (2-λ)(-4-λ)(3-λ) = 0, so the eigenvalues are 2, -4, and 3. [This is consistent with the theorem: **“The eigenvalues of a triangular matrix are the entries on the main diagonal.”**] We now proceed to find the matrix P that diagonalizes A. By the Diagonalization Theorem, we must find the eigenvectors of A.
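The bracketed theorem is easy to confirm numerically. The matrix below is an assumed upper-triangular stand-in: only the diagonal entries 2, −4, and 3 come from this example, while the off-diagonal entries are not taken from the original text.

```python
import numpy as np

# Assumed upper triangular stand-in; only the diagonal entries 2, -4, 3
# are taken from the text.
A = np.array([[2.0,  1.0,  1.0],
              [0.0, -4.0, -2.0],
              [0.0,  0.0,  3.0]])

# The eigenvalues of a triangular matrix are its main-diagonal entries.
assert np.allclose(sorted(np.linalg.eigvals(A)), [-4.0, 2.0, 3.0])
```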

**If λ = 2** then , which has the same solution set as . (Stated another way, the two matrices are equivalent.) If , then the matrix will correspond to the system of linear equations represented as . The only free variable in this system is x_{1}. Therefore, we set x_{1} = t where . This yields the eigenspace .

**If λ = -4** then , which is equivalent to . If , then the matrix will correspond to the system of linear equations represented as . The only free variable in this system is x_{2}. To solve the system of equations, set x_{2} = -6t where . Consequently, x_{1} = t and the resulting eigenspace is .

**If λ = 3** then , which is equivalent to . If , then the matrix will correspond to the system of linear equations represented as . The only free variable in this system is x_{3}. To solve the system of equations, set x_{3} = 7t where . Accordingly, x_{1} = 5t, x_{2} = -2t and the eigenspace produced is .

Note that the three basis vectors spanning E_{1}, E_{2}, and E_{3}, i.e. , are linearly independent. (See the additional note at the end of this article.) The Diagonalization Theorem implies that the diagonalizing matrix is and P yields the diagonal matrix D whose diagonal entries are the eigenvalues corresponding to the eigenvectors in P. Thus, the diagonal matrix is .
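The whole procedure above can be sketched in code. The eigenvector columns of P are the basis vectors computed for E_{1}, E_{2}, and E_{3}; the matrix A is a reconstruction consistent with those computations (the original matrix is not reproduced in this text), so treat it as an assumption.

```python
import numpy as np

# Reconstructed matrix consistent with the eigenvalues and eigenvectors
# worked out above (an assumption, since the original is not shown here).
A = np.array([[2.0,  1.0,  1.0],
              [0.0, -4.0, -2.0],
              [0.0,  0.0,  3.0]])

# Columns of P are the basis eigenvectors of E_1, E_2, and E_3,
# corresponding to the eigenvalues 2, -4, and 3 respectively.
P = np.array([[1.0,  1.0,  5.0],
              [0.0, -6.0, -2.0],
              [0.0,  0.0,  7.0]])

# P^{-1} A P is diagonal, with the eigenvalues appearing in the
# same order as the eigenvector columns of P.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([2.0, -4.0, 3.0]))
```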

**Additional Note**

**Theorem**

If are eigenvectors of A corresponding to **distinct** eigenvalues λ_{1}, λ_{2}, …, λ_{k}, then is a linearly independent set.
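A quick numerical check of this theorem, using a hypothetical triangular matrix whose three eigenvalues are distinct:

```python
import numpy as np

# Hypothetical matrix with three distinct eigenvalues (2, 5, -1).
A = np.array([[2.0, 1.0,  0.0],
              [0.0, 5.0,  3.0],
              [0.0, 0.0, -1.0]])

eigvals, V = np.linalg.eig(A)

# The eigenvalues are distinct ...
assert len(set(np.round(eigvals, 8))) == 3
# ... so, by the theorem, the eigenvector columns of V are
# linearly independent (the eigenvector matrix has full rank).
assert np.linalg.matrix_rank(V) == 3
```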