
6.1 Eigenvalues and Eigenvectors

Dept. of Electrical and Systems Engineering
University of Pennsylvania


Lecture notes

1 Reading

Material related to this page, as well as additional exercises, can be found in ALA 8.1.

2 Learning Objectives

By the end of this page, you should know:

  • the definition of the eigenvalues and eigenvectors of a square matrix $A$,
  • how to find the eigenvectors corresponding to an eigenvalue.

3 Eigenvalues and Eigenvectors

In this chapter, we will discuss one of the most fundamental elements of linear algebra: the eigenvalues and eigenvectors of a square matrix $A$. Eigenvalues and eigenvectors are ubiquitous throughout linear algebra and calculus, and in later sections we’ll discuss their applications to the analysis of linear dynamical systems (sections 6.8 and 7.1-7.5).

Geometrically, when $A$ acts on an eigenvector $\mathbf{v}$, it does not change its orientation: it only stretches it by the value specified by the eigenvalue $\lambda$. In a general sense, the eigenvectors and eigenvalues can be used to describe $A$ by indicating how it is “stretching” a vector space.
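As a quick numerical illustration of this stretching behavior (the matrix and vector below are our own example, not from the notes):

```python
import numpy as np

# A hypothetical 2x2 example: v is an eigenvector of A with eigenvalue 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])   # an eigenvector of A
lam = 3.0                  # its eigenvalue

# A acting on v does not rotate it; it only scales it by lam
print(A @ v)        # [3. 3.]
print(lam * v)      # [3. 3.]
```

Both products agree: applying $A$ to $\mathbf{v}$ is the same as scaling $\mathbf{v}$ by $\lambda = 3$.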

Our study of eigenvalues and eigenvectors will begin with a discussion of how we find them.

3.1 Finding the eigenvectors for a known eigenvalue

We’ll first consider an easier problem.

Suppose we already know that $A$ has an eigenvalue $\lambda$. Then (EIG), the defining equation $A\mathbf{v} = \lambda\mathbf{v}$, is a linear system in $\mathbf{v}$. If we isolate our vector $\mathbf{v}$ by re-arranging terms, we get

$(A - \lambda I)\mathbf{v} = \mathbf{0}$

This is a system we know how to solve. Specifically, the solution set is the null space of $A - \lambda I$.

Python Break!

Below is a code snippet demonstrating how to solve for the eigenvectors of a known eigenvalue. We aren’t introducing anything new; we are just applying the concepts we have covered in the previous few chapters!

import numpy as np 
from scipy import linalg

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])

lambda_1 = -3        # we are given that -3 is an eigenvalue of A

print('Eigenvectors corresponding to lambda_1 = -3:')
print(linalg.null_space(A - lambda_1 * np.identity(3)))

Here, the null_space function from the scipy.linalg library returns a matrix whose columns form an orthonormal basis for the null space of a given matrix. To solve for the eigenvectors corresponding to $\lambda_1 = -3$, we use the null_space function to compute the null space of $A - (-3)I$.
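As a sanity check (our own addition, not part of the notes), we can verify that a column returned by null_space really satisfies $A\mathbf{v} = \lambda_1 \mathbf{v}$:

```python
import numpy as np
from scipy import linalg

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])
lambda_1 = -3

# Columns of V form an orthonormal basis for the null space of A - lambda_1 * I
V = linalg.null_space(A - lambda_1 * np.identity(3))
v = V[:, 0]   # pick one eigenvector for lambda_1

# A v should equal lambda_1 * v (up to floating-point error)
print(np.allclose(A @ v, lambda_1 * v))   # True
```

Any linear combination of the returned columns is also an eigenvector for $\lambda_1$, since the null space is closed under linear combinations.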

4 ... But how do we find the eigenvalues?

The question now becomes how to find the eigenvalues of a matrix. A key observation is that the definition of an eigenvalue requires the corresponding eigenvector to be nonzero, and we know that this can only occur if $A - \lambda I$ is singular! (Recall that non-singular matrices have only the $\mathbf{0}$ vector in their null space.) This discussion is summarized in the following theorem:

Theorem: A scalar $\lambda$ is an eigenvalue of $A$ if and only if $A - \lambda I$ is singular.

This theorem gives us a plan of attack: find all $\lambda$ for which $A - \lambda I$ is singular!

In the next few sections, we’ll introduce the machinery needed to apply this method, known as the method of characteristic equations, for finding eigenvalues of small matrices.
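As a small preview of the method of characteristic equations (this snippet uses SymPy, which the notes have not introduced; the matrix is our own example), $A - \lambda I$ is singular exactly when $\det(A - \lambda I) = 0$, so we can solve that equation symbolically:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1],
               [1, 2]])

# Characteristic polynomial: det(A - lambda * I)
p = (A - lam * sp.eye(2)).det()
print(sp.expand(p))                 # lambda**2 - 4*lambda + 3

# The roots of the characteristic polynomial are the eigenvalues
print(sp.solve(sp.Eq(p, 0), lam))   # [1, 3]
```

For this $2 \times 2$ example the characteristic polynomial is quadratic, so the eigenvalues are just its two roots; the machinery in the coming sections makes this precise for general small matrices.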

In general, this method is not applied for large matrices due to numerical issues. For larger matrices, an algorithm based on the QR decomposition is used; we’ll cover this a few sections down the line.
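For reference (again, our own illustration rather than part of the notes), NumPy’s np.linalg.eigvals wraps LAPACK routines from this QR-based family, and recovers all eigenvalues of the matrix used above, including $\lambda_1 = -3$:

```python
import numpy as np

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])

# eigvals uses LAPACK's QR-based algorithms under the hood;
# A has real eigenvalues here, so we keep only the real parts
print(np.sort(np.linalg.eigvals(A).real))   # [-3. -3.  1.]
```

Note that $-3$ appears twice: it is an eigenvalue of multiplicity two, which is consistent with null_space returning a two-dimensional null space for $A - (-3)I$ earlier.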
