6.1 Eigenvalues and Eigenvectors
1 Reading
Material related to this page, as well as additional exercises, can be found in ALA 8.1.
2 Learning Objectives
By the end of this page, you should know:
- the definition of the eigenvalues and eigenvectors of a square matrix $A$,
- how to find the eigenvectors corresponding to an eigenvalue.
3 Eigenvalues and Eigenvectors
In this chapter, we will discuss one of the most fundamental elements of linear algebra: the eigenvalues and eigenvectors of a square matrix $A$. Eigenvalues and eigenvectors are ubiquitous throughout linear algebra and calculus, and in later sections we’ll discuss their applications to the analysis of linear dynamical systems (sections 6.8 and 7.1-7.5).
Recall the defining equation: a nonzero vector $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$ if

$$A\mathbf{v} = \lambda\mathbf{v}. \qquad \text{(EIG)}$$

Geometrically, when $A$ acts on an eigenvector $\mathbf{v}$, it does not change its orientation: it only stretches it by the value specified by the eigenvalue $\lambda$. In a general sense, the eigenvectors and eigenvalues can be used to describe $A$ by indicating how it is “stretching” a vector space.
Our study of eigenvalues and eigenvectors will begin with a discussion of how we find them.
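To make the “stretching” picture concrete, here is a small sketch. The matrix and vectors below are our own illustrative choices, not from the text: $[1, 1]^T$ happens to be an eigenvector of this matrix with eigenvalue 3, while $[1, 0]^T$ is not an eigenvector.

```python
import numpy as np

# Illustrative 2x2 example: [1, 1] is an eigenvector with eigenvalue 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])  # an eigenvector of A
w = np.array([1.0, 0.0])  # not an eigenvector of A

print(A @ v)  # [3. 3.] -- same direction as v, stretched by lambda = 3
print(A @ w)  # [2. 1.] -- the direction has changed
```

Multiplying the eigenvector by $A$ only rescales it; the non-eigenvector gets rotated as well as stretched.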
3.1 Finding the eigenvectors for a known eigenvalue
We’ll first consider an easier problem.
Suppose we already know that $A$ has an eigenvalue $\lambda$. Then, (EIG) is a linear system in $\mathbf{v}$. If we isolate our vector $\mathbf{v}$ by re-arranging terms, we get

$$(A - \lambda I)\mathbf{v} = \mathbf{0}.$$
This is a system we know how to solve. Specifically, the solution set is the null space of $A - \lambda I$.
Python Break!
Below is a code snippet demonstrating how to solve for the eigenvectors of a known eigenvalue. We aren’t introducing anything new; we are just applying the concepts we have covered in the previous few chapters!
```python
import numpy as np
from scipy import linalg

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])

lambda_1 = -3  # we are given that -3 is an eigenvalue of A

print('Eigenvectors corresponding to lambda_1 = -3:')
print(linalg.null_space(A - lambda_1 * np.identity(3)))
```
Here, the `null_space` function from the `scipy.linalg` library returns a matrix whose columns form an orthonormal basis of the null space of a given matrix. To solve for the eigenvectors corresponding to $\lambda_1 = -3$, we use the `null_space` function to compute the null space of $A - \lambda_1 I$.
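As a sanity check, every column returned by `null_space` should satisfy the defining equation $A\mathbf{v} = \lambda\mathbf{v}$ up to floating-point error. Continuing with the matrix from the snippet above:

```python
import numpy as np
from scipy import linalg

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])
lambda_1 = -3  # a known eigenvalue of A

# Columns of V form an orthonormal basis of the null space of A - lambda_1 * I
V = linalg.null_space(A - lambda_1 * np.identity(3))

# By construction, each column v satisfies A v = lambda_1 v
for v in V.T:
    print(np.allclose(A @ v, lambda_1 * v))  # True
```

This check is a good habit whenever an eigenvalue is given to you (or computed numerically): it confirms that the vectors you found really are eigenvectors.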
4 ...But how do we find the eigenvalues?
The question now becomes how to find the eigenvalues of a matrix. A key observation is that the definition of eigenvalue requires that the corresponding eigenvector $\mathbf{v}$ be nonzero, and we know that this can only occur if $A - \lambda I$ is singular! (Recall that non-singular matrices only have the zero vector $\mathbf{0}$ in their null space.) This discussion is summarized in the following theorem:

Theorem: A scalar $\lambda$ is an eigenvalue of the square matrix $A$ if and only if $A - \lambda I$ is singular.
This theorem gives us a plan of attack: find all $\lambda$ for which $A - \lambda I$ is singular!
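Since a square matrix is singular exactly when its determinant is zero, one way to screen candidate values of $\lambda$ is to check whether $\det(A - \lambda I) = 0$. Here is a sketch using the matrix from the Python Break above (the list of candidates is our own choice for illustration):

```python
import numpy as np

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])

# A - lambda*I is singular exactly when det(A - lambda*I) = 0,
# so we can test candidate values of lambda one at a time
for lam in [-3, 0, 1, 3]:
    d = np.linalg.det(A - lam * np.identity(3))
    print(lam, np.isclose(d, 0))  # True only when lam is an eigenvalue
```

Of course, testing candidates one by one is not a real algorithm; the characteristic equation developed in the next sections turns this determinant condition into a polynomial whose roots are exactly the eigenvalues.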
In the next few sections, we’ll introduce the machinery needed to apply this method, known as the method of characteristic equations, for finding eigenvalues of small matrices.
In general, this method is not applied to large matrices due to numerical issues. For larger matrices, an algorithm based on the QR decomposition is used instead; we’ll cover this a few sections down the line.
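As a preview of where we are headed, library routines already implement such algorithms for us: `numpy.linalg.eig` computes all eigenvalues and eigenvectors at once (it wraps LAPACK routines rather than using the characteristic equation directly). Applied to the matrix from the Python Break, it recovers $\lambda = -3$ along with the remaining eigenvalue:

```python
import numpy as np

A = np.array([
    [-1, -1, 1],
    [-4, -1, -2],
    [0, 0, -3]
])

# eig returns (eigenvalues, matrix whose columns are eigenvectors)
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues.real))  # -3 appears (twice), along with 1
```

We will treat `eig` as a black box for now and return to how it works when we discuss the QR-based algorithm.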