
7.2 Repeated Eigenvalues, Jordan Forms, and Linear Dynamical Systems


Dept. of Electrical and Systems Engineering
University of Pennsylvania


Lecture notes

1 Reading

Material related to this page, as well as additional exercises, can be found in ALA 8.6, 10.1 and 10.3.

2 Learning Objectives

By the end of this page, you should know:

  • examples of matrices with repeated eigenvalues
  • what Jordan forms are
  • the algebraic and geometric multiplicity of eigenvalues
  • how to solve linear dynamical systems with repeated eigenvalues

3 Repeated Eigenvalues

Let’s revisit the matrix $A = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$ we saw in the previous lecture. This matrix has an eigenvalue $\lambda = 2$ of algebraic multiplicity 2 ($\det(A-\lambda I) = (\lambda-2)^2 = 0 \Leftrightarrow \lambda_1 = \lambda_2 = 2$) but geometric multiplicity 1, i.e., only one linearly independent eigenvector

$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},$$

exists. How can we solve $\dot{\mathbf{x}} = A\mathbf{x}$ in this case? Taking the approach that we’ve seen so far, we would write a candidate solution as

$$\mathbf{x}(t) = c_1 e^{2t} \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$

But this won’t work! What if $\mathbf{x}(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$? There is no $c_1 \in \mathbb{R}$ such that $\mathbf{x}(0) = \begin{bmatrix} c_1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Does this mean no solution to $\dot{\mathbf{x}} = A\mathbf{x}$ exists? This would be deeply unsettling! The issue here is that we are “missing” an eigenvector. To remedy this, we’ll introduce the idea of a generalized eigenvector. We will only consider 2×2 matrices, in which case a generalized eigenvector $\mathbf{v}_2$ for an eigenvalue $\lambda$ with eigenvector $\mathbf{v}_1$ is given by the solution to the linear system:

$$(A - \lambda I)\mathbf{v}_2 = \mathbf{v}_1.$$

For our example, we compute $\mathbf{v}_2$ by solving:

$$
\begin{align*}
\left(\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix} - 2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right) \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} = \begin{bmatrix} v_{22} \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \mathbf{v}_1 \\[10pt]
&\Rightarrow v_{22} = 1 \text{ and } v_{21} \text{ is free. We set } v_{21} = 0 \text{ and find} \\[5pt]
&\mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \text{ (any choice of } v_{21} \text{ would work; this is just a convenient one).}
\end{align*}
$$
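The computation above can be reproduced numerically. This is a sketch (not part of the original notes) using NumPy: since $A - \lambda I$ is singular, `np.linalg.lstsq` is used to pick out the minimum-norm solution of $(A - \lambda I)\mathbf{v}_2 = \mathbf{v}_1$, which here corresponds to the convenient choice $v_{21} = 0$.

```python
import numpy as np

# The matrix and eigenvalue from the example above.
A = np.array([[2.0, 1.0], [0.0, 2.0]])
lam = 2.0
v1 = np.array([1.0, 0.0])  # the single eigenvector

# (A - lam*I) is singular, so solve (A - lam*I) v2 = v1 in the
# least-squares sense; lstsq returns the minimum-norm solution,
# which sets the free component v21 to 0.
B = A - lam * np.eye(2)
v2, *_ = np.linalg.lstsq(B, v1, rcond=None)

# v2 satisfies the generalized-eigenvector relation A v2 = lam v2 + v1.
assert np.allclose(A @ v2, lam * v2 + v1)
print(v2)  # → [0. 1.]
```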

Now, how can we construct a solution using $\mathbf{v}_2$? If we try the strategy we used for eigenvalue/eigenvector pairs, things do not quite work out:

If $\mathbf{x}_2(t) = e^{2t}\mathbf{v}_2$, then $\dot{\mathbf{x}}_2(t) = 2e^{2t}\mathbf{v}_2 = 2\mathbf{x}_2$, but $A\mathbf{x}_2(t) = A(e^{2t}\mathbf{v}_2) = e^{2t}(2\mathbf{v}_2 + \mathbf{v}_1) = 2\mathbf{x}_2 + \mathbf{v}_1 e^{2t}$,

where we used the fact that the generalized eigenvector $\mathbf{v}_2$ satisfies

$$A\mathbf{v}_2 = \lambda\mathbf{v}_2 + \mathbf{v}_1,$$

which is obtained by rearranging (3). So we’ll have to try something else. Let’s see if

$$\mathbf{x}_2(t) = e^{2t}\mathbf{v}_2 + te^{2t}\mathbf{v}_1$$

does better. This guess is made because we need a way to make $e^{2t}\mathbf{v}_1$ appear in $\dot{\mathbf{x}}$.

First we compute

$$
\begin{align*}
\dot{\mathbf{x}}_2 &= 2e^{2t}\mathbf{v}_2 + e^{2t}\mathbf{v}_1 + 2te^{2t}\mathbf{v}_1 \\
&= 2(e^{2t}\mathbf{v}_2 + te^{2t}\mathbf{v}_1) + e^{2t}\mathbf{v}_1 \\
&= 2\mathbf{x}_2 + e^{2t}\mathbf{v}_1.
\end{align*}
$$

This looks promising! Now let’s check

$$
\begin{align*}
A\mathbf{x}_2(t) = A(e^{2t}\mathbf{v}_2 + te^{2t}\mathbf{v}_1) &= 2e^{2t}\mathbf{v}_2 + e^{2t}\mathbf{v}_1 + 2te^{2t}\mathbf{v}_1 \\
&= 2(e^{2t}\mathbf{v}_2 + te^{2t}\mathbf{v}_1) + e^{2t}\mathbf{v}_1 \\
&= 2\mathbf{x}_2 + e^{2t}\mathbf{v}_1.
\end{align*}
$$

Success! We can therefore write solutions to our initial value problem as linear combinations of

$$
\begin{align*}
\mathbf{x}_1(t) &= e^{2t}\mathbf{v}_1 \quad \text{and} \quad \mathbf{x}_2(t) = e^{2t}\mathbf{v}_2 + te^{2t}\mathbf{v}_1, \\
\text{i.e.,} \quad \mathbf{x}(t) &= (c_1 + c_2 t)e^{2t}\mathbf{v}_1 + c_2 e^{2t}\mathbf{v}_2.
\end{align*}
$$

Let’s check if we can find $c_1$ and $c_2$ so that $\mathbf{x}(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$:

$$\mathbf{x}(0) = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \Rightarrow c_1 = 0, \, c_2 = 1,$$

and $\mathbf{x}(t) = \begin{bmatrix} te^{2t} \\ e^{2t} \end{bmatrix}$ is the solution to our initial value problem.
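As a quick sanity check (a sketch, not from the notes, assuming SciPy is available): the exact solution of $\dot{\mathbf{x}} = A\mathbf{x}$ is $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$, so we can compare the closed form $\begin{bmatrix} te^{2t} \\ e^{2t} \end{bmatrix}$ against `scipy.linalg.expm` at a few times.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [0.0, 2.0]])
x0 = np.array([0.0, 1.0])  # the initial condition from the example

# Compare the closed-form solution against the matrix exponential.
for t in [0.0, 0.5, 1.0, 2.0]:
    x_closed = np.array([t * np.exp(2 * t), np.exp(2 * t)])
    x_exact = expm(A * t) @ x0
    assert np.allclose(x_closed, x_exact)
```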

4 2×2 Jordan Blocks

In the complete matrix setting, we saw that we could diagonalize the matrix $A$ using a similarity transformation defined by the eigenvectors of $A$, i.e., for $V = [\mathbf{v}_1 \, \mathbf{v}_2 \, \cdots \, \mathbf{v}_n]$, we have that

$$\Lambda = V^{-1}AV, \text{ or equivalently, } A = V\Lambda V^{-1}, \quad \Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n).$$

We saw that this was very useful when solving systems of linear differential equations.

In the case of incomplete matrices, similarity transformations defined in terms of generalized eigenvectors and Jordan blocks play an analogous role.

For example, consider the matrix $A = \begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}$. This matrix has a repeated eigenvalue at $\lambda = 2$ and one eigenvector $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. We therefore compute the generalized eigenvector by solving $(A-\lambda I)\mathbf{v}_2 = \mathbf{v}_1$:

$$\begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \Rightarrow -v_{21} + v_{22} = 1.$$

One solution is $\mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. We construct our similarity transformation as before, setting $V = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, and compute $V^{-1} = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}$.

Let’s see what happens if we compute $V^{-1}AV$. In the complete case, this would give us a diagonal matrix. In this case, we get

$$\begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix},$$

which we’ll recognize as our previous example! It turns out that all $2 \times 2$ matrices with $\lambda = 2$ having algebraic multiplicity 2 and geometric multiplicity 1 are similar to the Jordan block

$$J = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix},$$

and this similarity transformation is defined by $V = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{bmatrix}$, composed of the eigenvector $\mathbf{v}_1$ and generalized eigenvector $\mathbf{v}_2$ of the original matrix.
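The similarity computation above is easy to verify numerically. A minimal sketch (values taken from the example, not from the notes' own code):

```python
import numpy as np

# The example matrix with repeated eigenvalue lambda = 2.
A = np.array([[1.0, 1.0], [-1.0, 3.0]])
v1 = np.array([1.0, 1.0])  # eigenvector
v2 = np.array([0.0, 1.0])  # generalized eigenvector

# Stack the (generalized) eigenvectors as columns of V.
V = np.column_stack([v1, v2])

# V^{-1} A V should be the 2x2 Jordan block [[2, 1], [0, 2]].
J = np.linalg.inv(V) @ A @ V
print(J)  # → [[2. 1.] [0. 2.]]
```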

We can generalize this idea to any $2 \times 2$ matrix with only one eigenvector: if $A$ has an eigenvalue $\lambda$ of algebraic multiplicity 2 but geometric multiplicity 1, then $A = VJ_\lambda V^{-1}$ for the Jordan block $J_\lambda = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}$ and $V = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{bmatrix}$.

Using this result, we can conclude, much in the same way we did for diagonalizable $A$, that if $A = VJ_\lambda V^{-1}$, then

$$\mathbf{x}(t) = (c_1 + c_2 t)e^{\lambda t}\mathbf{v}_1 + c_2 e^{\lambda t}\mathbf{v}_2$$

is a general solution to $\dot{\mathbf{x}} = A\mathbf{x}$.
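The general solution can be checked against the matrix exponential for the example matrix. A sketch (assuming SciPy; the constants $c_1, c_2$ are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Example matrix with lambda = 2 of algebraic multiplicity 2,
# geometric multiplicity 1.
A = np.array([[1.0, 1.0], [-1.0, 3.0]])
lam = 2.0
v1 = np.array([1.0, 1.0])  # eigenvector
v2 = np.array([0.0, 1.0])  # generalized eigenvector

c1, c2 = 0.7, -1.3          # arbitrary constants
x0 = c1 * v1 + c2 * v2      # x(0) determined by c1, c2

# x(t) = (c1 + c2 t) e^{lam t} v1 + c2 e^{lam t} v2 should match e^{At} x(0).
for t in [0.0, 0.3, 1.0]:
    x_formula = (c1 + c2 * t) * np.exp(lam * t) * v1 + c2 * np.exp(lam * t) * v2
    assert np.allclose(x_formula, expm(A * t) @ x0)
```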
