7.3 Matrix Exponential: the power series of a matrix
Dept. of Electrical and Systems Engineering
University of Pennsylvania
1 Reading

Material related to this page, as well as additional exercises, can be found in ALA 10.4.
2 Learning Objectives

By the end of this page, you should know:

- how to define the matrix exponential as a power series,
- how to solve linear ODEs with the matrix exponential,
- how to compute the matrix exponential.

3 Defining the Matrix Exponential

We've seen four cases for eigenvalues/eigenvectors and their relationship to solutions of initial value problems defined by $\dot{\vv x} = A\vv x$ and $\vv x(0)$ given:

- real distinct eigenvalues, solved by diagonalization;
- real repeated eigenvalues with algebraic multiplicity = geometric multiplicity, also solved by diagonalization;
- complex distinct eigenvalues, solved by diagonalization and applying Euler's formula to define real-valued eigenfunctions;
- repeated eigenvalues with algebraic multiplicity > geometric multiplicity, solved by Jordan decomposition using generalized eigenvectors.

While correct, the fact that there are four different cases we need to consider is somewhat unsatisfying. In this section, we show that by appropriately defining a matrix exponential, we can provide a unified treatment of all the aforementioned settings.
We start by recalling the power series definition for the scalar exponential $e^x$, for $x \in \mathbb{R}$:

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{x^k}{k!}, \quad (\text{PS})$$

where we recall that $k! = 1 \cdot 2 \cdots (k-1) \cdot k$. We know that for the scalar initial value problem $\dot{x} = ax$, the solution is $x(t) = e^{at}x(0)$, where $e^{at}$ can be computed via (PS) by setting $x = at$.
Wouldn't it be cool if we could do something similar for the vector-valued initial value problem defined by $\dot{\vv x} = A\vv x$? Does there exist a function, call it $e^{At}$, so that $\vv x(t) = e^{At}\vv x(0)$? How would we even begin to define such a thing?
Let's do the "obvious" thing and start with the definition (PS), replacing the scalar $x$ with a matrix $X$ to obtain the matrix exponential of $X$:

$$e^X = I + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{X^k}{k!}. \quad (\text{MPS})$$

Although we can't prove it yet, it can be shown that (MPS) converges for any square matrix $X$, so this is a well-defined object.
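Although we won't prove convergence here, it is easy to observe numerically. Below is a minimal sketch (assuming NumPy and SciPy are available) that truncates the series (MPS) after a fixed number of terms and compares the result against `scipy.linalg.expm`, SciPy's built-in matrix exponential. The matrix `X` and the truncation order `N` are arbitrary illustrative choices.

```python
# A minimal sketch: approximate e^X by truncating the power series (MPS)
# and compare against scipy.linalg.expm. X and N are illustrative choices.
import numpy as np
from scipy.linalg import expm

def expm_series(X, N=30):
    """Truncated series: sum_{k=0}^{N} X^k / k!  (MPS)."""
    out = np.zeros_like(X, dtype=float)
    term = np.eye(X.shape[0])          # X^0 / 0! = I
    for k in range(N + 1):
        out += term
        term = term @ X / (k + 1)      # next term: X^{k+1} / (k+1)!
    return out

X = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
print(np.allclose(expm_series(X), expm(X)))   # True: the truncated series matches
```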
Does (MPS) help with solving $\dot{\vv x} = A\vv x$? Let's try the test solution $\vv x(t) = e^{At}\vv x(0)$, exactly what we did in the scalar setting, but with $e^{at}$ replaced by $e^{At}$. Is this a solution to $\dot{\vv x} = A\vv x$? First, we compute $A\vv x(t) = Ae^{At}\vv x(0)$. Next, we need to compute $\frac{d}{dt}e^{At}\vv x(0)$. But how do we do this? We will rely on (MPS):
$$\begin{align*}
\frac{d}{dt} e^{At} \vv x(0) &= \frac{d}{dt} \left(I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots\right)\vv x(0) \\
&= \left(\frac{d}{dt}I + \frac{d}{dt}At + \frac{d}{dt}\frac{A^2t^2}{2!} + \frac{d}{dt}\frac{A^3t^3}{3!} + \cdots\right)\vv x(0) \\
&= \left(0 + A + A^2t + \frac{A^3t^2}{2!} + \frac{A^4t^3}{3!} + \cdots\right)\vv x(0) \\
&= A\left(I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \cdots\right)\vv x(0) \\
&= A e^{At} \vv x(0).
\end{align*}$$

This worked, and we have found a general solution to $\dot{\vv x} = A\vv x$ defined in terms of the matrix exponential!
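As a sanity check on this calculation (not a proof), we can approximate $\frac{d}{dt} e^{At}$ with a central finite difference and compare it to $A e^{At}$. The matrix $A$, the time $t$, and the step size $h$ below are arbitrary illustrative values.

```python
# A quick numerical sanity check that d/dt e^{At} = A e^{At},
# using a central finite difference; A, t, and h are illustrative values.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t, h = 0.7, 1e-6

lhs = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)   # approximate d/dt e^{At}
rhs = A @ expm(A * t)
print(np.allclose(lhs, rhs, atol=1e-6))   # True, up to finite-difference error
```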
Consider the initial value problem $\dot{\vv x} = A \vv x$, with $\vv x(0)$ specified. Its solution is given by $\vv x(t) = e^{At} \vv x(0)$, where $e^{At}$ is defined according to the matrix power series (MPS).
This is very satisfying, as now our scalar and vector-valued problems have similar looking solutions defined in terms of appropriate exponential functions. The only thing that remains is to compute $e^{At}$! How do we do this? This is where all of the work we've done on diagonalization and Jordan forms really pays off!
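As a quick numerical illustration of the boxed result, the sketch below integrates $\dot{\vv x} = A\vv x$ with SciPy's general-purpose ODE solver and compares the trajectory against $\vv x(t) = e^{At}\vv x(0)$. The matrix $A$ and the initial condition $\vv x(0)$ are arbitrary illustrative choices.

```python
# A minimal sketch: integrate x' = A x numerically and compare with
# x(t) = e^{At} x(0); A and x0 are illustrative choices.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, -1.0])
ts = np.linspace(0.0, 5.0, 50)

sol = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0, t_eval=ts, rtol=1e-9, atol=1e-12)
x_expm = np.stack([expm(A * t) @ x0 for t in ts], axis=1)

print(np.allclose(sol.y, x_expm, atol=1e-6))   # True: both give the same trajectory
```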
4 Computing the Matrix Exponential

4.1 Case 1: Real eigenvalues, diagonalizable $A$

Suppose that $A \in \mathbb{R}^{n\times n}$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ with corresponding linearly independent eigenvectors $\vv v_1, \vv v_2, \ldots, \vv v_n$. Then we can write

$$A = V \Lambda V^{-1}, \quad \text{for } V = \bm \vv v_1, \vv v_2, \ldots, \vv v_n \em \text{ and } \Lambda = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n).$$

To compute $e^{At}$ we need to compute powers $(At)^k$. Let's work a few of these out using $A = V\Lambda V^{-1}$:
$$\begin{align*}
(At)^0 &= I, \qquad At = V\Lambda V^{-1}t, \\
A^2t^2 &= (V\Lambda V^{-1})(V\Lambda V^{-1})t^2 = V\Lambda^2 V^{-1}t^2, \\
A^3t^3 &= (V\Lambda V^{-1})(V\Lambda^2 V^{-1})t^3 = V\Lambda^3 V^{-1}t^3.
\end{align*}$$

There is a pattern: $(At)^k = V \Lambda^k V^{-1} t^k$. This is nice, since computing powers of diagonal matrices is easy:
$$\Lambda^k = \begin{bmatrix}
\lambda_1 & & \\
& \ddots & \\
& & \lambda_n
\end{bmatrix}^k = \begin{bmatrix}
\lambda_1^k & & \\
& \ddots & \\
& & \lambda_n^k
\end{bmatrix}.$$

Let's plug these expressions into (MPS):
$$\begin{align*}
e^{At} &= I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \cdots \\
&= VV^{-1} + V\Lambda V^{-1}t + V\Lambda^2 V^{-1}\frac{t^2}{2!} + V\Lambda^3 V^{-1}\frac{t^3}{3!} + \cdots \\
&= V\left(I + \Lambda t + \frac{\Lambda^2 t^2}{2!} + \frac{\Lambda^3 t^3}{3!} + \cdots\right)V^{-1} \quad \text{(factor out } V(\cdot)V^{-1}\text{)} \\
&= V\,\text{diag}\left(1+\lambda_1 t+\frac{\lambda_1^2t^2}{2!}+\frac{\lambda_1^3t^3}{3!}+\cdots, \ \ldots, \ 1+\lambda_n t+\frac{\lambda_n^2t^2}{2!}+\frac{\lambda_n^3t^3}{3!}+\cdots\right)V^{-1} \\
&= V \begin{bmatrix}
e^{\lambda_1 t} & & \\
& \ddots & \\
& & e^{\lambda_n t}
\end{bmatrix} V^{-1} \quad \text{(we recognize } 1+\lambda_i t+\tfrac{\lambda_i^2t^2}{2!}+\cdots \text{ as (PS) with } x = \lambda_i t\text{)}
\end{align*}$$

That's very nice! We diagonalize $A$, then exponentiate its eigenvalues to compute $e^{At}$.
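Before plugging this back into the solution, here is a quick numerical check of the formula $e^{At} = V e^{\Lambda t} V^{-1}$ on an arbitrary diagonalizable example; the matrix $A$ and the time $t$ below are illustrative choices.

```python
# Case 1 sketch: for a diagonalizable A with real eigenvalues,
# e^{At} = V diag(e^{lambda_i t}) V^{-1}; A and t are illustrative choices.
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2
t = 0.3

lam, V = np.linalg.eig(A)           # columns of V are eigenvectors
eAt_diag = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)

print(np.allclose(eAt_diag, expm(A * t)))   # True
```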
Let's plug this back in to $\vv x(t) = e^{At} \vv x(0)$:
$$\vv x(t) = V \begin{bmatrix}
e^{\lambda_1 t} & & \\
& \ddots & \\
& & e^{\lambda_n t}
\end{bmatrix} V^{-1} \vv x(0).$$

Now, if we let $\vv c = V^{-1}\vv x(0)$, we can write
$$\vv x(t) = \bm \vv v_1 \cdots \vv v_n \em \begin{bmatrix}
e^{\lambda_1 t} & & \\
& \ddots & \\
& & e^{\lambda_n t}
\end{bmatrix} \begin{bmatrix}
c_1 \\ \vdots \\ c_n
\end{bmatrix} = c_1 e^{\lambda_1 t}\vv v_1 + \cdots + c_n e^{\lambda_n t} \vv v_n,$$

recovering our previous solution, with the exact formula $\vv c = V^{-1} \vv x(0)$ we saw previously for the coefficients $c_1, \ldots, c_n$!
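The modal form of the solution is easy to check numerically as well. The sketch below builds $\vv c = V^{-1}\vv x(0)$ and sums the modes $c_i e^{\lambda_i t} \vv v_i$, comparing against $e^{At}\vv x(0)$; the choices of $A$, $\vv x(0)$, and $t$ are again arbitrary and illustrative.

```python
# Modal-decomposition sketch: x(t) = sum_i c_i e^{lambda_i t} v_i with c = V^{-1} x(0).
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x0 = np.array([1.0, -1.0])
t = 0.3

lam, V = np.linalg.eig(A)
c = np.linalg.solve(V, x0)                       # c = V^{-1} x(0)
x_modes = sum(c[i] * np.exp(lam[i] * t) * V[:, i] for i in range(len(lam)))

print(np.allclose(x_modes, expm(A * t) @ x0))    # True
```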
4.2 Case 2: Imaginary Eigenvalues

We focus on the $2 \times 2$ case with $A = \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix} = \omega J$, where $J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$. In this case, we will compute the power series directly.
$$\begin{align*}
A &= \omega \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = \omega J, & A^2 &= \omega^2 \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} = \omega^2 J^2, & A^3 &= \omega^3 \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \omega^3 J^3, & A^4 &= \omega^4 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \omega^4 J^4, \\
A^5 &= \omega^5 J^5 = \omega^5 J, & A^6 &= \omega^6 J^6 = \omega^6 J^2, & A^7 &= \omega^7 J^7 = \omega^7 J^3, & A^8 &= \omega^8 J^8 = \omega^8 J^4,
\end{align*}$$

and so on, since $J^4 = I$ means the powers of $J$ cycle with period four. Putting this together in computing $e^{At}$, we get:
$$e^{At} = \begin{bmatrix} 1 - \frac{1}{2!}t^2 \omega^2 + \cdots & t \omega - \frac{1}{3!}t^3 \omega^3 + \cdots \\
-t \omega + \frac{1}{3!}t^3 \omega^3 + \cdots & 1 - \frac{1}{2!}t^2 \omega^2 + \cdots \end{bmatrix} = \begin{bmatrix} \cos \omega t & \sin \omega t \\ -\sin \omega t & \cos \omega t \end{bmatrix},$$

where we used the power series for $\sin \omega t$ and $\cos \omega t$ in the last equality.
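A quick numerical check of this closed form, comparing `scipy.linalg.expm` against the cosine/sine matrix derived above; the values of $\omega$ and $t$ are arbitrary illustrative choices.

```python
# Case 2 sketch: e^{At} for A = omega*[[0,1],[-1,0]] equals
# [[cos wt, sin wt], [-sin wt, cos wt]]; omega and t are illustrative values.
import numpy as np
from scipy.linalg import expm

omega, t = 2.0, 0.8
A = omega * np.array([[0.0, 1.0],
                      [-1.0, 0.0]])
R = np.array([[np.cos(omega * t),  np.sin(omega * t)],
              [-np.sin(omega * t), np.cos(omega * t)]])

print(np.allclose(expm(A * t), R))   # True
```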
As expected, the matrix $A = \omega \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$ has a matrix exponential which defines a rotation at rate $\omega$, so that

$$\vv x(t) = \begin{bmatrix} \cos \omega t & \sin \omega t \\ -\sin \omega t & \cos \omega t \end{bmatrix} \vv x(0).$$

4.3 Case 3: Complex Eigenvalues

Let's generalize our previous example to $A = \begin{bmatrix} 6 & \omega \\ -\omega & 6 \end{bmatrix}$. The matrix $A$ has complex conjugate eigenvalues $\lambda_1 = 6 + i\omega$ and $\lambda_2 = 6 - i\omega$. We will again compute the power series directly. To do so, we will use the following very useful fact:
If two square matrices $X$ and $Y$ commute, that is, $XY = YX$, then $e^{X+Y} = e^X e^Y$.

We will strategically use this fact. First, defining $J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$, we note that we can write $A = 6I + \omega J$. Importantly, $6I$ and $\omega J$ commute, as $(6I)(\omega J) = (\omega J)(6I) = 6\omega J$. Therefore,
$$e^{At} = e^{(6I + \omega J)t} = e^{6It} e^{\omega Jt} = \begin{bmatrix} e^{6t} & 0 \\ 0 & e^{6t} \end{bmatrix} \begin{bmatrix} \cos \omega t & \sin \omega t \\ -\sin \omega t & \cos \omega t \end{bmatrix} = e^{6t} \begin{bmatrix} \cos \omega t & \sin \omega t \\ -\sin \omega t & \cos \omega t \end{bmatrix}.$$
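As before, we can sanity-check this closed form numerically; $\omega$ and $t$ below are arbitrary illustrative values.

```python
# Case 3 sketch: with A = 6I + omega*J, the two terms commute,
# so e^{At} = e^{6t} times the rotation factor from Case 2.
import numpy as np
from scipy.linalg import expm

omega, t = 2.0, 0.8
A = np.array([[6.0, omega],
              [-omega, 6.0]])
R = np.array([[np.cos(omega * t),  np.sin(omega * t)],
              [-np.sin(omega * t), np.cos(omega * t)]])

print(np.allclose(expm(A * t), np.exp(6.0 * t) * R))   # True
```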
4.4 Case 4: Jordan Block

Assume $A = V \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} V^{-1}$, for $V = \bm \vv v_1 & \vv v_2 \em$ composed of an eigenvector and a generalized eigenvector of $A$. Then, following the same argument as in Case 1, we have that $e^{At} = V e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}t} V^{-1}$. To compute $e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}t}$, we note that $\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}t = \lambda I t + t\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, and that these two terms commute. Hence,

$$e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}t} = e^{\begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}t} e^{t\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}}.$$

We note that
$$\begin{align*}
e^{\begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}t} = \begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{bmatrix} \quad \text{ and } \quad e^{\begin{bmatrix} 0 & t \\ 0 & 0 \end{bmatrix}} &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & t \\ 0 & 0 \end{bmatrix} \quad \text{(higher powers }=0\text{)} \\
&= \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix},
\end{align*}$$

allowing us to conclude that $e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}t} = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}$, and that
$$\begin{align*}
\vv x(t) = e^{At} \vv x(0) &= \bm \vv v_1 & \vv v_2 \em \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix} V^{-1} \vv x(0), \quad \text{and letting } \vv c = V^{-1}\vv x(0), \\
&= \bm \vv v_1 & \vv v_2 \em \begin{bmatrix} c_1 e^{\lambda t} + c_2 te^{\lambda t} \\ c_2 e^{\lambda t} \end{bmatrix} = \left(c_1 e^{\lambda t} + c_2 te^{\lambda t}\right)\vv v_1 + c_2 e^{\lambda t} \vv v_2,
\end{align*}$$

which we recognize from our previous section on Jordan Blocks.
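Finally, a quick numerical check of the Jordan-block formula $e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}t} = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}$; the values of $\lambda$ and $t$ are arbitrary illustrative choices.

```python
# Case 4 sketch: for the 2x2 Jordan block [[lam, 1], [0, lam]],
# e^{Jordan * t} = e^{lam t} * [[1, t], [0, 1]]; lam and t are illustrative values.
import numpy as np
from scipy.linalg import expm

lam, t = -0.5, 1.3
jordan = np.array([[lam, 1.0],
                   [0.0, lam]])
closed_form = np.exp(lam * t) * np.array([[1.0, t],
                                          [0.0, 1.0]])

print(np.allclose(expm(jordan * t), closed_form))   # True
```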