
7.1 Complex Eigenvalues and Linear Dynamical Systems

Euler breaks down complex exponentials

Dept. of Electrical and Systems Engineering
University of Pennsylvania


Lecture notes

0.1 Reading

Material related to this page, as well as additional exercises, can be found in ALA 10.1 and 10.3.

0.2 Learning Objectives

By the end of this page, you should know:

  • how to use Euler’s formula,
  • how to write real solutions to linear dynamical systems that have complex eigenvalues.

1 Solving Linear ODEs with Complex Eigenvalues

In the previous section on linear ODEs, we learned how to find the solutions when the coefficient matrix $A$ was diagonalizable. We also saw some examples where $A$ was diagonalizable and had all real eigenvalues. It turns out that the same procedure generalizes when $A$ is diagonalizable but has complex eigenvalues. On this page, we'll see how this can be done.

Let's apply our solution method for diagonalizable $A$ to $\dot{\vv x} = A \vv x$ with $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. We are given that $A$ has eigenvalue/eigenvector pairs:

$$\lambda_1 = i, \ \vv v_1 = \begin{bmatrix} 1 \\ -i \end{bmatrix} \quad \text{and} \quad \lambda_2 = -i, \ \vv v_2 = \begin{bmatrix} 1 \\ i \end{bmatrix}.$$
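(As a quick numerical check, added here as a sketch and not part of the original notes, NumPy can confirm these eigenvalue/eigenvector pairs by verifying $A\vv v = \lambda \vv v$.)

import numpy as np

# Sketch: confirm the stated eigenpairs of A = [[0, -1], [1, 0]]
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
v1 = np.array([1.0, -1j])   # eigenvector for lambda_1 = i
v2 = np.array([1.0,  1j])   # eigenvector for lambda_2 = -i

print(np.allclose(A @ v1,  1j * v1))   # True: (i, v1) is an eigenpair
print(np.allclose(A @ v2, -1j * v2))   # True: (-i, v2) is an eigenpair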

Even though these eigenvalues/vectors have complex entries, we can still use our solution method for diagonalizable $A$! We write the solution to $\dot{\vv x} = A\vv x$ as

$$\vv x(t) = c_1e^{\lambda_1 t}\vv v_1 + c_2e^{\lambda_2 t}\vv v_2 = c_1e^{it}\begin{bmatrix} 1 \\ -i \end{bmatrix} + c_2e^{-it}\begin{bmatrix} 1 \\ i \end{bmatrix}, \quad (\text{SOL})$$

i.e., $\vv x(t)$ is a linear combination of the two solutions

$$\vv x_1(t) = e^{it}\begin{bmatrix} 1 \\ -i \end{bmatrix} \quad \text{and} \quad \vv x_2(t) = e^{-it}\begin{bmatrix} 1 \\ i \end{bmatrix}, \quad (\text{BASE})$$

and then solve for $c_1$ and $c_2$ to ensure compatibility with the initial condition $\vv x(0)$. In general, $c_1$ and $c_2$ will also be complex numbers, and $\vv x(t)$ will take complex values. This is mathematically correct, and indeed one can study dynamical systems evolving over complex numbers. However, we will see that with a little massaging, we can rewrite this solution in a more useful form!
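To make the procedure concrete, here is a minimal sketch (our own addition, not from the original notes) that solves for $c_1, c_2$ from a real initial condition and evaluates (SOL) numerically. Note that even though $c_1$ and $c_2$ come out complex, the resulting trajectory has (numerically) zero imaginary part, which hints at the rewriting we do next.

import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lam = np.array([1j, -1j])                  # eigenvalues
V = np.array([[1.0, 1.0],
              [-1j, 1j]])                  # eigenvectors v1, v2 as columns
x0 = np.array([2.0, 1.0])                  # an example real initial condition (our choice)

c = np.linalg.solve(V, x0)                 # c1, c2 from V c = x(0); complex in general
print(c)

t = np.linspace(0, 10, 200)
# x(t) = c1 e^{i t} v1 + c2 e^{-i t} v2, evaluated at all time points at once
x_t = (V * c) @ np.exp(np.outer(lam, t))

print(np.max(np.abs(x_t.imag)))            # essentially zero: the trajectory is real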

1.1 Rewriting the Solution to a Linear ODE

In this class, and in most engineering applications, we are interested in real solutions to $\dot{\vv x} = A\vv x$. If we want real solutions, it might make sense to try to find different "base" solutions than $\vv x_1(t)$ and $\vv x_2(t)$ that still span all possible solutions to $\dot{\vv x} = A\vv x$. Our key tool for accomplishing this is Euler's formula:

$$e^{i\theta} = \cos\theta + i\sin\theta.$$
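(As a quick sanity check, added here and not part of the original notes, NumPy's complex exponential agrees with the right-hand side of Euler's formula.)

import numpy as np

theta = np.linspace(0, 2 * np.pi, 50)
lhs = np.exp(1j * theta)                    # e^{i theta}
rhs = np.cos(theta) + 1j * np.sin(theta)    # cos(theta) + i sin(theta)
print(np.allclose(lhs, rhs))                # True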

We apply Euler’s formula to (BASE), and obtain (after simplifying):

$$\begin{align*}
\vv x_1(t) &= e^{it}\begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} \cos t \\ \sin t \end{bmatrix} + i\begin{bmatrix} \sin t \\ -\cos t \end{bmatrix}, \\
\vv x_2(t) &= e^{-it}\begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} \cos t \\ \sin t \end{bmatrix} - i\begin{bmatrix} \sin t \\ -\cos t \end{bmatrix}.
\end{align*}$$

We've made some progress, in that $\vv x_1(t)$ and $\vv x_2(t)$ are now in the "standard" complex number form $a + ib$, and that $\vv x_1(t) = \overline{\vv x_2}(t)$, i.e., $\vv x_1(t)$ and $\vv x_2(t)$ are complex conjugates. We use this observation strategically to define two new "base" solutions:

$$\begin{align*}
\hat{\vv x}_1(t) &= \frac{1}{2}(\vv x_1(t) + \vv x_2(t)) = \frac{1}{2}(\vv x_1(t) + \overline{\vv x_1}(t)) \\
&= \begin{bmatrix} \cos t \\ \sin t \end{bmatrix} \quad (= \text{Re}\{\vv x_1(t)\})
\end{align*}$$

$$\begin{align*}
\hat{\vv x}_2(t) &= \frac{1}{2i}(\vv x_1(t) - \vv x_2(t)) = \frac{1}{2i}(\vv x_1(t) - \overline{\vv x_1}(t)) \\
&= \begin{bmatrix} \sin t \\ -\cos t \end{bmatrix} \quad (= \text{Im}\{\vv x_1(t)\})
\end{align*}$$
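Before proceeding, here is an optional symbolic sanity check (our own addition, using SymPy) that these two candidates indeed solve $\dot{\vv x} = A\vv x$:

import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, -1],
               [1,  0]])
x1_hat = sp.Matrix([sp.cos(t), sp.sin(t)])
x2_hat = sp.Matrix([sp.sin(t), -sp.cos(t)])

# Both residuals are the zero vector, so x1_hat and x2_hat solve x' = A x
print(sp.simplify(x1_hat.diff(t) - A * x1_hat))
print(sp.simplify(x2_hat.diff(t) - A * x2_hat))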

We note that since $\hat{\vv x}_1(t)$ and $\hat{\vv x}_2(t)$ are linear combinations of $\vv x_1(t)$ and $\vv x_2(t)$, they are valid solutions to $\dot{\vv x} = A\vv x$. Furthermore, since $\hat{\vv x}_1(t)$ and $\hat{\vv x}_2(t)$ are linearly independent (i.e., $c_1\hat{\vv x}_1(t) + c_2\hat{\vv x}_2(t) = 0$ for all $t$ if and only if $c_1 = c_2 = 0$), they form a basis for the solution set to $\dot{\vv x} = A \vv x$. Therefore, we can rewrite (SOL) as

$$\vv x(t) = c_1\begin{bmatrix} \cos t \\ \sin t \end{bmatrix} + c_2\begin{bmatrix} \sin t \\ -\cos t \end{bmatrix}$$

and then solve for $c_1$ and $c_2$ using $\vv x(0)$. If $\vv x(0) \in \mathbb{R}^2$, i.e., if the initial condition $\vv x(0)$ is real, then $c_1$ and $c_2$ will be too. For example, suppose $\vv x(0) = \begin{bmatrix} a \\ b \end{bmatrix}$, with $a, b \in \mathbb{R}$. Then:

$$\vv x(0) = c_1\begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2\begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} c_1 \\ -c_2 \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix} \Rightarrow c_1 = a, \ c_2 = -b,$$

and

$$\vv x(t) = \begin{bmatrix} a\cos t - b\sin t \\ a\sin t + b\cos t \end{bmatrix} = \begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = R(t)\, \vv x(0),$$

i.e., the solution $\vv x(t)$ corresponds to the initial condition $\vv x(0)$ being rotated in a counterclockwise direction at a frequency of 1 rad/s.
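A short sketch (our addition, not from the original notes) that evaluates the rotation-matrix solution $R(t)\vv x(0)$ and checks that the trajectory stays on a circle of radius $\|\vv x(0)\|$, as a pure rotation should:

import numpy as np

def R(t):
    # Counterclockwise rotation by angle t
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

x0 = np.array([1.0, 0.5])                   # an example initial condition (our choice)
ts = np.linspace(0, 2 * np.pi, 100)
traj = np.array([R(t) @ x0 for t in ts])    # x(t) = R(t) x(0) at each time point

# Rotations preserve length, so ||x(t)|| = ||x(0)|| for all t
print(np.allclose(np.linalg.norm(traj, axis=1), np.linalg.norm(x0)))   # True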

Python break!

In the code below, we illustrate how to numerically integrate an ordinary differential equation using SciPy's solve_ivp function in Python. We then compare the result with the manual solution we obtained in Example 1.

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

A = np.array([[1, 2, 0],
              [0, 1, -2],
              [2, 2, -1]])

def system(t, x):
    return A @ x
    
x0 = np.array([2, -1, -2])

# Define time points for the solution
t_span = (0, 4)  # Time from 0 to 4
t_eval = np.linspace(t_span[0], t_span[1], 100)  # Time points where the solution is evaluated

# Solve the ODE system
sol = solve_ivp(system, t_span, x0, t_eval=t_eval)

# Extract the solution
times = sol.t
solutions = sol.y

# Plot the solutions
plt.figure(figsize=(10, 5))
plt.plot(times, solutions[0, :], label='x1(t)')
plt.plot(times, solutions[1, :], label='x2(t)')
plt.plot(times, solutions[2, :], label='x3(t)')
plt.xlabel('Time t')
plt.ylabel('Solution x(t)')
plt.title('Solutions of the ODE using SciPy integration')
plt.legend()
plt.grid()
plt.show()

## Manual solution from example 1

solutions_manual = np.vstack((2*np.exp(-times) + np.exp(times)*np.sin(2*times),
                              -2*np.exp(-times) + np.exp(times)*np.cos(2*times),
                              -2*np.exp(-times) + np.exp(times)*np.sin(2*times)))

# Plot the solutions
plt.figure(figsize=(10, 5))
plt.plot(times, solutions_manual[0, :], label='x1(t)')
plt.plot(times, solutions_manual[1, :], label='x2(t)')
plt.plot(times, solutions_manual[2, :], label='x3(t)')
plt.xlabel('Time t')
plt.ylabel('Solution x(t)')
plt.title('Manual Solutions of the ODE')
plt.legend()
plt.grid()
plt.show()
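To quantify how well the two solutions agree (beyond eyeballing the plots), one possible check, appended to the script above as our own addition, is:

# Compare the numerical and manual solutions; the difference should be small
# relative to solve_ivp's default integration tolerances
print(np.max(np.abs(solutions - solutions_manual)))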
