So far, we have focused on continuous time dynamical systems, for which the
time index t of the solution x(t) is continuous, i.e., t∈R. This is a natural model for many systems, such as those evolving according to the
laws of physics. However, in other instances, it is more natural to consider time in discrete steps. For example, when banks add interest to a savings account, they typically do so on a monthly or yearly basis. More specifically, suppose at period k, we have x(k) dollars in our
account, and an interest rate of r is applied. Then x(k+1)=(1+r)x(k). This defines a scalar linear iterative system, which takes the general form

x(k+1) = λx(k), x(0) = a, (1)

for some λ∈R, whose solution is x(k) = λ^k a. We therefore immediately conclude that there are three possibilities for the behavior of x(k) when x(0) = a ≠ 0 (see the short numerical sketch after this list):
Stable: If ∣λ∣<1, then ∣x(k)∣→0 as k→∞
Marginally Stable: If ∣λ∣=1, then ∣x(k)∣=∣a∣ for all k∈N, where N={0,1,2,…} denotes the non-negative integers
Unstable: If ∣λ∣>1, then ∣x(k)∣→∞ as k→∞
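As a quick numerical illustration (a minimal sketch; the values λ = 0.5, 1, 1.5 and a = 1 below are chosen arbitrarily), the following Python snippet iterates x(k+1) = λx(k) ten times for one λ from each regime; the magnitude of the final iterate decays toward 0, stays at ∣a∣, or blows up accordingly.

# Illustrative values only: one lambda from each stability regime, with x(0) = a = 1.
a = 1.0
for lam, regime in [(0.5, "stable"), (1.0, "marginally stable"), (1.5, "unstable")]:
    x = a
    for k in range(10):
        x = lam * x              # x(k+1) = lambda * x(k)
    print(f"lambda = {lam} ({regime}): |x(10)| = {abs(x):.4f}")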
Our goal is to extend this analysis to general linear iterative systems, which are formally defined below.
We will then use these tools to study a very important class of linear iterative systems called Markov Chains, which can be used for everything from internet search (Google’s search algorithm PageRank) to baseball statistics (DraftKings uses these ideas to set betting odds!).
A general linear iterative system takes the form

x(k+1) = Tx(k), x(0) = a, (3)

where T∈R^{n×n}, so that x(k) = T^k a. However, unlike the scalar setting (1), the qualitative behavior of x(k) = T^k a as k→∞ is much less obvious. Since the scalar solution x(k) = λ^k a is defined in terms of powers of λ, let's try a similar guess for the vector-valued setting: x(k) = λ^k v. Under what conditions on λ and v is x(k) = λ^k v a solution to (3)?
On one hand, we have that x(k+1) = λ^{k+1} v. On the other, we have Tx(k) = T(λ^k v) = λ^k Tv. The expressions x(k+1) = λ^{k+1} v and Tx(k) = λ^k Tv will be equal if and only if Tv = λv, i.e., if and only if (λ, v) is an eigenvalue/vector pair of T.
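To sanity-check this condition numerically (a minimal sketch; the matrix T below is chosen arbitrarily), we can verify that x(k) = λ^k v satisfies x(k+1) = Tx(k) for an eigenvalue/vector pair returned by numpy:

import numpy as np

T = np.array([[2.0, 1.0], [1.0, 2.0]])    # arbitrary example matrix
eigvals, eigvecs = np.linalg.eig(T)
lam, v = eigvals[0], eigvecs[:, 0]        # one eigenvalue/vector pair of T

k = 3
x_k = lam**k * v                          # candidate solution x(k) = lambda^k v
print(np.allclose(T @ x_k, lam**(k + 1) * v))   # True: T x(k) = lambda^k T v = lambda^{k+1} v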
Thus, for each eigenvalue/vector pair (λ_i, v_i) of T, we can construct a solution x_i(k) = λ_i^k v_i to (3). By linear superposition, we can use this observation to characterize all solutions for complete matrices: if T is complete, so that its eigenvectors v_1, …, v_n form a basis, then every solution of (3) takes the form x(k) = c_1 λ_1^k v_1 + ⋯ + c_n λ_n^k v_n, where the coefficients c_i are determined by the initial condition a = c_1 v_1 + ⋯ + c_n v_n.
We can extend this theorem to incomplete matrices using Jordan Blocks, but
the idea remains the same.
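For instance (a minimal sketch; the incomplete matrix below is chosen for illustration), the matrix T = [[0.9, 1], [0, 0.9]] has the repeated eigenvalue 0.9 but only one independent eigenvector, so its eigenvectors do not form a basis. We can still compute the iterates x(k) = T^k a directly from matrix powers:

import numpy as np

# Incomplete (defective) matrix: repeated eigenvalue 0.9, only one eigenvector.
T = np.array([[0.9, 1.0], [0.0, 0.9]])
a = np.array([1.0, 1.0])

for k in [1, 5, 20, 50]:
    x_k = np.linalg.matrix_power(T, k) @ a   # x(k) = T^k a
    print(k, x_k)
# Because |0.9| < 1, the iterates still decay to 0, even though the Jordan
# block contributes a k * 0.9**(k-1) term that grows before the decay sets in.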
In the Python code below, we solve several linear iterative systems by computing the eigenvalues and eigenvectors of the coefficient matrix. We plot the iterates as in Figure 1, with the initial conditions sampled from the unit circle, and observe the different behaviors produced by each coefficient matrix.
import numpy as np
import matplotlib.pyplot as plt
def plot_iterates(T):
    # Eigenvalue/eigenvector pairs of the coefficient matrix
    eigvals, eigvecs = np.linalg.eig(T)
    print("Eigenvalues: ", eigvals)
    # Initial conditions on the unit circle
    theta = np.linspace(0, 2*np.pi, 100)
    init = np.vstack((np.cos(theta).reshape((1, -1)), np.sin(theta).reshape((1, -1))))
    # Solve V c = x(0) for the coefficient vector c of each initial condition
    # (assumes T is complete, so that eigvecs is invertible)
    c = np.linalg.solve(eigvecs, init)
    x_all = []
    num_iters = 5
    plt.figure(figsize=(8, 6))
    labels = [f'Iteration {i}' for i in range(num_iters)]
    for i in range(num_iters):
        # x(i) = sum_j c_j * lambda_j^i * v_j, computed for all initial conditions at once
        x_t = (eigvals**i * eigvecs) @ c
        x_all.append(x_t)
        plt.scatter(x_t[0, :], x_t[1, :], label=labels[i])
    plt.legend()
    plt.gca().set_aspect('equal')
    plt.show()

# Notice the difference between the plots and the corresponding eigenvalues
T = np.array([[3, .1], [.1, 3]])
plot_iterates(T)

T1 = np.array([[.6, .2], [.3, .6]])
plot_iterates(T1)

T2 = np.array([[.2, .9], [.8, .3]])
plot_iterates(T2)
An alternative and equally efficient approach to solving (3) in the case of complete matrices is based on the diagonalization of T. We start with the following observation (which we also saw when computing the matrix exponential). Let V = [v_1 ⋯ v_n] be an eigenbasis for T, and Λ = diag(λ_1, …, λ_n) the diagonal matrix of the eigenvalues of T. Then TV = VΛ, i.e., T = VΛV^{-1}, and hence T^k = (VΛV^{-1})(VΛV^{-1})⋯(VΛV^{-1}) = VΛ^k V^{-1}, where Λ^k = diag(λ_1^k, …, λ_n^k). In particular, the solution of (3) is x(k) = T^k a = VΛ^k V^{-1} a.
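As a quick numerical check of this identity (a minimal sketch, reusing the complete matrix T1 = [[.6, .2], [.3, .6]] from above), we can compare VΛ^k V^{-1} against T^k computed directly:

import numpy as np

T1 = np.array([[.6, .2], [.3, .6]])        # complete: two distinct eigenvalues
eigvals, V = np.linalg.eig(T1)
Lam = np.diag(eigvals)                     # Lambda = diag(lambda_1, lambda_2)

k = 7
lhs = np.linalg.matrix_power(T1, k)                            # T^k
rhs = V @ np.linalg.matrix_power(Lam, k) @ np.linalg.inv(V)    # V Lambda^k V^{-1}
print(np.allclose(lhs, rhs))               # True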
In the Python script below, we diagonalize the coefficient matrix and plot the solution at several iterates. This method eliminates the need to solve for the coefficient vector c as in Example 1: instead, we compute the iterates directly from the initial conditions and the diagonalization, as in Example 3.
import numpy as np
import matplotlib.pyplot as plt

# Given a square matrix A, returns a tuple of matrices (P, D) such that A = P D P^{-1}
def diagonalize(A):
    evals, evecs = np.linalg.eig(A)
    print("Eigenvalues: ", evals)
    return evecs, np.diag(evals)

def plot_iterates(P, D):
    # Initial conditions on the unit circle
    theta = np.linspace(0, 2*np.pi, 100)
    init = np.vstack((np.cos(theta).reshape((1, -1)), np.sin(theta).reshape((1, -1))))
    # Plot the initial conditions
    x_all = [init]
    num_iters = 5
    labels = [f'Iteration {i}' for i in range(num_iters + 1)]
    plt.figure(figsize=(8, 6))
    plt.scatter(init[0, :], init[1, :], label=labels[0])
    # Iterates: x(i) = T^i x(0) = P D^i P^{-1} x(0)
    for i in range(1, num_iters + 1):
        Ti = P @ D**i @ np.linalg.inv(P)   # elementwise D**i equals the matrix power for diagonal D
        x_i = Ti @ init
        x_all.append(x_i)
        plt.scatter(x_i[0, :], x_i[1, :], label=labels[i])
    plt.legend()
    plt.gca().set_aspect('equal')
    plt.show()

T2 = np.array([[-0.6, 0.01], [0.8, -0.5]])
P, D = diagonalize(T2)
plot_iterates(P, D)