
8.1 Linear Iterative Systems

discrete time systems

Dept. of Electrical and Systems Engineering
University of Pennsylvania


Lecture notes

1 Reading

Material related to this page, as well as additional exercises, can be found in ALA 9.1.

2 Learning Objectives

By the end of this page, you should know:

  • the stability behavior of scalar discrete time systems
  • the definition of a general linear iterative system
  • the general solution to linear iterative systems using eigenvectors
  • how to compute the general solution using diagonalization and iteration

3 Discrete Time Systems

So far, we have focused on continuous time dynamical systems, for which the time index $t$ of the solution $\vv x(t)$ is continuous, i.e., $t \in \mathbb{R}$. This is a natural model for many systems, such as those evolving according to the laws of physics. However, in other instances, it is more natural to consider time in discrete steps. For example, when banks add interest to a savings account, they typically do so on a monthly or yearly basis. More specifically, suppose at period $k$ we have $x(k)$ dollars in our account, and an interest rate of $r$ is applied. Then $x(k+1) = (1+r)x(k)$. This defines a scalar linear iterative system, which takes the general form

$$x(k+1) = \lambda x(k), \quad x(0) = a. \tag{1}$$

If we roll out the dynamics (1), we easily compute:

$$\begin{align*} x(0) &= a, \\ x(1) &= \lambda a, \\ x(2) &= \lambda^2 a, \\ &\ldots \\ x(k) &= \lambda^k a \quad \text{for any positive integer } k. \end{align*}$$

We therefore immediately conclude that there are three possibilities for $x(0) = a \neq 0$:

  • Stable: If $|\lambda| < 1$, then $|x(k)| \to 0$ as $k \to \infty$
  • Marginally Stable: If $|\lambda| = 1$, then $|x(k)| = |a|$ for all $k \in \mathbb{N}$, where $\mathbb{N} = \{0, 1, 2, \ldots\}$ denotes the non-negative integers
  • Unstable: If $|\lambda| > 1$, then $|x(k)| \to \infty$ as $k \to \infty$
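These three regimes are easy to check numerically. Below is a minimal sketch (the rates and initial balance are illustrative values chosen here, not taken from the notes) that rolls out $x(k+1) = \lambda x(k)$ for one $\lambda$ from each regime:

import numpy as np

def iterate_scalar(lam, a, num_steps):
    # Roll out x(k+1) = lam * x(k) from x(0) = a and return [x(0), ..., x(num_steps)]
    x = [a]
    for _ in range(num_steps):
        x.append(lam * x[-1])
    return np.array(x)

a = 100.0                            # e.g., an initial balance of $100 (illustrative)
print(iterate_scalar(1.05, a, 5))    # unstable: |lambda| > 1 (5% interest), grows without bound
print(iterate_scalar(1.0, a, 5))     # marginally stable: |lambda| = 1, stays at 100
print(iterate_scalar(0.5, a, 5))     # stable: |lambda| < 1, decays toward 0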

Our goal is to extend this analysis to general linear iterative systems, which take the form

$$\vv x(k+1) = T \vv x(k), \quad \vv x(0) = \vv a, \tag{3}$$

where $T \in \mathbb{R}^{n \times n}$ is the coefficient matrix and $\vv x(k) \in \mathbb{R}^n$ is the state at step $k$.

We will then use these tools to study a very important class of linear iterative systems called Markov Chains, which can be used for everything from internet search (Google’s search algorithm PageRank) to baseball statistics (DraftKings uses these ideas to set betting odds!).

4 Powers of Matrices

Rolling out the dynamics (3), we again arrive at a closed-form solution:

$$\begin{align*} \vv x(0) &= \vv a, \\ \vv x(1) &= T \vv a, \\ \vv x(2) &= T^2 \vv a, \\ &\ldots \\ \vv x(k) &= T^k \vv a \quad \text{for all } k \in \mathbb{N}. \end{align*}$$

However, unlike the scalar setting (1), the qualitative behavior of $\vv x(k) = T^k \vv a$ as $k \to \infty$ is much less obvious. Since the scalar solution $x(k) = \lambda^k a$ is defined in terms of powers of $\lambda$, let's try a similar guess for the vector-valued setting: $\vv x(k) = \lambda^k \vv v$. Under what conditions on $\lambda$ and $\vv v$ is $\vv x(k) = \lambda^k \vv v$ a solution to (3)? On one hand, we have that $\vv x(k+1) = \lambda^{k+1} \vv v$. On the other, we have

$$T\vv x(k) = T(\lambda^k \vv v) = \lambda^k T\vv v.$$

The expressions $\vv x(k+1) = \lambda^{k+1} \vv v$ and $T\vv x(k) = \lambda^k T\vv v$ will be equal if and only if $T \vv v = \lambda \vv v$, i.e., if and only if $(\lambda, \vv v)$ is an eigenvalue/eigenvector pair of $T$.

Thus, for each eigenvalue/eigenvector pair $(\lambda_i, \vv v_i)$ of $T$, we can construct a solution $\vv x_i(k) = \lambda_i^k \vv v_i$ to (3). By linear superposition, if $T$ is complete, i.e., has an eigenbasis $\vv v_1, \ldots, \vv v_n$, then every solution to (3) can be written as $\vv x(k) = c_1 \lambda_1^k \vv v_1 + \cdots + c_n \lambda_n^k \vv v_n$, with the coefficients $c_1, \ldots, c_n$ determined by the initial condition $\vv a$.

We can extend this theorem to incomplete matrices using Jordan Blocks, but the idea remains the same.
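As a concrete illustration of the complete case (the matrix below is chosen purely for illustration and does not appear in the original notes), consider

$$T = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & \tfrac{1}{4} \end{bmatrix}, \qquad \text{with eigenpairs} \quad \left(\tfrac{1}{2}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) \ \text{and} \ \left(\tfrac{1}{4}, \begin{bmatrix} 2 \\ -1 \end{bmatrix}\right).$$

Every solution of $\vv x(k+1) = T \vv x(k)$ then takes the form

$$\vv x(k) = c_1 \left(\tfrac{1}{2}\right)^k \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 \left(\tfrac{1}{4}\right)^k \begin{bmatrix} 2 \\ -1 \end{bmatrix},$$

with $c_1, c_2$ fixed by the initial condition $\vv a$. Since both eigenvalues have magnitude less than one, every solution decays to $\vv 0$, mirroring the scalar stable case.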

4.1 Python break!

In the Python code below, we solve different linear iterative systems by computing the eigenvalues and eigenvectors of the coefficient matrix. We plot the iterates as in Figure 1, starting from initial conditions sampled from the unit circle, and observe the different behaviors for each coefficient matrix.

import numpy as np
import matplotlib.pyplot as plt

def plot_iterates(T):
    # Eigendecomposition of the coefficient matrix
    eigvals, eigvecs = np.linalg.eig(T)
    print("Eigenvalues: ", eigvals)

    # Initial conditions on the unit circle
    theta = np.linspace(0, 2*np.pi, 100)
    init = np.vstack((np.cos(theta).reshape((1, -1)), np.sin(theta).reshape((1, -1))))

    # Solve V c = x(0) for the coefficient vector c of each initial condition
    c = np.linalg.solve(eigvecs, init)

    num_iters = 5
    plt.figure(figsize=(8, 6))
    labels = [f'Iteration {i}' for i in range(num_iters)]
    for i in range(num_iters):
        # x(i) = sum_j c_j * lambda_j^i * v_j, evaluated for all initial conditions at once
        x_i = (eigvals**i * eigvecs) @ c
        plt.scatter(x_i[0, :], x_i[1, :], label=labels[i])
    plt.legend()
    plt.gca().set_aspect('equal')
    plt.show()

# Notice the difference between the plots and the corresponding eigenvalues
T = np.array([[3, .1], [.1, 3]])
plot_iterates(T)
T1 = np.array([[.6, .2], [.3, .6]])
plot_iterates(T1)
T2 = np.array([[.2, .9], [.8, .3]])
plot_iterates(T2)
Eigenvalues:  [3.1 2.9]
(plot of iterates for T)
Eigenvalues:  [0.84494897 0.35505103]
(plot of iterates for T1)
Eigenvalues:  [-0.6  1.1]
(plot of iterates for T2)

5 Diagonalization and Iteration

An alternative and equally efficient approach to solving (3) in the case of complete (diagonalizable) matrices is based on the diagonalization of $T$. We start with the following observation (which we also saw when computing the matrix exponential). Let $V = \begin{bmatrix} \vv v_1 & \cdots & \vv v_n \end{bmatrix}$ be an eigenbasis for $T$, and let $\Lambda = \text{diag}(\lambda_1, \ldots, \lambda_n)$ be the diagonal matrix of the eigenvalues of $T$. Then:

$$\begin{align*} T &= V \Lambda V^{-1} \\ T^2 &= V \Lambda V^{-1} V \Lambda V^{-1} = V \Lambda^2 V^{-1} \\ T^3 &= T(T^2) = V \Lambda V^{-1} V \Lambda^2 V^{-1} = V \Lambda^3 V^{-1} \end{align*}$$

and in general, $T^k = V \Lambda^k V^{-1}$. Therefore, the solution to $\vv x(k+1) = T\vv x(k)$, with $\vv x(0) = \vv a$, is

$$\vv x(k) = T^k \vv a = V \Lambda^k V^{-1} \vv a.$$

If we define $\vv c = V^{-1}\vv a$, then we recover $\vv x(k) = V \Lambda^k \vv c = c_1 \lambda_1^k \vv v_1 + \cdots + c_n \lambda_n^k \vv v_n$, exactly the eigenvector expansion obtained above.
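As a quick sanity check (this snippet is not part of the original notes; it reuses the illustrative matrix from the example above and an arbitrary initial condition), we can confirm numerically that $T^k \vv a$, $V \Lambda^k V^{-1} \vv a$, and the eigenvector expansion all agree:

import numpy as np

T = np.array([[0.5, 0.5], [0.0, 0.25]])   # illustrative matrix from the example above
a = np.array([1.0, 2.0])                  # arbitrary initial condition
k = 10

lam, V = np.linalg.eig(T)                 # T = V @ diag(lam) @ V^{-1}
c = np.linalg.solve(V, a)                 # c = V^{-1} a

x_power = np.linalg.matrix_power(T, k) @ a                          # T^k a
x_diag = V @ np.diag(lam**k) @ np.linalg.inv(V) @ a                 # V Lambda^k V^{-1} a
x_eig = sum(c[i] * lam[i]**k * V[:, i] for i in range(len(lam)))    # sum_i c_i lambda_i^k v_i

print(np.allclose(x_power, x_diag), np.allclose(x_power, x_eig))    # expect: True True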

5.1 Python break!

In the Python script below, we diagonalize the coefficient matrix and plot the solution at different iterates. This method eliminates the need to solve for the vector $\vv c$ as in Example 1; instead, we apply the diagonalization $T^k = V \Lambda^k V^{-1}$ directly to the initial conditions, as in Example 3.

# Given a square matrix A, returns a tuple of matrices (P, D) such that A = P D P^{-1}
def diagonalize(A):
    evals, evecs = np.linalg.eig(A)
    print("Eigenvalues: ", evals)
    return evecs, np.diag(evals)

def plot_iterates(P, D):
    # Initial conditions on the unit circle
    theta = np.linspace(0, 2*np.pi, 100)
    init = np.vstack((np.cos(theta).reshape((1, -1)), np.sin(theta).reshape((1, -1))))

    num_iters = 5
    labels = [f'Iteration {i}' for i in range(num_iters + 1)]
    plt.figure(figsize=(8, 6))
    # Plot the initial conditions (iteration 0)
    plt.scatter(init[0, :], init[1, :], label=labels[0])

    # Iterates: x(i) = P D^i P^{-1} x(0). Since D is diagonal, the elementwise
    # power D**i coincides with the matrix power D^i.
    for i in range(1, num_iters + 1):
        Ti = P @ D**i @ np.linalg.inv(P)
        x_i = Ti @ init
        plt.scatter(x_i[0, :], x_i[1, :], label=labels[i])
    plt.legend()
    plt.gca().set_aspect('equal')
    plt.show()

T2 = np.array([[-0.6, 0.01], [0.8, -0.5]])
P, D = diagonalize(T2)
plot_iterates(P, D)
Eigenvalues:  [-0.65246951 -0.44753049]
(plot of iterates for T2)
