1.7 Matrix Inverses: How do I divide by a matrix?
Dept. of Electrical and Systems Engineering
University of Pennsylvania
1 Reading

Material related to this page, as well as additional exercises, can be found in ALA Ch. 1.5 and LAA Ch. 2.2. These notes are mostly based on ALA Ch. 1.5.
2 Learning Objectives

By the end of this page, you should know:
- what the inverse of a matrix is
- how to compute the inverse of a 2 × 2 matrix
- important properties of the matrix inverse: existence, inverse of a product
- how to use Gauss-Jordan elimination to compute inverses

3 Basic Definition

The inverse of a matrix is analogous to the reciprocal $a^{-1} = \frac{1}{a}$ of a nonzero scalar $a \neq 0$. We already encountered the inverses of matrices corresponding to elementary row operations. In this section, we will study inverses of general square matrices. We begin with the formal definition.
Definition 1 (Matrix inverse)
Let $A$ be a square matrix of size $n \times n$. An $n \times n$ matrix $X$ is called the inverse of $A$ if it satisfies

$$XA = I = AX,$$

where $I = I_n$ is the $n \times n$ identity matrix. The inverse of $A$ is commonly denoted by $A^{-1}$.
The inverse of a matrix is typically more useful in theory than it is in practice. In fact, a commandment of numerical linear algebra is “thou shalt not invert a matrix,” because doing so tends to cause numerical issues like the ones we described in the last section. Because of this, we will not spend too much time on computing matrix inverses: it is actually very rare that you will ever need to do this outside of a linear algebra class!
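For example, if your end goal is to solve $A\mathbf{x} = \mathbf{b}$, prefer a linear solver over forming $A^{-1}$ explicitly. A minimal sketch (the matrix and right-hand side here are just illustrative):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# Preferred: solve Ax = b directly, without ever forming A^{-1}.
x_solve = np.linalg.solve(A, b)

# Discouraged in numerical code: form the inverse, then multiply.
x_inv = np.linalg.inv(A) @ b

print(x_solve, x_inv)  # same answer here, but solve is faster and more stable
```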
4 Inverses of 2 × 2 Matrices

We want to find the inverse of the matrix $A$, which we denote by $X$:
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \quad X = \begin{bmatrix} x & y \\ z & w \end{bmatrix} \quad \Rightarrow \quad AX = I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

The above matrix equation produces a set of four linear equations in the unknowns $(x, y, z, w)$:
$$AX = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} x & y \\ z & w \end{bmatrix} = \begin{bmatrix} ax + bz & ay + bw \\ cx + dz & cy + dw \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I,$$

which holds if and only if $(x, y, z, w)$ satisfy the linear system:
$$\begin{align*} ax + bz &= 1 \\ ay + bw &= 0 \\ cx + dz &= 0 \\ cy + dw &= 1. \end{align*}$$

Solving by Gaussian Elimination, we find
$$x = \frac{d}{ad-bc}, \quad y = \frac{-b}{ad-bc}, \quad z = \frac{-c}{ad-bc}, \quad w = \frac{a}{ad-bc} \quad \Rightarrow \quad X = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix},$$

provided that the common denominator $ad - bc \neq 0$. You can verify that $XA = I$ also holds, which lets us conclude that $X = A^{-1}$ is the inverse of $A$.
4.1 Python Break!

NumPy has a built-in function for computing matrix inverses, np.linalg.inv. Let's compare the output of that function to a matrix inverse computed with our formula (5). Notice that because of numerical errors, $AA^{-1} \approx I$, but the approximation error is very small (on the order of 1e-16).
```python
import numpy as np

def my_inv(A):
    # 2x2 inverse via the formula A^{-1} = 1/(ad - bc) [[d, -b], [-c, a]]
    a = A[0, 0]
    b = A[0, 1]
    c = A[1, 0]
    d = A[1, 1]
    denominator = a * d - b * c  # must be nonzero for A to be invertible
    X = 1 / denominator * np.array([[d, -b], [-c, a]])
    return X

A = np.array([[1, 2], [-3, 5]])
Ainv = my_inv(A)
Ainv_np = np.linalg.inv(A)
print(f'A=\n {A}, \n Ainv =\n {Ainv}, \n Ainv_np = \n {Ainv_np}')
print(f'AA^{-1} = \n{A @ Ainv}')
```
```
A=
 [[ 1  2]
 [-3  5]],
 Ainv =
 [[ 0.45454545 -0.18181818]
 [ 0.27272727  0.09090909]],
 Ainv_np =
 [[ 0.45454545 -0.18181818]
 [ 0.27272727  0.09090909]]
AA^-1 =
[[ 1.00000000e+00  0.00000000e+00]
 [-2.22044605e-16  1.00000000e+00]]
```
5 Some Useful Properties

One way to understand the matrix inverse is to take a dynamic view of matrix multiplication. If we think of the matrix $A$ as defining a function $f(\mathbf{x})$ that maps $\mathbf{x}$ to a new vector $f(\mathbf{x}) = A\mathbf{x}$, then we can intuitively think of the matrix inverse as “undoing” this action, just as we did for elementary operation matrices and their inverses.
For example, the elementary operation of adding twice the first row to the third row is given by
$$E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix},$$

while the inverse operation is given by
$$L = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix},$$

and you can verify that $L = E^{-1}$. You can also verify similarly that for permutation matrices with exactly one interchange,
$$P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = P^{-1},$$

i.e., $P$ is its own inverse! Our observation above gives us some easy intuition for understanding why this is true, but you can also check this directly by computing $PP = I$ (or, for the $2 \times 2$ analogue, by using the formula (5)).
A fundamental fact, which we will prove later, is that $A\mathbf{x} = \mathbf{b}$ has a unique solution $\mathbf{x} = A^{-1}\mathbf{b}$ if and only if $A^{-1}$ exists. For intuition, consider the scalar analogy: the equation $ax = b$ has a unique solution $x = a^{-1}b$ if and only if $a \neq 0$.
Some useful properties of matrix inverses:

- The inverse of a square matrix, if it exists, is unique.
- If $A$ is invertible, so is $A^{-1}$, and $\left(A^{-1}\right)^{-1} = A$.
- If $A$ and $B$ are invertible matrices of the same size, then their product $AB$ is also invertible, and

$$(AB)^{-1} = B^{-1}A^{-1} \quad \textbf{(order is reversed!)}$$
Again, we can intuit why this is true by thinking about the transformations $\mathbf{x} \mapsto B\mathbf{x}$ and $\mathbf{y} \mapsto A\mathbf{y}$: to undo the composite transformation $\mathbf{x} \mapsto AB\mathbf{x}$, obtained by first computing $\mathbf{y} = B\mathbf{x}$ and then $\mathbf{z} = A\mathbf{y} = AB\mathbf{x}$, we must apply the inverses in the reverse order:

$$\mathbf{x} \xrightarrow{B} B\mathbf{x} \xrightarrow{A} AB\mathbf{x} \xrightarrow{A^{-1}} A^{-1}AB\mathbf{x} = B\mathbf{x} \xrightarrow{B^{-1}} B^{-1}B\mathbf{x} = \mathbf{x}.$$
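The sketch below traces this chain on a concrete vector; the invertible matrices $A$ and $B$ used here are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # invertible: det = 1
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])  # invertible: det = 1
x = np.array([3.0, -1.0])

z = A @ (B @ x)                 # x -> Bx -> ABx
y = np.linalg.inv(A) @ z        # undo A first: back to Bx
x_back = np.linalg.inv(B) @ y   # then undo B: back to x

print(np.allclose(x_back, x))   # True: B^{-1}A^{-1} undoes AB
```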
Exercise 1 (Matrix inverses)

For each of the following invertible matrices, find its inverse.
a. $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$

b. $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -3 \end{bmatrix}$

c. $I_n$ (the identity matrix in $n$ dimensions)

d. $\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}$
Hint:

a. Use the formula for $2 \times 2$ matrices!

b. The inverse of this matrix will be another diagonal matrix (zeros in all off-diagonal entries).

c. Can you find any (square) matrix $X$ such that $I_n X = I_n$?
Solution:

a. Using the formula for inverses of $2 \times 2$ matrices,

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{-1} = \frac{1}{1 \cdot 4 - 2 \cdot 3}\begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}.$$

b. We can confirm that

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & -\frac{1}{3} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & -\frac{1}{3} \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -3 \end{bmatrix},$$

which means that

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -3 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & -\frac{1}{3} \end{bmatrix}.$$

c. Remember that $I_n \times I_n = I_n$, which means $I_n^{-1} = I_n$.
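If you'd like to check your answers numerically (including part d, which is left for you to work out by hand), here is a quick sketch using np.linalg.inv:

```python
import numpy as np

A_a = np.array([[1, 2], [3, 4]])
A_b = np.array([[1, 0, 0], [0, 2, 0], [0, 0, -3]])
A_d = np.array([[1, 2, 0], [0, 1, 2], [0, 0, 1]])

print(np.linalg.inv(A_a))  # should match [[-2, 1], [1.5, -0.5]]
print(np.linalg.inv(A_b))  # diagonal with entries 1, 1/2, -1/3
print(np.linalg.inv(A_d))  # compare against your part d answer
```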
```python
# Inverse of the product: check that (AB)^{-1} = B^{-1} A^{-1}
A = np.array([[1, 2, 3],
              [4, -5, 6],
              [7, 8, 9]])
B = np.array([[1, 2, 0],
              [1, 3, 0],
              [1, -3, 1]])

A_inv = np.linalg.inv(A)
B_inv = np.linalg.inv(B)
AB_inv = np.linalg.inv(A @ B)
print("\nInverse of product: \n", AB_inv, "\nProduct of inverses:\n", B_inv @ A_inv)
```
```
Inverse of product:
 [[-2.425       0.35        0.575     ]
 [ 0.825      -0.15       -0.175     ]
 [ 5.45833333 -0.75       -1.20833333]]
Product of inverses:
 [[-2.425       0.35        0.575     ]
 [ 0.825      -0.15       -0.175     ]
 [ 5.45833333 -0.75       -1.20833333]]
```
6 Gauss-Jordan Elimination

Gauss-Jordan Elimination (GJE) is the principal algorithm for computing the inverse of a nonsingular matrix.
For some matrices $A$, we can only find a matrix $X$ that satisfies the right inverse condition $AX = I$ but not the left inverse condition $XA = I$. Such an $X$ is called a right inverse. Similarly, a matrix $X$ that only satisfies the left inverse condition $XA = I$ but not the right inverse condition is called a left inverse. For a non-square matrix, the same $X$ cannot simultaneously satisfy both $AX = I$ and $XA = I$ (check dimensions). Hence, we emphasize that for $X$ to be an inverse of $A$, both the left and right inverse conditions must be satisfied, even though, for square matrices, checking one of the two turns out to be sufficient.
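As an aside, right inverses are easy to construct numerically for wide matrices of full row rank: the standard formula $X = A^\top(AA^\top)^{-1}$ (a well-known construction, not derived in these notes) satisfies $AX = I$ but not $XA = I$. A sketch with an illustrative $2 \times 3$ matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])        # 2 x 3, full row rank

X = A.T @ np.linalg.inv(A @ A.T)       # 3 x 2 right inverse

print(np.allclose(A @ X, np.eye(2)))   # True:  AX = I_2
print(np.allclose(X @ A, np.eye(3)))   # False: XA != I_3 (X @ A has rank 2)
```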
Here is some good news: we already have all of the tools needed to solve $AX = I$ for the unknown matrix $X$. Our starting point is to recognize that the matrix equation $AX = I$ is really $n$ linear systems of the form $A\mathbf{x}_i = \mathbf{e}_i$ in parallel, where the $\mathbf{x}_i$ and $\mathbf{e}_i$ are the columns of the matrix $X$ and the identity matrix $I$, respectively.
We define the $n \times 1$ unit vectors $\mathbf{e}_i$:

$$\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \cdots, \quad \mathbf{e}_n = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

as the vectors with exactly one entry of 1, in the $i$th position, and zeros elsewhere. The vectors $\mathbf{e}_i$ are the columns of the identity matrix $I_n$:

$$I_n = \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_n \end{bmatrix}.$$

Hence, the right inverse equation can be written as
$$AX = I \iff A \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix} = \begin{bmatrix} A\mathbf{x}_1 & A\mathbf{x}_2 & \cdots & A\mathbf{x}_n \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_n \end{bmatrix} \iff A\mathbf{x}_1 = \mathbf{e}_1,\ A\mathbf{x}_2 = \mathbf{e}_2,\ \cdots,\ A\mathbf{x}_n = \mathbf{e}_n.$$

The above defines a set of $n$ systems of linear equations. A key feature here is that all $n$ linear systems have the same coefficient matrix $A$. We can take advantage of this to build one large augmented matrix $M$ that stacks all $n$ right-hand sides to the right of the coefficient matrix $A$:
$$M = \left[ \begin{array}{c|cccc} A & \mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_n \end{array}\right] = \left[ \begin{array}{c|c} A & I \end{array}\right].$$

We can then apply our row operations (scaling and adding, swapping) to (22) to reduce $A$ to upper triangular form
$$M = \left[ \begin{array}{c|c} A & I \end{array}\right] \to N = \left[ \begin{array}{c|c} U & C \end{array}\right],$$

which is equivalent to reducing the original $n$ linear systems to

$$U\mathbf{x}_1 = \mathbf{c}_1,\ U\mathbf{x}_2 = \mathbf{c}_2,\ \cdots,\ U\mathbf{x}_n = \mathbf{c}_n,$$

which we could then solve via back substitution for the columns $\mathbf{x}_i$ of the matrix inverse $X$.
For example, consider the following matrix $A$, the corresponding augmented matrix $M$, and its upper triangular form:
$$A = \begin{bmatrix} 0 & 2 & 1 \\ 2 & 6 & 1 \\ 1 & 1 & 4 \end{bmatrix}, \quad M = \left[ \begin{array}{ccc|ccc} 0 & 2 & 1 & 1 & 0 & 0 \\ 2 & 6 & 1 & 0 & 1 & 0 \\ 1 & 1 & 4 & 0 & 0 & 1 \end{array}\right] \to N = \left[ \begin{array}{ccc|ccc} 2 & 6 & 1 & 0 & 1 & 0 \\ 0 & 2 & 1 & 1 & 0 & 0 \\ 0 & 0 & \frac{9}{2} & 1 & -\frac{1}{2} & 1 \end{array}\right].$$

Although we could stop here, it's worth highlighting that a more common version of GJE continues to apply row operations to fully reduce the augmented matrix to the form $\left[ \begin{array}{c|c} I & X \end{array}\right]$, so that $X$ is the inverse of $A$.
We first note that both $U$ and $I$ have zeros below the diagonal, so this is a good start! However, in our current form $\left[ \begin{array}{c|c} U & C \end{array}\right]$, the diagonal pivots of $U$ are not 1. We need another row operation!
The scaling operation on $N$ in (25) reduces the augmented matrix to
$$N = \left[ \begin{array}{ccc|ccc} 2 & 6 & 1 & 0 & 1 & 0 \\ 0 & 2 & 1 & 1 & 0 & 0 \\ 0 & 0 & \frac{9}{2} & 1 & -\frac{1}{2} & 1 \end{array}\right] \to \left[ \begin{array}{c|c} V & B \end{array}\right] = \left[ \begin{array}{ccc|ccc} 1 & 3 & \frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 0 & 1 & \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 & \frac{2}{9} & -\frac{1}{9} & \frac{2}{9} \end{array}\right],$$

where we divide each row by its corresponding pivot. Now, to turn $V$ into the identity, we use the same idea as in Gaussian Elimination, but zero out entries above the pivot. In this case, we start with the $(3,3)$ pivot to zero out the $(2,3)$ and $(1,3)$ entries, and then use the $(2,2)$ pivot to zero out the $(1,2)$ entry.
$$\left[ \begin{array}{c|c} V & B \end{array}\right] = \left[ \begin{array}{ccc|ccc} 1 & 3 & \frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 0 & 1 & \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 & \frac{2}{9} & -\frac{1}{9} & \frac{2}{9} \end{array}\right] \to \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & -\frac{23}{18} & \frac{7}{18} & \frac{2}{9} \\ 0 & 1 & 0 & \frac{7}{18} & \frac{1}{18} & -\frac{1}{9} \\ 0 & 0 & 1 & \frac{2}{9} & -\frac{1}{9} & \frac{2}{9} \end{array}\right].$$

Finally, the right-hand matrix in (27) is the inverse of $A$:
$$A^{-1} = \left[ \begin{array}{ccc} -\frac{23}{18} & \frac{7}{18} & \frac{2}{9} \\ \frac{7}{18} & \frac{1}{18} & -\frac{1}{9} \\ \frac{2}{9} & -\frac{1}{9} & \frac{2}{9} \end{array}\right].$$
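To tie the whole procedure together, here is a minimal Gauss-Jordan sketch (a teaching implementation with partial pivoting and no other safeguards, not production code) applied to the example above:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Reduce the augmented matrix [A | I] to [I | A^{-1}] and return A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for j in range(n):
        # Swap in the row (at or below j) with the largest pivot candidate.
        p = j + np.argmax(np.abs(M[j:, j]))
        M[[j, p]] = M[[p, j]]
        # Scale the pivot row so the pivot equals 1.
        M[j] = M[j] / M[j, j]
        # Zero out every other entry in column j, above and below the pivot.
        for i in range(n):
            if i != j:
                M[i] = M[i] - M[i, j] * M[j]
    return M[:, n:]   # the right half is now A^{-1}

A = np.array([[0, 2, 1],
              [2, 6, 1],
              [1, 1, 4]])
Ainv = gauss_jordan_inverse(A)
print(Ainv)                               # matches the fractions above
print(np.allclose(Ainv @ A, np.eye(3)))   # True
```

Note that this version zeroes out entries above and below each pivot in a single pass, rather than first reducing to upper triangular form; the result is the same.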
Scaling Elementary Matrix

What is the elementary matrix corresponding to the scaling operation? Starting with the identity matrix as before, we see that scaling the $i$th row by a scalar $a \neq 0$ only affects the $(i,i)$ entry, which becomes $a$ instead of 1. For example, the elementary matrix that scales the 2nd row of a 3-row matrix by 4 is
$$E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
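A one-line check that this $E$ does what we claim (the test matrix below is arbitrary):

```python
import numpy as np

E = np.array([[1, 0, 0],
              [0, 4, 0],
              [0, 0, 1]])
M = np.arange(9).reshape(3, 3)   # arbitrary 3 x 3 test matrix
print(E @ M)                     # second row scaled by 4; others unchanged
```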