2.5 Kernel and Image

For our next trick, we'll make this vector disappear
Dept. of Electrical and Systems Engineering
University of Pennsylvania
1 Reading

Material related to this page, as well as additional exercises, can be found in ALA Ch. 2.5 and LAA 4.2.
2 Learning Objectives

By the end of this page, you should know:

- the kernel (null space) and image (column space) of a matrix
- the relationship of the null space and column space to solutions of linear systems
- how to apply the superposition principle for solving linear systems with different right-hand side vectors

3 Introduction

In applications of linear algebra, subspaces of $\mathbb{R}^n$ (and general vector spaces $V$) typically arise from either
(a) the set of all solutions to a system of linear equations of the form $A \vv x = \vv 0$, called a homogeneous linear system, or
(b) the span of certain specified vectors.

(a) is known as the null space description of a subspace, while (b) is known as the column space or image description of a subspace.
We will see that these are intimately related to systems of linear equations.
4 Null Space of a Matrix

The null space of an $m \times n$ matrix $A$, written Null$(A)$, is the set of all solutions to the homogeneous system $A \vv x = \vv 0$:

$$\textrm{Null}(A) = \{\vv x \in \mathbb{R}^n : A \vv x = \vv 0\}.$$

If we think of the function $f(\vv x) = A \vv x$ that maps $\vv x \mapsto A \vv x$, then Null$(A)$ is the subset of $\mathbb{R}^n$ that $f$ maps to $\vv 0$.
Consider the following system of homogeneous equations

\begin{align*}
x_1 - 3x_2 - 2x_3 &= 0, \\
-5x_1 + 9x_2 + x_3 &= 0,
\end{align*}

or in matrix form $A \vv x = \vv 0$, where $A = \bm 1 & -3 & -2 \\ -5 & 9 & 1 \em$.
Recall that the set of $\vv x$ satisfying $A \vv x = \vv 0$ is the solution set of this system. Our goal is to relate this solution set to the matrix $A$, which will give us a geometric interpretation of the solution of the algebraic system.
Is $\vv u = \bm 5 \\ 3 \\ -2 \em$ in Null$(A)$?

Evaluating

$$A \vv u = \bm 1 & -3 & -2 \\ -5 & 9 & 1 \em \bm 5 \\ 3 \\ -2 \em = \bm 0 \\ 0 \em \Rightarrow A \vv u = \vv 0 \Rightarrow \vv u \in \textrm{Null}(A).$$
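This membership test is easy to reproduce numerically. Below is a quick sketch using NumPy (the page itself contains no code, so this is purely illustrative): a vector is in the null space exactly when multiplying it by $A$ gives the zero vector.

```python
import numpy as np

# The matrix A and candidate vector u from the example above.
A = np.array([[1, -3, -2],
              [-5, 9, 1]])
u = np.array([5, 3, -2])

# u is in Null(A) exactly when A @ u is the zero vector.
print(A @ u)  # [0 0]
```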
Now, you may be wondering why we are calling Null$(A)$ the null space. That's because Null$(A)$ is a subspace!

We test it as follows. Suppose that $\vv u, \vv v \in$ Null$(A)$ and $c, d \in \mathbb{R}$. We need to check if $c \vv u + d \vv v \in$ Null$(A)$, i.e., is it true that $A(c \vv u + d \vv v) = \vv 0$?
\begin{align*}
A(c \vv u + d \vv v) &= c A \vv u + d A \vv v, \ (\textrm{linearity of matrix multiplication}) \\
&= c \vv 0 + d \vv 0, \ (\vv u, \vv v \in \textrm{Null}(A), \ \textrm{so} \ A \vv u = A \vv v = \vv 0) \\
&= \vv 0.
\end{align*}

From this computation, Null$(A)$ is a vector space! If $A \in \mathbb{R}^{m \times n}$, then Null$(A)$ is a subspace of $\mathbb{R}^n$ (where $\vv x$ lives). This property leads to the following incredibly important superposition principle for solutions to homogeneous linear systems.
Theorem 1 (Superposition principle)
If $\vv u_1, \vv u_2, \ldots, \vv u_k$ are each solutions to $A \vv u = \vv 0$, then so is every linear combination $c_1 \vv u_1 + c_2 \vv u_2 + \ldots + c_k \vv u_k$.
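As a quick numerical sanity check of Theorem 1 (a NumPy sketch, not part of the original page), we can verify that scalar multiples of the solution $\vv u$ found in the earlier example still solve the homogeneous system:

```python
import numpy as np

A = np.array([[1, -3, -2],
              [-5, 9, 1]])
u = np.array([5, 3, -2])   # known solution of A u = 0 from the example above

# By superposition, every scalar multiple of u is also a solution.
for c in (-2.0, 0.5, 7.0):
    assert np.allclose(A @ (c * u), 0)
print("all tested multiples of u solve A x = 0")
```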
Although we are focusing on linear systems of equations of the form $A \vv x = \vv b$ here, the same ideas apply to more general linear systems, e.g., those defined on infinite-dimensional vector spaces, like solutions to linear differential equations, which we will see later in the course.
5 Describing the Null Space

There is no obvious relationship between the entries of $A$ and Null$(A)$. Rather, Null$(A)$ is defined implicitly via the condition that $\vv x \in$ Null$(A)$ if and only if $A \vv x = \vv 0$. However, if we compute the general solution to $A \vv x = \vv 0$, this will give us an explicit description of Null$(A)$. This can be accomplished via Gaussian Elimination.
Let us find a basis for Null$(A)$, where

$$A = \bm -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \em.$$

We reduce $\bm A & | & \vv 0 \em$ to row echelon form in order to write the basic variables in terms of the free variables:
\begin{align*}
\bm 1 & -2 & 0 & -1 & 3 & 0 \\
0 & 0 & 1 & 2 & -2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \em
\Leftrightarrow \ & x_1 - 2x_2 - x_4 + 3x_5 = 0, \\
& x_3 + 2x_4 - 2x_5 = 0, \\
& 0 = 0.
\end{align*}

From these equations, the general solution is $x_1 = 2x_2 + x_4 - 3x_5, \ x_3 = -2x_4 + 2x_5$. The free variables are $x_2, x_4, x_5$ and the basic variables are $x_1, x_3$, since the pivots are at positions $(1, 1)$ and $(2, 3)$. We can decompose the general solution as
$$\bm x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \em = \bm 2x_2 + x_4 - 3x_5 \\ x_2 \\ -2x_4 + 2x_5 \\ x_4 \\ x_5 \em = x_2 \bm 2 \\ 1 \\ 0 \\ 0 \\ 0 \em + x_4 \bm 1 \\ 0 \\ -2 \\ 1 \\ 0 \em + x_5 \bm -3 \\ 0 \\ 2 \\ 0 \\ 1 \em = x_2 \vv u_1 + x_4 \vv u_2 + x_5 \vv u_3,$$

where $\vv u_1 = \bm 2 \\ 1 \\ 0 \\ 0 \\ 0 \em, \ \vv u_2 = \bm 1 \\ 0 \\ -2 \\ 1 \\ 0 \em, \ \vv u_3 = \bm -3 \\ 0 \\ 2 \\ 0 \\ 1 \em$.

From this decomposition, every linear combination of $\vv u_1, \vv u_2, \vv u_3$ is in Null$(A)$. Also, $\vv u_1, \vv u_2, \vv u_3$ are linearly independent (think about when the combination above equals zero, and why), hence $\vv u_1, \vv u_2, \vv u_3$ form a basis for Null$(A)$.
We conclude that Null$(A) \subset \mathbb{R}^5$ is a subspace of dimension 3.
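The worked example above can be double-checked numerically. The following NumPy sketch (illustrative only, not part of the page) verifies that each $\vv u_i$ lies in the null space and that the three vectors are linearly independent, so the dimension count $\dim \textrm{Null}(A) = n - \textrm{rank}(A) = 5 - 2 = 3$ works out:

```python
import numpy as np

A = np.array([[-3, 6, -1, 1, -7],
              [1, -2, 2, 3, -1],
              [2, -4, 5, 8, -4]])

# The three basis vectors read off from the free variables x2, x4, x5.
u1 = np.array([2, 1, 0, 0, 0])
u2 = np.array([1, 0, -2, 1, 0])
u3 = np.array([-3, 0, 2, 0, 1])

# Each u_i should satisfy A u_i = 0 ...
for u in (u1, u2, u3):
    assert np.all(A @ u == 0)

# ... and stacked as columns they should have rank 3 (linear independence),
# matching dim Null(A) = 5 - rank(A) = 5 - 2 = 3.
U = np.column_stack([u1, u2, u3])
print(np.linalg.matrix_rank(U), np.linalg.matrix_rank(A))  # 3 2
```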
6 The Column Space of $A$

We have seen previously that we can write the matrix-vector product $A \vv x$ as the linear combination

$$A \vv x = x_1 \vv a_1 + x_2 \vv a_2 + \ldots + x_n \vv a_n$$

of the columns $\vv a_1, \vv a_2, \ldots, \vv a_n$ of $A = \bm \vv a_1 & \vv a_2 & \ldots & \vv a_n \em$, weighted by the elements $x_i$ of $\vv x$. By letting the coefficients $x_1, x_2, \ldots, x_n$ vary, we can describe the subspace spanned by the columns of $A$, aptly named the column space of $A$.
Definition 2 (Column Space)
The column space of an $m \times n$ matrix $A$, written as Col$(A)$, is the set of all linear combinations of the columns of $A = \bm \vv a_1 & \vv a_2 & \ldots & \vv a_n \em$:

\begin{align*}
\textrm{Col}(A) &= \{\vv b \in \mathbb{R}^m : \vv b = A \vv x \ \textrm{for some} \ \vv x \in \mathbb{R}^n\} \\
&= \textrm{span}(\vv a_1, \vv a_2, \ldots, \vv a_n).
\end{align*}

Col$(A)$ is also sometimes called the image or range space of $A$.
Since Col$(A)$ is defined as the span of some vectors, it is immediate that Col$(A)$ is a subspace. However, note that Col$(A) \subset \mathbb{R}^m$ (where $\vv b$ lives), not $\mathbb{R}^n$ (where Null$(A)$ and $\vv x$ live).
Find a matrix $A$ so that the set

$$W = \left\{ \bm 6a - b \\ a + b \\ -7a \em : a, b \in \mathbb{R} \right\}$$

is equal to Col$(A)$. To do so, we first write $W$ as a set of linear combinations:
\begin{align*}
W &= \left\{ a \bm 6 \\ 1 \\ -7 \em + b \bm -1 \\ 1 \\ 0 \em : a, b \in \mathbb{R} \right\} \\
&= \textrm{span}\left\{ \bm 6 \\ 1 \\ -7 \em, \bm -1 \\ 1 \\ 0 \em \right\}.
\end{align*}

Now, we set these spanning vectors as the columns of $A$: $A = \bm 6 & -1 \\ 1 & 1 \\ -7 & 0 \em$. It then follows that $\textrm{Col}(A) = \textrm{span}\left\{ \bm 6 \\ 1 \\ -7 \em, \bm -1 \\ 1 \\ 0 \em \right\} = W$.
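To see concretely that $W = \textrm{Col}(A)$, here is a small NumPy sketch (illustrative, with arbitrarily chosen parameters $a, b$): every element of $W$ is exactly $A$ applied to the coefficient vector $(a, b)$.

```python
import numpy as np

# Columns taken from the span description of W.
A = np.array([[6, -1],
              [1, 1],
              [-7, 0]])

# Any element of W with parameters a, b equals A @ [a, b].
a, b = 2.0, 5.0                          # arbitrary choice for illustration
w = np.array([6*a - b, a + b, -7*a])     # element of W, built from its definition
print(np.allclose(A @ np.array([a, b]), w))  # True
```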
7 The Complete Solution to $A \vv x = \vv b$

With an understanding of Null$(A)$ and Col$(A)$, we can completely characterize the solution set of $A \vv x = \vv b$.

Theorem 2

The linear system $A \vv x = \vv b$ has at least one solution $\vv x^*$ if and only if $\vv b \in$ Col$(A)$. If this occurs, then $\vv x$ is a solution to $A \vv x = \vv b$ if and only if

$$\vv x = \vv x^* + \vv n,$$

where $\vv n \in$ Null$(A)$ is an element of the null space of $A$.
The first part of the theorem was already discussed previously. For the second part, suppose both $\vv x$ and $\vv x^*$ are solutions, so that $A \vv x = A \vv x^* = \vv b$. Then their difference $\vv n = \vv x - \vv x^*$ satisfies

$$A \vv n = A(\vv x - \vv x^*) = A \vv x - A \vv x^* = \vv b - \vv b = \vv 0,$$

so that $\vv n \in$ Null$(A)$. This means that $\vv x = \vv x^* + (\vv x - \vv x^*) = \vv x^* + \vv n$.
Consequences of Theorem 2

Theorem 2 tells us that to construct the most general solution to $A \vv x = \vv b$, we only need to know a particular solution $\vv x^*$ and the general solution to $A \vv n = \vv 0$. This might remind you of how you solved inhomogeneous linear ordinary differential equations; again, not a coincidence! We will see later in the semester that linear algebraic systems and linear ordinary differential equations are both examples of general linear systems. Computing the general solution to $A \vv x = \vv b$ requires applying Gaussian Elimination (GE) first to $\bm A & | & \vv b \em$ to get a particular solution, and then to $\bm A & | & \vv 0 \em$ to characterize the null space. We can specialize this result to square matrices, which allows us to characterize whether $A$ is invertible via either its null space or column space.
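Theorem 2 is easy to check numerically. The sketch below (illustrative NumPy code; the particular solution and null-space vector are chosen by hand from the earlier example, and $\vv b$ is constructed so that it lies in Col$(A)$) confirms that shifting a particular solution by any null-space element gives another solution:

```python
import numpy as np

# Matrix from the null-space example earlier on this page.
A = np.array([[-3, 6, -1, 1, -7],
              [1, -2, 2, 3, -1],
              [2, -4, 5, 8, -4]])

# A hand-picked particular solution x*, and the b it produces
# (so b is guaranteed to be in Col(A)).
xstar = np.array([1, 0, 1, 0, 0])
b = A @ xstar

# Any null-space element n shifts x* to another solution of A x = b.
n = np.array([2, 1, 0, 0, 0])          # = u1 from the earlier basis
print(np.all(A @ (xstar + n) == b))    # True
```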
8 The Superposition Principle

We already showed that for homogeneous systems $A \vv x = \vv 0$, superposition lets us generate new solutions by combining known solutions. For inhomogeneous systems $A \vv x = \vv b$, superposition lets us combine solutions for different right-hand side vectors $\vv b$.
Suppose we have solutions $\vv x_1^*$ and $\vv x_2^*$ to $A \vv x = \vv b_1$ and $A \vv x = \vv b_2$, respectively. Can we quickly build a solution to $A \vv x = c_1 \vv b_1 + c_2 \vv b_2$ for some $c_1, c_2 \in \mathbb{R}$?

The answer is superposition! Let's try $\vv x^* = c_1 \vv x_1^* + c_2 \vv x_2^*$:
\begin{align*}
A \vv x^* &= A(c_1 \vv x_1^* + c_2 \vv x_2^*) = c_1 (A \vv x_1^*) + c_2 (A \vv x_2^*) \\
&= c_1 \vv b_1 + c_2 \vv b_2.
\end{align*}

It worked! From this calculation, $\vv x^* = c_1 \vv x_1^* + c_2 \vv x_2^*$ is a solution to $A \vv x = c_1 \vv b_1 + c_2 \vv b_2$. This is again the power of linear superposition at play.
The system

$$\bm 4 & 1 \\ 1 & 4 \em \bm x_1 \\ x_2 \em = \bm f_1 \\ f_2 \em$$

models the mechanical response of a pair of masses connected by springs subject to external forcing.
The solution $\vv x = \bm x_1 \\ x_2 \em$ gives the displacements of the masses, and the right-hand side $\vv f = \bm f_1 \\ f_2 \em$ collects the applied forces.

For $\vv f = \vv e_1 = \bm 1 \\ 0 \em$, the solution is $\vv x_1^* = \bm \frac{4}{15} \\ -\frac{1}{15} \em$; for $\vv f = \vv e_2 = \bm 0 \\ 1 \em$, it is $\vv x_2^* = \bm -\frac{1}{15} \\ \frac{4}{15} \em$.

Hence, we can write the solution for any forcing $\vv f = \bm f_1 \\ f_2 \em = f_1 \vv e_1 + f_2 \vv e_2$ as $\vv x^* = f_1 \vv x_1^* + f_2 \vv x_2^*$!
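The spring-mass example translates directly into code. This NumPy sketch (with an arbitrarily chosen forcing $f_1, f_2$ for illustration) solves for the two basis forcings and then builds the response to a general force by superposition:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

# Solutions for the two standard basis forcings e1 and e2.
x1 = np.linalg.solve(A, np.array([1.0, 0.0]))   # approx [4/15, -1/15]
x2 = np.linalg.solve(A, np.array([0.0, 1.0]))   # approx [-1/15, 4/15]

# Superposition: for f = f1*e1 + f2*e2, the response is f1*x1 + f2*x2.
f1, f2 = 3.0, -2.0                              # arbitrary forcing
xstar = f1 * x1 + f2 * x2
print(np.allclose(A @ xstar, np.array([f1, f2])))  # True
```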
The above idea can easily be extended to several right-hand sides (solutions for more than two $\vv b$ vectors).
If $\vv x_1^*, \vv x_2^*, \ldots, \vv x_k^*$ are solutions to $A \vv x = \vv b_1, A \vv x = \vv b_2, \ldots, A \vv x = \vv b_k$, then, for any choice of $c_1, c_2, \ldots, c_k \in \mathbb{R}$, a particular solution to

$$A \vv x = c_1 \vv b_1 + c_2 \vv b_2 + \ldots + c_k \vv b_k$$

is given by $\vv x^* = c_1 \vv x_1^* + c_2 \vv x_2^* + \ldots + c_k \vv x_k^*$. The general solution to this system is then

$$\vv x = \vv x^* + \vv n = c_1 \vv x_1^* + c_2 \vv x_2^* + \ldots + c_k \vv x_k^* + \vv n,$$

where $\vv n \in$ Null$(A)$.
If we know the particular solutions $\vv x_1^*, \vv x_2^*, \ldots, \vv x_m^*$ to $A \vv x = \vv e_i$ for $i = 1, 2, \ldots, m$, where $\vv e_1, \ldots, \vv e_m$ are the standard basis vectors of $\mathbb{R}^m$, then we can construct a particular solution $\vv x^*$ to $A \vv x = \vv b$ by first writing

$$\vv b = b_1 \vv e_1 + b_2 \vv e_2 + \ldots + b_m \vv e_m$$

to conclude that $\vv x^* = b_1 \vv x_1^* + b_2 \vv x_2^* + \ldots + b_m \vv x_m^*$ is a solution to $A \vv x = \vv b$.
This is conceptually useful because it tells us how the elements $b_i$ of $\vv b$ affect our solution $\vv x^*$.
Practically, however, this is of limited value; for example, if $A$ is square and invertible, then this fact is just another way of computing $A^{-1}$. Indeed, the vectors $\vv x_1^*, \vv x_2^*, \ldots, \vv x_m^*$ are the columns of $A^{-1}$ (what are the $m$ linear systems?), and $\vv x^* = A^{-1} \vv b$.
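This observation about the columns of $A^{-1}$ can be checked directly. The NumPy sketch below (illustrative, reusing the spring matrix from the example above) solves $A \vv x = \vv e_i$ for each standard basis vector and stacks the solutions as columns, recovering the inverse:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

# Solve A x = e_i for each standard basis vector e_i; the solutions,
# stacked as columns, form A^{-1}.
cols = [np.linalg.solve(A, e) for e in np.eye(2)]
Ainv = np.column_stack(cols)
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
```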