1.4 Gaussian Elimination Revisited
Gaussian Elimination as Matrix Factorization
1 Reading
Material related to this page, as well as additional exercises, can be found in LAA Ch. 2.5, ALA Ch. 1.3, and ILA Ch. 2.6. This page is mostly based on ALA Ch. 1.3.
2 Learning Objectives
By the end of this page, you should know:
- how to use Gaussian elimination to solve linear systems $A\mathbf{x} = \mathbf{b}$ when $A$ is a regular matrix
- what pivots in a matrix are
- algorithms to solve large systems of linear equations using Gaussian elimination and back substitution
3 Gaussian Elimination: Regular Case
With basic matrix arithmetic operations in our toolkit, we will develop a systematic method for solving linear systems of equations. For a linear system $A\mathbf{x} = \mathbf{b}$, with an $m \times n$ coefficient matrix $A$, an $n \times 1$ vector of unknowns $\mathbf{x}$, and an $m \times 1$ right-hand side vector $\mathbf{b}$, we define the augmented matrix

$$
M = \left[\, A \mid \mathbf{b} \,\right],
$$

which is an $m \times (n+1)$ matrix obtained by tacking the right-hand side vector onto the right of the coefficient matrix $A$. The extra vertical line is just to remind us that the last column of this matrix plays a special role. For example, the augmented matrix for our example system is

$$
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 2 \\
2 & 6 & 1 & 7 \\
1 & 1 & 4 & 3
\end{array}\right].
$$
Note that it is simple to go back and forth between the original linear system and the augmented matrix, but since operations on equations also affect their right-hand sides, it is convenient to keep track of everything together using the augmented matrix.
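As a concrete illustration (a small sketch, not part of the original page), NumPy's `np.hstack` builds exactly this augmented matrix for our example system:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system
A = np.array([[1., 2., 1.],
              [2., 6., 1.],
              [1., 1., 4.]])
b = np.array([2., 7., 3.])

# Augmented matrix [A | b]: tack b onto A as an extra column
M = np.hstack((A, b.reshape((3, 1))))
print(M)  # [[1. 2. 1. 2.], [2. 6. 1. 7.], [1. 1. 4. 3.]]
```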
For the time being, we will concentrate our efforts on linear systems that have the same number, $n$, of equations as unknowns. The associated coefficient matrix $A$ is square, of size $n \times n$, and the corresponding augmented matrix $M = \left[\, A \mid \mathbf{b} \,\right]$ then has size $n \times (n+1)$.
We start with a simple observation connecting Linear System Operation #1 to its equivalent matrix operation: adding a scalar multiple of one equation to another equation corresponds to adding the same scalar multiple of one row of the augmented matrix to another row (Elementary Row Operation #1).
For example, when solving our example system, our first step was to subtract two times the first equation from the second. This is equivalently done by subtracting two times the first row of the augmented matrix above from the second row:

$$
\big(\,2 \;\; 6 \;\; 1 \;\; 7\,\big) - 2\,\big(\,1 \;\; 2 \;\; 1 \;\; 2\,\big) = \big(\,0 \;\; 2 \;\; -1 \;\; 3\,\big).
$$
We recognize this as the second row of the modified augmented matrix

$$
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 2 \\
0 & 2 & -1 & 3 \\
1 & 1 & 4 & 3
\end{array}\right]
$$
that corresponds to the first equivalent example system. When elementary row operation #1 is performed, it is critical that the result replaces the row being added to and not the row being multiplied by the scalar. Notice that the elimination of a variable in an equation, in this case the first variable in the second equation, amounts to making its entry in the coefficient matrix equal to zero.
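In code, this row operation is a single in-place update of the augmented matrix (a minimal sketch, reusing the array `M` built in the sketch above):

```python
# Elementary Row Operation #1: subtract 2 times row 0 from row 1,
# and store the result back into row 1 (the row being added to).
M[1, :] = M[1, :] - 2*M[0, :]
print(M[1, :])  # [ 0.  2. -1.  3.]
```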
3.1 Pivots
We will call the $(1,1)$ entry of the coefficient matrix the first pivot. The precise definition of a pivot will become clear as we continue, but one key requirement is that a pivot must always be nonzero. Eliminating the first variable from the second and third equations is the same as making all of the matrix entries in the column below the pivot equal to zero. We have already done this with the $(2,1)$ entry in the matrix above. To make the $(3,1)$ entry equal to zero, we subtract the first row from the last row, resulting in the augmented matrix

$$
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 2 \\
0 & 2 & -1 & 3 \\
0 & -1 & 3 & 1
\end{array}\right],
$$
which we again recognize as corresponding to the second equivalent example system. The second pivot is the $(2,2)$ entry of this matrix, which is $2$, and is the coefficient of the second variable in the second equation. Again, the pivot must be nonzero. We use Observation 1, adding $\tfrac{1}{2}$ of the second row to the third row, to make the $(3,2)$ entry below the second pivot equal to zero, resulting in the augmented matrix

$$
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 2 \\
0 & 2 & -1 & 3 \\
0 & 0 & \tfrac{5}{2} & \tfrac{5}{2}
\end{array}\right]
$$
that corresponds to the triangular system equivalent to our example system. We write the final augmented matrix as

$$
M = \left[\, U \mid \mathbf{c} \,\right], \qquad
U = \begin{bmatrix}
1 & 2 & 1 \\
0 & 2 & -1 \\
0 & 0 & \tfrac{5}{2}
\end{bmatrix}, \qquad
\mathbf{c} = \begin{bmatrix} 2 \\ 3 \\ \tfrac{5}{2} \end{bmatrix}.
$$
The corresponding linear system can be written as $U\mathbf{x} = \mathbf{c}$. A special feature of this system is that the coefficient matrix $U$ is upper triangular[1], which means that all entries below the main diagonal are zero, i.e., $u_{ij} = 0$ whenever $i > j$. The three nonzero entries on its diagonal, $1$, $2$, and $\tfrac{5}{2}$, including the last one in the $(3,3)$ slot, are the three pivots. Once the system has been reduced to this triangular form, we can easily solve it via Back Substitution.
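For example, writing the unknowns of our system as $x$, $y$, $z$ (names chosen here just for illustration), Back Substitution solves the last equation first and works upward:

$$
\begin{aligned}
\tfrac{5}{2}\, z &= \tfrac{5}{2} &&\implies z = 1, \\
2y - z &= 3 &&\implies y = \tfrac{3 + z}{2} = 2, \\
x + 2y + z &= 2 &&\implies x = 2 - 2y - z = -3.
\end{aligned}
$$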
What we just described is an algorithm for solving a linear system of $n$ equations in $n$ unknowns, known as regular Gaussian Elimination: at each step, we use the pivot row to make all entries lying in the column below the pivot equal to zero through elementary row operations. We’ll call a square matrix $A$ regular if this algorithm successfully reduces it to upper triangular form $U$ with all nonzero pivots on the diagonal. If this fails to happen, i.e., if a pivot appearing on the diagonal is zero, then the matrix is not regular. Once the system is in triangular form, the solution is found by applying Back Substitution. We present both of these algorithms in pseudocode and Python code below.
Here we use what are called in-place updates, meaning that the same letter $M$ (with entries $m_{ij}$) denotes the current augmented matrix at each stage of the computation. We initialize with $M = \left[\, A \mid \mathbf{b} \,\right]$, and output (assuming $A$ is regular) the upper triangular equivalent augmented matrix $M = \left[\, U \mid \mathbf{c} \,\right]$, where $U$ is the upper triangular matrix whose diagonal entries are the pivots, and $\mathbf{c}$ is the resulting vector of right-hand sides of the triangular system $U\mathbf{x} = \mathbf{c}$.
Python Break!
Here’s what this looks like implemented as a function in NumPy. You should be able to map the code below to the pseudo-code above.
```python
import numpy as np

def GaussElim(A, b):
    n = A.shape[0]  # number of rows of A = number of equations
    M = np.hstack((A, b.reshape((n, 1))))  # build the augmented matrix by horizontally stacking A and b

    # Gaussian elimination
    for j in range(n):
        if M[j, j] == 0:
            print("A is not regular")
            break
        else:
            for i in range(j+1, n):
                scalar = M[i, j]/M[j, j]
                M[i, :] = M[i, :] - scalar*M[j, :]

    # return the matrix M = [U | c] in upper triangular form
    return M
```
```python
# Let's test our function
A = np.array([[1., 2., 1.],
              [2., 6., 1.],
              [1., 1., 4.]])
b = np.array([2, 7, 3])
M = GaussElim(A, b)
print(M)
```

```
[[ 1.   2.   1.   2. ]
 [ 0.   2.  -1.   3. ]
 [ 0.   0.   2.5  2.5]]
```
```python
# The importance of a period!
# Without the decimal points, NumPy stores Aint with an integer dtype, so the
# in-place row updates inside GaussElim get truncated to integers and the
# result differs from the floating-point computation above.
Aint = np.array([[1, 2, 1],
                 [2, 6, 1],
                 [1, 1, 4]])
bint = np.array([2, 7, 3])
newM = GaussElim(Aint, bint)
print(M - newM)  # these are different!
```

```
[[0.  0.  0.  0. ]
 [0.  0.  0.  0. ]
 [0.  0.  0.5 0.5]]
```
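As a quick check of the non-regular case (this small example is not from the original page), a matrix whose $(1,1)$ entry is zero triggers the early exit in `GaussElim`:

```python
# A hypothetical non-regular matrix: its (1, 1) pivot is zero, so GaussElim
# prints a warning and returns the augmented matrix without fully reducing it.
A_bad = np.array([[0., 1.],
                  [1., 0.]])
b_bad = np.array([1., 2.])
M_bad = GaussElim(A_bad, b_bad)  # prints "A is not regular"
```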
Next, let’s take a look at the pseudocode for Back Substitution.
Here’s what this looks like implemented as a function in NumPy. You should be able to map the code below to the pseudo-code above.
```python
def back_substitution(M):
    # We assume M = [U|c] has U in upper triangular form.
    # Better software engineering practice would call for us to check this
    # before proceeding, but we focus on the math here.
    n = M.shape[0]  # number of rows of M = number of equations = number of unknowns for square A
    x = np.zeros((n,))  # set up an all-zeros array x to be populated with the solution
    x[n-1] = M[n-1, -1]/M[n-1, n-1]  # -1 indexes the last entry along an axis of a NumPy array
    for i in range(n-2, -1, -1):  # counts down i = n-2, n-3, ..., 0
        x[i] = (M[i, -1] - np.sum(M[i, i+1:-1]*x[i+1:]))/M[i, i]  # * is elementwise multiplication of arrays
    return x
```
```python
# Let's test out our code using M from above: it gives us the right answer!
x = back_substitution(M)
print(x)
```

```
[-3.  2.  1.]
```
Now let’s put the two functions together to define a solve
function that takes in a matrix A
and a right hand side b
and returns a solution x
such that A@x = b
.
```python
def solve(A, b):
    M = GaussElim(A, b)
    x = back_substitution(M)
    return x

x = solve(A, b)
print(f'x = {x}\n A @ x - b = {A @ x - b}')
```

```
x = [-3.  2.  1.]
 A @ x - b = [0. 0. 0.]
```
3.2 Worked Examples
4 Solving Big Systems of Linear Equations
The code that we wrote above works equally well when we have 1000 equations in 1000 unknowns (or more!). The code below generates a random `A` and `b` of size $1000 \times 1000$ and $1000 \times 1$, respectively, and solves for a $1000 \times 1$ solution `x` satisfying `A @ x = b`. Compare how long our code takes to solve this problem with NumPy’s built-in function `np.linalg.solve`.
If you want to play around with this, launch the interactive notebook and see how big of a system you can solve before it takes too long or you run out of memory!
```python
# Set up a large random system of equations Ax = b
n = 1000  # number of equations and unknowns

# Generate a random A of size (n, n) and b of size (n,)
A_big = np.eye(n) + np.random.randn(n, n)
b_big = np.random.randn(n)

# solve the linear system of equations Ax = b using our homebrewed solver
x = solve(A_big, b_big)

# print the max absolute value of Ax - b
print(np.max(np.abs(A_big @ x - b_big)))
```

```
4.564420619246334e-09
```
```python
# compare to using NumPy's built-in function
x_np = np.linalg.solve(A_big, b_big)

# print the max absolute value of Ax - b
print(np.max(np.abs(A_big @ x_np - b_big)))
```

```
3.06916714265526e-11
```
```python
# use some Python magic to compare timing: which should you use in practice?!
%timeit solve(A_big, b_big)
%timeit np.linalg.solve(A_big, b_big)
```

```
805 ms ± 4.06 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
9.83 ms ± 641 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
It’s convention we used the symbol to remind ourselves that the matrix is upper triangular.