
1.4 Gaussian Elimination Revisited

Gaussian Elimination as Matrix Factorization

Dept. of Electrical and Systems Engineering
University of Pennsylvania


Lecture notes

1 Reading

Material related to this page, as well as additional exercises, can be found in LAA Ch. 2.5, ALA Ch. 1.3, and ILA Ch. 2.6. This page is mostly based on ALA Ch. 1.3.

2 Learning Objectives

By the end of this page, you should know:

  • how to use Gaussian elimination to solve linear systems $A\vv x = \vv b$ when $A$ is a regular matrix
  • what pivots in a matrix are
  • algorithms to solve large systems of linear equations using Gaussian elimination and back substitution

3 Gaussian Elimination: Regular Case

With basic matrix arithmetic operations in our toolkit, we will develop a systematic method for solving linear systems of equations. For a linear system $A\vv x = \vv b$, with $A$ an $m \times n$ coefficient matrix, $\vv x$ an $n \times 1$ vector of unknowns, and $\vv b$ an $m \times 1$ right-hand side vector, we define the augmented matrix:

$$M = \left[\begin{array}{c|c} A & \vv b \end{array}\right] = \left[ \begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right],$$

which is an $m \times (n+1)$ matrix obtained by tacking the right-hand side vector $\vv b$ onto the right of the coefficient matrix $A$. The extra vertical line is just to remind us that the last column of this matrix plays a special role. For example, the augmented matrix for our example system is

$$M = \left[ \begin{array}{ccc|c} 1 & 2 & 1 & 2\\ 2 & 6 & 1 & 7 \\ 1 & 1 & 4 & 3 \end{array}\right].$$

Note that it is simple to go back and forth between the original linear system and the augmented matrix, but since operations on equations also affect their right-hand sides, it is convenient to keep track of everything together using the augmented matrix.

For the time being, we will concentrate our efforts on linear systems that have the same number, $n$, of equations as unknowns. The associated coefficient matrix $A$ is square, of size $n \times n$, and the corresponding augmented matrix $M = [A \,|\, \vv b]$ then has size $n \times (n+1)$.

We start with a simple observation connecting Linear System Operation #1 to its equivalent matrix operation.

Observation 1: Adding a scalar multiple of one equation to another corresponds to adding the same scalar multiple of the corresponding row of the augmented matrix to another row.

For example, when solving our example system, our first step was to subtract two times the first equation from the second. This is equivalently done by subtracting two times the first row of the augmented matrix (2) from the second row:

$$-2\begin{bmatrix} 1 & 2 & 1 & 2 \end{bmatrix} + \begin{bmatrix} 2 & 6 & 1 & 7 \end{bmatrix} = \begin{bmatrix} 0 & 2 & -1 & 3 \end{bmatrix}.$$

We recognize this as the second row of the modified augmented matrix

$$\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 2\\ 0 & 2 & -1 & 3\\ 1 & 1 & 4 & 3 \end{array}\right],$$

that corresponds to the first equivalent example system. When elementary row operation #1 is performed, it is critical that the result replaces the row being added to, and not the row being multiplied by the scalar. Notice that eliminating a variable in an equation, in this case the first variable in the second equation, amounts to making its entry in the coefficient matrix equal to zero.
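To see this concretely in code, here is the same row operation applied to the augmented matrix of our example system (a minimal NumPy sketch; the variable name M_ex is ours):

import numpy as np

# augmented matrix [A | b] for the example system
M_ex = np.array([[1., 2., 1., 2.],
                 [2., 6., 1., 7.],
                 [1., 1., 4., 3.]])

# subtract 2 times row 0 from row 1; the result replaces row 1
M_ex[1, :] = M_ex[1, :] - 2*M_ex[0, :]
print(M_ex[1, :])  # [ 0.  2. -1.  3.]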

3.1 Pivots


We will call the $(1,1)$ entry of the coefficient matrix the first pivot. The precise definition of a pivot will become clear as we continue, but one key requirement is that a pivot must always be nonzero. Eliminating the first variable $x_1$ from the second and third equations is the same as making all of the matrix entries in the column below the pivot equal to zero. We have already done this with the $(2,1)$ entry in (4). To make the $(3,1)$ entry equal to zero, we subtract the first row from the last row, resulting in the augmented matrix

$$\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 2\\ 0 & 2 & -1 & 3\\ 0 & -1 & 3 & 1 \end{array}\right],$$

which we again recognize as corresponding to the second equivalent example system. The second pivot is the $(2,2)$ entry of this matrix, which is 2, and is the coefficient of the second variable $x_2$ in the second equation. Again, the pivot must be nonzero. We use Observation 1, adding $1/2$ of the second row to the third row, to make the entry below the second pivot equal to zero, resulting in the augmented matrix

$$\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 2\\ 0 & 2 & -1 & 3\\ 0 & 0 & \frac{5}{2} & \frac{5}{2} \end{array}\right],$$

that corresponds to the triangular system equivalent to our example system. We write the final augmented matrix as

N=[Uc],whereU=[1210210052],c=[2352]. N = [U \, | \, \vv c], \quad \text{where} \quad U = \bm 1 & 2 & 1 \\ 0 & 2 & -1 \\ 0 & 0 & \frac{5}{2}\em, \quad \vv c = \bm 2 \\ 3 \\ \frac{5}{2} \em.

The corresponding linear system can be written as $U\vv x = \vv c$. A special feature of this system is that the coefficient matrix $U$ is upper triangular[1], which means that all entries below the main diagonal are zero, i.e., $u_{ij} = 0$ whenever $i > j$. The three nonzero entries on its diagonal, 1, 2, and $5/2$, including the last one in the $(3,3)$ slot, are the three pivots. Once the system has been reduced to this triangular form, we can easily solve it via Back Substitution.
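Concretely, back substitution solves the triangular system from the bottom row up:

$$\frac{5}{2}x_3 = \frac{5}{2} \implies x_3 = 1, \qquad 2x_2 - x_3 = 3 \implies x_2 = 2, \qquad x_1 + 2x_2 + x_3 = 2 \implies x_1 = -3.$$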

What we just described is an algorithm for solving a linear system of $n$ equations in $n$ unknowns, known as regular Gaussian Elimination: at each step, we use the pivot row to make all entries lying in the column below the pivot equal to zero through elementary row operations. We'll call a square matrix $A$ regular if the algorithm successfully reduces it to the upper triangular form $U$ with all nonzero pivots on the diagonal; if this fails to happen, i.e., if a pivot appearing on the diagonal is zero, then the matrix is not regular. The solution is then found by applying Back Substitution to the resulting triangular system. We present both of these algorithms in pseudocode and Python code below.

Here we use what are called in-place updates, meaning that the same letter $M$ (with entries $m_{ij}$) denotes the current augmented matrix at each stage in the computation. We initialize with $M = [A \,|\, \vv b]$, and output (assuming $A$ is regular) the upper triangular equivalent augmented matrix $M = [U \,|\, \vv c]$, where $U$ is the upper triangular matrix whose diagonal entries are the pivots, and $\vv c$ is the resulting vector of right-hand sides of the triangular system $U\vv x = \vv c$.
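In pseudocode, the elimination phase reads as follows (a sketch that mirrors the NumPy implementation below; rows and columns are indexed from 1 to n):

input: n x (n+1) augmented matrix M = [A | b]
for j = 1 to n:
    if m_jj = 0: stop; A is not regular
    for i = j+1 to n:
        set l = m_ij / m_jj
        subtract l times row j of M from row i of M
output: M = [U | c] in upper triangular form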

Python Break!

Here’s what this looks like implemented as a function in NumPy. You should be able to map the code below to the pseudo-code above.

import numpy as np

def GaussElim(A, b):
    n = A.shape[0] # number of rows of A = number of equations
    M = np.hstack((A, b.reshape((n,1)))) # build the augmented matrix by horizontally stacking A and b.
    
    # Gaussian elimination
    for j in range(n):
        if M[j, j] == 0:
            print("A is not regular")
            break
        else:
            for i in range(j+1, n):
                scalar = M[i, j]/M[j, j]
                M[i, :] = M[i, :] - scalar*M[j, :]
    
    # return the matrix M = [U | c] in upper triangular form
    return M        

# Let's test our function
A = np.array([[1., 2., 1.],
              [2., 6., 1.],
              [1., 1., 4.]])

b = np.array([2, 7, 3])

M = GaussElim(A,b)
print(M)
[[ 1.   2.   1.   2. ]
 [ 0.   2.  -1.   3. ]
 [ 0.   0.   2.5  2.5]]
# The importance of a period! Integer arrays behave differently:
Aint = np.array([[1, 2, 1],
                 [2, 6, 1],
                 [1, 1, 4]])
bint = np.array([2, 7, 3])

newM = GaussElim(Aint, bint)
# these are different! Aint and bint have integer dtype, so NumPy truncates
# the fractional row updates when storing them back into the augmented matrix
print(M - newM)
[[0.  0.  0.  0. ]
 [0.  0.  0.  0. ]
 [0.  0.  0.5 0.5]]

Next, let’s take a look at the pseudocode for Back Substitution.
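A sketch, mirroring the NumPy implementation below (indices run from 1 to n):

input: n x (n+1) augmented matrix M = [U | c], with U upper triangular and u_ii nonzero
set x_n = c_n / u_nn
for i = n-1 down to 1:
    set x_i = (c_i - u_{i,i+1} x_{i+1} - ... - u_{i,n} x_n) / u_ii
output: the solution x of U x = c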

Here’s what this looks like implemented as a function in NumPy. You should be able to map the code below to the pseudo-code above.

def back_substitution(M):
    # We assume M = [U|c] has U in upper triangular form
    # Better software engineering practices would call for us to check this before proceeding but we
    # focus on the math here
    
    n = M.shape[0] # number of rows of M = number of equations = number of unknowns for square A
    x = np.zeros((n,)) # set up an all zeros array x to be populated with the solution
    
    x[n-1] = M[n-1, -1]/M[n-1, n-1] # -1 indexes the last entry along an axis; this solves the last equation
    for i in range(n-2, -1, -1): # range(n-2, -1, -1) counts down n-2, n-3, ..., 0 (the stop value is excluded)
        x[i] = (M[i, -1] - np.sum(M[i, i+1:-1]*x[i+1:]))/M[i, i] # * is elementwise multiplication of arrays
    
    return x

# Let's test out our code using M from above: it gives us the right answer!
x = back_substitution(M)
print(x)
[-3.  2.  1.]

Now let’s put the two functions together to define a solve function that takes in a matrix A and a right-hand side b, and returns a solution x such that A @ x = b.

def solve(A,b):
    M = GaussElim(A,b)
    x = back_substitution(M)
    return x

x = solve(A,b)

print(f'x = {x}\n A @ x - b = {A @ x - b}')
x = [-3.  2.  1.]
 A @ x - b = [0. 0. 0.]

3.2 Worked Examples

4 Solving Big Systems of Linear Equations

The code that we wrote above works equally well when we have 1000 equations in 1000 unknowns (or more!). The code below generates a random A and b of size 1000 x 1000 and 1000 x 1, respectively, and solves for a 1000 x 1 solution x satisfying A @ x = b. Compare how long our code takes to solve this problem with NumPy's built-in function np.linalg.solve.

If you want to play around with this, launch the interactive notebook and see how big of a system you can solve before it takes too long or you run out of memory!

# Set up a large random system of equations Ax = b
n = 1000 # number of equations and unknowns

# Generate a random A of size (n,n) and b of size (n,)
A_big = np.eye(n) + np.random.randn(n,n)
b_big = np.random.randn(n)
# solve linear system of equations Ax=b using our homebrewed solution
x = solve(A_big, b_big)

# print max absolute value of Ax - b
print(np.max(np.abs(A_big @ x - b_big)))
4.564420619246334e-09
# compare to using NumPy's built in function
x_np = np.linalg.solve(A_big, b_big)

# print max absolute value of Ax - b
print(np.max(np.abs(A_big @ x_np - b_big)))
3.06916714265526e-11
# use some python magic to compare timing: which should you use in practice?!
%timeit solve(A_big, b_big)
%timeit np.linalg.solve(A_big, b_big)
805 ms ± 4.06 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
9.83 ms ± 641 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
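The timing gap answers the question in the comment above: np.linalg.solve dispatches to compiled LAPACK routines, while our homebrewed solve runs its elimination loops in interpreted Python, so in practice you should use the library function.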


Footnotes
  1. It’s conventional to use the symbol $U$ to remind ourselves that the matrix is upper triangular.