12.1 Motivation and SVD
Reading
Material related to this page can be found in Lecture 9 of the Stanford CS168 course.
Learning Objectives
By the end of this page, you should know:
- what matrix completion is
- the motivating applications of low-rank matrix approximation
- how the SVD of a matrix is used to find low-rank approximations
What are the Missing Entries?
Suppose that I run a web movie-streaming service for three of my friends, Amy, Bob, and Carol. It’s a very specialized service, with only five movie options: The Matrix, Inception, Star Wars: Episode 1, Moana, and Inside Out. After one month, we ask our friends Amy, Bob, and Carol to rate the movies they’ve watched from one to five. We collect their ratings in the table below (we mark unrated movies with ?):
Table 1: Movie Ratings

|       | The Matrix | Inception | Star Wars: Ep. 1 | Moana | Inside Out |
|-------|------------|-----------|------------------|-------|------------|
| Amy   | 9          | ?         | ?                | ?     | 5          |
| Bob   | ?          | 3         | 4                | ?     | 2          |
| Carol | ?          | ?         | 2                | 1     | ?          |
We are then asked to provide recommendations to Amy, Bob, and Carol as to which movie each of them should watch next. Said another way, we are asked to fill in the unknown ? ratings in the table above.
This seems a bit unfair! Each of the unknown entries could be any value in 1-5 after all! But what if I told you an additional hint: Amy, Bob, and Carol have the same relative preferences for each movie. For example, Amy likes Inside Out more than Bob does, and the ratio of their ratings is the same across all movies. Mathematically, we are assuming that every row of the table above is a multiple of every other row, or equivalently, that all columns of the table are multiples of each other.
Thus we can conclude that Bob’s rating of The Matrix is $\frac{2}{5} \times 9 = 3.6$, since Bob rates Inside Out only $\frac{2}{5}$ as highly as Amy does. Similarly, Carol’s rating of Inception is $\frac{1}{2} \times 3 = 1.5$, Carol’s rating of Inside Out is $\frac{1}{2} \times 2 = 1$, and so on. Here’s the completed matrix:

|       | The Matrix | Inception | Star Wars: Ep. 1 | Moana | Inside Out |
|-------|------------|-----------|------------------|-------|------------|
| Amy   | 9          | 7.5       | 10               | 5     | 5          |
| Bob   | 3.6        | 3         | 4                | 2     | 2          |
| Carol | 1.8        | 1.5       | 2                | 1     | 1          |
The point of this example is that when you know something about the structure of a partially known matrix, it is sometimes possible to intelligently fill in the missing entries. In this example, the assumption that every column is a multiple of every other column means that $\mathrm{rank}(A) = 1$ (since $\dim(\mathrm{col}(A)) = 1$), which is pretty extreme! A natural and useful relaxation is to assume only that the matrix is low-rank. What rank counts as “low” is application dependent, but it typically means that for an $m \times n$ matrix $A$, $\mathrm{rank}(A) \ll \min(m, n)$.
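To make the ratio reasoning above concrete, here is a minimal numpy sketch that recomputes the filled-in ratings directly from the observed entries of Table 1, with `np.nan` marking the unknown ? entries:

```python
import numpy as np

# Ratings from Table 1; np.nan marks the unknown "?" entries.
# Rows: Amy, Bob, Carol. Columns: The Matrix, Inception, Star Wars: Ep. 1, Moana, Inside Out.
R = np.array([[9.0,    np.nan, np.nan, np.nan, 5.0],
              [np.nan, 3.0,    4.0,    np.nan, 2.0],
              [np.nan, np.nan, 2.0,    1.0,    np.nan]])

# Under the rank-1 assumption, the ratio between two viewers' ratings can be
# read off any movie they have both rated, and it is the same for every movie.
amy_over_bob = R[0, 4] / R[1, 4]      # Inside Out: 5 / 2
carol_over_bob = R[2, 2] / R[1, 2]    # Star Wars: Ep. 1: 2 / 4

print(R[0, 0] / amy_over_bob)         # Bob's rating of The Matrix: 3.6
print(R[1, 1] * carol_over_bob)       # Carol's rating of Inception: 1.5
print(R[1, 4] * carol_over_bob)       # Carol's rating of Inside Out: 1.0
```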
This lecture will explore how we can use this idea of structure to solve the matrix completion problem by finding the best low-rank approximation to a partially known matrix. The SVD will of course be our main tool.
Low-Rank Matrix Approximations: Motivation
Before diving into the math, let’s highlight some applications of low-rank matrix approximation:
Compression: We saw this idea last class, but it’s worth revisiting through the lens of low-rank approximations. If the original matrix $A \in \mathbb{R}^{m \times n}$ is described by $mn$ numbers, then a rank-$k$ approximation requires only about $k(m + n)$ numbers. To see this, recall that if a matrix has rank $k$, then we can write its SVD as a sum of $k$ rank-1 terms,

$$ \sum_{i=1}^{k} \sigma_i u_i v_i^\top, $$

or as a product $Y Z^\top$, where $Y \in \mathbb{R}^{m \times k}$ and $Z \in \mathbb{R}^{n \times k}$ (for example, by folding the singular values into the left factor). As a concrete example, if $A$ represents a grayscale image (with entries equal to pixel intensities), $m$ and $n$ are typically in the 100s (or 1000s for HD images), and a modest value of $k$ (100-150) is usually enough to give a good approximation of the original image.
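To get a feel for these savings, here is a minimal numpy sketch, using a synthetic low-rank matrix as a stand-in for a grayscale image, that forms a rank-$k$ truncation from the SVD and compares the storage counts:

```python
import numpy as np

# Synthetic stand-in for a grayscale image: a 512 x 512 matrix that is
# exactly low rank (rank <= 50), so a modest k already captures it well.
rng = np.random.default_rng(0)
m, n, k = 512, 512, 100
A = rng.standard_normal((m, 50)) @ rng.standard_normal((50, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]                 # rank-k approximation

print(m * n)                                         # 262144 numbers to store A
print(k * (m + n + 1))                               # 102500 numbers for U_k, s_k, V_k
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # ~1e-15: essentially exact here
```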
Updating Huge AI Models: A modern application of low-rank matrix approximation is “fine-tuning” huge AI models. In the setting of large language models (LLMs) like ChatGPT, we are typically given some huge off-the-shelf model with billions (or more) of parameters. Given this large model, which has been trained on an enormous but generic corpus of text from the web, one often performs “fine-tuning”: a second round of training, typically using a much smaller, domain-specific dataset (for example, the lecture notes for this class could be used to fine-tune a “LinearAlgebraGPT”). The difficulty is that, because these models are so big, computing and storing these parameter updates is extremely expensive. The 2021 paper LoRA: Low-Rank Adaptation of Large Language Models argued that fine-tuning updates are generally approximately low-rank, and that one can learn these updates directly in their factored form, allowing model fine-tuning with 1000x-10000x fewer parameters.
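The savings come entirely from the factored form. As a rough back-of-the-envelope sketch (with hypothetical layer sizes, not numbers from the LoRA paper), compare the parameter count of a full update $\Delta W$ against a rank-$r$ factorization $\Delta W = BA$:

```python
# Hypothetical sizes for a single weight matrix in a large model (illustrative only).
d, n = 4096, 4096            # shape of the weight matrix W being fine-tuned
r = 8                        # rank of the update, assumed to be small

full_update_params = d * n                     # learning Delta W directly
factored_params = d * r + r * n                # learning B (d x r) and A (r x n) instead

print(full_update_params)                      # 16777216
print(factored_params)                         # 65536
print(full_update_params // factored_params)   # 256x fewer parameters for this layer
```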
Denoising: If $\hat{A}$ is a noisy version of some “true” matrix $A$ that is approximately low-rank, then finding a low-rank approximation to $\hat{A}$ will typically remove a lot of the noise (and maybe some signal), resulting in a matrix that is actually more informative than the original.
Matrix Completion: Low-rank approximation offers a way of solving the matrix completion problem we introduced above. Given a matrix with missing entries, the first step is to obtain a full matrix $\hat{A}$ by filling in the missing entries with “default” values: what these default values should be often requires trial and error, but natural choices include 0, or the average of the known entries in the same column, the same row, or the entire matrix. The second step is then to find a rank-$k$ approximation to $\hat{A}$. This approach works well when the unknown matrix is close to a rank-$k$ matrix and there aren’t too many missing entries.
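A minimal sketch of this two-step procedure (the helper name `complete_matrix` and the choice of column-mean defaults are just one possible instantiation) might look like this in numpy:

```python
import numpy as np

def complete_matrix(A, k):
    """Hypothetical helper: fill missing entries (np.nan) with the column means
    of the observed entries, then return a rank-k approximation from the SVD."""
    A = A.astype(float).copy()
    for j in range(A.shape[1]):
        missing = np.isnan(A[:, j])
        observed = A[~missing, j]
        A[missing, j] = observed.mean() if observed.size else 0.0  # step 1: defaults
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]                          # step 2: rank-k truncation

# The ratings table from above, with np.nan marking the "?" entries.
ratings = np.array([[9.0,    np.nan, np.nan, np.nan, 5.0],
                    [np.nan, 3.0,    4.0,    np.nan, 2.0],
                    [np.nan, np.nan, 2.0,    1.0,    np.nan]])
print(np.round(complete_matrix(ratings, k=1), 2))
```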
With this motivation in mind, let’s see how the SVD can help us find a good rank-$k$ approximation of a matrix $A$. Once we’ve described our procedure and seen some examples of it in action, we’ll make precise the sense in which our method actually produces the “best” rank-$k$ approximation possible.
Low-Rank Approximations from the SVD
Suppose we are given an $m \times n$ matrix $A$, which we’ll assume has rank $r$. Then the SVD of $A$ is given by

$$ A = U \Sigma V^\top = \sum_{i=1}^{r} \sigma_i u_i v_i^\top \qquad \text{(SVD)} $$

for $U = \begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix} \in \mathbb{R}^{m \times r}$, $V = \begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix} \in \mathbb{R}^{n \times r}$, and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r) \in \mathbb{R}^{r \times r}$ the matrices of left singular vectors, right singular vectors, and singular values, respectively.
The right-most expression in (SVD) is particularly convenient for our purposes: it expresses $A$ as a sum of rank-1 matrices $\sigma_i u_i v_i^\top$ whose column and row spaces are mutually orthogonal.
This sum suggests a very natural way of forming a rank-$k$ approximation to $A$: simply truncate the sum to the top $k$ terms, as measured by the singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$:

$$ A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^\top = U_k \Sigma_k V_k^\top, \qquad \text{(SVD-k)} $$

where the right-most expression is defined in terms of the truncated matrices $U_k = \begin{bmatrix} u_1 & \cdots & u_k \end{bmatrix} \in \mathbb{R}^{m \times k}$, $V_k = \begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix} \in \mathbb{R}^{n \times k}$, and $\Sigma_k = \mathrm{diag}(\sigma_1, \ldots, \sigma_k) \in \mathbb{R}^{k \times k}$.
Before analyzing the properties of $A_k$, let’s examine whether it could plausibly address our motivating applications. Storing the matrices $U_k$, $\Sigma_k$, and $V_k$ requires storing $k(m + n + 1)$ numbers, which, if $k \ll \min(m, n)$, is much less than the $mn$ numbers needed to store $A$ when $m$ and $n$ are relatively large.
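In code, forming $A_k$ is just a slicing operation on the SVD factors. A short numpy sketch, with a random matrix standing in for $A$, that checks the rank, the factor shapes, and the storage counts discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 200, 300, 10
A = rng.standard_normal((m, n))

# numpy returns the singular values already sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]
A_k = U_k @ S_k @ Vt_k                        # the rank-k approximation (SVD-k)

print(np.linalg.matrix_rank(A_k))             # 10
print(U_k.shape, S_k.shape, Vt_k.shape)       # (200, 10) (10, 10) (10, 300)
print(m * n, k * (m + n + 1))                 # 60000 vs 5010 numbers to store
```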
It is also natural to interpret (SVD-k) as approximating the raw data in terms of $k$ “concepts” (e.g., “sci-fi”, “romcom”, “drama”, “classic”): the singular values $\sigma_1, \ldots, \sigma_k$ express the “prominence” of the concepts; the rows of $V_k^\top$ and the columns of $U_k$ express the “typical row/column” associated with each concept (e.g., a viewer who likes only sci-fi movies, or a movie liked only by romcom viewers); and the rows of $U_k$ (or the columns of $V_k^\top$) approximately express each row (or column) of $A$ as a linear combination (scaled by $\Sigma_k$) of the “typical rows” (or “typical columns”).
This method of producing a low-rank approximation is beautiful: we interpret the SVD of a matrix as a list of “ingredients” ordered by “importance”, and we retain only the most important ingredients. But is this elegant procedure any “good”?