In the last two lectures, we generalized how we add and scale vectors in R2 and R3 to general Euclidean spaces Rn, and more general vector spaces V. In this lecture, we bring over other key concepts from R2 and R3 to vector spaces, namely the ideas of angle, length, and distance.
The notions of angle, length, and distance in general vector spaces play a foundational role in modern applications of engineering, economics, and AI. By the end of the next few lectures, you will be equipped with both conceptual and computational tools that will allow you to solve some really interesting problems with immediate real-world application!
We start with a familiar example of an inner product for vectors in Rn: the dot product. For vectors u and v in Rn, it is defined by
u⋅v = u1v1 + u2v2 + ⋯ + unvn.
The Pythagorean theorem extends to n-dimensional space and tells us that v⋅v is the square of the length of v. We use this observation to define the Euclidean norm (or length):
∥v∥ = √(v⋅v) = √(v1² + v2² + ⋯ + vn²).
In R2, the formula for the Euclidean norm looks a lot like the familiar Pythagorean theorem! This generalizes our idea of length from R2 and R3 to Rn.
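For instance, in R2 the vector (3, 4) has length √(3² + 4²) = 5, just as the Pythagorean theorem predicts. Here is a quick check in NumPy (a small illustrative snippet with a vector of our own choosing):

import numpy as np

v = np.array([3.0, 4.0])
print(np.sqrt(np.dot(v, v)))   # sqrt(v . v) = sqrt(9 + 16) = 5.0
print(np.linalg.norm(v))       # NumPy's built-in Euclidean norm gives the same value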
The Euclidean norm ∥v∥ of a vector v has some intuitive properties. For example, if v≠0, then ∥v∥>0 (all nonzero vectors have positive length), and ∥v∥=0 if and only if v=0 (only the zero vector has zero length).
These properties, and those of the dot product, inspire the following abstract definition for more general inner products: an inner product on a real vector space V assigns to each pair of vectors u, v a real number ⟨u,v⟩ such that, for all vectors u, v, w in V and all scalars c,
⟨u,v⟩ = ⟨v,u⟩ (symmetry),
⟨u+v,w⟩ = ⟨u,w⟩ + ⟨v,w⟩ and ⟨cu,v⟩ = c⟨u,v⟩ (linearity in the first argument),
⟨v,v⟩ ≥ 0, with ⟨v,v⟩ = 0 if and only if v=0 (positive definiteness).
As we will see soon, an inner product allows us to define notions of angle, length, and distance in a vector space. This added structure is very useful, so when a vector space is equipped with an inner product, we call it an inner product space.
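Before moving on, we can sanity-check numerically that the ordinary dot product satisfies these properties. The snippet below is only an illustrative check on a few random vectors of our own choosing, not a proof:

import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))   # three random vectors in R^4
c = 2.5                                 # an arbitrary scalar

print(np.isclose(np.dot(u, v), np.dot(v, u)))                     # symmetry: <u, v> = <v, u>
print(np.isclose(np.dot(u + v, w), np.dot(u, w) + np.dot(v, w)))  # additivity in the first argument
print(np.isclose(np.dot(c * u, v), c * np.dot(u, v)))             # homogeneity in the first argument
print(np.dot(u, u) > 0)                                           # positivity: <u, u> > 0 since u != 0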
Example 1 considers a weighted inner product that assigns more weight to the second coordinate than to the first.
We can generalize Example 1 to Rn and arbitrary positive weights. Let c1,…,cn > 0 be positive numbers. Then the corresponding weighted inner product and weighted norm on Rn are defined by
⟨u,v⟩ = c1u1v1 + c2u2v2 + ⋯ + cnunvn and ∥v∥ = √⟨v,v⟩ = √(c1v1² + ⋯ + cnvn²).
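As a small illustration of these formulas in code (the helper functions weighted_inner and weighted_norm below are names of our own choosing, not NumPy built-ins):

import numpy as np

def weighted_inner(u, v, c):
    """Weighted inner product <u, v> = c_1*u_1*v_1 + ... + c_n*u_n*v_n."""
    return np.sum(c * u * v)

def weighted_norm(v, c):
    """Norm induced by the weighted inner product: sqrt(<v, v>)."""
    return np.sqrt(weighted_inner(v, v, c))

c = np.array([1.0, 4.0])        # weight the second coordinate more heavily
u = np.array([1.0, 1.0])
print(weighted_inner(u, u, c))  # 1*1*1 + 4*1*1 = 5.0
print(weighted_norm(u, c))      # sqrt(5) ≈ 2.236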
Let’s use the np.inner function in NumPy to compute the inner product between vectors, and notice how it differs from the np.dot function. We also compute weighted inner products. Then we use np.linalg.norm to compute the Euclidean norm of a vector directly and compare it with the norm induced by the corresponding inner product (the dot product).
# Inner product
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([4, 5, 6])

v1_c = v1.reshape((-1, 1))  # Notice the shape of the vectors: (3, 1) column vectors
v2_c = v2.reshape((-1, 1))
print("The vectors are v1: \n", v1_c, "\n and v2: \n", v2_c)

inner_prod = np.inner(v1_c, v2_c)  # What happens if you use np.dot?
print("Inner product <v1, v2> is: \n", inner_prod)

# weighted inner product
weights = np.array([2, 5, 3])
inner_weighted = np.inner(v1, weights * v2)  # 2*1*4 + 5*2*5 + 3*3*6 = 112
print("Weighted inner product <v1, v2> is: \n", inner_weighted)
The vectors are v1:
[[1]
[2]
[3]]
and v2:
[[4]
[5]
[6]]
Inner product <v1, v2> is:
[[ 4 5 6]
[ 8 10 12]
[12 15 18]]
Weighted inner product <v1, v2> is:
112
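Notice that np.inner applied to the column vectors v1_c and v2_c (shape (3, 1)) returns a 3×3 matrix rather than a single number: np.inner sums over the last axis of its arguments, and here that axis has length 1, so we effectively get an outer product. The short sketch below contrasts np.inner and np.dot, reusing the arrays defined in the cell above:

# np.inner vs np.dot (illustrative sketch; reuses v1, v2, v1_c, v2_c from above)
print(np.inner(v1, v2))      # 1-D arrays: sum of v1[i]*v2[i] -> 32, the usual dot product
print(np.dot(v1, v2))        # same result for 1-D arrays -> 32
print(np.inner(v1_c, v2_c))  # (3, 1) arrays: sums over the last axis (length 1) -> 3x3 matrix
print(np.dot(v1_c, v2_c.T))  # matrix product of (3, 1) and (1, 3) -> the same 3x3 matrix
# np.dot(v1_c, v2_c) raises an error: shapes (3, 1) and (3, 1) are not aligned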
# Norms
print("Norm of v1: ", np.linalg.norm(v1))               # built-in Euclidean norm
print("Induced norm of v1: ", np.sqrt(np.dot(v1, v1)))  # norm induced by the dot product: sqrt(<v1, v1>)
Norm of v1: 3.7416573867739413
Induced norm of v1: 3.7416573867739413
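We can compute the norm induced by the weighted inner product from before in the same way. The short sketch below reuses v1 and weights from the earlier cell:

# Weighted norm induced by the weighted inner product
print("Weighted norm of v1: ", np.sqrt(np.inner(v1, weights * v1)))  # sqrt(2*1 + 5*4 + 3*9) = sqrt(49) = 7.0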