Chapter 3 (cont.) - Products and Quotients of Vector Spaces
3.E: Products and Quotients of Vector Spaces
Products of Vector Spaces
product of vector spaces
Suppose $V_1, \ldots, V_m$ are vector spaces over $\mathbf{F}$.
The product $V_1 \times \cdots \times V_m$ is defined by:
$$V_1 \times \cdots \times V_m = \{(v_1, \ldots, v_m) : v_1 \in V_1, \ldots, v_m \in V_m\}$$
Addition on $V_1 \times \cdots \times V_m$ is defined by:
$$(u_1, \ldots, u_m) + (v_1, \ldots, v_m) = (u_1 + v_1, \ldots, u_m + v_m)$$
Scalar multiplication on $V_1 \times \cdots \times V_m$ is defined by:
$$\lambda(v_1, \ldots, v_m) = (\lambda v_1, \ldots, \lambda v_m)$$
For instance, elements of $\mathbf{R}^2 \times \mathbf{R}^3$ are pairs $(x, y)$ with $x \in \mathbf{R}^2$ and $y \in \mathbf{R}^3$, added and scaled slot by slot.
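To make the slot-by-slot operations concrete, here is a minimal Python sketch (my own illustration, not from the book) that models an element of $\mathbf{R}^2 \times \mathbf{R}^3$ as a tuple of numpy arrays; the helper names `add` and `scale` are made up for this example.

```python
import numpy as np

def add(u, v):
    """Slot-by-slot addition (u1 + v1, ..., um + vm)."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(lam, v):
    """Slot-by-slot scalar multiplication (lam*v1, ..., lam*vm)."""
    return tuple(lam * vi for vi in v)

# An element of R^2 x R^3 is a pair (vector in R^2, vector in R^3).
u = (np.array([1.0, 2.0]), np.array([0.0, 1.0, 4.0]))
v = (np.array([3.0, -1.0]), np.array([2.0, 2.0, 2.0]))

print(add(u, v))      # (array([4., 1.]), array([2., 3., 6.]))
print(scale(2.0, u))  # (array([2., 4.]), array([0., 2., 8.]))
```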
Product of vector spaces is a vector space
Suppose $V_1, \ldots, V_m$ are vector spaces over $\mathbf{F}$. Then $V_1 \times \cdots \times V_m$ is a vector space over $\mathbf{F}$.
We don't prove that here (the book doesn't even prove it), but it's easy to show. Note that the additive identity of this space is the tuple with the additive identity of each vector space put in each slot:
$$0 = (0, \ldots, 0)$$
The additive inverse is just the negatives of each vector:
$$-(v_1, \ldots, v_m) = (-v_1, \ldots, -v_m)$$
An Example
Notice that $\mathbf{R}^2 \times \mathbf{R}^3$ and $\mathbf{R}^5$ are not equal. They can't even be compared! (Elements of the first are pairs consisting of a vector in $\mathbf{R}^2$ and a vector in $\mathbf{R}^3$, while elements of the second are lists of 5 numbers.) However, they are isomorphic via:
$$((x_1, x_2), (x_3, x_4, x_5)) \mapsto (x_1, x_2, x_3, x_4, x_5)$$
as our isomorphism.
Also notice that the list:
$$((1,0),(0,0,0)),\ ((0,1),(0,0,0)),\ ((0,0),(1,0,0)),\ ((0,0),(0,1,0)),\ ((0,0),(0,0,1))$$
is a valid basis for $\mathbf{R}^2 \times \mathbf{R}^3$. This paints the idea for the next lemma:
Dimension of a product is the sum of dimensions
Suppose $V_1, \ldots, V_m$ are finite-dimensional vector spaces. Then $V_1 \times \cdots \times V_m$ is finite-dimensional and:
$$\dim(V_1 \times \cdots \times V_m) = \dim V_1 + \cdots + \dim V_m$$
Proof
Choose a basis of each $V_k$. For each basis vector of each $V_k$, consider the element of $V_1 \times \cdots \times V_m$ that equals the basis vector in the $k$-th slot and $0$ in the other slots. The list of all such vectors is linearly independent and spans $V_1 \times \cdots \times V_m$, so it's a valid basis, of length $\dim V_1 + \cdots + \dim V_m$.
☐
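As a quick numeric sanity check of the proof idea (my own, assuming the usual identification of $\mathbf{R}^2 \times \mathbf{R}^3$ with $\mathbf{R}^5$ by stacking coordinates), the "basis vector in one slot, $0$ in the others" list has $2 + 3 = 5$ linearly independent vectors:

```python
import numpy as np

basis_V1 = np.eye(2)   # basis of R^2
basis_V2 = np.eye(3)   # basis of R^3

# Build the list from the proof: each basis vector sits in its own slot, 0 elsewhere.
product_basis = [np.concatenate([b, np.zeros(3)]) for b in basis_V1] \
              + [np.concatenate([np.zeros(2), b]) for b in basis_V2]

M = np.array(product_basis)
print(len(product_basis), np.linalg.matrix_rank(M))  # 5 5: linearly independent, spans R^5
```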
Products and Direct Sums
Products and direct sums
Suppose that $U_1, \ldots, U_m$ are subspaces of $V$. Define a linear map $\Gamma: U_1 \times \cdots \times U_m \to U_1 + \cdots + U_m$ by:
$$\Gamma(u_1, \ldots, u_m) = u_1 + \cdots + u_m$$
Then $U_1 + \cdots + U_m$ is a direct sum iff $\Gamma$ is injective/invertible.
Notice that $\Gamma$ being injective here is the same as being invertible. That's because this map is surjective by the definition of $U_1 + \cdots + U_m$, as its range is the sum space itself by definition.
Proof
The linear map $\Gamma$ is injective iff the only way to write $0$ as a sum $u_1 + \cdots + u_m$, where each $u_k \in U_k$, is to have each $u_k = 0$. Thus, Chapter 1 - Vector Spaces#^038942 (the condition for a sum of subspaces to be a direct sum) holds iff $\Gamma$ is injective, so then $\Gamma$ is injective iff $U_1 + \cdots + U_m$ is a direct sum.
☐
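A hedged numeric illustration of this: once a basis is chosen in each $U_k$, $\Gamma$ is represented by the matrix whose columns are all of those basis vectors stacked side by side, and injectivity is exactly full column rank. The helper name `is_direct_sum` below is my own.

```python
import numpy as np

def is_direct_sum(*subspace_bases):
    """Each argument is a matrix whose columns form a basis of one U_k.
    Gamma is represented by stacking all those columns side by side;
    it is injective exactly when that matrix has full column rank."""
    cols = np.hstack(subspace_bases)
    return np.linalg.matrix_rank(cols) == cols.shape[1]

U1 = np.array([[1.0], [0.0], [0.0]])   # span{(1,0,0)}
U2 = np.array([[0.0], [1.0], [0.0]])   # span{(0,1,0)}
U3 = np.array([[1.0], [1.0], [0.0]])   # span{(1,1,0)}

print(is_direct_sum(U1, U2))       # True:  U1 + U2 is a direct sum
print(is_direct_sum(U1, U2, U3))   # False: U3 lies inside U1 + U2
```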
A sum is a direct sum iff the dimensions add up
Suppose $V$ is finite-dimensional and $U_1, \ldots, U_m$ are subspaces of $V$. Then $U_1 + \cdots + U_m$ is a direct sum iff:
$$\dim(U_1 + \cdots + U_m) = \dim U_1 + \cdots + \dim U_m$$
Proof
We know that $\Gamma$ is surjective. By the FTOLM, we know $\Gamma$ is injective iff:
$$\dim(U_1 + \cdots + U_m) = \dim(U_1 \times \cdots \times U_m) = \dim U_1 + \cdots + \dim U_m$$
where the second equality is the dimension-of-a-product result above. Combining this with the previous result ($U_1 + \cdots + U_m$ is a direct sum iff $\Gamma$ is injective) gives the claim.
☐
Quotients of Vector Spaces
Suppose $v \in V$ and $U$ is a subspace of $V$. Then $v + U$ is the subset of $V$ defined by:
$$v + U = \{v + u : u \in U\}$$
affine subset, parallel
An affine subset of $V$ is a subset of $V$ of the form $v + U$ for some $v \in V$ and some subspace $U$ of $V$.
For $v \in V$ and $U$ a subspace of $V$, the affine subset $v + U$ is said to be parallel to $U$.
For instance, if $U = \{(x, 2x) \in \mathbf{R}^2 : x \in \mathbf{R}\}$ (a line through the origin with slope 2), then all the lines in $\mathbf{R}^2$ with slope 2 are parallel to $U$, and each of these parallel lines is an affine subset of $\mathbf{R}^2$.
Another example: if $U = \{(x, y, 0) \in \mathbf{R}^3 : x, y \in \mathbf{R}\}$, then the affine subsets of $\mathbf{R}^3$ parallel to $U$ are the planes in $\mathbf{R}^3$ that are parallel to the $xy$-plane $U$ in the usual sense. Notice here that no line in $\mathbf{R}^3$ would be an affine subset parallel to the plane $U$. That's because, to be parallel to $U$, an affine subset must be a translate $v + U$ of the whole plane $U$, not just of a line inside it.
quotient space, $V/U$
Suppose $U$ is a subspace of $V$. Then the quotient space $V/U$ is the set of all affine subsets of $V$ parallel to $U$. In other words:
$$V/U = \{v + U : v \in V\}$$
For instance, if $U$ is the line $\{(x, 2x) \in \mathbf{R}^2 : x \in \mathbf{R}\}$ then $\mathbf{R}^2/U$ is the set of all lines in $\mathbf{R}^2$ that have slope 2.
If instead $U$ is a line in $\mathbf{R}^3$ containing the origin, then $\mathbf{R}^3/U$ is the set of all lines in $\mathbf{R}^3$ parallel to $U$.
If $U$ is a plane in $\mathbf{R}^3$ containing the origin, then $\mathbf{R}^3/U$ is the set of all planes in $\mathbf{R}^3$ parallel to $U$.
Two affine subsets parallel to $U$ are equal or disjoint
Suppose $U$ is a subspace of $V$ and $v, w \in V$. Then the following are equivalent:
(1) $v - w \in U$
(2) $v + U = w + U$
(3) $(v + U) \cap (w + U) \neq \emptyset$
Proof
First suppose (1) holds, so $v - w \in U$. If $u \in U$ then:
$$v + u = w + ((v - w) + u) \in w + U$$
So then $v + U \subseteq w + U$. A similar argument shows that $w + U \subseteq v + U$, so then (2) is proved.
We know that if (2) holds then (3) must hold, since the sets themselves can't be empty (at the very least $v \in v + U$, because $0 \in U$).
Suppose (3). Thus there are $u_1, u_2 \in U$ such that:
$$v + u_1 = w + u_2$$
Then $v - w = u_2 - u_1 \in U$, showing (1).
☐
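Here is a small numeric companion (my own sketch, with $U$ a 2-dimensional subspace of $\mathbf{R}^3$ given by a basis matrix `B` and membership tested by least-squares projection): $v + U = w + U$ exactly when $v - w \in U$, and otherwise the two affine subsets are disjoint.

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [0.0, 1.0]])      # columns form a basis of U, a plane in R^3

def in_U(x):
    """True if x lies in the column space of B (up to round-off)."""
    resid = x - B @ np.linalg.lstsq(B, x, rcond=None)[0]
    return np.allclose(resid, 0.0)

v = np.array([3.0, 6.0, 5.0])
w = np.array([2.0, 4.0, 1.0])
print(in_U(v - w))   # True:  v - w = (1, 2, 4) is in U, so v + U == w + U
u = np.array([1.0, 0.0, 0.0])
print(in_U(v - u))   # False: v + U and u + U are disjoint
```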
addition and scalar multiplication on $V/U$
Suppose $U$ is a subspace of $V$. Then addition and scalar multiplication are defined on $V/U$ by:
$$(v + U) + (w + U) = (v + w) + U$$
$$\lambda(v + U) = (\lambda v) + U$$
for $v, w \in V$ and $\lambda \in \mathbf{F}$.
We need to check these operations are well defined in order to show that $V/U$ is a vector space.
Quotient space is a vector space
Suppose $U$ is a subspace of $V$. Then $V/U$, with the operations of addition and scalar multiplication as defined above, is a vector space.
Proof
The problem might arise that, since an affine subset parallel to $U$ can be written as $v + U$ for many different choices of $v$, the operations above might not be well defined. For example, suppose $U$ is a subspace of $V$. Suppose also that $v_1, v_2, w_1, w_2 \in V$ are such that $v_1 + U = v_2 + U$ and $w_1 + U = w_2 + U$. To show our definition of addition on $V/U$ makes sense, we'll show that $(v_1 + w_1) + U = (v_2 + w_2) + U$.
By the previous result, $v_1 - v_2 \in U$ and $w_1 - w_2 \in U$. Since $U$ is a subspace, it is closed under addition, so $(v_1 + w_1) - (v_2 + w_2) = (v_1 - v_2) + (w_1 - w_2) \in U$. Using the previous result again, $(v_1 + w_1) + U = (v_2 + w_2) + U$, as desired. A similar argument shows scalar multiplication is well defined. The remaining vector space axioms follow quickly, with additive identity $0 + U = U$ and additive inverse $-(v + U) = (-v) + U$.
☐
If you think about the idea of $U$ being a plane through the origin in $\mathbf{R}^3$, we can think of $V/U$ as the stack of planes parallel to $U$ that can be made by adding our $v$'s. Hence the dimension formula $\dim V/U = \dim V - \dim U$. Further, the quotient map $\pi: V \to V/U$ defined by $\pi(v) = v + U$ gives the plane $v + U$ for a chosen $v$ as its output. Lastly, for $T \in \mathcal{L}(V, W)$, the induced map $\tilde{T}: V/(\operatorname{null} T) \to W$ defined by $\tilde{T}(v + \operatorname{null} T) = Tv$ takes each affine subset parallel to the null space of $T$ and outputs the common value of $T$ on it. The key facts are: (1) $\tilde{T}$ is a linear map; (2) $\tilde{T}$ is injective; (3) $\operatorname{range} \tilde{T} = \operatorname{range} T$; (4) $V/(\operatorname{null} T)$ is isomorphic to $\operatorname{range} T$.
The definition of $\tilde{T}$ shows how (3) is true. For (4), use the FTOLM to show the two spaces have the same dimension, and thus there is some isomorphism. In fact, $\tilde{T}$ (viewed as a map onto its range) is this isomorphism.
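As a sanity check of the well-definedness argument above (a hedged sketch where $U$ is taken to be the $xy$-plane in $\mathbf{R}^3$, so membership in $U$ just means the last coordinate is $0$):

```python
import numpy as np

in_U = lambda x: np.isclose(x[2], 0.0)   # membership test for U = {(a, b, 0)}

v1 = np.array([1.0, 2.0, 3.0]);  v2 = v1 + np.array([5.0, -1.0, 0.0])   # v1 + U == v2 + U
w1 = np.array([0.0, 4.0, -2.0]); w2 = w1 + np.array([-2.0, 7.0, 0.0])   # w1 + U == w2 + U

print(in_U(v1 - v2), in_U(w1 - w2))   # True True: same affine subsets
print(in_U((v1 + w1) - (v2 + w2)))    # True: both choices give the same sum (v + w) + U
```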
3.F: Duality
The Dual Space and the Dual Map
linear functional
A linear functional on a vector space $V$ is an element of $\mathcal{L}(V, \mathbf{F})$; in other words, a linear map from $V$ to $\mathbf{F}$.
For instance, the trace is a linear functional on the space of square matrices $\mathbf{F}^{n,n}$. Another example is $\varphi: \mathbf{R}^3 \to \mathbf{R}$ where it's given as:
$$\varphi(x, y, z) = 4x - 5y + 2z$$
This is a linear functional. We give a special name to $\mathcal{L}(V, \mathbf{F})$:
dual space, $V'$
The dual space of $V$, denoted $V'$, is the vector space of all linear functionals on $V$. In other words, $V' = \mathcal{L}(V, \mathbf{F})$.
Suppose $V$ is finite-dimensional. Then $V'$ is also finite-dimensional and $\dim V' = \dim V$.
Proof: $\dim V' = \dim \mathcal{L}(V, \mathbf{F}) = (\dim V)(\dim \mathbf{F}) = \dim V$.
☐
As such, these two spaces are also isomorphic.
dual basis
If $v_1, \ldots, v_n$ is a basis of $V$, then the dual basis of $v_1, \ldots, v_n$ is the list $\varphi_1, \ldots, \varphi_n$ of elements of $V'$, where each $\varphi_j$ is the linear functional on $V$ such that:
$$\varphi_j(v_k) = \begin{cases} 1 & \text{if } k = j, \\ 0 & \text{if } k \neq j. \end{cases}$$
Example
What is the dual basis of the standard basis $e_1, \ldots, e_n$ of $\mathbf{F}^n$?
Clearly each $\varphi_j$ is the linear functional that selects the $j$-th coordinate:
$$\varphi_j(x_1, \ldots, x_n) = x_j$$
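A tiny numeric illustration (my own convention, not the book's): if a linear functional on $\mathbf{R}^n$ is stored as a row vector $a$ acting by $x \mapsto ax$, then the dual basis of the standard basis is the rows of the identity matrix, and $\varphi_j$ just picks out the $j$-th coordinate.

```python
import numpy as np

n = 4
dual_basis = np.eye(n)      # row j represents phi_{j+1} (0-indexed rows)
x = np.array([10.0, 20.0, 30.0, 40.0])

print(dual_basis[2] @ x)        # 30.0: phi_3 picks out the 3rd coordinate
print(dual_basis @ np.eye(n))   # identity matrix: phi_j(e_k) = 1 iff j == k
```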
The next result shows that the dual basis is indeed a basis. Thus, the terminology "dual basis" is justified.
Dual basis is a basis of the dual space
Suppose $V$ is finite-dimensional. Then the dual basis of a basis of $V$ is a basis of $V'$.
Proof
Suppose $v_1, \ldots, v_n$ is a basis for $V$. Let $\varphi_1, \ldots, \varphi_n$ denote the dual basis. To show that $\varphi_1, \ldots, \varphi_n$ is LI, suppose $a_1, \ldots, a_n \in \mathbf{F}$ such that:
$$a_1\varphi_1 + \cdots + a_n\varphi_n = 0$$
Now $(a_1\varphi_1 + \cdots + a_n\varphi_n)(v_j) = a_j$ for $j = 1, \ldots, n$. Thus $a_1 = \cdots = a_n = 0$, so then $\varphi_1, \ldots, \varphi_n$ is LI.
We also have $n = \dim V'$ vectors in this LI list, implying that it's a basis for $V'$.
☐
dual map, $T'$
If $T \in \mathcal{L}(V, W)$ then the dual map of $T$ is the linear map $T' \in \mathcal{L}(W', V')$ defined by $T'(\varphi) = \varphi \circ T$ for $\varphi \in W'$.
If $T \in \mathcal{L}(V, W)$ and $\varphi \in W'$ then $T'(\varphi)$ is defined above as the composition of the linear maps $\varphi$ and $T$, so $T'(\varphi)$ is a linear map from $V$ to $\mathbf{F}$; in other words, $T'(\varphi) \in V'$.
To show the linear part (that $T'$ itself is linear), if $\varphi, \psi \in W'$ then:
$$T'(\varphi + \psi) = (\varphi + \psi) \circ T = \varphi \circ T + \psi \circ T = T'(\varphi) + T'(\psi)$$
and for $\lambda \in \mathbf{F}$:
$$T'(\lambda\varphi) = (\lambda\varphi) \circ T = \lambda(\varphi \circ T) = \lambda T'(\varphi)$$
Warning
Do not confuse $T'$, which is the dual of a linear map $T$, with $p'$, which is the derivative of a polynomial $p$.
For example, define $D \in \mathcal{L}(\mathcal{P}(\mathbf{R}), \mathcal{P}(\mathbf{R}))$ as the standard derivative operator over the polynomials, $Dp = p'$. Suppose $\varphi$ is the linear functional on this space given by $\varphi(p) = p(3)$. Then $D'(\varphi)$ is the linear functional on our polynomial space given by:
$$(D'(\varphi))(p) = (\varphi \circ D)(p) = \varphi(Dp) = \varphi(p') = p'(3)$$
So then $D'(\varphi)$ is the linear functional that takes $p$ to $p'(3)$.
Or if $\varphi(p) = \int_0^1 p$ then:
$$(D'(\varphi))(p) = (\varphi \circ D)(p) = \varphi(p') = \int_0^1 p' = p(1) - p(0)$$
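A short Python sketch of the first example (my own, using `numpy.polynomial.Polynomial` to stand in for $\mathcal{P}(\mathbf{R})$):

```python
import numpy as np
from numpy.polynomial import Polynomial

D = lambda p: p.deriv()             # D p = p'
phi = lambda p: p(3.0)              # phi(p) = p(3)
D_dual_phi = lambda p: phi(D(p))    # (D'(phi))(p) = (phi o D)(p) = p'(3)

p = Polynomial([1.0, -2.0, 0.0, 5.0])   # p(x) = 1 - 2x + 5x^3, so p'(x) = -2 + 15x^2
print(D_dual_phi(p))                    # 133.0 = p'(3)
```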
Algebraic properties of dual maps
$(S + T)' = S' + T'$ for all $S, T \in \mathcal{L}(V, W)$
$(\lambda T)' = \lambda T'$ for all $\lambda \in \mathbf{F}$ and $T \in \mathcal{L}(V, W)$
$(ST)' = T'S'$ for all $T \in \mathcal{L}(U, V)$ and $S \in \mathcal{L}(V, W)$
Proof
The reader (me) will prove the first two in a HW problem.
For the third one, suppose $\varphi \in W'$. Then:
$$(ST)'(\varphi) = \varphi \circ (ST) = (\varphi \circ S) \circ T = T'(\varphi \circ S) = T'(S'(\varphi)) = (T'S')(\varphi)$$
Thus $(ST)' = T'S'$.
☐
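One way to see the order reversal in $(ST)' = T'S'$ concretely (a hedged sketch using my row-vector convention for functionals, where the dual of "multiply by a matrix $M$" sends the row $a$ to $aM$):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 2))   # T in L(U, V): U = R^2, V = R^3
S = rng.standard_normal((4, 3))   # S in L(V, W): W = R^4
a = rng.standard_normal(4)        # a functional on W, stored as a row vector

lhs = a @ (S @ T)                 # (ST)'(a): pull a back through ST at once
rhs = (a @ S) @ T                 # (T'S')(a): first through S, then through T
print(np.allclose(lhs, rhs))      # True
```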
The Null Space and Range of the Dual of a Linear Map
Our goal here is to describe $\operatorname{null} T'$ and $\operatorname{range} T'$ in terms of the null space and range of $T$:
annihilator, $U^0$
For $U \subseteq V$, the annihilator of $U$, denoted $U^0$, is defined by:
$$U^0 = \{\varphi \in V' : \varphi(u) = 0 \text{ for all } u \in U\}$$
This is a "nullspace lite" for the functionals in $V'$: we only ask that they vanish on the subset $U$. For example, suppose $U$ is the subspace of $\mathcal{P}(\mathbf{R})$ consisting of all polynomial multiples of $x^2$. If $\varphi$ is the linear functional on the space defined by $\varphi(p) = p'(0)$, then $\varphi \in U^0$.
Note
Sometimes notation that records the ambient space $V$ is used in case multiple vector spaces are in play, as $U^0$ depends on $V$ and not just on $U$. However, usually $V$ is known from context and thus is omitted.
As an example, let $e_1, \ldots, e_5$ denote the standard basis of $\mathbf{R}^5$. Let $\varphi_1, \ldots, \varphi_5$ denote the dual basis of $(\mathbf{R}^5)'$. Suppose:
$$U = \operatorname{span}(e_1, e_2) = \{(x_1, x_2, 0, 0, 0) \in \mathbf{R}^5 : x_1, x_2 \in \mathbf{R}\}$$
We claim $U^0 = \operatorname{span}(\varphi_3, \varphi_4, \varphi_5)$.
Each of $\varphi_3, \varphi_4, \varphi_5$ vanishes on $U$ as a result, since it selects either the 3rd, 4th, or 5th entry, which is $0$ for every vector in $U$. Thus $\varphi_3, \varphi_4, \varphi_5 \in U^0$. So then $\operatorname{span}(\varphi_3, \varphi_4, \varphi_5) \subseteq U^0$. For the other direction, suppose $\varphi \in U^0$. Since the dual basis is a basis of $(\mathbf{R}^5)'$, then there are $c_1, \ldots, c_5 \in \mathbf{R}$ where:
$$\varphi = c_1\varphi_1 + c_2\varphi_2 + c_3\varphi_3 + c_4\varphi_4 + c_5\varphi_5$$
Since $e_1 \in U$ and $\varphi \in U^0$ then:
$$0 = \varphi(e_1) = c_1$$
The same is true for $e_2$, so $c_2 = 0$. Hence $\varphi = c_3\varphi_3 + c_4\varphi_4 + c_5\varphi_5 \in \operatorname{span}(\varphi_3, \varphi_4, \varphi_5)$, and thus we get a subset the other way.
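A numeric companion to this example (my own sketch, again identifying a functional on $\mathbf{R}^5$ with a row vector $a$, so $a \in U^0$ exactly when $a$ kills every column of a basis matrix of $U$):

```python
import numpy as np

B = np.eye(5)[:, :2]    # columns e1, e2 span U

# A row vector a is in U^0 exactly when a @ B = 0, i.e. a^T is in the null space of B^T.
_, s, Vt = np.linalg.svd(B.T)
rank = np.count_nonzero(s > 1e-12)
null_basis = Vt[rank:]                    # rows spanning {a : a @ B = 0}

print(null_basis.shape[0])                # 3 = dim U^0
print(np.allclose(null_basis @ B, 0.0))   # True: every such functional kills U
```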
The annihilator is a subspace
Suppose $U \subseteq V$. Then $U^0$ is a subspace of $V'$.
Proof
Clearly $0 \in U^0$, where $0$ here is the zero linear functional on $V$. This is because for all $u \in U$ we have it that $0(u) = 0$, where the left $0$ is the zero functional and the right $0$ is the zero in $\mathbf{F}$.
Suppose $\varphi, \psi \in U^0$. Then $\varphi, \psi \in V'$. Further, $\varphi(u) = \psi(u) = 0$ for every $u \in U$. If $u \in U$ then:
$$(\varphi + \psi)(u) = \varphi(u) + \psi(u) = 0 + 0 = 0$$
Thus we get closure under vector addition. A similar argument shows closure under scalar multiplication. Thus, $U^0$ is a subspace of $V'$.
☐
The proof of the next result could work as follows:
Choose a basis $u_1, \ldots, u_m$ of $U$
Extend it to a basis $u_1, \ldots, u_m, \ldots, u_n$ of $V$
Let $\varphi_1, \ldots, \varphi_n$ be the dual basis of $V'$.
Show that $\varphi_{m+1}, \ldots, \varphi_n$ is a basis of $U^0$.
But the following proof will be quicker than that:
Dimension of the annihilator
Suppose $V$ is finite-dimensional and $U$ is a subspace of $V$. Then:
$$\dim U + \dim U^0 = \dim V$$
Proof
Let $i \in \mathcal{L}(U, V)$ be the inclusion map defined by $i(u) = u$ for $u \in U$. Thus $i'$ is a linear map from $V'$ to $U'$. Using the FTOLM:
$$\dim \operatorname{range} i' + \dim \operatorname{null} i' = \dim V'$$
But $\operatorname{null} i' = U^0$ (a functional $\varphi \in V'$ has $i'(\varphi) = \varphi \circ i = 0$ exactly when $\varphi$ vanishes on $U$), and since $\dim V' = \dim V$ then:
$$\dim \operatorname{range} i' + \dim U^0 = \dim V$$
If $\varphi \in U'$ then $\varphi$ can be extended to a linear functional $\psi$ on $V$ via HW 3 - Linear Maps#11. As such, then $i'(\psi) = \psi \circ i = \varphi$, so then $\varphi \in \operatorname{range} i'$. Thus $\operatorname{range} i' = U'$, so the left dimension becomes $\dim U' = \dim U$, as we want.
☐
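A hedged numeric check of $\dim U + \dim U^0 = \dim V$ with the same row-vector identification as before: $\dim U^0$ is the dimension of the left null space of a matrix whose columns span $U$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
B = rng.standard_normal((n, 3))   # columns span a (generically 3-dimensional) U inside R^7

dim_U = np.linalg.matrix_rank(B)
dim_U0 = n - np.linalg.matrix_rank(B.T)    # dim{a : a @ B = 0}, the left null space of B
print(dim_U, dim_U0, dim_U + dim_U0 == n)  # 3 4 True
```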
The null space of $T'$
Suppose $V, W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then:
(a) $\operatorname{null} T' = (\operatorname{range} T)^0$
(b) $\dim \operatorname{null} T' = \dim \operatorname{null} T + \dim W - \dim V$
Proof
(a) First, suppose $\varphi \in \operatorname{null} T'$. Thus $0 = T'(\varphi) = \varphi \circ T$. Hence:
$$0 = (\varphi \circ T)(v) = \varphi(Tv)$$
for any $v \in V$. Thus $\varphi \in (\operatorname{range} T)^0$, so then $\operatorname{null} T' \subseteq (\operatorname{range} T)^0$ is implied.
For the other way, suppose $\varphi \in (\operatorname{range} T)^0$. Then $\varphi(Tv) = 0$ for all $v \in V$. Thus, $0 = \varphi \circ T = T'(\varphi)$, so then $\varphi \in \operatorname{null} T'$. Thus $(\operatorname{range} T)^0 \subseteq \operatorname{null} T'$ is implied.
(b) We have:
$$\begin{aligned} \dim \operatorname{null} T' &= \dim (\operatorname{range} T)^0 \\ &= \dim W - \dim \operatorname{range} T \\ &= \dim W - (\dim V - \dim \operatorname{null} T) \\ &= \dim \operatorname{null} T + \dim W - \dim V \end{aligned}$$
where the second line uses the dimension of the annihilator and the third line uses the FTOLM.
☐
The following is very useful as sometimes it's easier to prove that something is injective, rather than surjective.
$T$ surjective is equivalent to $T'$ injective
Suppose $V, W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $T$ is surjective iff $T'$ is injective.
The transpose of a matrix $A$, denoted $A^t$, is the matrix obtained from $A$ by interchanging the rows and columns. If $A$ is $m$-by-$n$ then $A^t$ is $n$-by-$m$, whose entries are given by:
$$(A^t)_{k,j} = A_{j,k}$$
Notice that the transpose operator itself is linear: $(\lambda A + C)^t = \lambda A^t + C^t$ for all $m$-by-$n$ matrices $A, C$ and all $\lambda \in \mathbf{F}$.
The transpose of the product of matrices
If $A$ is an $m$-by-$n$ matrix and $C$ is an $n$-by-$p$ matrix, then:
$$(AC)^t = C^t A^t$$
Proof
Suppose $1 \leq k \leq p$ and $1 \leq j \leq m$. Then:
$$((AC)^t)_{k,j} = (AC)_{j,k} = \sum_{r=1}^{n} A_{j,r} C_{r,k} = \sum_{r=1}^{n} (C^t)_{k,r} (A^t)_{r,j} = (C^t A^t)_{k,j}$$
Therefore $(AC)^t = C^t A^t$.
☐
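A quick random spot-check of $(AC)^t = C^t A^t$ (my own, using numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))   # m x n
C = rng.standard_normal((4, 2))   # n x p
print(np.allclose((A @ C).T, C.T @ A.T))   # True
```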
For the next lemma, we assume a basis $v_1, \ldots, v_n$ of an $n$-dimensional $V$ with dual basis $\varphi_1, \ldots, \varphi_n$ of $V'$, and a basis $w_1, \ldots, w_m$ of an $m$-dimensional $W$ along with its dual basis $\psi_1, \ldots, \psi_m$ of $W'$. So $\mathcal{M}(T)$ is computed with respect to the bases just mentioned of $V$ and $W$, and $\mathcal{M}(T')$ is computed with respect to the dual bases of $W'$ and $V'$.
the matrix of is the transpose of the matrix of
Suppose $T \in \mathcal{L}(V, W)$. Then $\mathcal{M}(T') = (\mathcal{M}(T))^t$.
Note that the $t$ in the exponent here denotes the transpose, and is not talking about exponentiation of $\mathcal{M}(T)$.
Proof
Let $A = \mathcal{M}(T)$ and $C = \mathcal{M}(T')$. Suppose $1 \leq j \leq m$ and $1 \leq k \leq n$. From the definition of $\mathcal{M}(T')$ we have:
$$T'(\psi_j) = \sum_{r=1}^{n} C_{r,j}\varphi_r$$
The left side of the equation equals $\psi_j \circ T$. So applying both sides of the equation to $v_k$ gives:
$$(\psi_j \circ T)(v_k) = \sum_{r=1}^{n} C_{r,j}\varphi_r(v_k) = C_{k,j}$$
or, computing the left side directly:
$$(\psi_j \circ T)(v_k) = \psi_j(Tv_k) = \psi_j\left(\sum_{r=1}^{m} A_{r,k}w_r\right) = A_{j,k}$$
Comparing the last line of both equations, then $C_{k,j} = A_{j,k}$, so $C = A^t$. Thus, $\mathcal{M}(T') = (\mathcal{M}(T))^t$, as desired.
☐
The Rank of a Matrix
row rank, column rank
Suppose $A$ is an $m$-by-$n$ matrix with entries in $\mathbf{F}$.
The row rank of $A$ is the dimension of the span of the rows of $A$ in $\mathbf{F}^{1,n}$.
The column rank of $A$ is the dimension of the span of the columns of $A$ in $\mathbf{F}^{m,1}$.
For example, consider:
$$A = \begin{pmatrix} 4 & 7 & 1 & 8 \\ 3 & 5 & 2 & 9 \end{pmatrix}$$
The row rank of $A$ is the dimension of:
$$\operatorname{span}\big((4 \;\; 7 \;\; 1 \;\; 8),\ (3 \;\; 5 \;\; 2 \;\; 9)\big)$$
in $\mathbf{F}^{1,4}$. Notice that neither of the two vectors listed above is a scalar multiple of the other, so they are LI and this span has dimension $2$. The row rank of $A$ is 2.
The column rank is similar, just with the columns of $A$:
$$\operatorname{span}\left(\begin{pmatrix}4\\3\end{pmatrix}, \begin{pmatrix}7\\5\end{pmatrix}, \begin{pmatrix}1\\2\end{pmatrix}, \begin{pmatrix}8\\9\end{pmatrix}\right)$$
This list has length 4, and its span must have dimension at least 2 since the first two vectors are LI with each other. But the span of the list is in $\mathbf{F}^{2,1}$, so its dimension must be at most 2. Thus, the dimension is exactly 2, so the column rank of $A$ is 2.
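For a numeric check of this example (noting that numpy's `matrix_rank` computes the common value of row rank and column rank, which is exactly the point of the results below):

```python
import numpy as np

A = np.array([[4.0, 7.0, 1.0, 8.0],
              [3.0, 5.0, 2.0, 9.0]])
print(np.linalg.matrix_rank(A))    # 2: the rank of A
print(np.linalg.matrix_rank(A.T))  # 2: the rank of A^t, i.e. the row rank of A
```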
Dimension of $\operatorname{range} T$ equals column rank of $\mathcal{M}(T)$
Suppose $V, W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $\dim \operatorname{range} T$ equals the column rank of $\mathcal{M}(T)$.
Proof
Suppose $v_1, \ldots, v_n$ is a basis of $V$ and $w_1, \ldots, w_m$ is a basis of $W$. The function that takes $w \in \operatorname{span}(Tv_1, \ldots, Tv_n)$ to $\mathcal{M}(w)$ is easily seen to be an isomorphism from $\operatorname{span}(Tv_1, \ldots, Tv_n)$ onto $\operatorname{span}(\mathcal{M}(Tv_1), \ldots, \mathcal{M}(Tv_n))$. Thus, their dimensions are equal, where the last dimension equals the column rank of $\mathcal{M}(T)$, because the vectors $\mathcal{M}(Tv_k)$ are exactly the columns of $\mathcal{M}(T)$.
It is easy to see that $\operatorname{range} T = \operatorname{span}(Tv_1, \ldots, Tv_n)$. Thus we have $\dim \operatorname{range} T = \dim \operatorname{span}(\mathcal{M}(Tv_1), \ldots, \mathcal{M}(Tv_n))$, which is the column rank of $\mathcal{M}(T)$, as desired.
☐
Row rank equals column rank
Suppose $A \in \mathbf{F}^{m,n}$. Then the row rank of $A$ equals the column rank of $A$.
Proof
Define $T: \mathbf{F}^{n,1} \to \mathbf{F}^{m,1}$ by $Tx = Ax$. Thus $\mathcal{M}(T) = A$, where $\mathcal{M}(T)$ is computed with respect to the standard bases of $\mathbf{F}^{n,1}$ and $\mathbf{F}^{m,1}$. Now:
$$\begin{aligned} \text{column rank of } A &= \text{column rank of } \mathcal{M}(T) \\ &= \dim \operatorname{range} T \\ &= \dim \operatorname{range} T' \\ &= \text{column rank of } \mathcal{M}(T') \\ &= \text{column rank of } A^t \\ &= \text{row rank of } A \end{aligned}$$
Here we used the previous result twice, plus the fact that $\dim \operatorname{range} T' = \dim \operatorname{range} T$, which follows from the FTOLM applied to $T'$ together with part (b) of the null space result above.
☐
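A final hedged numeric illustration of the chain above: for a random matrix built to have rank 3, the column rank of $A$ and the column rank of $A^t$ (i.e., the row rank of $A$) agree.

```python
import numpy as np

rng = np.random.default_rng(3)
X, Y = rng.standard_normal((5, 3)), rng.standard_normal((3, 8))
A = X @ Y                              # a 5 x 8 matrix of rank (generically) 3

col_rank = np.linalg.matrix_rank(A)    # dim range T, where T x = A x
row_rank = np.linalg.matrix_rank(A.T)  # column rank of A^t = row rank of A
print(col_rank, row_rank)              # 3 3
```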