Chapter 3 (cont.) - Products and Quotients of Vector Spaces

3.E: Products and Quotients of Vector Spaces

Products of Vector Spaces

product of vector spaces

Suppose V1,...,Vm are vector spaces over F.

  • The product V1 × ⋯ × Vm is defined by:
V1 × ⋯ × Vm = {(v1,...,vm) : vi ∈ Vi for i = 1,...,m}
  • Addition on V1 × ⋯ × Vm is defined by:
(u1,...,um) + (v1,...,vm) = (u1 + v1,...,um + vm)
  • Scalar multiplication on V1 × ⋯ × Vm is defined by:
λ(v1,...,vm) = (λv1,...,λvm)

For instance, (5 − 6x + 4x^2, (3,8,7)) ∈ P2(R) × R3.

Product of vector spaces is a vector space

Suppose V1,...,Vm are vector spaces over F. Then V1 × ⋯ × Vm is a vector space over F.

We don't prove that here (the book doesn't even prove it), but it's easy to show. Note that the 0 of this space is the 0 of each factor placed in each slot:

0 = (0_{V1}, ..., 0_{Vm})

The additive inverse is just the negatives of each vector:

−(v1,...,vm) = (−v1,...,−vm) ∈ V1 × ⋯ × Vm

An Example

Notice that R2 × R3 and R5 are not equal; they can't even be compared, since their elements have different forms. However, they are isomorphic via the map:

((x1,x2),(x3,x4,x5)) ↦ (x1,x2,x3,x4,x5)
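The flattening map above is easy to realize concretely; here is a minimal Python sketch (the names `flatten` and `unflatten` are mine, not the book's):

```python
def flatten(pair):
    """Map ((x1, x2), (x3, x4, x5)) in R^2 x R^3 to (x1, ..., x5) in R^5."""
    a, b = pair
    return tuple(a) + tuple(b)

def unflatten(v):
    """The inverse map: split (x1, ..., x5) back into ((x1, x2), (x3, x4, x5))."""
    return (tuple(v[:2]), tuple(v[2:]))
```

Linearity is immediate because concatenation respects slotwise addition and scalar multiplication, and `unflatten` exhibits the map as a bijection.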

Also notice that the list:

(1,(0,0)), (x,(0,0)), (x^2,(0,0)), (0,(1,0)), (0,(0,1))

is a basis of P2(R) × R2. This suggests the idea for the next lemma:

Dimension of a product is the sum of dimensions

Suppose V1,...,Vm are finite-dimensional vector spaces. Then V1 × ⋯ × Vm is finite-dimensional and:

dim(V1 × ⋯ × Vm) = dim V1 + ⋯ + dim Vm

Proof
Choose a basis of each Vj. For each basis vector of each Vj, consider the element of V1 × ⋯ × Vm that equals that basis vector in the j-th slot and 0 in every other slot. The list of all such vectors is linearly independent and spans V1 × ⋯ × Vm, so it is a basis, and its length is dim V1 + ⋯ + dim Vm.
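The basis built in this proof can be written out mechanically. Here is a small Python sketch, with vectors represented as coordinate tuples so that each factor is some R^{n_j} (the name `product_basis` is illustrative):

```python
def product_basis(bases):
    """Given a basis of each factor (vectors as coordinate tuples),
    return the basis of the product from the proof: each basis vector
    placed in its own slot, with the zero vector in every other slot."""
    result = []
    for j, basis in enumerate(bases):
        for v in basis:
            element = [tuple(0 for _ in b[0]) for b in bases]
            element[j] = tuple(v)
            result.append(tuple(element))
    return result
```

For R^2 × R^3 this produces 2 + 3 = 5 basis elements, matching the dimension formula.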

Products and Direct Sums

Products and direct sums

Suppose that U1,...,Um are subspaces of V. Define a linear map Γ : U1 × ⋯ × Um → U1 + ⋯ + Um by:

Γ(u1,...,um) = u1 + ⋯ + um

Then U1 + ⋯ + Um is a direct sum iff Γ is injective.

Notice that being injective here is the same as being invertible. That's because Γ is surjective by the definition of U1 + ⋯ + Um, since its range is the sum space itself.

Proof
The linear map Γ is injective iff the only way to write 0 as a sum u1 + ⋯ + um, where each uj ∈ Uj, is to take every uj = 0. Thus Chapter 1 - Vector Spaces#^038942 applies iff Γ is injective, so Γ is injective iff U1 + ⋯ + Um is a direct sum.

A sum is a direct sum iff the dimensions add up

Suppose V is finite-dimensional and U1,...,Um are subspaces of V. Then U1 + ⋯ + Um is a direct sum iff:

dim(U1 + ⋯ + Um) = dim U1 + ⋯ + dim Um

Proof
We know that Γ is surjective. By the FTOLM, Γ is injective iff:

dim(U1 + ⋯ + Um) = dim(U1 × ⋯ × Um)

Combining Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^e94bef and Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^f0ae57, we get that the sum is direct iff:

dim(U1 + ⋯ + Um) = dim(U1) + ⋯ + dim(Um)

Quotients of Vector Spaces

v+U

Suppose v ∈ V and U is a subspace of V. Then v + U is the subset of V defined by:

v + U = {v + u : u ∈ U}

![[Pasted image 20240402154116.png]]

affine subset, parallel

  • An affine subset of V is a subset of V of the form v + U for some v ∈ V and some subspace U of V.
  • For v ∈ V and a subspace U of V, the affine subset v + U is said to be parallel to U.

For instance, in the above example, every line in R2 with slope 2 is parallel to U, and each of these parallel lines is an affine subset of R2.

As another example, if U = {(x,y,0) ∈ R3 : x,y ∈ R}, then the affine subsets of R3 parallel to U are the planes in R3 that are parallel to the xy-plane U in the usual sense. Notice that no line in R3 is an affine subset parallel to U. That's because an affine subset parallel to U must be a translate v + U of the entire plane U, not just of a line inside it.

quotient space, V/U

Suppose U is a subspace of V. Then the quotient space V/U is the set of all affine subsets of V parallel to U. In other words:

V/U = {v + U : v ∈ V}

For instance, if U is the line {(x, 2x) : x ∈ R}, then R2/U is the set of all lines in R2 that have slope 2.

If instead U is a line in R3 containing the origin, then R3/U is the set of all lines in R3 parallel to U.

If U is a plane in R3 containing the origin, then R3/U is the set of all planes in R3 parallel to U.

Two affine subsets parallel to U are equal or disjoint

Suppose U is a subspace of V and v, w ∈ V. Then the following are equivalent:

  • v − w ∈ U
  • v + U = w + U
  • (v + U) ∩ (w + U) ≠ ∅

Proof
First suppose (1) holds, so v − w ∈ U. For any u ∈ U:

v + u = w + ((v − w) + u) ∈ w + U

So v + U ⊆ w + U. A similar argument shows that w + U ⊆ v + U, so (2) is proved.

If (2) holds then (3) must hold, since the two (equal) sets are nonempty: v = v + 0 ∈ v + U.

Suppose (3). Thus there are u1, u2 ∈ U such that:

v + u1 = w + u2

Then v − w = u2 − u1 ∈ U, showing (1).

addition and scalar multiplication on V/U

Suppose U is a subspace of V. Then addition and scalar multiplication are defined on V/U by:

(v + U) + (w + U) = (v + w) + U
λ(v + U) = (λv) + U

for v, w ∈ V and λ ∈ F.

We need these to show that V/U is a vector space.

Quotient space is a vector space

Suppose U is a subspace of V. Then V/U, with the operations of addition and scalar multiplication as defined above, is a vector space.

Proof
A potential problem: an element of V/U can be written as v + U for more than one choice of v, so we must check the operations don't depend on that choice. Suppose v, w ∈ V, and suppose also that v̂, ŵ ∈ V are such that v + U = v̂ + U and w + U = ŵ + U. To show our definition of addition on V/U makes sense, we'll show that (v + w) + U = (v̂ + ŵ) + U.

Via Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^045b5b, we have:

v − v̂ ∈ U and w − ŵ ∈ U

Because U is a subspace of V and thus closed under addition, this implies that (v − v̂) + (w − ŵ) ∈ U, so (v + w) − (v̂ + ŵ) ∈ U. Using our theorem again, we get that:

(v + w) + U = (v̂ + ŵ) + U

so the definition we made makes sense.

In a similar vein, suppose λ ∈ F. Since U is closed under scalar multiplication, λ(v − v̂) ∈ U, so λv − λv̂ ∈ U; using the theorem again gives (λv) + U = (λv̂) + U, as desired.

Actually verifying the remaining vector space properties is routine. We note, though, that the additive identity of V/U is 0 + U and the additive inverse of v + U is (−v) + U.
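For the concrete case U = {(x, 2x) : x ∈ R} inside R^2, each coset v + U is determined by a single number, and the quotient operations act on that number. A small sketch (the invariant b − 2a is my choice of coset label, not notation from the book):

```python
def coset_label(v):
    """For U = {(x, 2x)} in R^2, vectors v = (a, b) and w = (c, d) lie in
    the same coset exactly when v - w is in U, i.e. when b - 2a == d - 2c,
    so b - 2a labels the coset v + U."""
    a, b = v
    return b - 2 * a
```

Since the label is itself linear in v, the label of a sum of representatives depends only on the labels of the summands, which is exactly the well-definedness checked in the proof above.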

quotient map, π

Suppose U is a subspace of V. The quotient map π is the linear map π : V → V/U defined by:

π(v) = v + U

for v ∈ V.

It's easy to show that π is a linear map. Most often V and U are clear from context.

Dimension of a quotient space

Suppose V is finite-dimensional and U is a subspace of V. Then:

dim(V/U) = dim(V) − dim(U)

Proof
Let π be the quotient map from V to V/U. From Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^045b5b, we see that null(π) = U. Further, range(π) = V/U, so using the FTOLM:

dim(V) = dim(U) + dim(V/U)

which rearranges to the desired result.

Each linear map T defined on V induces a linear map T~ on V/(null(T)), which we define:

T~

Suppose T ∈ L(V,W). Define T~ : V/(null(T)) → W by:

T~(v + null(T)) = Tv

To show that the definition of T~ makes sense, suppose u, v ∈ V are such that u + null(T) = v + null(T). By Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^045b5b, we have u − v ∈ null(T). Thus T(u − v) = 0, so Tu = Tv.

To build intuition: if U is a plane through the origin in R3, think of R3/U as the one-parameter family of planes parallel to U, which is why dim(R3/U) = 1. The quotient map π sends a chosen v to the plane v + U containing it. And T~ takes each affine subset parallel to null(T) to the common value of T on that subset.

Nullspace and range of T~

Suppose T ∈ L(V,W). Then:

  • T~ is a linear map from V/(null(T)) to W
  • T~ is injective
  • range(T~)=range(T)
  • V/(null(T)) is isomorphic to range(T).

Proof
Part (1) is routine to verify. For (2), suppose v ∈ V and T~(v + null(T)) = 0. Then Tv = 0, so v ∈ null(T). Then Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^045b5b says that v + null(T) = 0 + null(T), so null(T~) = {0} and T~ is injective.

The definition shows how (3) is true. For (4), combine (2) and (3): T~ is an injective linear map from V/(null(T)) onto range(T~) = range(T), so it is an isomorphism between them.

3.F: Duality

The Dual Space and the Dual Map

linear functional

A linear functional on a vector space V is an element of L(V,F).

For instance, the trace is a linear functional on the space of square matrices F^{n,n}. Another example is φ : P(R) → R given by:

φ(p) = 3p″(5) + 7p(4)

This is a linear functional. We give a special name to L(V,F):

dual space, V′

The dual space of V, denoted V′, is the vector space of all linear functionals on V. In other words, V′ = L(V,F).

dim(V′) = dim(V)

Suppose V is finite-dimensional. Then V′ is also finite-dimensional and has the same dimension as V.

Proof
dim(V′) = dim(L(V,F)) = dim(V) · dim(F) = dim(V).

As such, these two spaces are also isomorphic.

dual basis

If v1,...,vn is a basis of V, then the dual basis of v1,...,vn is the list φ1,...,φn of elements of V′, where each φj is the linear functional on V such that:

φj(vk) = χ(k = j), i.e. φj(vk) = 1 if k = j and φj(vk) = 0 if k ≠ j

Example

What is the dual basis of the standard basis e1,...,en of Fn? It is the list of coordinate functionals:

φj(x1,...,xn) = xj

Clearly:

φj(ek) = χ(k = j): it equals 1 when k = j and 0 when k ≠ j
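In code, the dual basis of the standard basis of F^n is just a list of coordinate selectors. A quick Python sketch (0-indexed, unlike the text; `dual_basis` is an illustrative name):

```python
def dual_basis(n):
    """Return [phi_0, ..., phi_{n-1}], where phi_j picks out the j-th coordinate.
    The default argument j=j freezes each index inside its own lambda."""
    return [lambda x, j=j: x[j] for j in range(n)]
```

Without the `j=j` default-argument trick, every lambda would close over the same loop variable and return the last coordinate.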

The next result shows that the dual basis is indeed a basis. Thus, the terminology "dual basis" is justified.

Dual basis is a basis of the dual space

Suppose V is finite-dimensional. Then the dual basis of a basis of V is a basis of V′.

Proof
Suppose v1,...,vn is a basis for V. Let φ1,...,φn denote the dual basis. To show that φ1,...,φn is linearly independent, suppose a1,...,an ∈ F are such that:

a1φ1 + ⋯ + anφn = 0

Now (a1φ1 + ⋯ + anφn)(vj) = aj for j = 1,...,n. Thus a1 = ⋯ = an = 0, so φ1,...,φn is linearly independent.

It is a linearly independent list of n = dim(V′) vectors, so it is a basis of V′.

dual map, T′

If T ∈ L(V,W), then the dual map of T is the linear map T′ ∈ L(W′,V′) defined by T′(φ) = φ ∘ T for φ ∈ W′.

If T ∈ L(V,W) and φ ∈ W′, then T′(φ) is defined above as the composition of the linear maps φ and T, so T′(φ) is a linear map from V to F; that is, T′(φ) ∈ V′.

To show that T′ is linear as a map from W′ to V′: if φ, ψ ∈ W′, then:

T′(φ + ψ) = (φ + ψ) ∘ T = φ ∘ T + ψ ∘ T = T′(φ) + T′(ψ)

and for λ ∈ F:

T′(λφ) = (λφ) ∘ T = λ(φ ∘ T) = λT′(φ)
Warning

Do not confuse D′, the dual of a linear map D, with p′, the derivative of a polynomial p.

For example, define D ∈ L(P(R), P(R)) by Dp = p′, the derivative operator on polynomials. Suppose φ is the linear functional on P(R) given by φ(p) = p(3). Then D′(φ) is the linear functional on P(R) given by:

D′(φ)(p) = (φ ∘ D)(p) = φ(Dp) = φ(p′) = p′(3)

So D′(φ) is the linear functional that takes p to p′(3).

Or, if φ(p) = ∫₀¹ p, then:

D′(φ)(p) = (φ ∘ D)(p) = φ(Dp) = φ(p′) = ∫₀¹ p′(x) dx = p(1) − p(0)
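The first example, D′(φ) with φ(p) = p(3), can be checked numerically by representing a polynomial as its coefficient list [a0, a1, a2, ...] (all names here are illustrative, not from the book):

```python
def derivative(p):
    """Dp for p = a0 + a1 x + a2 x^2 + ..., as a coefficient list."""
    return [k * p[k] for k in range(1, len(p))]

def evaluate(p, x):
    """Evaluate the coefficient list p at the point x."""
    return sum(c * x ** k for k, c in enumerate(p))

def dual_map(T):
    """The dual map: T'(phi) = phi o T."""
    return lambda phi: (lambda p: phi(T(p)))

phi = lambda p: evaluate(p, 3)          # phi(p) = p(3)
Dprime_phi = dual_map(derivative)(phi)  # should take p to p'(3)
```

For p = 1 + 2x + 3x^2 we have p′ = 2 + 6x, so D′(φ)(p) should equal p′(3) = 20.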

Algebraic properties of dual maps

  • (S + T)′ = S′ + T′ for all S, T ∈ L(V,W)
  • (λT)′ = λT′ for all λ ∈ F and T ∈ L(V,W)
  • (ST)′ = T′S′ for all T ∈ L(U,V) and S ∈ L(V,W)

Proof
The reader (me) will prove the first two in a HW problem.

For the third one, suppose φ ∈ W′. Then:

(ST)′(φ) = φ ∘ (ST) = (φ ∘ S) ∘ T = T′(φ ∘ S) = T′(S′(φ))

Thus (ST)′ = T′S′.

The Null Space and Range of the Dual of a Linear Map

Our goal here is to describe null(T′) and range(T′) in terms of the null space and range of T:

annihilator, U⁰

For U ⊆ V, the annihilator of U, denoted U⁰, is defined by:

U⁰ = {φ ∈ V′ : φ(u) = 0 for all u ∈ U}

This is a "null space lite" for φ: instead of vanishing everywhere, φ only has to vanish on U. For example, suppose U is the subspace of P(R) consisting of all polynomial multiples of x^2. If φ is the linear functional on P(R) defined by φ(p) = p′(0), then φ ∈ U⁰.

Note

Sometimes the notation U⁰_V is used when multiple vector spaces are in play, since U⁰ depends on the ambient space V. Usually, though, V is known from context, and the subscript is omitted.

As an example, let e1,...,e5 denote the standard basis of R5, and let φ1,...,φ5 denote the dual basis of (R5)′. Suppose:

U = span(e1, e2)

Then U⁰ = span(φ3, φ4, φ5). Why? Recall from Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^aab354 that:

φj(x1,...,x5) = xj

just selects the j-th coordinate.

First, suppose φ ∈ span(φ3, φ4, φ5). Then there are c3, c4, c5 ∈ R such that:

φ = c3φ3 + c4φ4 + c5φ5

If u ∈ U then u = (u1, u2, 0, 0, 0), so:

φ(u) = 0

since φ only reads the 3rd, 4th, and 5th coordinates, which are all zero. Thus φ ∈ U⁰, so span(φ3, φ4, φ5) ⊆ U⁰. For the other direction, suppose φ ∈ U⁰. Since the dual basis is a basis of (R5)′, there are ci ∈ R where:

φ = c1φ1 + ⋯ + c5φ5

Since e1 ∈ U and φ ∈ U⁰:

0 = φ(e1) = (c1φ1 + ⋯ + c5φ5)(e1) = c1

The same argument with e2 gives c2 = 0. Hence φ ∈ span(φ3, φ4, φ5), and we get the reverse inclusion.
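The inclusion span(φ3, φ4, φ5) ⊆ U⁰ is easy to sanity-check numerically with coordinate functionals on R^5 (a sketch; the helper names are mine, and the check uses 0-indexed coordinates):

```python
def phi(j):
    """The j-th coordinate functional on R^5 (0-indexed here)."""
    return lambda x: x[j]

def annihilates(f, samples):
    """Check that the functional f vanishes on the given sample vectors."""
    return all(f(u) == 0 for u in samples)
```

Any combination of the last three coordinate functionals kills vectors of the form (u1, u2, 0, 0, 0), while φ1 does not.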

The annihilator is a subspace

Suppose U ⊆ V. Then U⁰ is a subspace of V′.

Proof
Clearly 0 ∈ U⁰, where 0 is the zero linear functional on V. This is because for all u ∈ U we have 0(u) = 0, where the left 0 is the zero functional and the right 0 is the scalar zero in F.

Suppose φ, ψ ∈ U⁰. Then φ, ψ ∈ V′ and φ(u) = ψ(u) = 0 for every u ∈ U. If u ∈ U then:

(φ + ψ)(u) = φ(u) + ψ(u) = 0 + 0 = 0

Thus we get closure under vector addition. A similar argument shows closure under scalar multiplication. Thus, U⁰ is a subspace of V′.

One could prove the next result more directly, say by extending a basis of U to a basis of V, but the following proof is quicker:

Dimension of the annihilator

Suppose V is finite-dimensional and U is a subspace of V. Then:

dim(U) + dim(U⁰) = dim(V)

Proof
Let i ∈ L(U,V) be the inclusion map defined by i(u) = u for u ∈ U. Then the dual map i′ is a linear map from V′ to U′. Applying the FTOLM to i′:

dim(range(i′)) + dim(null(i′)) = dim(V′)

But null(i′) = U⁰ (since i′(φ) = φ ∘ i is the restriction of φ to U), and dim(V′) = dim(V), so:

dim(range(i′)) + dim(U⁰) = dim(V)

If φ ∈ U′, then φ can be extended to a linear functional ψ on V via HW 3 - Linear Maps#11. As such, i′(ψ) = ψ ∘ i = φ, so φ ∈ range(i′). Thus range(i′) = U′, so the left dimension is dim(U′) = dim(U), as we want.

The null space of T′

Suppose V, W are finite-dimensional and T ∈ L(V,W). Then:

  • null(T′) = (range(T))⁰
  • dim(null(T′)) = dim(null(T)) + dim(W) − dim(V)

Proof
(a) First, suppose φ ∈ null(T′). Then 0 = T′(φ) = φ ∘ T. Hence:

0 = (φ ∘ T)(v) = φ(Tv)

for every v ∈ V. Thus φ ∈ (range(T))⁰, so null(T′) ⊆ (range(T))⁰.

For the other direction, suppose φ ∈ (range(T))⁰. Then φ(Tv) = 0 for all v ∈ V. Thus 0 = φ ∘ T = T′(φ), so φ ∈ null(T′), giving (range(T))⁰ ⊆ null(T′).

(b) We have:

dim(null(T′)) = dim((range(T))⁰)                (from (a))
             = dim(W) − dim(range(T))           (dim of annihilator)
             = dim(W) − (dim(V) − dim(null(T))) (FTOLM)
             = dim(null(T)) + dim(W) − dim(V)


The following is very useful, as it's often easier to prove that a map is injective than to prove that a map is surjective.

T surjective is equivalent to T′ injective

Suppose V, W are finite-dimensional and T ∈ L(V,W). Then T is surjective iff T′ is injective.

Proof
The map T ∈ L(V,W) is surjective iff range(T) = W, which happens iff (range(T))⁰ = {0} (by the dimension of the annihilator), which happens iff null(T′) = {0} by Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^87da71, which happens iff T′ is injective.

The range of T′

Suppose V, W are finite-dimensional and T ∈ L(V,W). Then:

  • dim(range(T′)) = dim(range(T))
  • range(T′) = (null(T))⁰

Proof
(a) We have:

dim(range(T′)) = dim(W′) − dim(null(T′))   (FTOLM)
              = dim(W) − dim((range(T))⁰)  (dim(W′) = dim(W), and the null space of T′)
              = dim(range(T))              (dim of an annihilator)

(b) First suppose φ ∈ range(T′). Then there exists ψ ∈ W′ such that φ = T′(ψ). If v ∈ null(T), then:

φ(v) = (T′(ψ))(v) = (ψ ∘ T)(v) = ψ(Tv) = ψ(0) = 0

Hence φ ∈ (null(T))⁰. This implies that range(T′) ⊆ (null(T))⁰.

We will complete the proof by showing that range(T′) and (null(T))⁰ have the same dimension, which forces equality. Note that:

dim(range(T′)) = dim(range(T)) = dim(V) − dim(null(T)) = dim((null(T))⁰)

where the first equality comes from (a), the second from the FTOLM, and the third from Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^dc9ef2.

T injective is equivalent to T′ surjective

Suppose V, W are finite-dimensional and T ∈ L(V,W). Then T is injective iff T′ is surjective.

Proof
The map T is injective iff null(T) = {0}, which happens iff (null(T))⁰ = V′ (by the dimension of the annihilator). This holds iff range(T′) = V′ via Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^6e8978, which happens iff T′ is surjective.

The Matrix of the Dual of a Linear Map

We can now define the transpose of a matrix:

transpose, Aᵗ

The transpose of a matrix A, denoted Aᵗ, is the matrix obtained from A by interchanging the rows and columns. If A is m×n, then Aᵗ is the n×m matrix whose entries are given by:

(Aᵗ)k,j = Aj,k

Notice that the transpose operation is itself linear: (A + C)ᵗ = Aᵗ + Cᵗ and (λA)ᵗ = λAᵗ for all m×n matrices A, C and all λ ∈ F.

The transpose of the product of matrices

If A is an m×n matrix and C is an n×p matrix, then:

(AC)ᵗ = CᵗAᵗ

Proof
Suppose 1 ≤ k ≤ p and 1 ≤ j ≤ m. Then:

((AC)ᵗ)k,j = (AC)j,k = ∑r Aj,r Cr,k = ∑r (Cᵗ)k,r (Aᵗ)r,j = (CᵗAᵗ)k,j

where each sum runs over r = 1,...,n. Therefore (AC)ᵗ = CᵗAᵗ.
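The identity (AC)ᵗ = CᵗAᵗ is easy to spot-check with a pure-Python transpose and matrix product, mirroring the entry formulas in the proof (matrices as lists of rows; helper names are mine):

```python
def transpose(A):
    """Interchange rows and columns: transpose(A)[k][j] == A[j][k]."""
    return [list(row) for row in zip(*A)]

def matmul(A, C):
    """(AC)[j][k] = sum over r of A[j][r] * C[r][k]."""
    return [[sum(A[j][r] * C[r][k] for r in range(len(C)))
             for k in range(len(C[0]))]
            for j in range(len(A))]
```

For any compatible A (m×n) and C (n×p), `transpose(matmul(A, C))` should coincide with `matmul(transpose(C), transpose(A))`.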

For the next lemma, assume V is n-dimensional with basis v1,...,vn and dual basis φ1,...,φn of V′, and W is m-dimensional with basis w1,...,wm and dual basis ψ1,...,ψm of W′. So M(T) is computed with respect to the bases just mentioned of V and W, and M(T′) is computed with respect to the dual bases of W′ and V′.

the matrix of T′ is the transpose of the matrix of T

Suppose T ∈ L(V,W). Then M(T′) = (M(T))ᵗ.

Note that the t in Aᵗ denotes the transpose; it is not an exponent.

Proof
Let A = M(T) and C = M(T′). Suppose 1 ≤ j ≤ m and 1 ≤ k ≤ n. From the definition of M(T′) we have:

T′(ψj) = ∑r Cr,j φr   (sum over r = 1,...,n)

The left side of the equation equals ψj ∘ T. So applying both sides of the equation to vk gives:

(ψj ∘ T)(vk) = ∑r Cr,j φr(vk) = Ck,j

On the other hand:

(ψj ∘ T)(vk) = ψj(Tvk) = ψj(∑r Ar,k wr) = ∑r Ar,k ψj(wr) = Aj,k   (sums over r = 1,...,m)

Comparing the two computations, Ck,j = Aj,k, so C = Aᵗ. Thus M(T′) = (M(T))ᵗ, as desired.

The Rank of a Matrix

row rank, column rank

Suppose A is an m×n matrix with entries in F.

  • The row rank of A is the dimension of the span of the rows of A in F^{1,n}.
  • The column rank of A is the dimension of the span of the columns of A in F^{m,1}.

For example, consider:

A = [ 4 7 1 8 ]
    [ 3 5 2 9 ]

The row rank of A is the dimension of:

span((4, 7, 1, 8), (3, 5, 2, 9))

Neither row is a scalar multiple of the other, so the two rows are linearly independent and this span has dimension 2. The row rank of A is 2.

The column rank is computed similarly with the columns of A. The list of columns has length 4, and its span has dimension at least 2, since the first two columns are linearly independent. But the span lies in F^{2,1}, so its dimension is at most 2. Thus the dimension is exactly 2, and the column rank of A is 2.
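The row and column ranks in this example can also be checked by row reduction over the rationals. A sketch (a generic Gaussian-elimination rank routine of my own, not one from the book):

```python
from fractions import Fraction

def rank(A):
    """Column rank of A, computed as the number of pivots after
    row reduction over the rationals (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def transpose(A):
    """Interchange rows and columns, so rank(transpose(A)) is the row rank of A."""
    return [list(row) for row in zip(*A)]
```

Applying `rank` to A and to its transpose gives 2 in both cases, illustrating the next result.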

Dimension of range(T) equals rank of M(T)

Suppose V, W are finite-dimensional and T ∈ L(V,W). Then dim(range(T)) equals the column rank of M(T).

Proof
Suppose v1,...,vn is a basis of V and w1,...,wm is a basis of W. The function that takes w ∈ span(Tv1,...,Tvn) to M(w) is easily seen to be an isomorphism from span(Tv1,...,Tvn) onto span(M(Tv1),...,M(Tvn)). Thus the two spans have equal dimension, and the dimension of the latter equals the column rank of M(T).

It is easy to see that range(T)=span(Tv1,...,Tvn). Thus we have dim(range(T))=dim(span(Tv1,...,Tvn)) which is the column rank of M(T) as desired.

Row rank equals column rank

Suppose A ∈ F^{m,n}. Then the row rank of A equals the column rank of A.

Proof
Define T : F^{n,1} → F^{m,1} by Tx = Ax. Thus M(T) = A, where M(T) is computed with respect to the standard bases of F^{n,1} and F^{m,1}. Now:

column rank of A = column rank of M(T)
                 = dim(range(T))
                 = dim(range(T′))
                 = column rank of M(T′)
                 = column rank of Aᵗ
                 = row rank of A

where the second and fourth equalities use the previous result, the third uses the range of T′, the fifth uses M(T′) = (M(T))ᵗ = Aᵗ, and the last holds because the columns of Aᵗ are the rows of A.

rank

The rank of a matrix A ∈ F^{m,n} is the column rank of A.