Chapter 3 - Linear Maps

3.A: The Vector Space of Linear Maps

Linear Map

A linear map from $V$ to $W$ is a function $T: V \to W$ such that:

  1. $T(u+v) = Tu + Tv$ for all $u, v \in V$ (additivity)
  2. $T(\lambda v) = \lambda(Tv)$ for all $\lambda \in F$, $v \in V$ (homogeneity)

Examples include:

Linear Maps and Basis of Domain

Suppose $v_1, \ldots, v_n$ is a basis of $V$ and $w_1, \ldots, w_n \in W$. Then there exists a unique linear map $T: V \to W$ such that:

$$Tv_j = w_j$$

for each $j \in \{1, \ldots, n\}$.
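The extension-by-linearity behind this theorem can be sketched in code. The representation below (vectors in $V$ as coordinate lists with respect to the basis $v_1, \ldots, v_n$, vectors in $W$ as plain lists) and the helper name `make_linear_map` are illustrative assumptions, not from the text:

```python
# Sketch: a vector in V is its coordinate list (c_1, ..., c_n) with
# respect to the basis v_1, ..., v_n; vectors in W are plain lists.

def make_linear_map(ws):
    """Return the unique linear map T with T(v_j) = ws[j]."""
    def T(coords):
        # T(c_1 v_1 + ... + c_n v_n) = c_1 w_1 + ... + c_n w_n
        m = len(ws[0])
        return [sum(c * w[i] for c, w in zip(coords, ws)) for i in range(m)]
    return T

# Example: V = F^3, W = F^2, with T(v_1) = (1,0), T(v_2) = (0,1), T(v_3) = (1,1)
T = make_linear_map([[1, 0], [0, 1], [1, 1]])
print(T([2, 3, 4]))  # 2*(1,0) + 3*(0,1) + 4*(1,1) = [6, 7]
```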

Algebraic Operations on L(V,W)

Addition and scalar multiplication on L(V,W)

Suppose $S, T \in L(V,W)$ and $\lambda \in F$. The sum $S+T$ and the product $\lambda T$ are the linear maps from $V$ to $W$ defined by:

$$(S+T)(v) = Sv + Tv, \qquad (\lambda T)(v) = \lambda(Tv)$$

As a result, $L(V,W)$ is a vector space (the zero element here is the zero linear map, defined by $0v = 0$ for all $v \in V$).

For the rest of this section suppose U is a vector space:

Product of Linear Maps

If $T \in L(U,V)$ and $S \in L(V,W)$, then the product $ST \in L(U,W)$ is defined by:

$$(ST)(u) = S(T(u))$$
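Since $ST$ is just function composition, it can be illustrated with plain functions on coordinate lists; the maps `S` and `T` below are made-up examples, not from the text:

```python
def T(u):
    # T in L(U, V) with U = F^2, V = F^3 (hypothetical example map)
    x, y = u
    return [x, y, x + y]

def S(v):
    # S in L(V, W) with V = F^3, W = F^1 (hypothetical example map)
    a, b, c = v
    return [a + 2 * b + 3 * c]

def ST(u):
    # The product: (ST)(u) = S(T(u))
    return S(T(u))

print(ST([1, 1]))  # T([1,1]) = [1,1,2]; S([1,1,2]) = [1 + 2 + 6] = [9]
```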
Algebraic Properties of products of Linear Maps

Products of linear maps satisfy:

  1. Associativity: $(T_1 T_2) T_3 = T_1 (T_2 T_3)$
  2. Identity: $TI = IT = T$
  3. Distributive properties: $(S_1 + S_2)T = S_1 T + S_2 T$ and $S(T_1 + T_2) = ST_1 + ST_2$
Linear maps take 0 to 0

Suppose $T$ is a linear map from $V$ to $W$. Then $T(0) = 0$.

Proof
By additivity:

$$T(0) = T(0+0) = T(0) + T(0)$$

Adding the additive inverse of $T(0)$ to both sides gives $T(0) = 0$.

3.B: Null Spaces and Ranges

null space, null(T)

For $T \in L(V,W)$, the null space of $T$, denoted $\operatorname{null}(T)$, is the subset of $V$ consisting of those vectors that $T$ maps to $0 \in W$. So $\operatorname{null}(T) = \{v \in V : Tv = 0\}$.

Some examples include:

The nullspace is a subspace

Suppose $T \in L(V,W)$. Then $\operatorname{null}(T)$ is a subspace of $V$.

Proof
Since $T$ is linear, $T(0) = 0$ by Chapter 3 - Linear Maps#^c97a61. Thus $0 \in \operatorname{null}(T)$.

Suppose $u, v \in \operatorname{null}(T)$. Then:

$$T(u+v) = Tu + Tv = 0 + 0 = 0$$

so $u + v \in \operatorname{null}(T)$. Similarly, if $\lambda \in F$, then $T(\lambda u) = \lambda Tu = \lambda \cdot 0 = 0$, giving closure under scalar multiplication.

injective

A function $T: V \to W$ is injective if $Tu = Tv$ implies that $u = v$.

Injectivity is equivalent to null space equals {0}

Let $T \in L(V,W)$. Then $T$ is injective iff $\operatorname{null} T = \{0\}$.

Proof
($\Rightarrow$): Suppose $T$ is injective. We already know $\{0\} \subseteq \operatorname{null} T$, since $\operatorname{null} T$ is a subspace of $V$ and thus must contain the zero vector. For the other inclusion, suppose $v \in \operatorname{null}(T)$. Then:

$$Tv = 0 = T(0)$$

and because $T$ is injective, this implies $v = 0$, completing the proof for this direction.

($\Leftarrow$): Suppose $\operatorname{null} T = \{0\}$. Let $u, v \in V$ be arbitrary with $Tu = Tv$. Then:

$$0 = Tu - Tv = T(u-v)$$

thus $u - v \in \operatorname{null}(T) = \{0\}$, so $u - v = 0$, hence $u = v$ and $T$ is injective.

Range

For $T: V \to W$, the range of $T$ is the subset of $W$ consisting of those vectors of the form $Tv$ for some $v \in V$:

$$\operatorname{range}(T) = \{Tv : v \in V\}$$

Some examples include:

The range is a subspace

If $T \in L(V,W)$, then $\operatorname{range}(T)$ is a subspace of $W$.

Proof
Since $T(0) = 0$, we have $0 \in \operatorname{range}(T)$. If $w_1, w_2 \in \operatorname{range}(T)$, then $w_1 = Tv_1$ and $w_2 = Tv_2$ for some $v_1, v_2 \in V$, so $w_1 + w_2 = T(v_1 + v_2) \in \operatorname{range}(T)$. Similarly, $\lambda w_1 = T(\lambda v_1) \in \operatorname{range}(T)$ for $\lambda \in F$, so $\operatorname{range}(T)$ is a subspace of $W$.
surjective

A function $T: V \to W$ is surjective if its range equals $W$.

For instance, the differentiation map $D$ on $P(\mathbb{R})$ is surjective, since the range of $D$ is the entire polynomial space.

Fundamental Theorem of Linear Maps

Fundamental Theorem of Linear maps

Suppose $V$ is finite-dimensional and $T \in L(V,W)$. Then $\operatorname{range}(T)$ is finite-dimensional and:

$$\dim(V) = \dim(\operatorname{null}(T)) + \dim(\operatorname{range}(T))$$

A proof is seen in Lecture 14 - Linear Transformations ++#^6e22f5.
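As a numeric sanity check of the theorem (not a proof), one can compare rank and nullity for the map $x \mapsto Ax$ given by a small matrix; the `rank` helper below is an illustrative Gaussian-elimination sketch, not a library function:

```python
from fractions import Fraction

# For T: F^3 -> F^2 given by a 2x3 matrix A, dim V = 3 should equal
# dim null(T) + dim range(T) = nullity + rank.

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0  # current pivot row
    for c in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6]]        # second row is twice the first, so rank(A) = 1
dim_V = 3              # number of columns = dimension of the domain
dim_range = rank(A)
dim_null = dim_V - dim_range
print(dim_V == dim_null + dim_range)  # True: 3 = 2 + 1
```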

Some interesting properties:

Some other Notes

We can use this idea to show when a system of linear equations has a solution and when it does not. There's a lot more information on relationships to systems of linear equations, which you can find at Year3/Winter2024/MATH306-LinearAlgebraII/2015_Book_LinearAlgebraDoneRight.pdf#page=65 and onward. Some lemmas:

Summary of Lemmas

  • A homogeneous system of linear equations with more variables than equations has nonzero solutions.
  • An inhomogeneous system of linear equations with more equations than variables has no solution for some choice of the constant terms.

3.C: Matrices

Representing matrices

matrix, Aj,k

Let $m, n$ denote positive integers. An $m \times n$ matrix $A$ is a rectangular array of elements of $F$ with $m$ rows and $n$ columns:

$$A = \begin{pmatrix} A_{1,1} & \cdots & A_{1,n} \\ \vdots & & \vdots \\ A_{m,1} & \cdots & A_{m,n} \end{pmatrix}$$

where $A_{j,k}$ denotes the entry in row $j$ and column $k$ of $A$.

matrix of a linear map, M(T)

Suppose $T \in L(V,W)$, $v_1, \ldots, v_n$ is a basis of $V$, and $w_1, \ldots, w_m$ is a basis of $W$. The matrix of $T$ with respect to these bases is the $m \times n$ matrix $M(T)$ whose entries $A_{j,k}$ are defined by:

$$Tv_k = A_{1,k} w_1 + \cdots + A_{m,k} w_m$$

If the bases are not clear from the context, then the notation $M(T, (v_1, \ldots, v_n), (w_1, \ldots, w_m))$ is used.

You can use the following to help construct the matrix (image omitted: Pasted image 20240207235858.png):

Thus:

$$Tv_k = \sum_{j=1}^{m} A_{j,k} w_j$$
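This column-by-column recipe can be sketched for the differentiation map $D: P_3(\mathbb{R}) \to P_2(\mathbb{R})$ in the standard bases $1, x, x^2, x^3$ and $1, x, x^2$; the coefficient-list representation below is an assumption for illustration:

```python
# Polynomials are coefficient lists [a_0, a_1, ...].

def D(p):
    """Differentiate a polynomial given by its coefficient list."""
    return [j * p[j] for j in range(1, len(p))]

basis_V = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Column k of M(D) holds the coordinates of D(v_k) in the basis of W.
columns = []
for v in basis_V:
    w = D(v)
    columns.append(w + [0] * (3 - len(w)))  # pad to dim W = 3

M_D = [[columns[k][j] for k in range(4)] for j in range(3)]
print(M_D)  # [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3]]
```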

Addition and Scalar Multiplication of Matrices

We've seen matrix addition and scalar multiplication before:

As a result:

Matrix sum of linear maps

Suppose $S, T \in L(V,W)$. Then $M(S+T) = M(S) + M(T)$.

and:

The matrix of a scalar times a linear map

Suppose $\lambda \in F$ and $T \in L(V,W)$. Then $M(\lambda T) = \lambda M(T)$.

As such, the set of $m \times n$ matrices with entries in $F$ forms a vector space, denoted $F^{m,n}$. As a lemma:

$$\dim(F^{m,n}) = mn$$

Matrix Multiplication

See Lecture 17 - Continuing Matrices#^1be6e5. It explains where matrix multiplication comes from.

Matrix Multiplication

Suppose $A$ is an $m \times n$ matrix and $C$ is $n \times p$. Then $AC$ is defined to be the $m \times p$ matrix whose entry in row $j$ and column $k$ is given by:

$$(AC)_{j,k} = \sum_{r=1}^{n} A_{j,r} C_{r,k}$$

Essentially, this is a dot product of the $j$-th row of $A$ and the $k$-th column of $C$. Note that matrix multiplication isn't commutative.
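The definition translates directly into code; `matmul` below is a hypothetical helper written from the formula, not a library function:

```python
# (AC)_{j,k} is the dot product of row j of A with column k of C.

def matmul(A, C):
    n = len(C)  # inner dimension: columns of A must equal rows of C
    assert all(len(row) == n for row in A)
    p = len(C[0])
    return [[sum(A[j][r] * C[r][k] for r in range(n)) for k in range(p)]
            for j in range(len(A))]

A = [[1, 2],
     [3, 4]]
C = [[0, 1],
     [1, 0]]
print(matmul(A, C))  # [[2, 1], [4, 3]] -- note matmul(C, A) differs
```

Comparing `matmul(A, C)` with `matmul(C, A)` on this pair shows non-commutativity concretely.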

The matrix of the product of linear maps

If $T \in L(U,V)$ and $S \in L(V,W)$, then $M(ST) = M(S) M(T)$.

The proof of this is actually the derivation of matrix multiplication via Lecture 17 - Continuing Matrices#^1be6e5.

Notation

Suppose $A$ is $m \times n$. Then:

  • If $1 \le j \le m$, then $A_{j,\cdot}$ denotes the $1 \times n$ matrix consisting of row $j$ of $A$.
  • If $1 \le k \le n$, then $A_{\cdot,k}$ denotes the $m \times 1$ matrix consisting of column $k$ of $A$.

With this notation, if $C$ is $n \times p$, then the product $AC$ satisfies:

$$(AC)_{j,k} = A_{j,\cdot} \, C_{\cdot,k}$$

for $j \in \{1, \ldots, m\}$ and $k \in \{1, \ldots, p\}$.

Column of matrix product equals matrix times column

Suppose $A$ is $m \times n$ and $C$ is $n \times p$. Then:

$$(AC)_{\cdot,k} = A \, C_{\cdot,k}$$

For example, this can be verified on small matrices (images omitted: Pasted image 20240208001646.png, Pasted image 20240208001656.png, Pasted image 20240208001731.png).

Generalizing the results from these examples gives:

Linear Combinations of Columns

Suppose $A$ is an $m \times n$ matrix and $c = \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}$ is an $n \times 1$ matrix. Then:

$$Ac = c_1 A_{\cdot,1} + \cdots + c_n A_{\cdot,n}$$

In other words, $Ac$ is a linear combination of the columns of $A$, with the scalars that multiply the columns coming from $c$.
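A quick check of this identity on a made-up $3 \times 2$ example:

```python
A = [[1, 2],
     [3, 4],
     [5, 6]]
c = [10, 1]

# Matrix-vector product Ac, computed row by row from the definition.
Ac = [sum(A[j][k] * c[k] for k in range(2)) for j in range(3)]

# The same result as a linear combination of the columns of A:
# c_1 * A_{.,1} + c_2 * A_{.,2}
combo = [c[0] * A[j][0] + c[1] * A[j][1] for j in range(3)]

print(Ac == combo, Ac)  # True [12, 34, 56]
```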

3.D: Invertibility and Isomorphic Vector Spaces

Invertible Linear Maps

invertible, inverse

  • A linear map $T \in L(V,W)$ is called invertible if there exists a linear map $S \in L(W,V)$ such that $ST$ equals the identity map on $V$ and $TS$ equals the identity map on $W$.
  • A linear map $S \in L(W,V)$ satisfying $ST = I_V$ and $TS = I_W$ is called the inverse of $T$.

Note that the inverse is unique:

Proof
Suppose $T$ is invertible and $S_1, S_2$ are both inverses of $T$. Then:

$$S_1 = S_1 I_W = S_1(T S_2) = (S_1 T) S_2 = I_V S_2 = S_2$$

Thus $S_1 = S_2$.

Since the inverse is unique, we denote it by $T^{-1}$.

Invertibility equals bijectivity

A linear map is invertible iff it is bijective (injective and surjective).

A proof is at Year3/Winter2024/MATH306-LinearAlgebraII/2015_Book_LinearAlgebraDoneRight.pdf#page=81. Some examples of non-invertible linear maps include:

Isomorphic Vector Spaces

isomorphism, isomorphic

An isomorphism is an invertible linear map. Two vector spaces are isomorphic if there is an isomorphism from one vector space onto the other one.

We are essentially relabeling each $v \in V$ with the new label $Tv = w \in W$. Hence, this is why certain spaces are so similar to one another, like $\mathbb{R}^3$ and $P_2(\mathbb{R})$. You might wonder whether dimension has anything to do with it.

Dimension shows vector spaces as isomorphic

Two finite-dimensional vector spaces over $F$ are isomorphic iff they have the same dimension.

Proof
First, suppose $V$ and $W$ are isomorphic finite-dimensional vector spaces. Then there exists an isomorphism $T: V \to W$. Because $T$ is invertible, $\operatorname{null}(T) = \{0\}$ and $\operatorname{range}(T) = W$. So then:

$$\dim(V) = \dim(\operatorname{null}(T)) + \dim(\operatorname{range}(T)) = 0 + \dim(W)$$

via Chapter 3 - Linear Maps#^15829c. Thus $\dim(V) = \dim(W)$.

For the other direction, suppose $V, W$ are finite-dimensional vector spaces with the same dimension. Let $v_1, \ldots, v_n$ and $w_1, \ldots, w_n$ be bases for $V, W$ respectively. Let $T \in L(V,W)$ be defined by:

$$T(c_1 v_1 + \cdots + c_n v_n) = c_1 w_1 + \cdots + c_n w_n$$

Then $T$ is a well-defined linear map because $v_1, \ldots, v_n$ is a basis for $V$, via Chapter 3 - Linear Maps#^9a80a8. Also, $T$ is surjective since $w_1, \ldots, w_n$ spans $W$. Furthermore, $\operatorname{null}(T) = \{0\}$ since $w_1, \ldots, w_n$ is linearly independent, so $T$ is injective. Because $T$ is both injective and surjective, it's an isomorphism, so $V, W$ are isomorphic as desired.

Thus any vector space $V$ with $\dim(V) = n$ is isomorphic to $F^n$. If $v_1, \ldots, v_n$ is a basis for $V$ and $w_1, \ldots, w_m$ is a basis for $W$, then for each $T \in L(V,W)$ we have a matrix $M(T) \in F^{m,n}$. In other words, once bases have been fixed for $V, W$, then $M$ is a function from $L(V,W)$ to $F^{m,n}$; by Chapter 3 - Linear Maps#^bfb9a4, $M$ is invertible.

L(V,W) and Fm,n are isomorphic

Suppose $v_1, \ldots, v_n$ is a basis of $V$ and $w_1, \ldots, w_m$ is a basis of $W$. Then $M$ is an isomorphism between $L(V,W)$ and $F^{m,n}$.

Proof
We already noted that $M$ is linear. We need to prove that $M$ is injective and surjective.

For injectivity: if $T \in L(V,W)$ and $M(T) = 0$, then $Tv_k = 0$ for all $k = 1, \ldots, n$, and because $v_1, \ldots, v_n$ is a basis of $V$, this implies $T = 0$. This shows $M$ is injective via Chapter 3 - Linear Maps#^ba1b7d.

For surjectivity, suppose $A \in F^{m,n}$. Let $T$ be the linear map from $V$ to $W$ such that:

$$Tv_k = \sum_{j=1}^{m} A_{j,k} w_j$$

for $k = 1, \ldots, n$ (see Chapter 3 - Linear Maps#^9a80a8). Then $M(T) = A$, and thus the range of $M$ equals $F^{m,n}$ as desired.

$\dim L(V,W) = (\dim V)(\dim W)$

Suppose $V, W$ are finite-dimensional. Then $L(V,W)$ is finite-dimensional with:

$$\dim L(V,W) = (\dim V)(\dim W)$$

This comes from Chapter 3 - Linear Maps#^0f4054, Chapter 3 - Linear Maps#^3e5a28, and Chapter 3 - Linear Maps#^ee27e8.

Linear Maps Thought of as Matrix Multiplication

matrix of a vector, M(v)

Suppose $v \in V$ and $v_1, \ldots, v_n$ is a basis for $V$. The matrix of $v$ with respect to this basis is the $n$-by-$1$ matrix:

$$M(v) = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}$$

where $c_1, \ldots, c_n$ are the scalars such that:

$$v = c_1 v_1 + \cdots + c_n v_n$$

Notice that the matrix of $v$ depends on the given basis $v_1, \ldots, v_n$, which is usually clear from context.

Some examples:

For the standard basis of $F^n$, the matrix of $x = (x_1, \ldots, x_n) \in F^n$ is the column of its own coordinates:

$$M(x) = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$

Note that the function $M$ is essentially a remapping/isomorphism that takes a vector $v \in V$ to its associated matrix of coordinates.

Recall that if $A$ is $m \times n$, then $A_{\cdot,k}$ denotes the $k$-th column of $A$, thought of as an $m \times 1$ matrix. Below, $M(Tv_k)$ is computed with respect to the basis $w_1, \ldots, w_m$ of $W$:

$M(T)_{\cdot,k} = M(Tv_k)$

Suppose $T \in L(V,W)$, $v_1, \ldots, v_n$ is a basis of $V$, and $w_1, \ldots, w_m$ is a basis of $W$. Let $1 \le k \le n$. Then the $k$-th column of $M(T)$, denoted $M(T)_{\cdot,k}$, equals $M(Tv_k)$.

This follows directly from the definitions.

Linear Maps act like matrix multiplication

Suppose $T \in L(V,W)$ and $v \in V$. Suppose $v_1, \ldots, v_n$ is a basis of $V$ and $w_1, \ldots, w_m$ is a basis of $W$. Then:

$$M(Tv) = M(T) M(v)$$

Proof
Suppose $v = c_1 v_1 + \cdots + c_n v_n$, where each $c_i \in F$. Then:

$$Tv = c_1 T v_1 + \cdots + c_n T v_n$$

Hence:

$$M(Tv) = c_1 M(Tv_1) + \cdots + c_n M(Tv_n) = c_1 M(T)_{\cdot,1} + \cdots + c_n M(T)_{\cdot,n} = M(T) M(v)$$

The first equality comes from linearity of $M$, the second from Chapter 3 - Linear Maps#^fa150d, and the last from Chapter 3 - Linear Maps#^93d307.
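As a concrete check (a sketch, not from the text): take the differentiation map $D: P_3(\mathbb{R}) \to P_2(\mathbb{R})$ in the standard bases $1, x, x^2, x^3$ and $1, x, x^2$, whose matrix $M(D)$ is written out below, and compare both sides of $M(Tv) = M(T)M(v)$:

```python
# M(D) in the standard bases: column k holds the coordinates of D(x^(k-1)).
M_D = [[0, 1, 0, 0],
       [0, 0, 2, 0],
       [0, 0, 0, 3]]

v = [1, 2, 3, 1]  # coordinates of p(x) = 1 + 2x + 3x^2 + x^3

# Left side: differentiate first, then take coordinates.
# p'(x) = 2 + 6x + 3x^2, so M(Dp) = [2, 6, 3].
M_Tv = [2, 6, 3]

# Right side: the matrix-vector product M(D) M(v).
MT_Mv = [sum(M_D[j][k] * v[k] for k in range(4)) for j in range(3)]

print(MT_Mv == M_Tv)  # True
```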

Operators

operator, L(V)

A linear map from a vector space to itself is called an operator. The notation $L(V)$ denotes the set of all operators on $V$. In other words, $L(V) = L(V,V)$.

Since the domain and codomain now coincide, one might hope that one of the two conditions for invertibility (injectivity or surjectivity) comes for free. Recall, however, our earlier examples Chapter 3 - Linear Maps#^153ab0: both are operators, yet neither is invertible. This is because both act on infinite-dimensional vector spaces. When the vector space is finite-dimensional, we get a remarkable result:

Injectivity is equivalent to surjectivity in finite dimensions

Suppose $V$ is finite-dimensional and $T \in L(V)$. Then the following are equivalent:

  • T is invertible
  • T is injective
  • T is surjective

Proof
Clearly if (i) holds, then (ii) and (iii) follow immediately. Suppose instead that only (ii) holds. Then $T$ is injective, so $\dim(\operatorname{null}(T)) = 0$ (the null space contains only the zero vector), and via the FTOLM:

$$\dim(\operatorname{range}(T)) = \dim(V) - \dim(\operatorname{null}(T)) = \dim(V)$$

so $\operatorname{range}(T) = V$, implying surjectivity. So we get (iii), and together with injectivity this gives (i).

If instead (iii) holds, then $T$ is surjective, so $\operatorname{range}(T) = V$. Via the FTOLM, $\dim(\operatorname{null}(T)) = \dim(V) - \dim(\operatorname{range}(T)) = 0$, so the null space contains only the zero vector, showing injectivity. Hence $T$ is invertible.