Lecture 22 - Finishing G. Eigenspaces, Starting 8.B

Recall that we defined $E(\lambda,T)=\operatorname{null}(T-\lambda I)$; this is the eigenspace corresponding to $\lambda$. Similarly:

$$G(\lambda,T)=\operatorname{null}\big((T-\lambda I)^{\dim V}\big)$$

is the generalized eigenspace corresponding to $\lambda$. Non-zero vectors $v\in G(\lambda,T)$ are called generalized eigenvectors.
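
To make these definitions concrete, here is a small numerical sketch (the $2\times 2$ matrix and the SVD-based null-space helper are my own choices, not from the lecture) that computes $E(\lambda,T)$ and $G(\lambda,T)$ for a matrix with a repeated eigenvalue:

```python
import numpy as np

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) of null(A), computed from the SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T   # right singular vectors for the (numerically) zero singular values

# illustrative matrix: eigenvalue 2 repeated, but only a 1-dimensional eigenspace
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam, n = 2.0, A.shape[0]

E = null_space(A - lam * np.eye(n))                             # E(2, T) = null(T - 2I)
G = null_space(np.linalg.matrix_power(A - lam * np.eye(n), n))  # G(2, T) = null((T - 2I)^dim V)

print(E.shape[1], G.shape[1])   # 1 2 -> the generalized eigenspace is strictly bigger here
```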

Notice that if $\lambda$ is not an eigenvalue, then we know that $\operatorname{null}(T-\lambda I)=\{0\}$, and thus $T-\lambda I$ is injective. Since a composition of injective maps is injective, $(T-\lambda I)^{\dim V}$ must also be injective. Therefore:

$$\lambda \text{ is not an eigenvalue of } T \implies G(\lambda,T)=\{0\}$$

This tells us that although taking higher powers of $T-\lambda I$ may pick up more (generalized) eigenvectors, it cannot pick up new eigenvalues. As such, there's really no such thing as a "generalized eigenvalue": if you defined one, it would be exactly the same as a vanilla eigenvalue.
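
A quick sanity check of this implication (same illustrative matrix as in the sketch above): when $\lambda$ is not an eigenvalue, $(T-\lambda I)^{\dim V}$ is injective, i.e. it has full rank:

```python
import numpy as np

A = np.array([[2.0, 1.0],     # illustrative matrix, eigenvalue 2 only
              [0.0, 2.0]])
n = A.shape[0]

lam = 3.0                                    # 3 is not an eigenvalue of A
P = np.linalg.matrix_power(A - lam * np.eye(n), n)
print(np.linalg.matrix_rank(P) == n)         # True: (T - 3I)^dim V is injective, so G(3, T) = {0}
```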

Similar to how eigenvectors corresponding to distinct eigenvalues are LI, we have the following:

Proposition

Suppose $\lambda_1,\dots,\lambda_m$ are distinct eigenvalues of $T$ and $v_1,\dots,v_m$ are corresponding generalized eigenvectors. Then $v_1,\dots,v_m$ are LI.

Proof
Let $\dim(V)=n$. Consider:

$$\sum_{j=1}^m \alpha_j v_j = 0$$

To show that $\alpha_i=0$ for each fixed $i$, let $S$ be the operator defined by:

$$S=\prod_{j\neq i}(T-\lambda_j I)^n=(T-\lambda_1 I)^n\cdots\widehat{(T-\lambda_i I)^n}\cdots(T-\lambda_m I)^n$$

where the hat marks the omitted $i$-th factor.

Applying $S$ to both sides gets rid of every term $\alpha_j v_j$ with $j\neq i$, since $(T-\lambda_j I)^n v_j=0$ for each generalized eigenvector $v_j$:

$$S\left(\sum_{j=1}^m \alpha_j v_j\right)=0 \implies \alpha_i S(v_i)=0$$

We can explore the meaning of $S(v_i)$. Recall that $v_i$ is a generalized eigenvector corresponding to $\lambda_i$. We can find the minimal $k_i$ such that:

$$(T-\lambda_i I)^{k_i+1}v_i=0$$

thus:

$$(T-\lambda_i I)^{k_i}v_i\neq 0$$

Call this latter vector $w_i$. Applying $T-\lambda_i I$ to it:

$$(T-\lambda_i I)w_i=0 \implies w_i\in\operatorname{null}(T-\lambda_i I)$$

So $w_i$ is an eigenvector of $T$ with eigenvalue $\lambda_i$. Thus for any arbitrary $\lambda$:

$$(T-\lambda I)w_i=Tw_i-\lambda w_i=\lambda_i w_i-\lambda w_i=(\lambda_i-\lambda)w_i$$

So coming back to $\alpha_i S(v_i)=0$, apply $(T-\lambda_i I)^{k_i}$ to both sides (all of these factors commute):

$$(T-\lambda_i I)^{k_i}\alpha_i S(v_i)=\alpha_i S(w_i)=\alpha_i\prod_{j\neq i}(\lambda_i-\lambda_j)^n\,w_i=0$$

Since $\lambda_i-\lambda_j\neq 0$ for all $j\neq i$ (remember, we have distinct eigenvalues) and $w_i\neq 0$ (since it's an eigenvector), it follows that $\alpha_i=0$. Repeating this argument for each $i$ gives $\alpha_i=0$ for all $i$. Thus we have LI.
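
As an optional numerical illustration of the proposition (the $3\times 3$ matrix and the vectors below are my own example, not from the lecture): $v_1$ is a generalized but not ordinary eigenvector for $\lambda=1$, $v_2$ is an eigenvector for $\lambda=2$, and the two are LI:

```python
import numpy as np

# illustrative operator: eigenvalue 1 sits in a 2x2 Jordan-type block, eigenvalue 2 is separate
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
n = A.shape[0]
I = np.eye(n)

v1 = np.array([0.0, 1.0, 0.0])
print(np.allclose((A - I) @ v1, 0))                            # False: v1 is not an ordinary eigenvector
print(np.allclose(np.linalg.matrix_power(A - I, n) @ v1, 0))   # True: (T - I)^3 v1 = 0, so v1 is in G(1, T)

v2 = np.array([0.0, 0.0, 1.0])                                 # eigenvector (hence generalized) for lambda = 2

print(np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2)   # True: v1, v2 are LI
```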

Interesting Characteristics of Nilpotent Operators

If $T$ is nilpotent, then there exists a basis $\beta$ of $V$ for which $M(T,\beta)$ is an upper-triangular (UT) matrix with only zeroes on the diagonal.

Proof
Recall that we had:

$$\operatorname{null}(T^0)\subseteq\operatorname{null}(T^1)\subseteq\cdots\subseteq\operatorname{null}(T^k)=\operatorname{null}(T^{k+1})=\cdots$$

for some $k\in\mathbb{Z}^+$. We know $\operatorname{null}(T^0)=\{0\}$, which we can ignore. Take a basis $v_1,\dots,v_{\beta_1}$ of $\operatorname{null}(T)$. We can extend this to a basis of $\operatorname{null}(T^2)$, and so on up to $\operatorname{null}(T^k)$, which has to be all of $V$ because $T$ is nilpotent (some power of $T$ is the zero operator, so its range is $\{0\}$ and its null space is the whole space $V$):

$$\underbrace{v_1,\dots,v_{\beta_1}}_{\text{basis for }\operatorname{null}(T)},\ \underbrace{v_1^2,\dots,v_{\beta_2}^2}_{\text{extension to a basis for }\operatorname{null}(T^2)},\ \dots,\ \underbrace{v_1^k,\dots,v_{\beta_k}^k}_{\text{extension to a basis for }\operatorname{null}(T^k)=V}$$

This combined list is the basis $\beta$.

We can make a matrix for $T$ with respect to this basis, and we get the expected result because if you take any vector $v$ satisfying $T^i v=0$, then $T^{i-1}(Tv)=0$. In other words, $T$ maps $\operatorname{null}(T^i)$ into $\operatorname{null}(T^{i-1})$, so each basis vector coming from the $\operatorname{null}(T^i)$ stage is sent into the span of the earlier basis vectors (the ones spanning $\operatorname{null}(T^{i-1})$); in particular $Tv_j=0$ for $v_1,\dots,v_{\beta_1}\in\operatorname{null}(T)$. Hence every column of $M(T,\beta)$ has zeroes on and below the diagonal.
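
The construction in this proof can be sketched numerically (the $3\times 3$ nilpotent matrix and the greedy basis-extension loop below are my own illustration, not the lecture's): build $\beta$ by extending a basis of $\operatorname{null}(T)$ through the null spaces of higher powers, then check that $M(T,\beta)$ is upper triangular with zeroes on the diagonal:

```python
import numpy as np

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) of null(A), computed from the SVD."""
    _, s, vh = np.linalg.svd(A)
    return vh[int(np.sum(s > tol)):].conj().T

# a nilpotent operator on R^3 (same idea as the R^4 example below)
A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
n = A.shape[0]

# build beta: basis of null(T), extended to null(T^2), ..., up to null(T^k) = V
basis = np.zeros((n, 0))
for i in range(1, n + 1):
    for vec in null_space(np.linalg.matrix_power(A, i)).T:
        candidate = np.column_stack([basis, vec])
        if np.linalg.matrix_rank(candidate) > basis.shape[1]:   # keep only vectors that genuinely extend the basis
            basis = candidate

M = np.linalg.inv(basis) @ A @ basis   # M(T, beta); the columns of `basis` are the vectors of beta, in order
print(np.allclose(np.tril(M), 0))      # True: zeroes on and below the diagonal
```

The rank test is just one crude way to do the "extend the basis" step; any procedure that keeps the vectors grouped by which power of $T$ kills them produces the same triangular shape.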

As an example, recall the matrix we had yesterday:

Example

Consider $T\in\mathcal{L}(\mathbb{R}^4)$, whose matrix representation w.r.t. the standard basis is:
$$M(T)=\begin{bmatrix}0&1&1&1\\0&0&1&1\\0&0&0&1\\0&0&0&0\end{bmatrix}$$
Then:
$$M(T^2)=M(T)^2=\begin{bmatrix}0&0&1&2\\0&0&0&1\\0&0&0&0\\0&0&0&0\end{bmatrix}$$
Then:
$$M(T^3)=\begin{bmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}$$
and finally $M(T^4)=0$, and likewise for all higher powers.

Let $v=(1,1,1,1)$. Then $Tv=(3,2,1,0)$, $T^2v=(3,1,0,0)$, $T^3v=(1,0,0,0)$, and $T^4v=0$.
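
These computations are easy to double-check with numpy:

```python
import numpy as np

M = np.array([[0, 1, 1, 1],     # M(T) from the lecture, w.r.t. the standard basis
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
v = np.array([1, 1, 1, 1])

print(np.linalg.matrix_power(M, 4))   # the zero matrix, confirming M(T^4) = 0
for i in range(1, 5):
    print(i, np.linalg.matrix_power(M, i) @ v)
# 1 [3 2 1 0]
# 2 [3 1 0 0]
# 3 [1 0 0 0]
# 4 [0 0 0 0]
```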

These 4 non-zero vectors $T^3v, T^2v, Tv, v$ are all LI (the order here is just the basis order for the next step). Since there are as many of them as $\dim(\mathbb{R}^4)$, they form a basis. So even though $M(T)$ above was with respect to the standard basis, notice that if we call this new basis $\beta$:

$$M(T,\beta)=\begin{bmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\0&0&0&0\end{bmatrix}$$

The cool thing is that now we get $0$'s everywhere except on the "diagonal" of $1$'s sitting just above the main diagonal. But this process is generalizable: if $T^k v\neq 0$ but $T^{k+1}v=0$ (as happens when $T$ is nilpotent), then the list $T^k v,\dots,Tv,v$ is already guaranteed to be LI, and when $k+1=\dim V$ (as here) it is already a basis, with respect to which $T$ has a matrix of the form above.
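
A quick numpy check of this change of basis (the columns of $B$ below are the vectors of $\beta$, in the order $T^3v, T^2v, Tv, v$):

```python
import numpy as np

M = np.array([[0., 1., 1., 1.],   # M(T) w.r.t. the standard basis
              [0., 0., 1., 1.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
v = np.array([1., 1., 1., 1.])

# columns of B: T^3 v, T^2 v, T v, v
B = np.column_stack([np.linalg.matrix_power(M, k) @ v for k in (3, 2, 1, 0)])
print(np.linalg.matrix_rank(B))     # 4 -> the chain is LI, hence a basis of R^4

M_beta = np.linalg.inv(B) @ M @ B   # M(T, beta) via the change of basis
print(np.round(M_beta))             # 1's just above the diagonal, 0's everywhere else
```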

Where does the LI come from? Consider:

$$\alpha_k T^k v+\cdots+\alpha_1 Tv+\alpha_0 v=0$$

Applying $T^k$ to both sides kills every term except the last one (since $T^{k+i}v=0$ for $i\geq 1$), leaving $\alpha_0 T^k v=0$; since $T^k v\neq 0$, this forces $\alpha_0=0$.

We can repeat this with $T^{k-1},\dots,T,\,I$ applied to the LHS to get, successively, $\alpha_1=0,\dots,\alpha_k=0$, and thus we have LI.