Lecture 22 - Finishing G. Eigenspaces, Starting 8.B
Recall that we defined $E(\lambda, T) = \operatorname{null}(T - \lambda I)$; this was the eigenspace corresponding to $\lambda$. Similarly:
$$G(\lambda, T) = \operatorname{null}\left((T - \lambda I)^{\dim V}\right)$$
was the generalized eigenspace corresponding to $\lambda$. Non-zero vectors in $G(\lambda, T)$ are called generalized eigenvectors (corresponding to $\lambda$).
Notice that if $\alpha$ is not an eigenvalue then we know that $\operatorname{null}(T - \alpha I) = \{0\}$, and thus $T - \alpha I$ is injective. Because a composition of injective maps is injective, $(T - \alpha I)^{\dim V}$ must also be injective. Therefore:
$$G(\alpha, T) = \operatorname{null}\left((T - \alpha I)^{\dim V}\right) = \{0\}$$
This says that even though applying $(T - \lambda I)$ enough times may pick up more (generalized) eigenvectors, we cannot pick up new eigenvalues. As such, there's really no such thing as "generalized eigenvalues": if you defined them that way, they would be exactly the same as vanilla eigenvalues.
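A quick numerical sanity check of this fact (a sketch using a hypothetical $3 \times 3$ matrix, not one from lecture): $\dim \operatorname{null}\left((A - \lambda I)^{\dim V}\right)$ is nonzero exactly when $\lambda$ is an eigenvalue.

```python
import numpy as np

# Hypothetical upper-triangular example: eigenvalues are 2 (twice) and 5.
A = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 5.]])

def gen_eigenspace_dim(A, lam):
    """dim G(lam, A) = dim null (A - lam*I)^n, via rank-nullity."""
    n = A.shape[0]
    P = np.linalg.matrix_power(A - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(P)

print(gen_eigenspace_dim(A, 2.0))  # 2: bigger than the eigenspace, which is 1-dim
print(gen_eigenspace_dim(A, 5.0))  # 1
print(gen_eigenspace_dim(A, 3.0))  # 0: 3 is not an eigenvalue, so (A - 3I)^n is injective
```

Note how the generalized eigenspace for $2$ is strictly bigger than the eigenspace, but no non-eigenvalue ever produces a nonzero null space.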
Similar to how eigenvectors corresponding to distinct eigenvalues are LI, we have the following:
Proposition
Suppose $\lambda_1, \ldots, \lambda_m$ are distinct eigenvalues of $T \in \mathcal{L}(V)$ and $v_1, \ldots, v_m$ are corresponding generalized eigenvectors. Then $v_1, \ldots, v_m$ are LI.
Proof
Let $a_1, \ldots, a_m \in \mathbb{F}$. Consider:
$$a_1 v_1 + \cdots + a_m v_m = 0$$
To show that $a_j = 0$ for all $j$ (it suffices to handle $j = 1$, since the labeling is arbitrary), write $n = \dim V$ and let $S \in \mathcal{L}(V)$ be the operator defined by:
$$S = (T - \lambda_2 I)^n \cdots (T - \lambda_m I)^n$$
Applying $S$ to both sides, we can get rid of all the $v_j$'s except for one of them: since $v_j \in G(\lambda_j, T) = \operatorname{null}\left((T - \lambda_j I)^n\right)$ and the factors of $S$ commute, we have $S v_j = 0$ for $j \geq 2$, so:
$$a_1 S v_1 = 0$$
We can explore the meaning of $S v_1$. Recall that $v_1$ is a generalized eigenvector corresponding to $\lambda_1$. We can find the minimal $k$ such that:
$$(T - \lambda_1 I)^k v_1 = 0$$
thus:
$$(T - \lambda_1 I)^{k-1} v_1 \neq 0$$
Call this left vector $w = (T - \lambda_1 I)^{k-1} v_1$. Applying $(T - \lambda_1 I)$ to it:
$$(T - \lambda_1 I) w = (T - \lambda_1 I)^k v_1 = 0$$
So $w$ is an eigenvector of $T$ w/ eigenvalue $\lambda_1$. Thus for any arbitrary $\lambda$:
$$(T - \lambda I) w = (\lambda_1 - \lambda) w$$
So coming back to $a_1 S v_1 = 0$, apply $(T - \lambda_1 I)^{k-1}$ to both sides; since all the factors commute:
$$0 = (T - \lambda_1 I)^{k-1} a_1 S v_1 = a_1 S \left((T - \lambda_1 I)^{k-1} v_1\right) = a_1 S w = a_1 (\lambda_1 - \lambda_2)^n \cdots (\lambda_1 - \lambda_m)^n w$$
Since all the $\lambda_j$ are distinct, each $(\lambda_1 - \lambda_j)^n \neq 0$ (remember, we have distinct eigenvalues), and $w \neq 0$ since it's an eigenvector, so $a_1 = 0$. Repeating the argument with each index in the role of $1$ gives $a_j = 0$ for all $j$. Thus we have LI.
☐
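To see the mechanics in the smallest nontrivial instance, take $m = 2$, and let $k$ be minimal with $(T - \lambda_1 I)^k v_1 = 0$ and $w = (T - \lambda_1 I)^{k-1} v_1$, as in the proof. Applying $(T - \lambda_1 I)^{k-1} (T - \lambda_2 I)^n$ to $a_1 v_1 + a_2 v_2 = 0$ gives:
$$0 = a_1 (T - \lambda_2 I)^n w = a_1 (\lambda_1 - \lambda_2)^n w$$
The $v_2$ term vanishes because $v_2 \in \operatorname{null}\left((T - \lambda_2 I)^n\right)$, and each factor $(T - \lambda_2 I)$ acts on the eigenvector $w$ as multiplication by $\lambda_1 - \lambda_2$. Since $\lambda_1 \neq \lambda_2$ and $w \neq 0$, this forces $a_1 = 0$; then $a_2 v_2 = 0$ with $v_2 \neq 0$ forces $a_2 = 0$.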
Interesting Characteristics of Nilpotent Operators
If $N \in \mathcal{L}(V)$ is nilpotent, then there exists a basis of $V$, denoted $v_1, \ldots, v_n$, for which $\mathcal{M}(N)$ is an UT matrix with only zeroes on the diagonal.
Proof
Recall that we had:
$$\{0\} = \operatorname{null} N^0 \subseteq \operatorname{null} N^1 \subseteq \operatorname{null} N^2 \subseteq \cdots \subseteq \operatorname{null} N^m = V$$
For some $m$ with $N^m = 0$. We know $m \leq \dim V$, and beyond that we don't care. Consider a basis for $\operatorname{null} N$, say $v_1, \ldots, v_{k_1}$. We can extend this to a basis of $\operatorname{null} N^2$, and so on up to $\operatorname{null} N^m$, which has to be $V$ by the definition of $N$ being nilpotent (the range of $N^m$ is the zero vector only, so the nullspace of $N^m$ is the whole space $V$):
$$\underbrace{v_1, \ldots, v_{k_1}}_{\text{basis of } \operatorname{null} N}, \underbrace{v_{k_1 + 1}, \ldots, v_{k_2}}_{\text{extends to } \operatorname{null} N^2}, \ldots, \underbrace{v_{k_{m-1}+1}, \ldots, v_{k_m}}_{\text{extends to } \operatorname{null} N^m = V}$$
We can make a matrix for this:
$$\mathcal{M}(N) = \begin{pmatrix} 0 & * & \cdots & * \\ & 0 & \ddots & \vdots \\ & & \ddots & * \\ & & & 0 \end{pmatrix}$$
We get the expected result because if you take any basis vector $v_j$ from the chunk satisfying $v_j \in \operatorname{null} N^{i+1}$, then $N v_j \in \operatorname{null} N^i$, because $N^i (N v_j) = N^{i+1} v_j = 0$. In other words, $N$ sends each basis vector into the span of the basis vectors from strictly earlier chunks, so the $j$-th column of $\mathcal{M}(N)$ has nonzero entries only strictly above the rows of $v_j$'s own chunk; hence the diagonals being 0.
☐
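The proof is constructive, so we can run it. Below is a minimal sketch with a hypothetical nilpotent $N$ on $\mathbb{F}^3$ (not an operator from lecture): take a basis of $\operatorname{null} N$, extend it to a basis of $\operatorname{null} N^2 = V$, and check that the resulting matrix is UT with zero diagonal. The extension vector chosen here is an assumption for this particular $N$, verified by the invertibility check.

```python
from sympy import Matrix

# Hypothetical nilpotent operator on F^3: strictly UT, and N^2 = 0.
N = Matrix([[0, 1, 1],
            [0, 0, 0],
            [0, 0, 0]])
assert N**2 == Matrix.zeros(3, 3)

# Step 1: a basis of null N (sympy returns a list of column vectors).
chunk1 = N.nullspace()            # 2 vectors, since rank N = 1

# Step 2: extend to a basis of null N^2 = V; for this N, adding e2
# works (confirmed by the determinant check below).
extension = [Matrix([0, 1, 0])]

# Columns of P form the adapted basis; M is N written in that basis.
P = Matrix.hstack(*(chunk1 + extension))
assert P.det() != 0               # confirms we really have a basis
M = P.inv() * N * P

# M is upper triangular with zeroes on the diagonal, as the theorem says.
print(M)
```

The key invariant is exactly the one from the proof: $N$ maps each later chunk of the basis into the span of earlier chunks, so nonzero entries can only appear strictly above the diagonal.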
As an example, we had the matrix yesterday:

Example

Consider $N \in \mathcal{L}(\mathbb{F}^4)$, the operator whose matrix representation w.r.t. the standard basis we wrote down yesterday. Computing powers, $N^2 \neq 0$, then $N^3 \neq 0$, and finally $N^4 = 0$ (and $N^k = 0$ for higher powers $k$).

Let $v \in \mathbb{F}^4$ be a vector with $N^3 v \neq 0$. Let $v_1 = N^3 v$. Let $v_2 = N^2 v$. Let $v_3 = N v$. Let $v_4 = v$.

These 4 non-zero vectors are all LI (the order here is just for the basis order for the next step). Since that's the same amount as the dimension of $\mathbb{F}^4$, they form a basis. So even though $\mathcal{M}(N)$ was with respect to the standard basis, notice that if we call this new basis $B = (v_1, v_2, v_3, v_4)$:
$$\mathcal{M}(N, B) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
The cool thing is that we get 0's even above the "diagonal" of 1's we have. And this process is generalizable: if $N^{\dim V - 1} v \neq 0$ but $N^{\dim V} v = 0$ (i.e. we have nilpotency), then the list $v, Nv, N^2 v, \ldots, N^{\dim V - 1} v$ is already guaranteed to be LI, and thus (reversed as above) it is already a basis, and we can make the matrix above for $N$.
Where does the LI come from? Write $n = \dim V$ and consider:
$$a_0 v + a_1 N v + \cdots + a_{n-1} N^{n-1} v = 0$$
Applying $N^{n-1}$ to both sides kills every term carrying a factor of $N$ (since $N^n = 0$), leaving $a_0 N^{n-1} v = 0$; since $N^{n-1} v \neq 0$, we get $a_0 = 0$. We can repeat this with $N^{n-2}, N^{n-3}, \ldots$ on the LHS and get that each $a_j = 0$, and thus we have LI.
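The whole construction can be checked numerically. The matrix below is a hypothetical stand-in for yesterday's example (any strictly UT $4 \times 4$ matrix has $N^4 = 0$; this one also has $N^3 \neq 0$), and the choice $v = e_4$ is an assumption that happens to satisfy $N^3 v \neq 0$ for it.

```python
import numpy as np

# Hypothetical stand-in for the lecture's 4x4 matrix: strictly upper
# triangular (so N^4 = 0), with N^3 != 0.
N = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])

# Pick v with N^3 v != 0, then build the basis (N^3 v, N^2 v, N v, v).
v = np.array([0., 0., 0., 1.])
vs = [np.linalg.matrix_power(N, k) @ v for k in (3, 2, 1, 0)]
P = np.column_stack(vs)

# LI check: the 4 vectors form an invertible change-of-basis matrix.
assert np.linalg.matrix_rank(P) == 4

# N in the new basis: 1's on the superdiagonal, 0's everywhere else.
M = np.linalg.inv(P) @ N @ P
print(np.round(M).astype(int))
```

The printed matrix is the shift matrix from the example: each basis vector is sent to the previous one, and $v_1 = N^3 v$ is sent to $0$.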