The idea of invariant subspaces asks whether a subspace gets mapped by the operator into itself only: if every vector of the subspace $U$ lands back in $U$ under $T$, then we have an invariant; but if even one vector of $U$ gets mapped outside of $U$, then we don't have an invariant.
Invariant
Suppose $T \in \mathcal{L}(V)$, and we have a subspace $U$ of $V$. The subspace $U$ is an invariant under $T$ if, for all $u \in U$, we have $Tu \in U$.
As some examples of invariant subspaces:
$V$ itself is an invariant subspace of $V$, since trivially $Tv \in V$ for every $v \in V$.
The zero subspace $\{0\}$ is an invariant since $T0 = 0 \in \{0\}$ for any $T$.
Some cooler ones are as follows. Here $\operatorname{null} T$ is an invariant since all items get mapped to the zero vector, which is in $\operatorname{null} T$:
$$u \in \operatorname{null} T \implies Tu = 0 \in \operatorname{null} T$$
Likewise, the $\operatorname{range} T$ is also an invariant subspace of $V$:
$$u \in \operatorname{range} T \implies Tu \in \operatorname{range} T$$
since $Tu$ is an output of $T$, it lies in the range by definition.
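For a concrete sketch (the map here is my own illustration, not from the list above): take $T \in \mathcal{L}(\mathbb{R}^2)$ with $T(x, y) = (2x, 3y)$. Then the $x$-axis $U = \{(a, 0) : a \in \mathbb{R}\}$ is invariant:
$$T(a, 0) = (2a, 3 \cdot 0) = (2a, 0) \in U \quad \text{for all } a \in \mathbb{R}$$
and by the same computation the $y$-axis is invariant too.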
Why are we interested?
Suppose $V = U_1 \oplus \cdots \oplus U_m$, where $U_1, \dots, U_m$ are all invariant under $T$. Take any $v \in V$. Then $v = u_1 + \cdots + u_m$ where each $u_i \in U_i$, so $Tv = Tu_1 + \cdots + Tu_m$ where each $Tu_i$ stays in $U_i$. Hence, we can reduce our study of $T$ on the whole space to its behavior on the corresponding smaller subspaces, as in the sketch below.
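To make that concrete (continuing the assumed map $T(x, y) = (2x, 3y)$ from above): with $U_1$ the $x$-axis and $U_2$ the $y$-axis we have $\mathbb{R}^2 = U_1 \oplus U_2$, both invariant, and for $v = (a, b)$:
$$Tv = T(a, 0) + T(0, b) = \underbrace{(2a, 0)}_{\in U_1} + \underbrace{(0, 3b)}_{\in U_2}$$
so knowing the two restrictions $T|_{U_1}$ and $T|_{U_2}$ tells us everything about $T$.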
As an example, consider $V = \mathbb{R}^2$, using $\mathbb{R}$ as our field. Let $T \in \mathcal{L}(V)$ with $T(x, y) = (y, x)$, where $(x, y) \in \mathbb{R}^2$. We claim that $U = \{(a, -a) : a \in \mathbb{R}\}$ is an invariant subspace of $V$.
Proof
We need to show that for any $u \in U$ we have $Tu \in U$. Let $u \in U$ be arbitrary. Then:
$$u = (a, -a)$$
for some $a \in \mathbb{R}$. Then:
$$Tu = T(a, -a) = (-a, a)$$
but notice that $(-a, a) = -1 \cdot (a, -a) = -u$, and $U$ is closed under scalar multiplication:
$$Tu = -u \in U$$
thus $U$ is an invariant under $T$.
☐
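Said differently (a small addendum to the example above): on $U$, the map $T$ is nothing but scalar multiplication,
$$T|_U = -\operatorname{id}_U, \qquad \text{e.g. } T(1, -1) = (-1, 1) = -1 \cdot (1, -1)$$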
Notice that this transformation merely scaled our vector (here by $-1$). We're hence going to define the related notion of an eigenvector, and the amount it gets scaled by is its eigenvalue.
Eigenvalues and Eigenvectors
Given $T \in \mathcal{L}(V)$:
A number $\lambda \in \mathbb{F}$ is called an eigenvalue of $T$ if there exists $v \in V$ such that $Tv = \lambda v$ and $v \neq 0$.
A vector $v \in V$ is called an eigenvector of $T$ corresponding to eigenvalue $\lambda$ if $Tv = \lambda v$ and $v \neq 0$.
For instance, let $V$ be the vector space having basis $\{e^x, e^{2x}\}$ and let $D$ be the derivative map. Note that $D \in \mathcal{L}(V)$ already, and we can see what it does to our basis vectors:
$$D(e^x) = e^x, \qquad D(e^{2x}) = 2e^{2x}$$
notice here that $1$ is an eigenvalue of $D$ (and so is $2$). Furthermore, $e^x$ is its corresponding eigenvector.
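As a quick sanity check (my own computation, under the basis assumed above): a combination with both coefficients non-zero is not an eigenvector, since
$$D(a e^x + b e^{2x}) = a e^x + 2b e^{2x} = \lambda (a e^x + b e^{2x})$$
would force $\lambda = 1$ and $\lambda = 2$ at once. So the eigenvectors of $D$ here are exactly the non-zero multiples of $e^x$ (for $\lambda = 1$) and of $e^{2x}$ (for $\lambda = 2$).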
Note that:
$$T0 = 0 = \lambda \cdot 0$$
for all $\lambda \in \mathbb{F}$. Does this mean that $0$ is an eigenvector? Or that any $\lambda$ is an eigenvalue? We don't really want this, since it holds for every transformation and every space, hence why we remove these cases in the definition by requiring $v \neq 0$.
Note
We say that the zero vector $0 \in V$ isn't an eigenvector, but the scalar $0 \in \mathbb{F}$ could be a possible eigenvalue.
Notice the following. If $v$ is a (non-zero) eigenvector with e-val $\lambda$ then:
$$Tv = \lambda v \implies Tv - \lambda v = 0 \implies (T - \lambda I)v = 0$$
so then $v \in \operatorname{null}(T - \lambda I)$, so then all eigenvectors with e-val $\lambda$ are in the nullspace of $T - \lambda I$.
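As a worked instance (my own numbers, reusing the assumed swap map $T(x, y) = (y, x)$ from earlier): for $\lambda = -1$,
$$(T - (-1)I)(x, y) = (y, x) + (x, y) = (x + y, x + y) = (0, 0) \iff y = -x$$
so $\operatorname{null}(T + I) = \{(a, -a) : a \in \mathbb{R}\}$, which is exactly the invariant subspace $U$ from the earlier example.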
This gives rise to the following theorem:
Equivalences of Eigenvalues/Eigenvectors
Suppose $V$ is finite-dimensional, $T \in \mathcal{L}(V)$, and $\lambda \in \mathbb{F}$. The following are equivalent:
1. $\lambda$ is an eigenvalue of $T$.
2. $T - \lambda I$ is not injective.
3. $T - \lambda I$ is not surjective.
4. $T - \lambda I$ is not invertible.
where $I \in \mathcal{L}(V)$ is the identity map, so for any $v \in V$ we have $Iv = v$.
Proof
This is a baby proof, which is outlined in more detail in the book. Suppose (1). Then there is some non-zero vector $v$ such that $Tv = \lambda v$. Subtracting $\lambda v$ on both sides gives $Tv - \lambda v = 0$, so then $(T - \lambda I)v = 0$. We've found a map that sends a non-zero vector to the zero vector, so then we've gotten (2).
If we have (2) we get (3) and (4), since injectivity, surjectivity, and invertibility are all equivalent for operators on a finite-dimensional space, as we've shown before. If you work backwards, you can suppose non-injectivity (2): some non-zero $v$ has $(T - \lambda I)v = 0$, i.e. $Tv = \lambda v$, which extracts $\lambda$ as an eigenvalue, giving (1).
☐
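To see the equivalence in action (a sketch with my assumed map $T(x, y) = (2x, 3y)$ from before): with $\lambda = 2$,
$$(T - 2I)(x, y) = (2x - 2x, 3y - 2y) = (0, y)$$
so $T - 2I$ sends every $(x, 0)$ to zero and is not injective; by the theorem, $2$ is an eigenvalue of $T$, with eigenvectors the non-zero vectors $(x, 0)$.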
Here we note that $\operatorname{null}(T - \lambda I)$ is the eigenspace of $\lambda$ on $T$.
Theorem
Suppose $\lambda_1, \dots, \lambda_m$ are distinct eigenvalues of $T$ and $v_1, \dots, v_m$ are corresponding eigenvectors, where all $v_i \neq 0$. Then $v_1, \dots, v_m$ is LI in $V$.
Proof
We prove this by contradiction. Assume instead that $v_1, \dots, v_m$ is LD. Note that $v_1$ by itself is just LI always (since $v_1 \neq 0$), and as we add more vectors, we at some point get LD. Hence, let $k$ be the smallest positive integer such that $v_1, \dots, v_k$ is LD. Then $v_k$ can be written as a linear combination of $v_1, \dots, v_{k-1}$. Then:
$$v_k = a_1 v_1 + \cdots + a_{k-1} v_{k-1}$$
where, applying $T$ to both sides, since all the $v_i$'s are eigenvectors, then:
$$\lambda_k v_k = a_1 \lambda_1 v_1 + \cdots + a_{k-1} \lambda_{k-1} v_{k-1}$$
multiply our initial definition of $v_k$ by $\lambda_k$ to get:
$$\lambda_k v_k = a_1 \lambda_k v_1 + \cdots + a_{k-1} \lambda_k v_{k-1}$$
But then equating coefficients says that $a_i \lambda_i = a_i \lambda_k$ for each $i$, as subtracting both equations above gives
$$0 = a_1(\lambda_k - \lambda_1) v_1 + \cdots + a_{k-1}(\lambda_k - \lambda_{k-1}) v_{k-1}$$
But since $v_1, \dots, v_{k-1}$ is LI (by the minimality of $k$), then all $a_i(\lambda_k - \lambda_i) = 0$. So either $a_i = 0$ or $\lambda_k = \lambda_i$. We cannot have the latter since all the $\lambda$'s are unique, so then all $a_i = 0$, so then we get a contradiction, as that forces $v_k = 0$ while $v_k$ is non-zero.
☐
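A tiny instance (my own check, using the assumed derivative example from before): $e^x$ and $e^{2x}$ are eigenvectors of $D$ for the distinct eigenvalues $1$ and $2$, so the theorem says they are LI. Indeed:
$$a e^x + b e^{2x} \equiv 0 \implies \begin{cases} a + b = 0 & (\text{evaluate at } x = 0) \\ a + 2b = 0 & (\text{differentiate, then } x = 0) \end{cases} \implies a = b = 0$$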
This is handy since we can construct bases from our eigenvectors corresponding to distinct eigenvalues.
Theorem
Suppose $V$ is a finite-dimensional vector space. Then each linear transformation on $V$ has at most $\dim V$ distinct eigenvalues (and hence at most $\dim V$ corresponding LI eigenvectors).
Proof
Assume you had more than $\dim V$ distinct eigenvalues. Then that would give you more than $\dim V$ corresponding eigenvectors, which are all LI by the previous theorem. But since the dimension is $\dim V$, we then have an LI list longer than $\dim V$ vectors, which is a contradiction.
☐
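For scale (an assumed example): on $V = \mathbb{R}^2$ we have $\dim V = 2$, so no operator on $\mathbb{R}^2$ can have three distinct eigenvalues; the map $T(x, y) = (2x, 3y)$ used earlier hits the maximum with exactly two,
$$T(x, 0) = 2(x, 0), \qquad T(0, y) = 3(0, y)$$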
Intro to Polynomial Operators
Given some polynomial $p(z) = a_0 + a_1 z + a_2 z^2 + \cdots + a_m z^m$ and some operator $T \in \mathcal{L}(V)$, we can construct a new operator from this polynomial, namely $p(T)$ is the linear operator in $\mathcal{L}(V)$ such that:
$$p(T) = a_0 I + a_1 T + a_2 T^2 + \cdots + a_m T^m$$
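As a small sketch (my own example): take $p(z) = z^2 - 1$ and the swap map $T(x, y) = (y, x)$ assumed earlier. Since $T^2(x, y) = T(y, x) = (x, y)$, i.e. $T^2 = I$, we get
$$p(T) = T^2 - I = 0$$
so this $T$ is a root of $p$; note its eigenvalues $1$ and $-1$ (with eigenvectors $(1, 1)$ and $(1, -1)$) are exactly the roots of $p$.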