the rest really doesn't matter. What's important is that $p(T)$ is a polynomial operator, derived from the polynomial $p(z)$. Notice that it's an operator, so we apply it to something that $T$ can operate on (namely, a vector in $V$).
As another example, let $p(z)$ and $q(z)$ be polynomials, and form the operators $p(T)$ and $q(T)$, where $T \in \mathcal{L}(V)$. So then:

$$(pq)(z) = p(z)\,q(z),$$

which also equals:

$$q(z)\,p(z).$$

It shouldn't surprise you that the multiplication of polynomials gives the polynomial operator $(pq)(T)$, where:

$$(pq)(T) = p(T)\,q(T).$$

Therefore, it follows that:

$$p(T)\,q(T) = q(T)\,p(T),$$

which is an interesting property. Is it always true? But of course, and it's because you can FOIL out polynomial operators just like normal polynomials, so the properties of polynomials carry over to these new objects.
As an example, since $(z - \lambda)(z - \mu) = (z - \mu)(z - \lambda)$ as polynomials, by our theorem above:

$$(T - \lambda I)(T - \mu I) = (T - \mu I)(T - \lambda I).$$

THIS IS BIGGGGGG!!!! Operators don't always commute ($ST \neq TS$ in general), but they do here. The thing here is that we are only dealing with the single operator $T$, which certainly commutes with itself, and we have distributivity and all the other good properties of $\mathcal{L}(V)$, which is a vector space, so polynomial operators inherit those properties.
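Here's a quick numerical sanity check of that commuting property, as a minimal NumPy sketch (the matrix and the two polynomials below are made up purely for illustration):

```python
import numpy as np

# A made-up 3x3 matrix standing in for the operator T (illustration only).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
I = np.eye(3)

# p(z) = z^2 - 1 and q(z) = z + 4, evaluated at A.
pA = A @ A - I
qA = A + 4 * I

# Polynomial operators in the same single operator always commute.
print(np.allclose(pA @ qA, qA @ pA))  # True
```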
Theorem
Every operator on a finite-dimensional, nonzero complex vector space has an eigenvalue, and an associated eigenvector.
Note that this isn't true on real vector spaces. Here's why: think of the rotation transformation on the real plane. Its eigenvalues are complex-valued, so there is no real eigenvalue (and no real eigenvector), and the theorem fails for real vector spaces.
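To see the rotation example concretely, here's a minimal NumPy sketch computing the eigenvalues of the 90° rotation matrix:

```python
import numpy as np

# 90-degree rotation of the plane: no real vector is sent to a scalar multiple of itself.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.linalg.eigvals(R))  # approximately [1j, -1j]: purely imaginary, no real eigenvalue
```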
Proof
Suppose that $V$ is a finite-dimensional complex vector space with $\dim V = n > 0$. Further, let's suppose that $T \in \mathcal{L}(V)$. We'll need to show that $T$ has an eigenvalue.
Choose $v \in V$ with $v \neq 0$. Make a list:

$$v,\; Tv,\; T^2v,\; \dots,\; T^n v.$$

Eventually, we'll have a linearly dependent (LD) list, since there are $n + 1$ vectors in an $n$-dimensional space, so this list is definitely LD. Hence, there are scalars $a_0, a_1, \dots, a_n \in \mathbb{C}$ (not all 0) where:

$$a_0 v + a_1 Tv + a_2 T^2 v + \cdots + a_n T^n v = 0.$$
But we can convert our operators into a polynomial operator!:

$$\bigl(a_0 I + a_1 T + a_2 T^2 + \cdots + a_n T^n\bigr)v = p(T)v = 0, \qquad \text{where } p(z) = a_0 + a_1 z + \cdots + a_n z^n.$$

And we can factor our polynomial into a product of linear terms. But $a_n$ may be zero, and so on. Still, at least one coefficient beyond $a_0$ is non-zero (if only $a_0$ were non-zero, then $a_0 v = 0$ with $v \neq 0$ would force $a_0 = 0$, a contradiction), so the degree of our polynomial is at least 1 (really between 1 and $n$); call it $m$. Thus:

$$p(z) = c(z - \lambda_1)(z - \lambda_2)\cdots(z - \lambda_m), \qquad c, \lambda_1, \dots, \lambda_m \in \mathbb{C},\; c \neq 0.$$

Thus, we can write this as a product of operators:

$$(T - \lambda_1 I)(T - \lambda_2 I)\cdots(T - \lambda_m I)\,v = 0,$$
since $c \neq 0$ and $v \neq 0$. Recall that if $(T - \lambda I)w = 0$ for some $w \neq 0$, then $Tw = \lambda w$, so $w$ is an eigenvector and $\lambda$ is its eigenvalue. Go in the order of the factors, from right to left. If $(T - \lambda_m I)v = 0$, then by the prior observation $v$ is an eigenvector with eigenvalue $\lambda_m$. If $(T - \lambda_m I)v \neq 0$ but $(T - \lambda_{m-1} I)(T - \lambda_m I)v = 0$, then we know that $(T - \lambda_m I)v$ is an eigenvector with eigenvalue $\lambda_{m-1}$. Repeat. We know that one of these steps must succeed, since if we get all the way to the end:

$$(T - \lambda_1 I)\bigl[(T - \lambda_2 I)\cdots(T - \lambda_m I)v\bigr] = 0,$$

where $(T - \lambda_2 I)\cdots(T - \lambda_m I)v \neq 0$ since none of the earlier steps succeeded (and we're dealing with a degree $m \geq 1$ polynomial, so there is at least one factor), so if we get to that point then $(T - \lambda_2 I)\cdots(T - \lambda_m I)v$ is an eigenvector with eigenvalue $\lambda_1$.
Boom!
☐
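Here is a small numerical sketch of the proof's recipe (the random matrix and vector are made up for illustration): build the dependent list, read the polynomial's coefficients off a null vector, factor it, and check which roots are eigenvalues.

```python
import numpy as np

# A made-up random matrix standing in for T, and a non-zero vector v (illustration only).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

# The list v, Tv, T^2 v, ..., T^n v: n + 1 vectors in an n-dimensional space.
K = np.column_stack([np.linalg.matrix_power(A, j) @ v for j in range(n + 1)])

# A non-trivial dependence a_0 v + a_1 Tv + ... + a_n T^n v = 0 comes from a
# null vector of K (taken here as the last right singular vector).
a = np.linalg.svd(K)[2][-1]

# Factor p(z) = a_0 + a_1 z + ... + a_n z^n; np.roots wants the highest-degree
# coefficient first, so reverse the coefficient list.
roots = np.roots(a[::-1])

# The proof guarantees at least one root lambda_j makes (T - lambda_j I)
# non-injective, i.e. is an eigenvalue; detect that via a tiny smallest singular value.
for lam in roots:
    smallest_sv = np.linalg.svd(A - lam * np.eye(n), compute_uv=False)[-1]
    if smallest_sv < 1e-8:
        print("eigenvalue:", lam)
```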
We can now always use an eigenvalue to simplify certain processes! An application is now detailed below:
Computing Higher Powers of Matrices
When we have lots of 0's in a matrix, powers are easy to compute; in particular, when we can diagonalize a matrix, we can compute its higher powers cheaply. Let $A = PDP^{-1}$ with $D$ diagonal. We know that $A^k = PD^kP^{-1}$, and $D^k$ just has the entries $d_{ii}^k$ on the diagonal for all $i$.
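A minimal NumPy sketch of that shortcut, assuming the matrix is diagonalizable (the matrix below is made up for illustration):

```python
import numpy as np

# Made-up diagonalizable matrix (illustration only); its eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Diagonalize: columns of P are eigenvectors, so A = P D P^{-1} with D = diag(eigvals).
eigvals, P = np.linalg.eig(A)

k = 10
# A^k = P D^k P^{-1}, and D^k just raises each diagonal entry to the k-th power.
A_to_the_k = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)

# Compare against repeated multiplication.
print(np.allclose(A_to_the_k, np.linalg.matrix_power(A, k)))  # True
```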
Upper Triangular
A matrix is upper triangular if there are 0's in the lower triangle (i.e. there's data only on the diagonal and in the upper part):

$$\begin{pmatrix}
* & * & \cdots & * \\
0 & * & \cdots & * \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & *
\end{pmatrix}$$
For example, suppose that $V$ is a vector space of dimension 3, and $(v_1, v_2, v_3)$ is its corresponding basis. Let $T \in \mathcal{L}(V)$ where:

$$\mathcal{M}(T) = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
0 & a_{22} & a_{23} \\
0 & 0 & a_{33}
\end{pmatrix}.$$

Therefore:

$$Tv_1 = a_{11}v_1, \qquad Tv_2 = a_{12}v_1 + a_{22}v_2, \qquad Tv_3 = a_{13}v_1 + a_{23}v_2 + a_{33}v_3.$$

But notice this! If I look at $Tv_k$, we can always write this as a linear combination of the $v_j$'s where $j \leq k$. Namely:

$$Tv_k \in \operatorname{span}(v_1, \dots, v_k).$$

That's big! If I were to look at $\operatorname{span}(v_1, \dots, v_k)$ and apply $T$ to any vector in it, the result stays inside that span. So then $\operatorname{span}(v_1, \dots, v_k)$ is a $T$-invariant subspace for all $k$.
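A tiny numerical sketch of that invariance claim (the upper triangular matrix below is made up, and the standard basis plays the role of $(v_1, v_2, v_3)$):

```python
import numpy as np

# Made-up upper triangular matrix w.r.t. the standard basis (illustration only).
M = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

for k in range(3):
    e_k = np.zeros(3)
    e_k[k] = 1.0
    image = M @ e_k
    # Every component past index k is zero, so M e_k lies in span(e_1, ..., e_{k+1}).
    print(image, np.allclose(image[k + 1:], 0.0))
```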
We tie everything together into one neat theorem:
Upper Triangular Transformation Properties
The following are equivalent:
$T$ has an upper triangular matrix representation w.r.t. some basis $(v_1, \dots, v_n)$ for $V$.
$Tv_k \in \operatorname{span}(v_1, \dots, v_k)$ for all $k = 1, \dots, n$.
$\operatorname{span}(v_1, \dots, v_k)$ is $T$-invariant for all $k = 1, \dots, n$.
Finishing Up
For the next lecture, we'll talk about the following theorem:
Theorem
Every operator $T \in \mathcal{L}(V)$, where $V$ is a finite-dimensional complex vector space, has a matrix representation (with respect to some basis of $V$) which is upper triangular.
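As a numerical preview of that theorem, here's a minimal sketch (assuming SciPy is available): the complex Schur decomposition is one standard way to exhibit such an upper triangular representation, applied here to the rotation matrix from earlier, which has no real eigenvalues at all.

```python
import numpy as np
from scipy.linalg import schur

# The rotation matrix from earlier: no real eigenvalues, yet over C it still
# has an upper triangular representation with respect to some basis.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Complex Schur decomposition: R = Z T Z*, with T upper triangular and the
# columns of Z an orthonormal basis of C^2.  T is the matrix of R w.r.t. that basis.
T, Z = schur(R, output='complex')

print(np.allclose(Z @ T @ Z.conj().T, R))  # True: same operator, new basis
print(np.allclose(np.tril(T, -1), 0))      # True: strictly-lower part is zero
```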