Recall the Spectral Theorem: the following are equivalent:
(a) $T$ is normal (when $\mathbb{F} = \mathbb{C}$); $T$ is self-adjoint (and thus also normal) (when $\mathbb{F} = \mathbb{R}$).
(b) $T$ has an orthonormal eigenbasis.
(c) $T$ has a diagonal matrix representation w.r.t. some orthonormal basis.
We prove the complex case below.
7.B: The Spectral Theorem
Recall that an operator $T \in \mathcal{L}(V)$ (with $V$ finite-dimensional over $\mathbb{F}$) is diagonalizable iff there is a basis of eigenvectors (an eigenbasis) of $T$, that is, a basis consisting entirely of eigenvectors of $T$.
If you have such a basis $e_1, \ldots, e_n$, then the matrix of $T$ with respect to it just has the eigenvalues on the diagonal:
$$\mathcal{M}(T) = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$$
where there could be repetition among the $\lambda_i$s.
The question is: when does $T$ have an *orthonormal* eigenbasis?
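To see there's something to ask here, a quick hypothetical numpy check (my example matrix, not from the lecture): a matrix can be diagonalizable, i.e. have an eigenbasis, while no orthonormal eigenbasis exists.

```python
import numpy as np

# A diagonalizable but NOT normal matrix: eigenvalues 1 and 2 are distinct,
# so an eigenbasis exists -- but it turns out not to be orthonormal.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors
print(eigvals)                        # [1. 2.]  -> diagonalizable

# The two eigenvectors are not orthogonal:
v1, v2 = eigvecs[:, 0], eigvecs[:, 1]
print(abs(np.dot(v1, v2)))            # about 0.707, not 0

# ...and A fails to be normal:
print(np.allclose(A @ A.T, A.T @ A))  # False
```

Since both eigenspaces are one-dimensional, every eigenbasis of this $A$ is a rescaling of these two non-orthogonal vectors, so no orthonormal eigenbasis exists.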
Let's investigate this for a second. Which ones do we know have this property?
Suppose $T$ has an orthonormal eigenbasis $e_1, \ldots, e_n$. Then $\mathcal{M}(T)$ is as shown above, and thus is diagonal. But what about $T^*$? It also must be diagonal. This is because:
$$\mathcal{M}(T^*) = \overline{\mathcal{M}(T)}^{\,T} = \begin{pmatrix} \overline{\lambda_1} & & 0 \\ & \ddots & \\ 0 & & \overline{\lambda_n} \end{pmatrix}$$
Notice that:
$$\mathcal{M}(T)\,\mathcal{M}(T^*) = \begin{pmatrix} |\lambda_1|^2 & & 0 \\ & \ddots & \\ 0 & & |\lambda_n|^2 \end{pmatrix} = \mathcal{M}(T^*)\,\mathcal{M}(T)$$
Thus $T$ is normal. Even better, if $T$ is self-adjoint, then $\mathcal{M}(T^*) = \mathcal{M}(T)$ forces $\overline{\lambda_i} = \lambda_i$ for each $i$, which can only happen if the $\lambda_i$s are all real. And if $V$ is a real vector space, the $\lambda_i$s are automatically real, so $\mathcal{M}(T^*) = \mathcal{M}(T)$ and thus $T^* = T$. So if it's a real vector space, then we get both normality and self-adjointness together. Thus:
When $\mathbb{F} = \mathbb{C}$: $T$ is normal is equivalent to $T$ having an orthonormal eigenbasis.
When $\mathbb{F} = \mathbb{R}$: $T$ is self-adjoint is equivalent to $T$ having an orthonormal eigenbasis.
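Before the formal statement, a quick numerical sanity check of both claims (a minimal numpy sketch; the random test matrices are my own choice, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Complex case: a normal operator A = U D U^* (U unitary, D diagonal complex).
# Its orthonormal eigenbasis is the columns of U.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
A = U @ np.diag(rng.normal(size=n) + 1j * rng.normal(size=n)) @ U.conj().T
print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # True: A is normal

# Real case: a symmetric (self-adjoint) matrix. eigh hands back an
# orthonormal eigenbasis Q and real eigenvalues w with S = Q diag(w) Q^T.
S = rng.normal(size=(n, n))
S = S + S.T
w, Q = np.linalg.eigh(S)
print(np.allclose(Q.T @ Q, np.eye(n)))               # True: orthonormal eigenbasis
print(np.allclose(Q @ np.diag(w) @ Q.T, S))          # True: diagonal representation
```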
The Spectral Theorem(s)
Suppose $T \in \mathcal{L}(V)$, where $V$ is a finite-dimensional inner product space over $\mathbb{F}$. The following are equivalent:
(a) $T$ is normal (when $\mathbb{F} = \mathbb{C}$); $T$ is self-adjoint (and thus also normal) (when $\mathbb{F} = \mathbb{R}$).
(b) $T$ has an orthonormal eigenbasis.
(c) $T$ has a diagonal matrix representation w.r.t. some orthonormal basis.
Proof
We already know (b) is equivalent to (c) from Chapter 5. We already know that (c) implies (a) via our explanation prior. Thus, we just need to show that we can go from (a) to either (b) or (c). Which one to do actually depends on our field, based on whether we have a real vector space or a complex vector space.
In a more specific case (and without loss of generality), we focus on (a) implies (c). Suppose $T$ is normal, and $V$ is a complex vector space. Because $V$ is over $\mathbb{C}$, we know that there exists an orthonormal basis $e_1, \ldots, e_n$ such that $\mathcal{M}(T)$, with respect to it, is UT (upper triangular):
$$\mathcal{M}(T) = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ & \ddots & \vdots \\ 0 & & a_{n,n} \end{pmatrix}$$
thus:
$$\mathcal{M}(T^*) = \begin{pmatrix} \overline{a_{1,1}} & & 0 \\ \vdots & \ddots & \\ \overline{a_{1,n}} & \cdots & \overline{a_{n,n}} \end{pmatrix}$$
The first column of $\mathcal{M}(T)$ is just $Te_1 = a_{1,1}e_1$, and the same column for $\mathcal{M}(T^*)$ is $T^*e_1 = \overline{a_{1,1}}e_1 + \overline{a_{1,2}}e_2 + \cdots + \overline{a_{1,n}}e_n$.
Recall that $\|Te_1\| = \|T^*e_1\|$ since $T$ is normal, by an earlier theorem. So since:
$$\|Te_1\|^2 = |a_{1,1}|^2 \quad \text{and} \quad \|T^*e_1\|^2 = |a_{1,1}|^2 + |a_{1,2}|^2 + \cdots + |a_{1,n}|^2$$
so then:
$$|a_{1,1}|^2 = |a_{1,1}|^2 + |a_{1,2}|^2 + \cdots + |a_{1,n}|^2$$
But since each $|a_{1,j}|^2 \geq 0$, everything else must be $0$. As such, $a_{1,2} = \cdots = a_{1,n} = 0$, so the first row of $\mathcal{M}(T)$ is zero past the diagonal.
Repeat this process for all columns. As such, $\mathcal{M}(T)$ is diagonal.
☐
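The column-by-column argument can be watched numerically: scipy's Schur decomposition realizes "upper triangular w.r.t. an orthonormal basis", and for a normal input the triangle collapses to a diagonal. A sketch, assuming scipy is available (the test matrix is my own construction):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 4

# Build a normal matrix A = U D U^* with genuinely complex eigenvalues.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
A = U @ np.diag(rng.normal(size=n) + 1j * rng.normal(size=n)) @ U.conj().T

# Schur: A = Q T Q^* with Q unitary and T upper triangular.
T, Q = schur(A, output='complex')

# Normality forces the strictly-upper entries of T to vanish, as in the proof.
print(np.allclose(T, np.diag(np.diag(T))))  # True: T is diagonal
```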
For the real case, here's the idea. Consider:
$$v, Tv, T^2v, \ldots, T^nv$$
Since $\dim V = n$ and we have $n + 1$ vectors in $V$, this list is LD. Thus, there are $a_0, \ldots, a_n \in \mathbb{R}$ (not all zero) such that:
$$a_0v + a_1Tv + \cdots + a_nT^nv = 0$$
If we factor the left side as a polynomial applied to $T$, then (after dealing with the irreducible quadratic factors) we must have at least one linear term, thus showing at least one eigenvalue.
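Here is that idea as a hypothetical computation (the symmetric matrix, the SVD trick for extracting the dependency, and the tolerance are all my choices): build the list $v, Tv, \ldots, T^nv$, find a linear dependency, and factor the resulting polynomial.

```python
import numpy as np

# A self-adjoint operator on R^3 and an arbitrary non-zero vector.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
v = np.array([1.0, 0.0, 0.0])
n = T.shape[0]

# Columns v, Tv, T^2 v, ..., T^n v: n+1 vectors in R^n, hence LD.
K = np.column_stack([np.linalg.matrix_power(T, k) @ v for k in range(n + 1)])

# A null vector a of K gives a_0 v + a_1 Tv + ... + a_n T^n v = 0.
_, _, Vt = np.linalg.svd(K)
a = Vt[-1]                     # direction of the (near-)zero singular value

# Roots of p(x) = a_0 + a_1 x + ... + a_n x^n (np.roots wants highest degree first).
roots = np.roots(a[::-1])
real_roots = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
print(real_roots)                        # 1, 2, 4 for this T and v
print(np.sort(np.linalg.eigvalsh(T)))    # the same: each real root is an eigenvalue here
```

For this particular $v$ the dependency is the characteristic polynomial up to scale, so every real root is an eigenvalue; in general the argument only promises at least one.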
We now want to prove that, given $T$ is self-adjoint, $T$ has an orthonormal eigenbasis. Namely, (a) implies (b).
Proof
Suppose $T$ is self-adjoint. We have a lot of lemmas to prove to get to the end of this.
A monic quadratic polynomial $x^2 + bx + c$ (leading coefficient equal to $1$) is irreducible (can't be factored into a product of linear terms over $\mathbb{R}$) iff it has no real zeroes, iff $b^2 < 4c$.
We want to prove that, given an irreducible $x^2 + bx + c$ and a self-adjoint operator $T$, the polynomial operator $T^2 + bT + cI$ is invertible.
To show this, we'll show $T^2 + bT + cI$ is injective, as the domain and codomain already have the same dimension. Plug in an arbitrary vector $v \neq 0$ (using $\langle T^2v, v \rangle = \langle Tv, Tv \rangle$ by self-adjointness, and Cauchy–Schwarz for the inequality):
$$\langle (T^2 + bT + cI)v, v \rangle = \|Tv\|^2 + b\langle Tv, v \rangle + c\|v\|^2 \geq \|Tv\|^2 - |b|\,\|Tv\|\,\|v\| + c\|v\|^2 = \left( \|Tv\| - \frac{|b|\,\|v\|}{2} \right)^2 + \left( c - \frac{b^2}{4} \right)\|v\|^2 > 0$$
thus the inner product is never $0$, so then $(T^2 + bT + cI)v \neq 0$ for any $v \neq 0$; thus $\operatorname{null}(T^2 + bT + cI) = \{0\}$, so then $T^2 + bT + cI$ is injective. Thus $T^2 + bT + cI$ is invertible.
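A numerical check of this lemma (with a random symmetric matrix and coefficients of my own choosing): when $b^2 < 4c$, every eigenvalue of $T^2 + bT + cI$ is at least $c - b^2/4 > 0$, so the operator is invertible.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Random self-adjoint T, plus an irreducible x^2 + bx + c (b^2 = 1 < 8 = 4c).
T = rng.normal(size=(n, n))
T = T + T.T
b, c = 1.0, 2.0
P = T @ T + b * T + c * np.eye(n)     # the polynomial operator T^2 + bT + cI

# P is symmetric with eigenvalues mu^2 + b*mu + c over eigenvalues mu of T;
# completing the square bounds each below by c - b^2/4 > 0, so P is invertible.
print(np.linalg.eigvalsh(P).min() >= c - b**2 / 4)   # True
```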
Now we want to turn $T$ being self-adjoint into $T$ having an eigenvalue, assuming $V \neq \{0\}$. Similar to the one we did in MATH 306, choose a vector $v \neq 0$, where $V$ is a finite-dimensional, real inner product space (FDRIPS) with $\dim V = n$.
$T$ is self-adjoint implies $T$ has an eigenvalue
Consider $v, Tv, T^2v, \ldots, T^nv$, where $n$ is the dimension of $V$. We have $n + 1$ vectors, so then this list is LD, so there are constants $a_0, \ldots, a_n$, not all zero, such that:
$$0 = a_0v + a_1Tv + \cdots + a_nT^nv$$
Thus $0 = p(T)v$ for the polynomial $p(x) = a_0 + a_1x + \cdots + a_nx^n$. If we factor $p$ over $\mathbb{R}$, we get irreducible quadratics and linear terms, so $p(T)$ is a composition of monic irreducible quadratic and linear terms in $T$:
$$0 = c(T^2 + b_1T + c_1I) \cdots (T^2 + b_MT + c_MI)(T - \lambda_1I) \cdots (T - \lambda_mI)v$$
Every irreducible quadratic factor is invertible (by the lemma above), so it cannot send a non-zero vector to $0$; eventually you have to hit a linear term, and this term has our eigenvalue:
$$0 = (T - \lambda_1I) \cdots (T - \lambda_mI)v$$
Thus some $(T - \lambda_jI)$ sends a non-zero vector to $0$, i.e. one of the $\lambda_j$'s is an eigenvalue of $T$. We have to have at least one linear term ($m \geq 1$), as otherwise $p(T)$ would be invertible and then $p(T)v \neq 0$, which is a contradiction.
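As a throwaway numerical check of the lemma (random matrices of my choosing, not from the notes): symmetric matrices always come back with purely real spectra, so in particular an eigenvalue exists.

```python
import numpy as np

rng = np.random.default_rng(3)
for n in (1, 2, 5, 10):
    S = rng.normal(size=(n, n))
    S = S + S.T                                  # self-adjoint
    eigs = np.linalg.eigvals(S.astype(complex))  # force the general solver
    print(n, np.allclose(eigs.imag, 0.0))        # True: spectrum is real
```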
$T$ is self-adjoint implies that, for $U$ a $T$-invariant subspace:
(a) $U^\perp$ is also $T$-invariant.
(b) $T|_U \in \mathcal{L}(U)$ and $T|_U$ is self-adjoint, so $(T|_U)^* = T|_U$.
(c) $T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.
To prove (a), choose any $w \in U^\perp$. Then consider any $u \in U$. Then:
$$\langle Tw, u \rangle = \langle w, Tu \rangle = 0$$
since $Tu \in U$ (as $U$ is $T$-invariant) and $w \in U^\perp$; thus it equals $0$, so consequently $Tw \in U^\perp$, and thus $U^\perp$ is invariant.
To prove (b), first note $T|_U$ maps $U$ into $U$ by invariance, so $T|_U \in \mathcal{L}(U)$. Then notice for $u, w \in U$:
$$\langle T|_Uu, w \rangle = \langle Tu, w \rangle = \langle u, Tw \rangle = \langle u, T|_Uw \rangle$$
thus $T|_U$ is self-adjoint.
To prove (c), apply (a) and (b) to $U^\perp$ instead of $U$.
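A small numerical illustration of part (a) (hypothetical: my own matrix and projection): with $U$ the span of an eigenvector of a symmetric $T$, vectors in $U^\perp$ stay in $U^\perp$ after applying $T$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
T = rng.normal(size=(n, n))
T = T + T.T                       # self-adjoint

# U = span(e1) for a unit eigenvector e1, so U is T-invariant.
_, Q = np.linalg.eigh(T)
e1 = Q[:, 0]

# For w in U-perp:  <Tw, e1> = <w, Te1> = lambda <w, e1> = 0.
for _ in range(5):
    w = rng.normal(size=n)
    w = w - (w @ e1) * e1              # project w onto U-perp
    print(abs((T @ w) @ e1) < 1e-10)   # True: Tw is still orthogonal to e1
```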
$T$ is self-adjoint, where $V$ is a FDRIPS, implies $T$ has an orthonormal eigenbasis
Okay, now let's get back to the real spectral theorem. Suppose $T$ is self-adjoint. We will prove this via induction on the dimension of $V$.
As a base case, suppose $\dim V = 1$. Any two (non-zero) vectors are scalar multiples of each other, so choose some vector $v \neq 0$; then $e_1 = \frac{v}{\|v\|}$ is an eigenvector of unit length, and thus $(e_1)$ is a valid orthonormal eigenbasis.
Now suppose the theorem holds for all vector spaces of dimension $n - 1$ or less, and suppose $\dim V = n$.
First, we know that since $T$ is self-adjoint, $T$ has an eigenvalue $\lambda$ with a corresponding eigenvector $v$. Thus $Tv = \lambda v$ where $v \neq 0$. Define $e_1 = \frac{v}{\|v\|}$.
Now consider $U = \operatorname{span}(e_1)$. This is a one-dimensional subspace of $V$. It's a $T$-invariant subspace, since it's the span of an eigenvector: $T(ce_1) = c\lambda e_1 \in U$. As such, we can use Lecture 12 - More on Spectral Theorem#^b7f429 to say that $U^\perp$ is also a $T$-invariant subspace of $V$; namely, its dimension must be $n - 1$. Notice that:
$$T|_{U^\perp} \in \mathcal{L}(U^\perp)$$
is a self-adjoint operator on that space. By induction, there's an orthonormal eigenbasis for this operator. Call the basis $e_2, \ldots, e_n$.
Then $e_1, e_2, \ldots, e_n$ is an orthonormal eigenbasis of $V$ (it's the right number of vectors, and vectors in one subspace are orthogonal to those in the other via the definition of $U^\perp$).
☐
Okay. World's longest proof is over.
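To wrap up, the induction above is effectively an algorithm. Here's a hypothetical "deflation" sketch of it, where np.linalg.eigh is used only to grab a single eigenvector at each step (standing in for the existence lemma); the recursion on $U^\perp$ is exactly the inductive step.

```python
import numpy as np

def orthonormal_eigenbasis(T: np.ndarray) -> np.ndarray:
    """Return a matrix whose columns are an orthonormal eigenbasis of the
    symmetric matrix T, mirroring the induction in the proof."""
    n = T.shape[0]
    if n == 1:                        # base case: dim V = 1
        return np.array([[1.0]])
    # Existence lemma: self-adjoint => an eigenvector exists; take a unit one.
    _, Q = np.linalg.eigh(T)
    e1 = Q[:, [0]]                    # n x 1, unit length
    # An orthonormal basis B for U-perp (extend e1 to a basis, then drop e1).
    B = np.linalg.qr(np.hstack([e1, np.eye(n)]))[0][:, 1:]
    # T restricted to U-perp, expressed in the basis B: self-adjoint, dim n-1.
    T_sub = B.T @ T @ B
    E_sub = orthonormal_eigenbasis(T_sub)   # induction hypothesis
    return np.hstack([e1, B @ E_sub])       # e1 joined with the lifted basis

rng = np.random.default_rng(5)
T = rng.normal(size=(4, 4))
T = T + T.T
E = orthonormal_eigenbasis(T)
D = E.T @ T @ E
print(np.allclose(E.T @ E, np.eye(4)))      # True: orthonormal
print(np.allclose(D, np.diag(np.diag(D))))  # True: E diagonalizes T
```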