Lecture 12 - More on Spectral Theorem

Recall last time we proved:

The Spectral Theorem(s)

Suppose $T \in \mathcal{L}(V)$. The following are equivalent:

  1. T is normal (when F=C), T is self-adjoint (and thus also normal) (when F=R)
  2. V has an orthonormal eigenbasis.
  3. T has a diagonal matrix representation w.r.t. some orthonormal basis.
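As a quick numerical sanity check of the theorem (not from the lecture; a numpy sketch with the size and seed chosen arbitrarily), a random real symmetric matrix really does have an orthonormal eigenbasis:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random real self-adjoint (symmetric) operator on R^5.
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2          # symmetric: A == A.T

# eigh returns eigenvalues and an orthonormal eigenbasis (columns of Q).
w, Q = np.linalg.eigh(A)

assert np.allclose(Q.T @ Q, np.eye(5))           # columns are orthonormal
assert np.allclose(A, Q @ np.diag(w) @ Q.T)      # A is diagonal w.r.t. that basis
```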

We proved the complex case as:

7.B: The Spectral Theorem

Recall that an operator $T \in \mathcal{L}(V)$ (with $V$ finite-dimensional over $F$) is diagonalizable iff there is a basis of $V$ consisting of eigenvectors of $T$ (an eigenbasis).
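A numerical aside of mine (not from the lecture): diagonalizable alone does not give an *orthonormal* eigenbasis. A diagonalizable but non-normal matrix has an eigenbasis whose vectors fail to be orthogonal:

```python
import numpy as np

# Diagonalizable but NOT normal: an eigenbasis exists, but it can't be orthonormal.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
w, P = np.linalg.eig(A)               # columns of P are eigenvectors

assert np.allclose(A @ P, P * w)      # they really are eigenvectors
# The two eigenvectors are not orthogonal:
assert abs(P[:, 0] @ P[:, 1]) > 1e-8
# And A is not normal:
assert not np.allclose(A @ A.T, A.T @ A)
```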

If you have such a basis $\beta$, then the matrix is just gonna have the eigenvalues on the diagonal:

$$\mathcal{M}(T, \beta) = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
where there could be repetition among the $\lambda_i$s.

The question is, when does $T$ have an orthonormal eigenbasis?

Let's investigate this for a second. Which ones do we know have this property?

Suppose $T$ has an orthonormal eigenbasis $\beta$. Then $\mathcal{M}(T, \beta)$ is as shown above, and thus is diagonal. But what about $\mathcal{M}(T^*)$? It also must be diagonal. This is because:

$$\mathcal{M}(T^*) = \mathcal{M}(T)^* = \begin{pmatrix} \bar{\lambda}_1 & 0 & \cdots & 0 \\ 0 & \bar{\lambda}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \bar{\lambda}_n \end{pmatrix}$$

Notice that, since diagonal matrices commute:

$$\mathcal{M}(TT^*) = \mathcal{M}(T)\mathcal{M}(T^*) = \mathcal{M}(T^*)\mathcal{M}(T) = \mathcal{M}(T^*T)$$
Thus $T$ is normal. Even better, the $\lambda_i$s would have to be real whenever $T$ is self-adjoint, since then $\mathcal{M}(T^*) = \mathcal{M}(T)$ forces $\bar{\lambda}_i = \lambda_i$, which can only happen if the $\lambda_i$s are all real. And when $F = \mathbb{R}$, the diagonal entries are automatically real, so $\mathcal{M}(T^*) = \mathcal{M}(T)$ and thus $T^* = T$. So if it's a real vector space, then we get both normality and self-adjointness together. Thus:

  • When $F = \mathbb{C}$: $T$ being normal is equivalent to $V$ having an orthonormal eigenbasis.
  • When $F = \mathbb{R}$: $T$ being self-adjoint is equivalent to $V$ having an orthonormal eigenbasis.
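To see the two bullets in action, here's an illustrative example of mine (not from the lecture): the 90-degree rotation matrix is normal but not self-adjoint, so over $\mathbb{C}$ it has an orthonormal eigenbasis, even though over $\mathbb{R}$ it has no eigenvalues at all:

```python
import numpy as np

# Over C: normal but not self-adjoint. A rotation by 90 degrees.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]], dtype=complex)

assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # normal
assert not np.allclose(A, A.conj().T)                # not self-adjoint

# Still has an orthonormal eigenbasis over C (eigenvalues +/- i):
w, Q = np.linalg.eig(A)
assert np.allclose(Q.conj().T @ Q, np.eye(2))
assert np.allclose(A, Q @ np.diag(w) @ Q.conj().T)
```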
The Spectral Theorem(s)

Suppose $T \in \mathcal{L}(V)$. The following are equivalent:

  1. T is normal (when F=C), T is self-adjoint (and thus also normal) (when F=R)
  2. V has an orthonormal eigenbasis.
  3. T has a diagonal matrix representation w.r.t. some orthonormal basis.

Proof
We already know (b) is equivalent to (c) from Chapter 5, and we already know that (c) implies (a) via the explanation above. Thus, we just need to show that we can go from (a) to either (b) or (c). Which one we prove actually depends on our field $F$: whether we have a real vector space or a complex vector space.

In a more specific case (and without loss of generality), we focus on (a) implies (c). Suppose $T$ is normal and $V$ is a complex vector space. Because $V$ is over $\mathbb{C}$, we know from earlier that there exists an orthonormal basis $\beta = \{e_1, \dots, e_n\}$ such that $\mathcal{M}(T) = \mathcal{M}(T, \beta)$ is upper triangular:

$$\mathcal{M}(T) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$$

thus:

$$\mathcal{M}(T^*) = \mathcal{M}(T)^* = \begin{pmatrix} \bar{a}_{11} & 0 & \cdots & 0 \\ \bar{a}_{12} & \bar{a}_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \bar{a}_{1n} & \bar{a}_{2n} & \cdots & \bar{a}_{nn} \end{pmatrix}$$
The first column of $\mathcal{M}(T)$ gives the coordinates of $Te_1$, and likewise the first column of $\mathcal{M}(T^*)$ gives $T^*e_1$.

Recall that $\|Tv\| = \|T^*v\|$ for normal $T$, by an earlier theorem. Also, since $\beta$ is orthonormal:

$$v = c_1e_1 + \cdots + c_ne_n \implies \|v\|^2 = \sum_{i=1}^n |c_i|^2$$

so then:

$$\|Te_1\| = \|T^*e_1\| \implies |a_{11}|^2 = |a_{11}|^2 + |a_{12}|^2 + \cdots + |a_{1n}|^2$$

Subtracting $|a_{11}|^2$ from both sides, everything else must be $0$. As such, $a_{12} = \cdots = a_{1n} = 0$.

Repeat this process for the remaining basis vectors $e_2, \dots, e_n$. As such, $\mathcal{M}(T)$ is diagonal.
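The norm identity driving this column-by-column argument can be checked numerically. A sketch of mine (not from the lecture), where the construction $A = QDQ^*$ just manufactures a normal matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a normal matrix A = Q D Q^* from a random unitary Q and diagonal D.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)                      # random unitary
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
A = Q @ D @ Q.conj().T

assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal

# The engine of the proof: ||A e_j|| == ||A* e_j|| for every basis vector,
# i.e. the j-th column norm of A equals the j-th column norm of A*.
col = np.linalg.norm(A, axis=0)
colstar = np.linalg.norm(A.conj().T, axis=0)
assert np.allclose(col, colstar)
```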

For the real case, here's the idea. Take any $v \neq 0$ and consider:

$$\beta = \{v, Tv, \dots, T^nv\}$$

Since $\dim(V) = n$ and we have $n+1$ vectors in $\beta$, the list $\beta$ is linearly dependent. Thus, there are $\alpha_0, \dots, \alpha_n$ (not all zero) such that:

$$(\alpha_0 I + \cdots + \alpha_n T^n)v = 0$$

If we factor the polynomial on the left side, we must get at least one linear factor, which yields at least one eigenvalue.
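A quick check of the pigeonhole step, as a numpy sketch of mine: the $n+1$ Krylov vectors $v, Tv, \dots, T^nv$ really are linearly dependent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

B = rng.standard_normal((n, n))
T = (B + B.T) / 2                      # a self-adjoint operator on R^4
v = rng.standard_normal(n)

# Columns: v, Tv, T^2 v, ..., T^n v  (n+1 vectors in an n-dimensional space).
K = np.column_stack([np.linalg.matrix_power(T, j) @ v for j in range(n + 1)])

# n+1 vectors in R^n must be linearly dependent: rank is at most n < n+1.
assert np.linalg.matrix_rank(K) < K.shape[1]
```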

We now want to prove that if $T$ is self-adjoint, then $V$ has an orthonormal eigenbasis; namely, (a) implies (b).

Proof
Suppose T is self-adjoint. We have a lot of lemmas to prove to get to the end of this.

A monic quadratic polynomial $p(x) = x^2 + bx + c$ ("monic" meaning the leading coefficient is equal to $1$) is irreducible over $\mathbb{R}$ (can't be factored into a product of linear terms) iff it has no real zeroes, iff $b^2 < 4c$.

We want to prove that, given an irreducible $p(x) = x^2 + bx + c$ and a self-adjoint operator $T = T^*$, the polynomial operator $p(T) = T^2 + bT + cI$ is invertible.

To show this, we'll show $p(T)$ is injective, which suffices since the domain and codomain have the same dimension. Take an arbitrary vector $v \in V$ with $v \neq 0$:

$$\begin{aligned}
\langle p(T)v, v\rangle &= \langle (T^2 + bT + cI)v, v\rangle = \langle T^2v, v\rangle + b\langle Tv, v\rangle + c\langle Iv, v\rangle \\
&= \langle Tv, Tv\rangle + b\langle Tv, v\rangle + c\langle v, v\rangle \\
&= \|Tv\|^2 + b\langle Tv, v\rangle + c\|v\|^2 \\
&\geq \|Tv\|^2 - |b|\,\|Tv\|\,\|v\| + c\|v\|^2 \\
&= \underbrace{\left(\|Tv\| - \tfrac{|b|}{2}\|v\|\right)^2}_{\geq\, 0} + \underbrace{\left(c - \tfrac{b^2}{4}\right)}_{>\, 0}\|v\|^2 > 0
\end{aligned}$$

(using self-adjointness in the second line and Cauchy-Schwarz for the inequality)

thus the inner product is never $0$, so $p(T)v \neq 0$ for any $v \neq 0$. Hence $\mathrm{null}(p(T)) = \{0\}$, so $p(T)$ is injective and thus invertible.
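Numerically, as an illustrative numpy sketch of mine (with $b, c$ picked so that $b^2 < 4c$), $\langle p(T)v, v\rangle$ stays positive and $p(T)$ is invertible:

```python
import numpy as np

rng = np.random.default_rng(3)

B = rng.standard_normal((5, 5))
T = (B + B.T) / 2                      # self-adjoint

b, c = 1.0, 2.0                        # b^2 < 4c, so x^2 + bx + c is irreducible
assert b**2 < 4*c

pT = T @ T + b * T + c * np.eye(5)     # the operator p(T)

# <p(T)v, v> > 0 for a batch of random nonzero v:
for _ in range(100):
    v = rng.standard_normal(5)
    assert (pT @ v) @ v > 0

# Hence p(T) is invertible (nonzero determinant):
assert abs(np.linalg.det(pT)) > 1e-10
```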

Now we want to turn "$T$ is self-adjoint" into "$T$ has an eigenvalue," assuming $V \neq \{0\}$. Similar to what we did in MATH 306: let $V$ be a finite-dimensional, real inner product space (FDRIPS), and choose a vector $v \in V$ with $v \neq 0$.

$T$ is self-adjoint implies $T$ has an eigenvalue

Consider $v, Tv, \dots, T^nv$ where $n$ is the dimension of $V$. We have $n+1$ vectors, so this list is linearly dependent, so there are constants $\alpha_i$, not all zero, such that:

$$\alpha_0v + \alpha_1Tv + \cdots + \alpha_nT^nv = 0 \iff (\alpha_0I + \alpha_1T + \cdots + \alpha_nT^n)v = 0$$

Thus $p(T)v = 0$, where $p(x) = \alpha_0 + \alpha_1x + \cdots + \alpha_nx^n$. If we factor $p(T)$, we get a product of monic irreducible quadratic factors and linear factors (up to a leading constant).

Each irreducible quadratic factor $T^2 + b_iT + c_iI$ is invertible by the lemma above, so it cannot send a nonzero vector to $0$. Since polynomials in $T$ commute, we can move the quadratic factors to the front and cancel them, so the linear factors must send $v$ to $0$:

$$(T - \lambda_1I)\cdots(T - \lambda_kI)\underbrace{(T^2 + b_1T + c_1I)\cdots(T^2 + b_lT + c_lI)}_{\text{invertible}}v = 0 \implies (T - \lambda_1I)\cdots(T - \lambda_kI)v = 0$$

Thus some $\lambda_j$ is an eigenvalue of $T$: reading the product from the right, take the last vector in the chain that is nonzero; the next factor $(T - \lambda_jI)$ sends it to $0$, so that vector is an eigenvector with eigenvalue $\lambda_j$. (We must have at least one linear factor, i.e. $k \geq 1$, as otherwise $p(T)$ would be a product of invertible operators, contradicting $p(T)v = 0$ with $v \neq 0$.)
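We can replay this argument numerically. In this sketch of mine (the SVD is just one convenient way to find the $\alpha_i$), we extract the $\alpha_i$ from the Krylov matrix, factor the polynomial with `numpy.roots`, and confirm a real root shows up among the eigenvalues of $T$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

B = rng.standard_normal((n, n))
T = (B + B.T) / 2                        # a self-adjoint operator on R^4
v = rng.standard_normal(n)

# Krylov matrix: columns v, Tv, ..., T^n v.
K = np.column_stack([np.linalg.matrix_power(T, j) @ v for j in range(n + 1)])

# A null vector of K gives alphas with (a0 I + a1 T + ... + an T^n) v = 0.
_, _, Vt = np.linalg.svd(K)
alpha = Vt[-1]                           # right singular vector spanning the null space
assert np.allclose(K @ alpha, 0, atol=1e-8)

# Roots of a0 + a1 x + ... + an x^n (np.roots wants highest degree first).
roots = np.roots(alpha[::-1])
real_roots = roots[np.abs(roots.imag) < 1e-6].real

# At least one real root exists, and some real root matches an eigenvalue of T.
eigs = np.linalg.eigvalsh(T)
assert len(real_roots) > 0
assert np.abs(real_roots[:, None] - eigs[None, :]).min() < 1e-3
```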

If $T$ is self-adjoint and $U$ is a $T$-invariant subspace, then:

  • $U^\perp$ is also $T$-invariant.
  • $T|_U \in \mathcal{L}(U)$ is self-adjoint, i.e. $T|_U = (T|_U)^*$.
  • $T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.

To prove (a), choose any $v \in U^\perp$ and consider any $u \in U$. Then, since $T$ is self-adjoint:

$$\langle Tv, u\rangle = \langle \underbrace{v}_{\in\, U^\perp}, \underbrace{Tu}_{\in\, U}\rangle$$

thus it equals $0$, so $Tv \perp U$. Consequently $Tv \in U^\perp$, and thus $U^\perp$ is $T$-invariant.

To prove (b), notice that for $u_1, u_2 \in U$:

$$\langle T|_U u_1, u_2\rangle = \langle Tu_1, u_2\rangle = \langle u_1, Tu_2\rangle = \langle u_1, T|_U u_2\rangle$$

thus $T|_U$ is self-adjoint.

To prove (c), apply (a) and (b) with $U^\perp$ in place of $U$ (which is valid since $U^\perp$ is $T$-invariant by (a)).
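A numerical check of the lemma, as a sketch of mine (I use eigenvectors of $T$ only as a convenient way to manufacture a $T$-invariant subspace $U$):

```python
import numpy as np

rng = np.random.default_rng(5)

B = rng.standard_normal((5, 5))
T = (B + B.T) / 2                    # self-adjoint
w, Q = np.linalg.eigh(T)

# U = span of two eigenvectors is T-invariant; P projects onto U.
U = Q[:, :2]
P = U @ U.T                          # orthogonal projector onto U
Pperp = np.eye(5) - P                # orthogonal projector onto U-perp

# (a) U-perp is T-invariant: the U-component of T applied to U-perp vanishes.
assert np.allclose(P @ T @ Pperp, 0)

# (b) T restricted to U, expressed in the basis U, is self-adjoint:
TU = U.T @ T @ U
assert np.allclose(TU, TU.T)
```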

If $T$ is self-adjoint and $V$ is a FDRIPS, then $V$ has an orthonormal eigenbasis

Okay now let's get back to the real spectral theorem. Suppose T is self-adjoint. We will prove this via induction on the dimension of V.

As a base case, $n = \dim(V) = 1$. Any two (non-zero) vectors in $V$ are scalar multiples of each other, so every non-zero vector is an eigenvector of $T$. Choose some $v \neq 0$; then $\frac{v}{\|v\|}$ is an eigenvector of unit length, and thus $\{\frac{v}{\|v\|}\}$ is a valid orthonormal eigenbasis.

Now suppose the theorem holds for all vector spaces of dimension $k - 1$ or less, and suppose $\dim(V) = k$.

First, since $T$ is self-adjoint, $T$ has an eigenvalue $\lambda_1$ with a corresponding eigenvector $v_1$. Thus $Tv_1 = \lambda_1v_1$ where $v_1 \neq 0$. Define $e_1 = \frac{v_1}{\|v_1\|}$.

Now consider $\mathrm{span}(e_1)$. This is a one-dimensional subspace of $V$, and it's $T$-invariant since $T(ce_1) = c\lambda_1e_1 \in \mathrm{span}(e_1)$. As such, we can use Lecture 12 - More on Spectral Theorem#^b7f429 to say that $\mathrm{span}(e_1)^\perp$ is also a $T$-invariant subspace of $V$, and its dimension must be $k - 1$. Notice that:

$$T|_{\mathrm{span}(e_1)^\perp} \in \mathcal{L}(\mathrm{span}(e_1)^\perp)$$

is a self-adjoint operator on that space (part (c) of the lemma above). By induction, there's an orthonormal eigenbasis for this operator; call it $\{e_2, \dots, e_k\}$.

Then $\beta = \{e_1, e_2, \dots, e_k\}$ is an orthonormal eigenbasis of $V$: it has the right number of vectors, and $e_1$ is orthogonal to each of $e_2, \dots, e_k$ by the definition of $\perp$.

Okay. World's longest proof is over.
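The induction translates almost line-by-line into a recursive deflation procedure. This is a teaching sketch of mine, not a real eigensolver (it leans on numpy's `eigh` merely to produce the one eigenvector whose existence we proved above):

```python
import numpy as np

def orthonormal_eigenbasis(T):
    """Mirror the induction: peel off one unit eigenvector, then recurse on
    its orthogonal complement. Columns of the result form the eigenbasis."""
    k = T.shape[0]
    if k == 1:
        return np.array([[1.0]])                 # base case: dim(V) = 1
    # Grab one unit eigenvector (the lecture proves one exists).
    w, Q = np.linalg.eigh(T)
    e1 = Q[:, 0:1]                               # shape (k, 1)
    # Orthonormal basis C of span(e1)-perp: complete e1 to a basis via QR.
    C = np.linalg.qr(np.hstack([e1, np.eye(k)]))[0][:, 1:k]   # k x (k-1)
    # T restricted to the complement, in the basis C (self-adjoint by the lemma).
    Tsub = C.T @ T @ C
    Bsub = orthonormal_eigenbasis(Tsub)          # induction hypothesis
    return np.hstack([e1, C @ Bsub])

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
T = (B + B.T) / 2
E = orthonormal_eigenbasis(T)

assert np.allclose(E.T @ E, np.eye(4))           # orthonormal
D = E.T @ T @ E
assert np.allclose(D, np.diag(np.diag(D)))       # diagonal: an eigenbasis
```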