# HW 4 - The Spectral Theorem

## 7.B: The Spectral Theorem

### 2

> [!theorem]
> Suppose $T$ is a self-adjoint operator on a finite-dimensional inner product space and $2, 3$ are the only eigenvalues of $T$. Then:
> $$T^2 - 5T + 6I = 0$$

*Proof*
Notice:

$$T^2 - 5T + 6I = (T - 2I)(T - 3I)$$

Let $v \in V$ and apply both sides of this factorization to $v$. Since $V$ is a finite-dimensional inner product space, we have to consider whether $V$ is a complex or a real vector space.

For a complex vector space, since $T$ is self-adjoint and thus normal, the Complex Spectral Theorem yields that $V$ has an orthonormal basis $\beta = \{v_1, ..., v_n\}$ consisting of eigenvectors of $T$. Since $2, 3$ are the only eigenvalues of $T$, every $v_i$ lies in either $\operatorname{null}(T - 2I)$ or $\operatorname{null}(T - 3I)$. As such, any $v \in V$ is spanned via:

$$v = \alpha_1v_1 + \dots + \alpha_nv_n$$

so then:

$$(T^2 - 5T + 6I)v = (T - 2I)(T - 3I)\left(\sum_{i=1}^n \alpha_iv_i\right) = \sum_{i=1}^n \alpha_i(T - 2I)(T - 3I)v_i = 0$$

where each term vanishes: if $v_i \in \operatorname{null}(T - 3I)$, then $(T - 2I)(T - 3I)v_i = (T - 2I)0 = 0$, and if $v_i \in \operatorname{null}(T - 2I)$, then since the two factors commute, $(T - 2I)(T - 3I)v_i = (T - 3I)(T - 2I)v_i = (T - 3I)0 = 0$. Thus, since $v$ was arbitrary, $T^2 - 5T + 6I = 0$.

For a real vector space, the same process applies: since $T$ is self-adjoint, the Real Spectral Theorem gives that $V$ has an orthonormal basis consisting of eigenvectors of $T$, and the computation above goes through unchanged. ☐
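
As a quick numerical sanity check of this identity (a hypothetical numpy sketch, not part of the proof), we can build a symmetric operator whose only eigenvalues are $2$ and $3$ and confirm that $T^2 - 5T + 6I$ vanishes:

```python
import numpy as np

# Hypothetical sanity check: build a real self-adjoint (symmetric) operator
# whose only eigenvalues are 2 and 3, then verify T^2 - 5T + 6I = 0.
rng = np.random.default_rng(0)

# Random orthogonal matrix Q from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# T = Q D Q^T is symmetric with eigenvalues exactly {2, 3}.
D = np.diag([2.0, 3.0, 3.0])
T = Q @ D @ Q.T

residual = T @ T - 5 * T + 6 * np.eye(3)
print(np.allclose(residual, 0))  # True (up to floating-point error)
```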

### 3

> [!theorem]
> Give an example of an operator $T \in \mathcal{L}(\mathbb{C}^3)$ such that $2$ and $3$ are the only eigenvalues of $T$ while $T^2 - 5T + 6I \neq 0$.

*Proof*
As the previous theorem implies, we should construct $T$ to not be self-adjoint. As such, define $T$ by its action on the standard basis $\beta$ of $\mathbb{C}^3$, given by the matrix:

$$\mathcal{M}(T) = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{bmatrix}$$

Notice that the only eigenvalues here are $2$ and $3$: the matrix is upper triangular, so the eigenvalues of $T$ are exactly its diagonal entries $2, 3, 2$. The corresponding eigenvectors are:

$$v_1 = (1, 1, 0), \quad v_2 = (1, 0, 0)$$

for the eigenvalues $3$ and $2$ respectively.

Now notice that:

$$\begin{align}
\mathcal{M}(T^2 - 5T + 6I) &= \begin{bmatrix} 2 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{bmatrix}^2 - 5\begin{bmatrix} 2 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{bmatrix} + 6\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\
&= \begin{bmatrix} 4 & 5 & 1 \\ 0 & 9 & 5 \\ 0 & 0 & 4 \end{bmatrix} - \begin{bmatrix} 10 & 5 & 0 \\ 0 & 15 & 5 \\ 0 & 0 & 10 \end{bmatrix} + \begin{bmatrix} 6 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 6 \end{bmatrix} \\
&= \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \neq \mathcal{M}(0, \beta)
\end{align}$$

Thus $T^2 - 5T + 6I \neq 0$. ☐
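
The counterexample can likewise be checked numerically; a hypothetical numpy sketch:

```python
import numpy as np

# Hypothetical check of the counterexample: the only eigenvalues are {2, 3},
# yet T^2 - 5T + 6I is not the zero operator.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])

print(sorted(set(np.round(np.linalg.eigvals(T), 6))))  # [2.0, 3.0]

residual = T @ T - 5 * T + 6 * np.eye(3)
print(residual)                   # only nonzero entry is the 1 in the top-right corner
print(np.allclose(residual, 0))   # False
```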

### 6

> [!theorem]
> A normal operator on a complex inner product space is self-adjoint iff all its eigenvalues are real.

*Proof*
Consider a normal operator $T \in \mathcal{L}(V)$, where $V$ is a complex inner product space. By the Complex Spectral Theorem, $V$ has an orthonormal basis consisting of eigenvectors of $T$.

If $T$ is self-adjoint, then by Chapter 7 - Operators on Inner Product Spaces#^6693ec all eigenvalues of $T$ are real.

For the other direction, suppose that $T$ has only real eigenvalues. By the previous paragraph, $V$ has an orthonormal basis $v_1, ..., v_n$ consisting of eigenvectors of $T$; let $\lambda_1, ..., \lambda_n \in \mathbb{R}$ be the corresponding (real) eigenvalues.

Using Chapter 7 - Operators on Inner Product Spaces#^a30225, we can show $T$ is self-adjoint by showing $\braket{Tv, v}$ is real for every $v \in V$. Notice that if $v \in V$ is arbitrary, then since $v_1, ..., v_n$ is a basis:

$$v = \sum_{i=1}^n \alpha_iv_i$$

Now notice that:

$$\begin{align}
\braket{Tv, v} &= \left\langle T\left(\sum_{i=1}^n \alpha_iv_i\right), \sum_{i=1}^n \alpha_iv_i \right\rangle = \left\langle \sum_{i=1}^n \alpha_iT(v_i), \sum_{i=1}^n \alpha_iv_i \right\rangle \\
&= \left\langle \sum_{i=1}^n \lambda_i\alpha_iv_i, \sum_{j=1}^n \alpha_jv_j \right\rangle \\
&= \sum_{i,j=1}^n \lambda_i\alpha_i\overline{\alpha_j}\braket{v_i, v_j} \\
&= \sum_{i=1}^n \lambda_i\alpha_i\overline{\alpha_i} = \sum_{i=1}^n \lambda_i|\alpha_i|^2 \in \mathbb{R}
\end{align}$$

where the last step follows from the fact that $\lambda_i \in \mathbb{R}$ and, even though $\alpha_i \in \mathbb{C}$, we know that $|\alpha_i|^2 \in \mathbb{R}$ regardless. Since $v \in V$ was arbitrary, $\braket{Tv, v} \in \mathbb{R}$ for all $v \in V$, so $T$ is self-adjoint. ☐
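
For intuition, here is a hypothetical numpy sketch (not part of the proof) illustrating both directions on random examples: a normal matrix built from real eigenvalues comes out Hermitian, while one built with a non-real eigenvalue does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a random unitary U, then form normal operators U diag(eigs) U*.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(A)

T_real = U @ np.diag([1.0, -2.0, 5.0]) @ U.conj().T      # only real eigenvalues
T_cplx = U @ np.diag([1.0, 2.0 + 1j, 5.0]) @ U.conj().T  # one non-real eigenvalue

print(np.allclose(T_real, T_real.conj().T))  # True: normal + real eigenvalues => self-adjoint
print(np.allclose(T_cplx, T_cplx.conj().T))  # False: a non-real eigenvalue breaks self-adjointness
```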

### 7

> [!theorem]
> Suppose $V$ is a complex inner product space and $T \in \mathcal{L}(V)$ is a normal operator such that $T^9 = T^8$. Then $T$ is self-adjoint and $T^2 = T$.

*Proof*
Since $T$ is normal, $V$ has an orthonormal basis consisting of eigenvectors of $T$, namely $\beta = \{v_1, ..., v_n\}$, where each $v_i$ has corresponding eigenvalue $\lambda_i \in \mathbb{C}$ (we don't yet know whether $T$ is self-adjoint, so we allow the $\lambda_i$'s to be complex for now). Suppose $T^9 = T^8$.

Via the Complex Spectral Theorem, since $T$ is normal, $T$ has a diagonal matrix with respect to our orthonormal basis:

$$\mathcal{M}(T,\beta) = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix}$$

Since $T^8 = T^9$:

$$\begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix}^8 = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix}^9 \implies \begin{bmatrix} \lambda_1^8 & 0 & \dots & 0 \\ 0 & \lambda_2^8 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n^8 \end{bmatrix} = \begin{bmatrix} \lambda_1^9 & 0 & \dots & 0 \\ 0 & \lambda_2^9 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n^9 \end{bmatrix}$$

Thus each $\lambda_i^8 = \lambda_i^9$. Factoring gives $\lambda_i^8(\lambda_i - 1) = 0$, so each $\lambda_i$ is either $0$ or $1$. As such, all $\lambda_i \in \mathbb{R}$, so by the previous exercise the normal operator $T$ must be self-adjoint. Furthermore, notice that if $\lambda_i = 0$ then $\lambda_i^2 = 0 = \lambda_i$, and likewise if $\lambda_i = 1$ then $\lambda_i^2 = 1 = \lambda_i$. Hence, the following is true:

$$\mathcal{M}(T^2,\beta) = \begin{bmatrix} \lambda_1^2 & 0 & \dots & 0 \\ 0 & \lambda_2^2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n^2 \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix} = \mathcal{M}(T,\beta)$$

Thus $T^2 = T$. ☐
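
As a numerical illustration (a hypothetical sketch, not part of the proof): per the proof, any normal $T$ with $T^9 = T^8$ has eigenvalues in $\{0, 1\}$, and building such a $T$ from a random unitary change of basis confirms the claimed properties.

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal T with eigenvalues in {0, 1}, via a random unitary change of basis.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)
T = U @ np.diag([1.0, 1.0, 0.0, 0.0]) @ U.conj().T

T8 = np.linalg.matrix_power(T, 8)
T9 = np.linalg.matrix_power(T, 9)
print(np.allclose(T9, T8))         # True: the hypothesis T^9 = T^8 holds
print(np.allclose(T, T.conj().T))  # True: T is self-adjoint
print(np.allclose(T @ T, T))       # True: T^2 = T
```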

### 8

> [!theorem]
> Give an example of an operator $T$ on a complex vector space such that $T^9 = T^8$ but $T^2 \neq T$.

*Proof*
We just need to give an operator $T$ that isn't normal, so the previous theorem does not apply; note also that $T$ cannot be invertible, since $T^9 = T^8$ with $T$ invertible would force $T = I$ and hence $T^2 = T$. A nonzero nilpotent operator works. For simplicity, let $V = \mathbb{C}^2$ and $\mathbb{F} = \mathbb{C}$, and define $T$ by its matrix with respect to the standard basis $\beta$:

$$\mathcal{M}(T,\beta) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$

Notice that:

$$\mathcal{M}(T^2,\beta) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \neq \mathcal{M}(T,\beta)$$

While:

$$\mathcal{M}(T^8,\beta) = 0 = \mathcal{M}(T^9,\beta)$$

since $T^2 = 0$ already. Thus $T^9 = T^8$ while $T^2 \neq T$. ☐
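
A quick hypothetical numpy check of this example:

```python
import numpy as np

# Hypothetical check: nilpotent, non-normal T with T^9 = T^8 = 0 but T^2 != T.
T = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.allclose(np.linalg.matrix_power(T, 9),
                  np.linalg.matrix_power(T, 8)))  # True: both are the zero matrix
print(np.allclose(T @ T, T))                      # False: T^2 = 0 but T != 0
print(np.allclose(T @ T.T, T.T @ T))              # False: T is not normal
```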

### 9

> [!theorem]
> Suppose $V$ is a complex inner product space. Then every normal operator on $V$ has a square root. Namely, every normal operator $T \in \mathcal{L}(V)$ has an operator $S \in \mathcal{L}(V)$ where $S^2 = T$.

*Proof*
Since $V$ is a complex inner product space and $T \in \mathcal{L}(V)$ is an arbitrary normal operator, by the Complex Spectral Theorem $T$ has a diagonal matrix with respect to some orthonormal basis $\beta$ of $V$. Namely:

$$\mathcal{M}(T,\beta) = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix}$$

where $\lambda_i \in \mathbb{C}$. Since every complex number has a square root, we may choose $S \in \mathcal{L}(V)$ to be the operator where:

$$\mathcal{M}(S,\beta) = \begin{bmatrix} \sqrt{\lambda_1} & 0 & \dots & 0 \\ 0 & \sqrt{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \sqrt{\lambda_n} \end{bmatrix}$$

Notice that:

$$\mathcal{M}(S^2,\beta) = \begin{bmatrix} \sqrt{\lambda_1} & 0 & \dots & 0 \\ 0 & \sqrt{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \sqrt{\lambda_n} \end{bmatrix}^2 = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix} = \mathcal{M}(T,\beta)$$

Thus $S^2 = T$. ☐
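
The same diagonalization recipe can be carried out numerically; a hypothetical numpy sketch that builds a square root of a random normal operator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sketch: build a random normal T = U diag(lam) U*, then take
# S = U diag(sqrt(lam)) U* using a complex square root of each eigenvalue.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(A)                                       # random unitary
lam = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # arbitrary complex eigenvalues

T = U @ np.diag(lam) @ U.conj().T
S = U @ np.diag(np.sqrt(lam)) @ U.conj().T  # principal complex square roots

print(np.allclose(S @ S, T))  # True: S^2 = T
```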

### 10

> [!theorem]
> Give an example of a real inner product space $V$ and $T \in \mathcal{L}(V)$ and real numbers $b, c$ with $b^2 < 4c$ such that $T^2 + bT + cI$ is not invertible.

*Proof*
As the hint suggests, we should try to have $T$ not be self-adjoint. Namely, looking at the proof of Chapter 7 - Operators on Inner Product Spaces#^94dc7b, we should try to find some $T$ such that $\braket{T^2v, v} \neq \|Tv\|^2$. Have $V = \mathbb{R}^2$ and $\mathbb{F} = \mathbb{R}$. Define $T$, where $\beta$ is the standard basis, by:

$$\mathcal{M}(T,\beta) = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$

Notice that we can find:

$$\begin{align}
\mathcal{M}(T^2 + bT + cI, \beta) &= \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}^2 + b\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} + c \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \\
&= \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} + \begin{bmatrix} 0 & -b \\ b & 0 \end{bmatrix} + \begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix} \\
&= \begin{bmatrix} -1 + c & -b \\ b & -1 + c \end{bmatrix}
\end{align}$$

Choose as an example $b = 0$ and $c = 1$ (note $b^2 = 0 < 4 = 4c$). Then:

$$\mathcal{M}(T^2 + bT + cI, \beta) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

Which is not invertible, as it's the zero map. ☐

### 11

> [!theorem]
> Every self-adjoint operator $T \in \mathcal{L}(V)$ on $V$ has a cube root, i.e. an operator $S \in \mathcal{L}(V)$ such that $S^3 = T$.

*Proof*
If $T \in \mathcal{L}(V)$ is a self-adjoint operator on $V$, then by either the Real or Complex Spectral Theorem, $T$ has a diagonal matrix with respect to some orthonormal basis of $V$, called $\beta = \{v_1, ..., v_n\}$. As a result:

$$\mathcal{M}(T,\beta) = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix}$$

Then choose $S \in \mathcal{L}(V)$ where:

$$\mathcal{M}(S,\beta) = \begin{bmatrix} \sqrt[3]{\lambda_1} & 0 & \dots & 0 \\ 0 & \sqrt[3]{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \sqrt[3]{\lambda_n} \end{bmatrix}$$

This works since, whether $V$ is real or complex, each $\sqrt[3]{\lambda_i}$ still lies in the underlying field: $T$ is self-adjoint, so every $\lambda_i$ is real and has a real cube root. Then notice that:

$$\mathcal{M}(S^3,\beta) = \begin{bmatrix} \sqrt[3]{\lambda_1} & 0 & \dots & 0 \\ 0 & \sqrt[3]{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \sqrt[3]{\lambda_n} \end{bmatrix}^3 = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix} = \mathcal{M}(T,\beta)$$

Thus $S^3 = T$ as desired. ☐

### 14

> [!theorem]
> Suppose $U$ is a finite-dimensional real vector space and $T \in \mathcal{L}(U)$. Then $U$ has a basis consisting of eigenvectors of $T$ iff there is an inner product on $U$ that makes $T$ into a self-adjoint operator.

*Proof*
$(\rightarrow)$: Suppose $U$ has a basis $u_1, ..., u_m$ consisting of eigenvectors of $T$. Thus $Tu_i = \lambda_iu_i$ for all $i \in \{1, ..., m\}$, where $\lambda_i \in \mathbb{R}$ (the eigenvalues are real since $U$ is a real vector space). Create the inner product $\braket{\cdot , \cdot}_U$ such that:

$$\braket{u_i, u_j}_U = \chi(i = j)$$

so that $\braket{u_i, u_i}_U = 1$ and $\braket{u_i, u_j}_U = 0$ for $i \neq j$; concretely, for $\vec{x} = \sum_{i=1}^m \alpha_iu_i$ and $\vec{z} = \sum_{i=1}^m \gamma_iu_i$ we set $\braket{\vec{x}, \vec{z}}_U = \sum_{i=1}^m \alpha_i\gamma_i$. We first show that this is a valid inner product.
- *Positive definiteness*: For $\vec{x} = \sum_{i=1}^m \alpha_iu_i$ we get $\braket{\vec{x}, \vec{x}}_U = \sum_{i=1}^m \alpha_i^2 \geq 0$, with equality iff every $\alpha_i = 0$, i.e. iff $\vec{x} = 0$.
- *Conjugate symmetry*: Since $\braket{\cdot, \cdot}_U$ takes only real values, conjugate symmetry reduces to ordinary symmetry, which is clear from the definition.
- *Linearity in the first slot*: Notice $\vec{x}, \vec{y}, \vec{z}$ are spanned by $u_1, ..., u_m$, say $\vec{x} = \sum_{i=1}^m \alpha_iu_i$, $\vec{y} = \sum_{i=1}^m \beta_iu_i$, and $\vec{z} = \sum_{i=1}^m \gamma_iu_i$, so for $a, b \in \mathbb{F}$:

$$\begin{align}
\braket{a\vec{x} + b\vec{y}, \vec{z}}_U &= \left\langle \sum_{i=1}^m (a\alpha_i + b\beta_i)u_i, \sum_{i=1}^m \gamma_iu_i \right\rangle_U \\
&= \sum_{i=1}^m (a\alpha_i + b\beta_i)\gamma_i \\
&= a\sum_{i=1}^m \alpha_i\gamma_i + b\sum_{i=1}^m \beta_i\gamma_i \\
&= a\braket{\vec{x}, \vec{z}}_U + b\braket{\vec{y}, \vec{z}}_U
\end{align}$$

Thus, notice that this forces $u_1, ..., u_m$ to be specifically an *orthonormal* basis of eigenvectors of $T$ for $U$. Thus, using the Real Spectral Theorem, $T$ must be self-adjoint with respect to this inner product.

$(\leftarrow)$: Suppose there exists an inner product on $U$, denoted $\braket{\cdot, \cdot}_U$, that makes $T$ a self-adjoint operator. Since $U$ is a real vector space and $T$ is self-adjoint, by the Real Spectral Theorem $U$ has an orthonormal basis $u_1, ..., u_m$ consisting of eigenvectors of $T$, which is in particular a basis of eigenvectors of $T$. ☐

### 15

> [!theorem]
> Find the matrix entry below that is covered up.
> ![Pasted image 20240430205804.png](/img/user/1%20Attachments/12%20Images/Pasted%20image%2020240430205804.png)

*Proof*
Let:

$$A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & a \end{bmatrix}$$

where $a$ denotes the covered entry. We need to determine $a$ such that the matrix is normal, that is:

$$AA^* = A^*A$$

$$AA^* = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & a \end{bmatrix}\begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & a \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & a \\ 1 & a & 1 + a^2 \end{bmatrix}$$

$$A^*A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & a \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & a \end{bmatrix} = \begin{bmatrix} 2 & 1 & a \\ 1 & 2 & 1 \\ a & 1 & 1 + a^2 \end{bmatrix}$$

Equating entries (for instance the $(1, 3)$ entries, $1$ and $a$) shows that:

$$\boxed{a = 1}$$

Thus:

$$A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix}$$

☐
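
Finally, a hypothetical numpy check that the recovered entry does make the matrix normal:

```python
import numpy as np

# Hypothetical check: with the covered entry filled in as 1, A is normal.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A A* = A* A
```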