Lecture 25 - Eigenvalues (cont.)

We continue to prove the following:

> [!theorem]
> Suppose $T \in \mathcal{L}(V)$ and there is a basis $v_1, \dots, v_n$, denoted $\beta$, for which $M(T, \beta)$ is upper triangular (U.T.). Then $T$ is invertible iff every diagonal entry in $M(T, \beta)$ is non-zero.

Proof
Consider $(\Rightarrow)$: assume $T$ is invertible. We have:

$$M(T) = \begin{bmatrix}
a_{11} & \ast & \dots & \ast \\
0 & a_{22} & \dots & \ast \\
\vdots & & \ddots & \vdots \\
0 & \dots & 0 & a_{nn}
\end{bmatrix}$$

Assume, for contradiction, that the $a_{kk}$ entry of this matrix is $0$:

$$M(T) = \begin{bmatrix}
a_{11} & \dots & a_{1k} & \dots & \ast \\
& \ddots & \vdots & & \vdots \\
0 & \dots & 0 & \dots & \ast \\
& & & \ddots & \vdots \\
0 & \dots & 0 & \dots & a_{nn}
\end{bmatrix}$$

The block below the diagonal is still the zero matrix, so columns $1$ through $k$ are supported entirely in rows $1$ through $k-1$. Since $T$ is invertible (hence injective), $T(v_1), \dots, T(v_k)$ are linearly independent, yet they all lie in $\operatorname{span}(v_1, \dots, v_{k-1})$, which can hold at most $k-1$ linearly independent vectors. This is a contradiction. See the proof at Lecture 24 - Finishing Eigenstuff#^d13947 for a more specific example.

Now consider $(\Leftarrow)$. Suppose the diagonal entries of $M(T)$ are all non-zero:

$$M(T) = \begin{bmatrix}
\lambda_1 & \ast & \dots & \ast \\
0 & \lambda_2 & \dots & \ast \\
\vdots & & \ddots & \vdots \\
0 & \dots & 0 & \lambda_n
\end{bmatrix}$$

where $\lambda_i \neq 0$ for all $1 \leq i \leq n$. We need to show $T$ is invertible. Since $T$ maps $V$ to $V$ and $V$ is finite-dimensional, we only need to prove either injectivity or surjectivity.

We will show T is surjective. The first column says that:

$$T(v_1) = \lambda_1 v_1 + 0 v_2 + \dots + 0 v_n = \lambda_1 v_1$$

Since $\lambda_1 \neq 0$, then:

$$v_1 = \frac{T(v_1)}{\lambda_1} = T\!\left(\frac{1}{\lambda_1} v_1\right)$$

So then $v_1 \in \operatorname{range}(T)$.

Repeat for the second column. Here:

$$T(v_2) = a_{12} v_1 + \lambda_2 v_2 \implies \frac{T(v_2) - a_{12} v_1}{\lambda_2} = v_2 = T\!\left(\frac{v_2}{\lambda_2}\right) - \frac{a_{12}}{\lambda_2} v_1$$

Thus $v_2 \in \operatorname{range}(T)$: $T(v_2/\lambda_2)$ is in the range by definition, and $\frac{a_{12}}{\lambda_2} v_1$ is in the range since $v_1 \in \operatorname{range}(T)$ and the range is a subspace.

Repeat this process up to $n$. Hence all $v_i \in \operatorname{range}(T)$, so $V = \operatorname{span}(v_1, \dots, v_n) \subseteq \operatorname{range}(T)$, showing $T$ is surjective.
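As a quick numerical sanity check of this theorem, here is a short numpy sketch (the matrix entries are made up, not from the lecture): an upper-triangular matrix is invertible exactly when every diagonal entry is non-zero.

```python
import numpy as np

# Numerical sanity check (numpy sketch; matrix entries are arbitrary):
# an upper-triangular matrix is invertible iff every diagonal entry
# is non-zero.
U_good = np.array([[2.0, 5.0, 1.0],
                   [0.0, -3.0, 4.0],
                   [0.0, 0.0, 7.0]])
U_bad = U_good.copy()
U_bad[1, 1] = 0.0  # zero out the a_22 diagonal entry

assert np.linalg.matrix_rank(U_good) == 3  # full rank: invertible
assert np.linalg.matrix_rank(U_bad) == 2   # rank drops: not invertible
assert np.allclose(U_good @ np.linalg.inv(U_good), np.eye(3))
```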

We're now going to use this theorem to prove a bigger theorem:

> [!theorem]
> Suppose $T \in \mathcal{L}(V)$ has an U.T. matrix representation with respect to the basis $v_1, \dots, v_n$, denoted $\beta$. Then the eigenvalues of $T$ are precisely the diagonal entries of $M(T, \beta)$.

Proof
Notice that $\lambda$ is an eigenvalue of $T$ iff $T - \lambda I$ is not invertible, via Lecture 22 (online) - Invariant Subspaces#^5fc8ac. By the previous theorem, that happens iff $M(T - \lambda I)$ has a zero on the diagonal. Note that:

$$(T - \lambda I)v_i = T(v_i) - \lambda v_i$$

And the matrices relate in the same way:

$$M(T - \lambda I) = M(T) - \lambda I_{n \times n}$$

So if:

$$M(T) = \begin{bmatrix}
\lambda_1 & & \ast \\
& \ddots & \\
0 & & \lambda_n
\end{bmatrix}$$

Then the new matrix is:

$$M(T - \lambda I) = M(T) - \lambda I_{n \times n} = \begin{bmatrix}
\lambda_1 - \lambda & & \ast \\
& \ddots & \\
0 & & \lambda_n - \lambda
\end{bmatrix}$$

Thus $M(T - \lambda I)$ has a zero on the diagonal iff $\lambda = \lambda_i$ for some $i$. Since every step was an if-and-only-if, the eigenvalues of $T$ are exactly the diagonal entries $\lambda_1, \dots, \lambda_n$.
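This theorem is easy to check numerically. Here is a small numpy illustration (the matrix is an arbitrary example, not from the lecture): the eigenvalues of an upper-triangular matrix match its diagonal.

```python
import numpy as np

# Numerical illustration (numpy sketch; matrix entries are arbitrary):
# the eigenvalues of an upper-triangular matrix are exactly its
# diagonal entries.
U = np.array([[1.0, 4.0, 2.0],
              [0.0, -1.0, 3.0],
              [0.0, 0.0, 5.0]])

eigs = np.linalg.eigvals(U)
# For this real triangular matrix the eigenvalues are real,
# so compare the sorted eigenvalues against the sorted diagonal
assert np.allclose(np.sort(eigs.real), np.sort(np.diag(U)))
```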

# An Example

Let's do an example. Let $V = \operatorname{span}(e^x + e^{-x}, e^x - e^{-x}) = \operatorname{span}(v_1, v_2)$. Note that $v_1, v_2$ is a basis $\beta$ for $V$, so then $\dim(V) = 2$. Let $D$ denote the derivative map. Let's look at:

$$M(D, \beta) = \begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}$$

since:

$$D(v_1) = D(e^x + e^{-x}) = e^x - e^{-x} = v_2$$

and:

$$D(v_2) = D(e^x - e^{-x}) = e^x + e^{-x} = v_1$$

Notice that $M(D, \beta)$ isn't upper triangular. But what if we want to change our basis vectors to make it upper triangular? Note that:

$$v_1 + v_2 = 2e^x, \qquad D(v_1 + v_2) = 2e^x$$

so this is an eigenvector for $D$. Thus, $v_\lambda = v_1 + v_2$ is an eigenvector of eigenvalue $\lambda = 1$. Let's create a new basis $\beta'$ using $v_\lambda = v_1 + v_2$ and a new vector $v_\lambda' = 4e^x - 3e^{-x}$. Then:

$$M(D, \beta') = \begin{bmatrix}
1 & 4 \\
0 & -1
\end{bmatrix}$$

The first column is trivial, and for the second column:

$$D(4e^x - 3e^{-x}) = 4e^x + 3e^{-x} = 4(2e^x) - (4e^x - 3e^{-x}) = 4v_\lambda - v_\lambda'$$

So since $M(D, \beta')$ has no zeros on the diagonal, $D$ is invertible. Furthermore, $\lambda = 1$ and $\lambda = -1$ are the eigenvalues of $D$.

# Diagonal Matrix Representations

Recall that:

> [!definition] Diagonal Matrix
> $A_{n \times n}$ is called *diagonal* if all entries off the diagonal are 0's.

For instance, $\text{diag}(1,2,3) = \begin{bmatrix}1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}$. The zero matrix is a diagonal matrix.

Note that diagonal matrices are always U.T., so all the properties we got from them apply, such as reading the eigenvalues off the diagonal. But we want to ask **when** a diagonal matrix $M(T)$ for $T \in \mathcal{L}(V)$ exists. Suppose:

$$M(T) = \begin{bmatrix}
\lambda_1 & 0 & \dots & 0 \\
0 & \lambda_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \lambda_n
\end{bmatrix}$$

Then:

$$\begin{align}
T(v_1) &= \lambda_1 v_1 \\
T(v_2) &= \lambda_2 v_2 \\
&\vdots \\
T(v_n) &= \lambda_n v_n
\end{align}$$

so all $v_i$ are eigenvectors with associated eigenvalues $\lambda_i$. This is actually an *eigenbasis*, namely just a basis consisting of eigenvectors. But this works both ways: if you started with the eigenvectors, you can construct the matrix based on the eigenbasis. This gives rise to defining an eigenspace:

> [!definition] Eigenspace
> Given $T \in \mathcal{L}(V)$ and $\lambda$ an eigenvalue of $T$, then:
> $$E(\lambda, T) = \text{null}(T - \lambda I)$$
> is the set of all eigenvectors corresponding to $\lambda$ along with $0$. This is the eigenspace associated with $\lambda$.
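To make the definition concrete, here is a short numpy sketch (the matrix is an assumed example, not from the lecture) that computes an eigenspace as the null space of $T - \lambda I$:

```python
import numpy as np

# Minimal numerical sketch (example matrix is assumed): computing
# E(lambda, T) = null(T - lambda*I) for a concrete matrix via the SVD.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])  # upper triangular; eigenvalue 2 appears twice
lam = 2.0

M = A - lam * np.eye(2)      # the operator T - lambda*I as a matrix
_, s, vh = np.linalg.svd(M)  # null space = rows of Vh with ~0 singular values
null_basis = vh[s < 1e-10]

# E(2, A) is only 1-dimensional, even though 2 appears twice on the diagonal
assert null_basis.shape[0] == 1
assert np.allclose(M @ null_basis.T, 0)
```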

Some facts about eigenspaces: