HW 6 - Finishing UT Matrices, Eigenspaces and Diagonal Matrices
5.B: Eigenvalues/Vectors and UT Matrices
14
We'll give an example of an operator whose matrix with respect to some basis contains only 0's on the diagonal, while the operator is still invertible. By the question's own admission, we should consider non-upper-triangular (non-U.T.) matrices for this.
Consider:
Here all the diagonal entries are 0. For the sake of simplicity, let , and let be the standard basis of . Then, since is as described above:
So define for as:
notice that is as described above, but the inverse matrix I claim is:
Note that I can show this because:
and I can show that:
since, given this claim, I can verify that:
and:
so then is invertible.
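For concreteness, here is one standard example of such an operator (a sketch of my own, possibly differing from the matrix intended above): the swap operator on $\mathbb{F}^2$.
$$\mathcal{M}(T) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad T(x, y) = (y, x).$$
Both diagonal entries are $0$, yet $T^{-1} = T$ since $T(T(x, y)) = T(y, x) = (x, y)$, so $T$ is invertible.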
15
For simplicity let and let be the standard basis. Consider the matrix:
Here the entries on the diagonal are all non-zero, but the operator is not invertible, as we'll show. It suffices to show that it isn't surjective or isn't injective (just one); we'll do injectivity. To show it's not injective, we need to find two vectors such that but . Notice first that:
Hence, choose and . By the computation above, notice that and:
but , showing that isn't invertible.
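As a concrete sanity check (again my own example, not necessarily the matrix intended above), one operator on $\mathbb{F}^2$ with all nonzero diagonal entries that fails to be injective is:
$$\mathcal{M}(T) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad T(1, -1) = (0, 0) = T(0, 0),$$
so two distinct vectors have the same image and $T$ is not invertible, even though every diagonal entry is nonzero.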
16
Operators on complex vector spaces have an eigenvalue
Every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue.
We'll prove the above result using the following lemma:
Lemma
If $V$ and $W$ are finite-dimensional vector spaces with $\dim V > \dim W$, then no linear map from $V$ to $W$ is injective.
Proof
Let be arbitrary, where is:
finite-dimensional: ;
non-zero: ;
a complex vector space
Consider the transformation , where is defined by:
We should show that is itself linear. Let be arbitrary. Notice that for additivity:
and similarly for homogeneity:
thus is linear. Notice that , so we can use the aforementioned lemma to conclude that isn't injective. Thus, there are vectors such that:
This gives a new polynomial , where:
where notice that so . Namely, notice that is a polynomial of degree or less. Denote . Then:
But then:
as we defined above. Therefore:
But then clearly is an eigenvector for at least one where . So then has some eigenvalue associated with the transformation , completing the proof.
☐
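For reference, here is the key chain of equalities in symbols, a sketch assuming the map above is the usual $p \mapsto p(T)v$ for a fixed nonzero $v \in V$ (the displayed equations were omitted above): since $\dim \mathcal{P}_n(\mathbb{C}) = n + 1 > n = \dim V$, the map isn't injective, so some nonzero polynomial $p$ satisfies $p(T)v = 0$. Factoring over $\mathbb{C}$,
$$0 = p(T)v = c\,(T - \lambda_1 I)\cdots(T - \lambda_m I)\,v, \qquad c \neq 0,$$
and since $v \neq 0$, at least one factor $T - \lambda_j I$ is not injective, i.e. $\lambda_j$ is an eigenvalue of $T$.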
20
Theorem
Suppose $V$ is a finite-dimensional complex vector space and $T \in \mathcal{L}(V)$. Then $T$ has an invariant subspace of dimension $k$ for each $k = 1, \ldots, \dim V$.
Proof
For simplicity, let . Fix as arbitrary. Let be the basis vectors that span .
Since is a finite-dimensional complex vector space and , has an upper-triangular matrix with respect to some basis of , which we take to be where . As such, the matrix of with respect to is U.T.
But this is equivalent to saying that is invariant under for each . As such, for whichever we've fixed, set:
By the reasoning above, is invariant under because of our conditions on , so we can always find such a subspace for each we've chosen. Furthermore, this subspace is clearly -dimensional, since its spanning vectors are linearly independent and there are exactly of them.
☐
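In symbols (a sketch assuming the U.T. basis above is written $v_1, \ldots, v_n$): the invariant subspace of dimension $k$ can be taken to be
$$U_k = \operatorname{span}(v_1, \ldots, v_k), \qquad T v_j \in \operatorname{span}(v_1, \ldots, v_j) \subseteq U_k \quad \text{for } j \le k,$$
so $T u \in U_k$ for every $u \in U_k$, and $\dim U_k = k$ since $v_1, \ldots, v_k$ are linearly independent.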
5.C: Eigenspaces and Diagonal Matrices
1
Theorem
Suppose $T \in \mathcal{L}(V)$ is diagonalizable. Then $V = \operatorname{null} T \oplus \operatorname{range} T$.
Proof
Let be diagonalizable. Then has a basis consisting of eigenvectors of , which we denote (where is implicitly defined). Consider each for any . Since each is an eigenvector of , we always get:
Now, consider the cases where or not. If , then , so . Note, though, that since is an eigenvector, the only vector with would be , since:
But that would contradict .
If instead , then , so clearly . Notice, though, that both as well as , since it's an eigenvector.
Since is a basis and is linear, any basis vector in the null space cannot be in the range of , and vice versa; hence the two subspaces intersect trivially. By the Fundamental Theorem of Linear Maps (FTOLM), , and since these are subspaces with trivial intersection whose sum makes up , the sum must be direct.
☐
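A small concrete illustration (my own, not from the text): on $\mathbb{F}^3$, let $T$ have the diagonal matrix below with respect to the standard basis.
$$\mathcal{M}(T) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \qquad \operatorname{null} T = \operatorname{span}(e_1, e_2), \qquad \operatorname{range} T = \operatorname{span}(e_3),$$
and indeed $\mathbb{F}^3 = \operatorname{null} T \oplus \operatorname{range} T$, with the eigenvectors of eigenvalue $0$ spanning the null space and the rest spanning the range.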
3
Theorem
Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. The following are equivalent:
1. $V = \operatorname{null} T \oplus \operatorname{range} T$;
2. $V = \operatorname{null} T + \operatorname{range} T$;
3. $\operatorname{null} T \cap \operatorname{range} T = \{0\}$.
Proof
It's clear that if (1) is true then (2) and (3) are both true, via the definition of a direct sum as well as Chapter 1 - Vector Spaces#^038942.
Suppose (2). Notice that if we prove (3), then (1) comes for free. By the FTOLM:
But since are subspaces of , then we also have it that:
Thus, subtracting the two equations, it's clear that , so , proving (3) and hence (1) as well.
Suppose (3). Notice that if we prove (2) then (1) comes for free. Since and are subspaces of then:
But by our supposition, we know that the dimension of the negative (intersection) term is 0, so:
As such, we can represent any vector using basis vectors, where is the dimension of the null space and is the dimension of the range (of ). Thus:
where and , so since was arbitrary, it follows that is at least an ordinary (not necessarily direct) sum of the null space and range of .
☐
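For reference, the dimension bookkeeping used in both directions is (a sketch filling in the displayed formulas assumed above):
$$\dim V = \dim \operatorname{null} T + \dim \operatorname{range} T \quad \text{(FTOLM)},$$
$$\dim(\operatorname{null} T + \operatorname{range} T) = \dim \operatorname{null} T + \dim \operatorname{range} T - \dim(\operatorname{null} T \cap \operatorname{range} T).$$
If (2) holds, the left side of the second equation is $\dim V$, and subtracting the first equation gives $\dim(\operatorname{null} T \cap \operatorname{range} T) = 0$, which is (3); conversely, if (3) holds, the second equation shows $\dim(\operatorname{null} T + \operatorname{range} T) = \dim V$, which gives (2).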
6
Theorem
Suppose $V$ is finite-dimensional, $T \in \mathcal{L}(V)$ has $\dim V$ distinct eigenvalues, and $S \in \mathcal{L}(V)$ has the same eigenvectors as $T$ (not necessarily with the same eigenvalues). Then $ST = TS$.
Proof
Assume the hypotheses of the theorem. For clarity, there are distinct eigenvalues of , and thus has corresponding eigenvectors . Furthermore, since has the same eigenvectors as , then are eigenvectors of . We may have different eigenvalues for , so these have corresponding eigenvalues .
Note here that since , these vectors form a basis of : there are of them, and eigenvectors corresponding to distinct eigenvalues are linearly independent. And note that for each we have , and furthermore .
Consider any vector . It can be written as a linear combination of these basis vectors:
Thus notice that:
and likewise:
Thus since was arbitrary it follows that .
☐
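In symbols (a sketch with generic names, assuming the common eigenvectors above are $v_1, \ldots, v_n$ with $T v_j = \lambda_j v_j$ and $S v_j = \mu_j v_j$): for each basis vector,
$$(ST)v_j = S(\lambda_j v_j) = \lambda_j \mu_j v_j = \mu_j \lambda_j v_j = T(\mu_j v_j) = (TS)v_j,$$
and since two linear maps that agree on a basis are equal, $ST = TS$.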
7
Theorem
Suppose $T \in \mathcal{L}(V)$ has a diagonal matrix $A$ with respect to some basis of $V$, and that $\lambda \in \mathbb{F}$. Then $\lambda$ appears on the diagonal of $A$ precisely $\dim E(\lambda, T)$ times.
Proof
Since is a diagonal matrix, then
where may or may not be distinct. Clearly since is then:
where we take and consider just the distinct eigenvalues.
Let's prove this via induction over , where we consider the arbitrary . For the base case, consider . Then:
This implies that is the only distinct eigenvalue from , so then:
and since appears times (since is ), then that's the same as as we needed.
Now suppose that the theorem holds for any . Consider the -th case. Here we have:
Notice that since the case is true, if then by the inductive hypothesis we have on the diagonal times, as expected. If instead , then we know that:
Clearly for , so by the inductive hypothesis, for each with , we get appearing times on the diagonal. As such, since a total of appearances occur across all the eigenvalues, subtracting the appearances accounted for by the others gives the equality above, showing that appears times, as expected.
Thus, via the principle of mathematical induction, the theorem holds.
☐
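A quick concrete check of the statement (my own example): on $\mathbb{F}^3$, take $T$ diagonal with respect to a basis $v_1, v_2, v_3$:
$$\mathcal{M}(T) = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 7 \end{pmatrix}, \qquad E(6, T) = \operatorname{span}(v_1, v_2), \qquad E(7, T) = \operatorname{span}(v_3),$$
so $6$ appears on the diagonal exactly $\dim E(6, T) = 2$ times and $7$ appears exactly $\dim E(7, T) = 1$ time, as the theorem predicts.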
8
Theorem
Suppose $T \in \mathcal{L}(\mathbb{F}^5)$ and $\dim E(8, T) = 4$. Then $T - 2I$ or $T - 6I$ is invertible.
Proof
Notice that in this case, and since we have the dimension above then:
Notice that , since if that were not the case then we would have , which is a contradiction, as itself isn't an eigenvector. As such, we must have:
for some eigenvalue . As such, we have one other eigenvalue for , with some eigenvector .
Now assume for contradiction that and are both not invertible. That means that both are eigenvalues of . But this leads to a contradiction, since in that case we would instead have:
but since none of these dimensions can be 0, the single remaining dimension cannot be shared across all of the eigenspaces above. Hence we have a contradiction, so the opposite of our assumption is true: or must be invertible.
☐
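The dimension count behind the contradiction, in symbols (a sketch assuming the numbers $8$, $2$, $6$ from the exercise as stated above): if both $2$ and $6$ were eigenvalues of $T$, then since eigenspaces for distinct eigenvalues form a direct sum inside $\mathbb{F}^5$,
$$\dim E(8, T) + \dim E(2, T) + \dim E(6, T) \ge 4 + 1 + 1 = 6 > 5 = \dim \mathbb{F}^5,$$
which is impossible, so at most one of $2, 6$ is an eigenvalue, and the corresponding operator $T - 2I$ or $T - 6I$ is invertible.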
10
Theorem
Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. Let $\lambda_1, \ldots, \lambda_m$ denote the distinct nonzero eigenvalues of $T$. Then:
$$\dim E(\lambda_1, T) + \cdots + \dim E(\lambda_m, T) \le \dim \operatorname{range} T.$$
Proof
First, we'll prove a nice lemma, that , as this will be useful later on. Let be arbitrary. Since (see the theorem statement), we can say that , so by definition . This holds no matter what is, so we've proved our lemma.
Now consider:
which is true via a quick inductive proof, using our lemma as the base case. Notice then that the dimension formula for direct sums gives the left-hand side, and the fact that this direct sum is a subspace of the range gives the inequality:
☐
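In symbols, the final step is (a sketch filling in the omitted display): by the lemma each $E(\lambda_j, T) \subseteq \operatorname{range} T$ since $\lambda_j \neq 0$, and eigenspaces for distinct eigenvalues form a direct sum, so
$$\dim E(\lambda_1, T) + \cdots + \dim E(\lambda_m, T) = \dim\bigl(E(\lambda_1, T) \oplus \cdots \oplus E(\lambda_m, T)\bigr) \le \dim \operatorname{range} T.$$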
11
Theorem
is diagonalizable since the matrix with respect to the vectors is:
where:
Proof
Notice that:
and:
thus, if is the basis consisting of these new vectors, then:
as expected.
☐
12
Theorem
Suppose $R, T \in \mathcal{L}(\mathbb{F}^3)$ each have $2, 6, 7$ as eigenvalues. Then there is an invertible operator $S \in \mathcal{L}(\mathbb{F}^3)$ such that $R = S^{-1} T S$.
Proof
Notice that since , if is the eigenbasis with respect to and is the eigenbasis with respect to , then we know that:
and similarly:
Choose such that for . Namely:
Notice then that exists since:
and furthermore, by the similar properties for our bases.
Since the 's form a basis for our vector space, for any we have:
Thus:
Thus .
☐
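In symbols (a sketch assuming, as in the statement above, that $R v_j = \lambda_j v_j$ and $T w_j = \lambda_j w_j$ for the shared eigenvalues $\lambda_1 = 2$, $\lambda_2 = 6$, $\lambda_3 = 7$, and that $S$ is defined by $S v_j = w_j$):
$$(S^{-1} T S) v_j = S^{-1} T w_j = S^{-1}(\lambda_j w_j) = \lambda_j S^{-1} w_j = \lambda_j v_j = R v_j,$$
so $S^{-1} T S$ and $R$ agree on a basis of $\mathbb{F}^3$ and are therefore equal, i.e. $R = S^{-1} T S$.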
14
Theorem
Find $T \in \mathcal{L}(\mathbb{C}^3)$ such that $6$ and $7$ are eigenvalues of $T$ while $T$ does not have a diagonal matrix with respect to any basis of $\mathbb{C}^3$.
Proof
The particular basis we work with respect to doesn't matter, only that there is some basis, since we have .
We'll want to choose so that the sum of the dimensions of the eigenspaces doesn't add up to 3. We can just force one eigenvector to occur per eigenvalue. Hence, we have:
Notice here we have is an eigenvector for and similarly we have with eigenvector . Furthermore, notice that if then:
Notice that this is what we want! That's because we are always forced to have a U.T. matrix (and hence the diagonal tells us which eigenvalues there are), so we have to use or on the diagonal, and any eigenvector must have other vector components, since otherwise would be an eigenvector of eigenvalue or .
We can show that if then and then:
This creates the equations:
Notice if then (3) is satisfied ( is arbitrary), and from (2) we have , and from (1) we have . Thus, any multiple of ( is now arbitrary) is an eigenvector with that eigenvalue.
A similar process shows that if then any multiple of is an eigenvector.
If , then (2, 3) force us to have . Hence (1) becomes:
where since , we have , so we'd get the zero vector. Hence, there are no other eigenvalues or eigenvectors.
As such, then . Their sum is less than the dimension of , hence must not be diagonalizable.
☐
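For concreteness, one standard operator satisfying the construction described above (possibly differing from the exact matrix intended) is:
$$T(z_1, z_2, z_3) = (6z_1 + z_2,\; 6z_2,\; 7z_3), \qquad \mathcal{M}(T) = \begin{pmatrix} 6 & 1 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 7 \end{pmatrix}.$$
Its only eigenvalues are $6$ and $7$, with $E(6, T) = \operatorname{span}(e_1)$ and $E(7, T) = \operatorname{span}(e_3)$, so the eigenspace dimensions sum to $2 < 3$ and $T$ has no diagonal matrix with respect to any basis of $\mathbb{C}^3$.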
15
Theorem
Suppose $T \in \mathcal{L}(\mathbb{C}^3)$ is such that $6$ and $7$ are eigenvalues of $T$ while $T$ does not have a diagonal matrix with respect to any basis of $\mathbb{C}^3$. Then there exists $(x, y, z) \in \mathbb{C}^3$ such that:
$$T(x, y, z) = (17 + 8x,\; \sqrt{5} + 8y,\; 2\pi + 8z).$$
Proof
We claim such a vector exists. Notice that if $8$ were an eigenvalue of $T$, then $T$ would have 3 distinct eigenvalues, and since $\dim \mathbb{C}^3 = 3$, that would imply that $T$ is diagonalizable, which is a contradiction. Thus, $8$ must not be an eigenvalue of $T$.
As a result, $T - 8I$ must be surjective (if it weren't, then since injectivity and surjectivity are equivalent for operators on a finite-dimensional space, $8$ would be an eigenvalue of $T$, which is a contradiction), so there is some $(x, y, z) \in \mathbb{C}^3$ such that $(T - 8I)(x, y, z) = (17, \sqrt{5}, 2\pi)$.
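Rearranging (a sketch, assuming the shift $8$ and target vector $(17, \sqrt{5}, 2\pi)$ from the statement above):
$$(T - 8I)(x, y, z) = (17, \sqrt{5}, 2\pi) \iff T(x, y, z) = (17 + 8x,\; \sqrt{5} + 8y,\; 2\pi + 8z),$$
which is exactly the required vector, completing the argument.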