Suppose $V_1, \dots, V_m$ are vector spaces over $\mathbf{F}$. Then $V_1 \times \dots \times V_m$ is a vector space over $\mathbf{F}$.
Dimension of a product is the sum of dimensions
Suppose $V_1, \dots, V_m$ are finite-dimensional vector spaces. Then $V_1 \times \dots \times V_m$ is finite-dimensional and:
$\dim(V_1 \times \dots \times V_m) = \dim V_1 + \dots + \dim V_m$
Products and Direct Sums
Products and direct sums
Suppose that $U_1, \dots, U_m$ are subspaces of $V$. Define a linear map $\Gamma : U_1 \times \dots \times U_m \to U_1 + \dots + U_m$ by:
$\Gamma(u_1, \dots, u_m) = u_1 + \dots + u_m$
Then $U_1 + \dots + U_m$ is a direct sum iff $\Gamma$ is injective/invertible (these coincide here because $\Gamma$ is always surjective onto the sum)
A sum is a direct sum iff the dimensions add up
Suppose $V$ is finite-dimensional and $U_1, \dots, U_m$ are subspaces of $V$. Then $U_1 + \dots + U_m$ is a direct sum iff:
$\dim(U_1 + \dots + U_m) = \dim U_1 + \dots + \dim U_m$
Quotients of Vector Spaces
Suppose $v \in V$ and $U$ is a subspace of $V$. Then $v + U$ is the subset of $V$ defined by:
$v + U = \{v + u : u \in U\}$
affine subset, parallel
An affine subset of $V$ is a subset of $V$ of the form $v + U$ for some $v \in V$ and some subspace $U$ of $V$
For $v \in V$ and $U$ being a subspace of $V$, the affine subset $v + U$ is said to be parallel to $U$.
quotient space, $V/U$
Suppose $U$ is a subspace of $V$. Then the quotient space $V/U$ is the set of all affine subsets of $V$ parallel to $U$. In other words:
$V/U = \{v + U : v \in V\}$
Two affine subsets parallel to $U$ are equal or disjoint
Suppose $U$ is a subspace of $V$ and $v, w \in V$. Then the following are equivalent:
$v - w \in U$
$v + U = w + U$
$(v + U) \cap (w + U) \neq \varnothing$
addition and scalar multiplication on $V/U$
Suppose $U$ is a subspace of $V$. Then addition and scalar multiplication are defined on $V/U$ by:
$(v + U) + (w + U) = (v + w) + U$
$\lambda(v + U) = (\lambda v) + U$
for $v, w \in V$ and $\lambda \in \mathbf{F}$.
Quotient space is a vector space
Suppose $U$ is a subspace of $V$. Then $V/U$, with the operations of addition and scalar multiplication as defined above, is a vector space.
quotient map, $\pi$
Suppose $U$ is a subspace of $V$. The quotient map $\pi$ is the linear map $\pi : V \to V/U$ defined by:
$\pi(v) = v + U$
for $v \in V$.
Dimension of a quotient space
Suppose $V$ is finite-dimensional and $U$ is a subspace of $V$. Then:
$\dim V/U = \dim V - \dim U$
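For example, if $U = \{(x, 0, 0) \in \mathbf{R}^3 : x \in \mathbf{R}\}$, then $\dim \mathbf{R}^3/U = 3 - 1 = 2$; the elements of $\mathbf{R}^3/U$ are the lines in $\mathbf{R}^3$ parallel to the $x$-axis.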
Suppose $T \in \mathcal{L}(V, W)$. Define $\tilde{T} : V/(\operatorname{null} T) \to W$ by:
$\tilde{T}(v + \operatorname{null} T) = Tv$
Nullspace and range of $\tilde{T}$
Suppose $T \in \mathcal{L}(V, W)$. Then:
$\tilde{T}$ is a linear map from $V/(\operatorname{null} T)$ to $W$
$\tilde{T}$ is injective
$\operatorname{range} \tilde{T} = \operatorname{range} T$, so $V/(\operatorname{null} T)$ is isomorphic to $\operatorname{range} T$.
3.F: Duality
The Dual Space and the Dual Map
linear functional
A linear functional on a vector space $V$ is an element of $\mathcal{L}(V, \mathbf{F})$.
dual space, $V'$
The dual space of $V$, denoted $V'$, is the vector space of all linear functionals on $V$. In other words, $V' = \mathcal{L}(V, \mathbf{F})$
Suppose $V$ is finite-dimensional. Then $V'$ is also finite-dimensional and $\dim V' = \dim V$.
dual basis
If $v_1, \dots, v_n$ is a basis of $V$ then the dual basis of $v_1, \dots, v_n$ is the list $\varphi_1, \dots, \varphi_n$ of elements of $V'$ where each $\varphi_j$ is the linear functional on $V$ such that:
$\varphi_j(v_k) = \begin{cases} 1 & \text{if } k = j \\ 0 & \text{if } k \neq j \end{cases}$
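For example, if $e_1, \dots, e_n$ is the standard basis of $\mathbf{F}^n$, the dual basis is the list $\varphi_1, \dots, \varphi_n$ where $\varphi_j(x_1, \dots, x_n) = x_j$, i.e. $\varphi_j$ picks out the $j$th coordinate.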
Dual basis is a basis of the dual space
Suppose $V$ is finite-dimensional. Then the dual basis of a basis of $V$ is a basis of $V'$.
dual map, $T'$
If $T \in \mathcal{L}(V, W)$ then the dual map of $T$ is the linear map $T' \in \mathcal{L}(W', V')$ defined by $T'(\varphi) = \varphi \circ T$ for $\varphi \in W'$.
Algebraic properties of dual maps
$(S + T)' = S' + T'$ for all $S, T \in \mathcal{L}(V, W)$
$(\lambda T)' = \lambda T'$ for all $\lambda \in \mathbf{F}$ and $T \in \mathcal{L}(V, W)$
$(ST)' = T'S'$ for all $T \in \mathcal{L}(U, V)$ and $S \in \mathcal{L}(V, W)$
The Null Space and Range of the Dual of a Linear Map
annihilator, $U^0$
For $U \subseteq V$ the annihilator of $U$, denoted $U^0$, is defined by:
$U^0 = \{\varphi \in V' : \varphi(u) = 0 \text{ for all } u \in U\}$
Dimension of the annihilator
Suppose $V$ is finite-dimensional and $U$ is a subspace of $V$. Then:
$\dim U + \dim U^0 = \dim V$
The range of $T'$
Suppose $V, W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then:
$\operatorname{range} T' = (\operatorname{null} T)^0$
$T$ surjective is equivalent to $T'$ injective
Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $T$ is surjective iff $T'$ is injective.
$T$ is injective is equivalent to $T'$ is surjective
Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $T$ is injective iff $T'$ is surjective.
The Matrix of the Dual of a Linear Map
the matrix of $T'$ is the transpose of the matrix of $T$
Suppose $T \in \mathcal{L}(V, W)$. Then $\mathcal{M}(T') = (\mathcal{M}(T))^t$.
It's important to note here that $\mathcal{M}(T)$ is computed with respect to bases of $V$ and $W$, while $\mathcal{M}(T')$ is computed with respect to the corresponding dual bases of $W'$ and $V'$.
The Rank of a Matrix
rank
The rank of a matrix $A \in \mathbf{F}^{m,n}$ is the column rank of $A$.
row rank, column rank
Suppose $A$ is an $m$-by-$n$ matrix with entries in $\mathbf{F}$.
The row rank of $A$ is the dimension of the span of the rows of $A$ in $\mathbf{F}^{1,n}$.
The column rank of $A$ is the dimension of the span of the columns of $A$ in $\mathbf{F}^{m,1}$.
Chapter 6: Inner Product Spaces
Riesz Representation Theorem
Suppose $V$ is finite-dimensional and $\varphi$ is a linear functional on $V$. Then there is a unique vector $u \in V$ such that:
$\varphi(v) = \langle v, u \rangle$
for every $v \in V$.
Calculating $u$ for Riesz.
To calculate the unique $u$ such that $\varphi(v) = \langle v, u \rangle$, pick any orthonormal basis $e_1, \dots, e_n$ of $V$; then:
$u = \overline{\varphi(e_1)}\, e_1 + \dots + \overline{\varphi(e_n)}\, e_n$
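A minimal numeric sketch of this recipe (the functional `phi` is a made-up example, and the standard basis of $\mathbf{C}^3$ serves as the orthonormal basis):

```python
import numpy as np

# A sample linear functional on C^3: phi(v) = 2 v_1 + i v_2 - v_3
def phi(v):
    return 2*v[0] + 1j*v[1] - v[2]

# The standard basis of C^3 is orthonormal for the usual inner product
basis = np.eye(3, dtype=complex)

# Riesz vector: u = sum_j conj(phi(e_j)) e_j
u = sum(np.conj(phi(e)) * e for e in basis)

# Check phi(v) = <v, u> on a random vector; np.vdot(u, v) = sum_j v_j conj(u_j)
v = np.random.randn(3) + 1j * np.random.randn(3)
assert np.isclose(phi(v), np.vdot(u, v))
```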
Chapter 7: Operators on Inner Product Spaces
7.A: Self-Adjoint and Normal Operators
Adjoint Operators
adjoint, $T^*$
Suppose $T \in \mathcal{L}(V, W)$. The adjoint of $T$ is the function $T^* : W \to V$ such that:
$\langle Tv, w \rangle = \langle v, T^*w \rangle$
for all $v \in V$ and $w \in W$.
The adjoint is a linear map
If $T \in \mathcal{L}(V, W)$ then $T^* \in \mathcal{L}(W, V)$.
Properties of the adjoint
For all $S, T \in \mathcal{L}(V, W)$ and $\lambda \in \mathbf{F}$:
$(S + T)^* = S^* + T^*$
$(\lambda T)^* = \bar{\lambda} T^*$
$(T^*)^* = T$
$I^* = I$, where $I$ is the identity operator on $V$.
If instead $T \in \mathcal{L}(V, W)$ and $S \in \mathcal{L}(W, U)$, then $(ST)^* = T^*S^*$. Here $U$ is an inner product space over $\mathbf{F}$.
The "flippy flippy" theorem:
Null space and range of $T^*$
Suppose $T \in \mathcal{L}(V, W)$. Then:
$\operatorname{null} T^* = (\operatorname{range} T)^\perp$
$\operatorname{range} T^* = (\operatorname{null} T)^\perp$
$\operatorname{null} T = (\operatorname{range} T^*)^\perp$
$\operatorname{range} T = (\operatorname{null} T^*)^\perp$
conjugate transpose
The conjugate transpose of an $m$-by-$n$ matrix $A$ is the $n$-by-$m$ matrix $A^*$ obtained by interchanging the rows and columns and then taking the complex conjugate of each entry.
The matrix of $T^*$
Let $T \in \mathcal{L}(V, W)$. Suppose $e_1, \dots, e_n$ is an orthonormal basis of $V$ and $f_1, \dots, f_m$ is an orthonormal basis of $W$. Then:
$\mathcal{M}(T^*, (f_1, \dots, f_m), (e_1, \dots, e_n))$
is the conjugate transpose of:
$\mathcal{M}(T, (e_1, \dots, e_n), (f_1, \dots, f_m))$
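A quick numeric check of this fact, assuming the standard orthonormal bases of $\mathbf{C}^3$ and $\mathbf{C}^2$ and a made-up matrix:

```python
import numpy as np

# Matrix of T : C^3 -> C^2 with respect to the standard orthonormal bases
A = np.array([[1+2j, 0, 3],
              [4j,   5, 6]])

A_star = A.conj().T  # conjugate transpose = matrix of T*

v = np.random.randn(3) + 1j*np.random.randn(3)
w = np.random.randn(2) + 1j*np.random.randn(2)

# <Tv, w> = <v, T*w>; here <x, y> = np.vdot(y, x) = sum_j x_j conj(y_j)
assert np.isclose(np.vdot(w, A @ v), np.vdot(A_star @ w, v))
```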
Self-Adjoint Operators
self-adjoint
An operator $T \in \mathcal{L}(V)$ is called self-adjoint if $T = T^*$. In other words, $T$ is self-adjoint iff:
$\langle Tv, w \rangle = \langle v, Tw \rangle$
for all $v, w \in V$
Eigenvalues of self-adjoint operators are real
Every eigenvalue of a self-adjoint operator is real
Over $\mathbf{C}$, $\langle Tv, v \rangle$ is real for all $v$ only for self-adjoint operators.
Suppose $V$ is a complex inner product space and $T \in \mathcal{L}(V)$. Then $T$ is self-adjoint iff:
$\langle Tv, v \rangle \in \mathbf{R}$
for every $v \in V$.
Normal Operators
normal
An operator on an inner product space is called normal if it commutes with its adjoint.
$T \in \mathcal{L}(V)$ is normal if $TT^* = T^*T$.
$T$ is normal iff $\|Tv\| = \|T^*v\|$ for all $v$
An operator $T \in \mathcal{L}(V)$ is normal iff:
$\|Tv\| = \|T^*v\|$
for all $v \in V$.
For $T$ normal, $T, T^*$ have the same eigenvectors
Suppose $T \in \mathcal{L}(V)$ is normal and $v \in V$ is an eigenvector of $T$ with eigenvalue $\lambda$. Then $v$ is also an eigenvector of $T^*$ with eigenvalue $\bar{\lambda}$.
Orthogonal eigenvectors for normal operators
Suppose $T \in \mathcal{L}(V)$ is normal. Then eigenvectors of $T$ corresponding to distinct eigenvalues are orthogonal.
Note that these vectors don't have to be unit length, but we'd probably want them to be unit vectors.
Theorem
Suppose $T \in \mathcal{L}(V)$ is normal. Then:
$\operatorname{null} T = \operatorname{null} T^*$ and $\operatorname{range} T = \operatorname{range} T^*$
Theorem
Suppose $T \in \mathcal{L}(V)$ is normal. Then:
$T - \lambda I$ is normal
for all $\lambda \in \mathbf{F}$.
7.B: Spectral Theorem
Complex Spectral Theorem
Suppose $\mathbf{F} = \mathbf{C}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:
$T$ is normal
$V$ has an orthonormal basis consisting of eigenvectors of $T$.
$T$ has a diagonal matrix with respect to some orthonormal basis of $V$.
Real Spectral Theorem
Suppose $\mathbf{F} = \mathbf{R}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:
$T$ is self-adjoint
$V$ has an orthonormal basis consisting of eigenvectors of $T$.
$T$ has a diagonal matrix with respect to some orthonormal basis of $V$
Self-adjoint operators and invariant subspaces
Suppose $T \in \mathcal{L}(V)$ is self-adjoint and $U$ is a subspace of $V$ that is invariant under $T$. Then:
$U^\perp$ is invariant under $T$.
$T|_U \in \mathcal{L}(U)$ is self-adjoint
$T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.
7.C: Positive Operators and Isometries
Positive Operators
positive operator
An operator $T \in \mathcal{L}(V)$ is called positive if $T$ is self-adjoint and:
$\langle Tv, v \rangle \geq 0$
for all $v \in V$.
Positive operators are required to be self-adjoint. This requirement is only really needed for real vector spaces, though: it can be dropped for complex $V$, since there $\langle Tv, v \rangle \geq 0$ for all $v$ already forces $T$ to be self-adjoint.
square root
An operator $R$ is called a square root of an operator $T$ if $R^2 = T$.
Characterization of positive operators
Let $T \in \mathcal{L}(V)$. Then the following are equivalent:
$T$ is positive
$T$ is self-adjoint and all eigenvalues of $T$ are nonnegative
$T$ has a positive square root
$T$ has a self-adjoint square root
$\exists R \in \mathcal{L}(V)$ s.t. $T = R^*R$.
Each positive operator has only one positive square root
Every positive operator on $V$ has a unique positive square root.
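A sketch of how the positive square root can be computed via the spectral theorem; the matrix `T` below is a made-up positive example of the form $R^*R$:

```python
import numpy as np

# Build a positive (semidefinite) matrix as T = R^T R
R = np.random.randn(4, 4)
T = R.T @ R

# Spectral decomposition: T = Q diag(lam) Q^T with lam >= 0
lam, Q = np.linalg.eigh(T)

# Unique positive square root: same eigenvectors, square-rooted eigenvalues
sqrtT = Q @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ Q.T

assert np.allclose(sqrtT @ sqrtT, T)
```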
Isometry
isometry
An operator $S \in \mathcal{L}(V)$ is called an isometry if:
$\|Sv\| = \|v\|$
for all $v \in V$.
In other words, an operator is an isometry if it preserves norms.
Characterization of isometries
Suppose $S \in \mathcal{L}(V)$. Then the following are equivalent:
$S$ is an isometry
$\langle Su, Sv \rangle = \langle u, v \rangle$ for all $u, v \in V$
$Se_1, \dots, Se_n$ is orthonormal for every orthonormal list of vectors $e_1, \dots, e_n$ in $V$
$\exists$ an orthonormal basis $e_1, \dots, e_n$ of $V$ such that $Se_1, \dots, Se_n$ is orthonormal.
$S^*$ is an isometry
$S$ is invertible and $S^{-1} = S^*$.
Notice that an isometry $S$ must be normal, since $S^*S = SS^* = I$. Thus using the spectral theorem will work.
Description of isometries when $\mathbf{F} = \mathbf{C}$.
Suppose $V$ is a complex inner product space and $S \in \mathcal{L}(V)$. Then the following are equivalent:
$S$ is an isometry
There is an orthonormal basis of $V$ consisting of eigenvectors of $S$ whose corresponding eigenvalues all have absolute value 1.
7.D: Polar Decomposition and SVD
Polar Decomposition
If $T$ is a positive operator then $\sqrt{T}$ denotes the unique positive square root of $T$.
Polar Decomposition
Suppose $T \in \mathcal{L}(V)$. Then $\exists$ an isometry $S \in \mathcal{L}(V)$ such that:
$T = S\sqrt{T^*T}$
Singular Value Decomposition
singular values
Suppose $T \in \mathcal{L}(V)$. The singular values of $T$ are the eigenvalues of $\sqrt{T^*T}$, with each eigenvalue $\lambda$ repeated $\dim E(\lambda, \sqrt{T^*T})$ times.
Singular-Value Decomposition
Suppose $T \in \mathcal{L}(V)$ has singular values $s_1, \dots, s_n$. Then there exist orthonormal bases $e_1, \dots, e_n$ and $f_1, \dots, f_n$ of $V$ such that:
$Tv = s_1 \langle v, e_1 \rangle f_1 + \dots + s_n \langle v, e_n \rangle f_n$
for every $v \in V$.
SVD
To do the SVD:
Determine the transformation $T$ or its matrix $A = \mathcal{M}(T)$.
Get the eigenvalues of $T^*T$ and square-root them. These are the singular values $s_1, \dots, s_n$.
Get an ONEB $e_1, \dots, e_n$ by using the eigenvalues of $T^*T$ and determining the eigenvectors for each singular-value/eigenvalue.
The $f_j$'s are just $f_j = \frac{Te_j}{s_j}$. (A numeric sketch of this recipe follows the note below.)
Note
To find the isometry $S$ for SVD/polar decomposition, just do $S = T(\sqrt{T^*T})^{-1}$, where for the inverse matrix you can just put the reciprocal for each diagonal entry (in the basis where $\sqrt{T^*T}$ is diagonal).
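A minimal numeric sketch of the SVD recipe above, on a made-up 2-by-2 real matrix:

```python
import numpy as np

A = np.array([[0.0, -2.0],
              [1.0,  0.0]])  # matrix of a sample operator on R^2

# Eigendecompose A^T A; the singular values are square roots of its eigenvalues
lam, E = np.linalg.eigh(A.T @ A)   # columns of E are the orthonormal e_j
s = np.sqrt(lam)

# f_j = (A e_j) / s_j
F = (A @ E) / s

# Check: A v = s_1 <v, e_1> f_1 + s_2 <v, e_2> f_2
v = np.random.randn(2)
recon = sum(s[j] * (v @ E[:, j]) * F[:, j] for j in range(2))
assert np.allclose(A @ v, recon)
```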
Chapter 8: Operators on Complex Vector Spaces
8.A: Generalized Eigenvectors and Nilpotent Operators
Null Spaces of Powers of an Operator
Sequence of increasing null spaces
Suppose $T \in \mathcal{L}(V)$. Then:
$\{0\} = \operatorname{null} T^0 \subseteq \operatorname{null} T^1 \subseteq \dots \subseteq \operatorname{null} T^k \subseteq \operatorname{null} T^{k+1} \subseteq \dots$
Equality in the sequence of null spaces
Suppose $T \in \mathcal{L}(V)$. Suppose $m$ is a nonnegative integer such that $\operatorname{null} T^m = \operatorname{null} T^{m+1}$. Then:
$\operatorname{null} T^m = \operatorname{null} T^{m+1} = \operatorname{null} T^{m+2} = \dots$
Null spaces stop growing
Suppose $T \in \mathcal{L}(V)$. Let $n = \dim V$. Then:
$\operatorname{null} T^n = \operatorname{null} T^{n+1} = \operatorname{null} T^{n+2} = \dots$
$V$ is a direct sum of $\operatorname{null} T^{\dim V}$ and $\operatorname{range} T^{\dim V}$
Suppose $T \in \mathcal{L}(V)$. Let $n = \dim V$. Then:
$V = \operatorname{null} T^n \oplus \operatorname{range} T^n$
Generalized Eigenvectors
generalized eigenvector
Suppose $T \in \mathcal{L}(V)$ and $\lambda$ is an eigenvalue of $T$. A vector $v \in V$ is called a generalized eigenvector of $T$ corresponding to $\lambda$ if $v \neq 0$ and:
$(T - \lambda I)^j v = 0$
for some positive integer $j$.
generalized eigenspace, $G(\lambda, T)$
Suppose $T \in \mathcal{L}(V)$ and $\lambda \in \mathbf{F}$. The generalized eigenspace of $T$ corresponding to $\lambda$, denoted $G(\lambda, T)$, is defined to be the set of all generalized eigenvectors of $T$ corresponding to $\lambda$, along with the $0$ vector.
Note that the definitions above don't require a specific power $j$. However, the lemma below pins this down to the specific case $j = \dim V$.
Description of generalized eigenspaces
Suppose $T \in \mathcal{L}(V)$ and $\lambda \in \mathbf{F}$. Then $G(\lambda, T) = \operatorname{null}(T - \lambda I)^{\dim V}$.
Linearly Independent generalized eigenvectors
Let $T \in \mathcal{L}(V)$. Suppose $\lambda_1, \dots, \lambda_m$ are distinct eigenvalues of $T$ and $v_1, \dots, v_m$ are corresponding generalized eigenvectors. Then $v_1, \dots, v_m$ is linearly independent.
Nilpotent Operators
nilpotent
An operator is called nilpotent if some power of it equals $0$.
Nilpotent operator raised to dimension of domain is $0$
Suppose $N \in \mathcal{L}(V)$ is nilpotent. Then $N^{\dim V} = 0$
Matrix of a nilpotent operator
Suppose $N$ is a nilpotent operator on $V$. Then $\exists$ a basis of $V$ w.r.t. which the matrix of $N$ has the form:
$\begin{pmatrix} 0 & & * \\ & \ddots & \\ 0 & & 0 \end{pmatrix}$
so all entries on and below the diagonal are 0's.
8.B: Decomposition of an Operator
Description of operators on complex vector spaces
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Let $\lambda_1, \dots, \lambda_m$ be the distinct eigenvalues of $T$. Then:
$V = G(\lambda_1, T) \oplus \dots \oplus G(\lambda_m, T)$
Each $G(\lambda_j, T)$ is invariant under $T$.
Each $(T - \lambda_j I)|_{G(\lambda_j, T)}$ is nilpotent.
A basis of generalized eigenvectors
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Then there is a basis of $V$ consisting of generalized eigenvectors of $T$.
Algorithm
To get this basis:
Find a basis of each $G(\lambda_j, T) = \operatorname{null}(T - \lambda_j I)^{\dim V}$ by finding all generalized eigenvectors of each space.
Combine/concatenate the bases together.
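A numeric sketch of this algorithm on a made-up defective matrix (scipy's `null_space` computes $\operatorname{null}(T - \lambda I)^{\dim V}$):

```python
import numpy as np
from scipy.linalg import null_space

# Sample operator: eigenvalue 2 with multiplicity 2 (only one honest
# eigenvector) and eigenvalue 5 with multiplicity 1
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]

blocks = []
for lam in np.unique(np.round(np.linalg.eigvals(A).real, 8)):
    # Basis of G(lam, A) = null (A - lam I)^n
    blocks.append(null_space(np.linalg.matrix_power(A - lam*np.eye(n), n)))

B = np.hstack(blocks)  # concatenated bases give a basis of C^3
assert np.linalg.matrix_rank(B) == n
```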
multiplicity
Suppose $T \in \mathcal{L}(V)$. The multiplicity of an eigenvalue $\lambda$ of $T$ is defined to be the dimension of the corresponding generalized eigenspace $G(\lambda, T)$. In other words, the multiplicity of an eigenvalue $\lambda$ of $T$ equals $\dim \operatorname{null}(T - \lambda I)^{\dim V}$.
Sum of the multiplicities equals $\dim V$
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Then the sum of the multiplicities of all eigenvalues of $T$ equals $\dim V$.
Block diagonal matrix with upper-triangular blocks
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Let $\lambda_1, \dots, \lambda_m$ be the distinct eigenvalues of $T$, with multiplicities $d_1, \dots, d_m$. Then there is a basis of $V$ with respect to which $T$ has a block diagonal matrix:
$\begin{pmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_m \end{pmatrix}$
where each $A_j$ is a $d_j$-by-$d_j$ upper-triangular matrix:
$A_j = \begin{pmatrix} \lambda_j & & * \\ & \ddots & \\ 0 & & \lambda_j \end{pmatrix}$
The idea is each $A_j$ is UT because of the basis of $G(\lambda_j, T)$, so then putting them together gives the desired result.
Square Roots
Identity plus nilpotent has a square root
Suppose $N \in \mathcal{L}(V)$ is nilpotent. Then $I + N$ has a square root.
Over $\mathbf{C}$, invertible operators have square roots
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$ is invertible. Then $T$ has a square root.
8.C: Characteristic and Minimal Polynomials
Characteristic Polynomial
characteristic polynomial
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Let $\lambda_1, \dots, \lambda_m$ denote the distinct eigenvalues of $T$, with multiplicities $d_1, \dots, d_m$. The polynomial:
$(z - \lambda_1)^{d_1} \cdots (z - \lambda_m)^{d_m}$
is called the characteristic polynomial of $T$.
Degree and zeros of characteristic polynomial
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Then:
the characteristic polynomial of $T$ has degree $\dim V$
the zeros of the characteristic polynomial of $T$ are the eigenvalues of $T$.
Cayley-Hamilton Theorem
Suppose $V$ is a complex vector space and $T \in \mathcal{L}(V)$. Let $q$ denote the characteristic polynomial of $T$. Then $q(T) = 0$.
Minimal Polynomial
Minimal Polynomial
Suppose $T \in \mathcal{L}(V)$. Then there is a unique monic polynomial $p$ of smallest degree such that $p(T) = 0$. This $p$ is called the minimal polynomial of $T$.
Make sure it's monic, so the leading coefficient is 1.
Fact
For any polynomial $q$, $q(T) = 0$ can happen iff $q$ is a polynomial multiple of the minimal polynomial $p$ of $T$.
The characteristic polynomial of $T$ is a polynomial multiple of the minimal polynomial of $T$.
The zeros of the minimal polynomial of $T$ are precisely the eigenvalues of $T$.
How to find the minimal polynomial
To get the minimal polynomial you:
Find the characteristic polynomial $q$ using its eigenvalues and multiplicities.
Reduce a degree and see if $p(T) = 0$ still holds, where $p$ is your guess; the monic $p$ of smallest degree that works is the minimal polynomial.
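A small sketch of this guess-and-check on a made-up matrix whose characteristic polynomial is $(z - 2)^3$:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

# Characteristic polynomial is (z - 2)^3; test lower powers of (A - 2I)
for k in range(1, 4):
    if np.allclose(np.linalg.matrix_power(A - 2*I, k), np.zeros((3, 3))):
        print(f"minimal polynomial: (z - 2)^{k}")  # k = 2 for this A
        break
```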
8.D: Jordan Form
Basis corresponding to a nilpotent operator
Suppose $N \in \mathcal{L}(V)$ is nilpotent. Then $\exists$ vectors $v_1, \dots, v_n \in V$ and nonnegative integers $m_1, \dots, m_n$ such that:
$N^{m_1}v_1, \dots, Nv_1, v_1, \dots, N^{m_n}v_n, \dots, Nv_n, v_n$ is a basis of $V$
$N^{m_1 + 1}v_1 = \dots = N^{m_n + 1}v_n = 0$.
Jordan Basis
Suppose $T \in \mathcal{L}(V)$. A basis of $V$ is called a Jordan basis for $T$ if w.r.t. this basis then $T$ has a block diagonal matrix:
$\begin{pmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_p \end{pmatrix}$
where each $A_j$ is an UT matrix of the form:
$A_j = \begin{pmatrix} \lambda_j & 1 & & 0 \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_j \end{pmatrix}$
Every operator on a finite-dimensional complex vector space has a Jordan basis.
How to find Jordan basis
To get the Jordan basis for a transformation $T$ in complex $V$:
Get a nilpotent operator $N_j = (T - \lambda_j I)|_{G(\lambda_j, T)}$ and find a basis of $G(\lambda_j, T)$ of the chain form above.
Concatenate the bases
Repeat for all eigenvalues $\lambda_j$
Or try:
In general:
Choose $v$ such that $N^k v \neq 0$ for the largest possible $k$; then $v, Nv, \dots, N^k v$ is one Jordan chain.
Find a basis for $\operatorname{null} N^j$, where $j = 1, 2, \dots$.
Repeat for growing values of $j$, moving over the difference in dimensions of our nullspace chain ($\dim \operatorname{null} N^{j+1} - \dim \operatorname{null} N^j$ counts the chains longer than $j$).
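As a sanity check on the hand method above, sympy can produce the Jordan form directly; the matrix here is a made-up example:

```python
from sympy import Matrix

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

# jordan_form returns P, J with A = P J P^{-1};
# the columns of P are a Jordan basis for A
P, J = A.jordan_form()
print(J)   # a 2x2 Jordan block for eigenvalue 2 and a 1x1 block for 3
assert A == P * J * P.inv()
```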
Chapter 9: Complexification
9.A: Complexification of a Vector Space
Complexification
complexification of $V$, $V_{\mathbf{C}}$
Suppose $V$ is a real vector space.
The complexification of $V$, denoted $V_{\mathbf{C}}$, equals $V \times V$. An element of $V_{\mathbf{C}}$ is an ordered pair $(u, v)$ where $u, v \in V$, but we'll write this as $u + iv$.
Addition on $V_{\mathbf{C}}$ is defined by:
$(u_1 + iv_1) + (u_2 + iv_2) = (u_1 + u_2) + i(v_1 + v_2)$
for $u_1, v_1, u_2, v_2 \in V$.
Complex scalar multiplication on $V_{\mathbf{C}}$ is defined by:
$(a + bi)(u + iv) = (au - bv) + i(av + bu)$
for $a, b \in \mathbf{R}$ and $u, v \in V$.
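For example, the complexification of $\mathbf{R}^n$ can be identified with $\mathbf{C}^n$. A tiny numeric sketch of the scalar-multiplication rule under that identification:

```python
import numpy as np

# Model u + iv in the complexification of R^2 as the pair (u, v)
u = np.array([1.0, 0.0])
v = np.array([0.0, 3.0])
a, b = 2.0, -1.0   # the complex scalar a + bi

# (a + bi)(u + iv) = (au - bv) + i(av + bu)
real, imag = a*u - b*v, a*v + b*u

# Compare with ordinary complex multiplication in C^2
z = (a + 1j*b) * (u + 1j*v)
assert np.allclose(z, real + 1j*imag)
```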