We say that $V$ and $W$ are isomorphic iff there exists an invertible linear map $T : V \to W$. Such a map is called an isomorphism.
A little bit about this word, as it'll come around in abstract algebra a lot. The idea here is that the vectors in each space have the same structure; they just have different labels. For instance, notice that $\mathbb{F}^4$ and $\mathbb{F}^{2,2}$ (the $2 \times 2$ matrices over $\mathbb{F}$) carry the same data, just written differently.

If we define our map $T : \mathbb{F}^4 \to \mathbb{F}^{2,2}$ where:

$$T(a, b, c, d) = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

notice that $T$ is linear, injective, and surjective, so $T$ is invertible; thus $\mathbb{F}^4$ and $\mathbb{F}^{2,2}$ are isomorphic and $T$ is an isomorphism. So these sets are algebraically the same. The vectors only differ in the way that they look.
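To see this concretely, here's a minimal numerical sketch of the map above (assuming $\mathbb{F} = \mathbb{R}$, with numpy's `reshape` playing the role of $T$; the helper names are my own):

```python
import numpy as np

def T(v):
    """The map T(a, b, c, d) = [[a, b], [c, d]] from R^4 to 2x2 matrices."""
    return v.reshape(2, 2)

def T_inv(M):
    """Inverse map: flatten a 2x2 matrix back into a vector in R^4."""
    return M.reshape(4)

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([5.0, 6.0, 7.0, 8.0])
c = 3.0

# Linearity: T(u + cv) == T(u) + c T(v)
assert np.allclose(T(u + c * v), T(u) + c * T(v))

# Invertibility: T_inv undoes T
assert np.allclose(T_inv(T(u)), u)
```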
But notice in our example that $\mathbb{F}^4$ and $\mathbb{F}^{2,2}$ have the same dimension (both are $4$-dimensional). It turns out this works in general: if two finite-dimensional vector spaces have the same dimension, then they are isomorphic.
Dimension dictates isomorphism
If $V, W$ are finite-dimensional vector spaces, then $V$ and $W$ are isomorphic iff $\dim V = \dim W$.
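For instance, the plane $U = \{(x, y, z) \in \mathbb{R}^3 : x + y + z = 0\}$ has dimension $2$, so the theorem says $U$ is isomorphic to $\mathbb{R}^2$. Here's a minimal sketch of one such isomorphism, built by fixing a basis of $U$ and taking coordinates (the basis choice and helper names are my own):

```python
import numpy as np

# A basis for U = {(x, y, z) : x + y + z = 0}, which has dimension 2.
b1 = np.array([1.0, -1.0, 0.0])
b2 = np.array([1.0, 0.0, -1.0])
B = np.column_stack([b1, b2])  # 3x2 matrix whose columns are the basis

def to_coords(u):
    """Coordinates of u in the basis (b1, b2): solve Bc = u by least squares."""
    c, *_ = np.linalg.lstsq(B, u, rcond=None)
    return c

def from_coords(c):
    """Inverse map R^2 -> U: the linear combination c1*b1 + c2*b2."""
    return B @ c

u = 2.0 * b1 - 5.0 * b2          # some vector in U
assert np.isclose(u.sum(), 0.0)  # it really lies in U
assert np.allclose(from_coords(to_coords(u)), u)
```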
Example

Let $V = \mathbb{R}^2$ be our vector space. Let $T \in \mathcal{L}(\mathbb{R}^2)$ be a rotation CCW by some $\theta$ radians. Let $e_1, e_2$ be the standard basis for $\mathbb{R}^2$. Let's find $\mathcal{M}(T)$.
Here:

$$Te_1 = (\cos\theta, \sin\theta)$$

and:

$$Te_2 = (-\sin\theta, \cos\theta)$$

thus:

$$\mathcal{M}(T) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
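A quick numerical check of those two columns (a sketch: the rotation is implemented independently via complex multiplication so the comparison isn't circular, and the angle is an arbitrary sample):

```python
import numpy as np

theta = 0.7  # an arbitrary sample angle

def rotate(v, theta):
    """Rotate a vector in R^2 CCW by theta radians, via complex multiplication."""
    z = complex(v[0], v[1]) * np.exp(1j * theta)
    return np.array([z.real, z.imag])

# The columns of M(T) are T(e1) and T(e2).
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([rotate(e1, theta), rotate(e2, theta)])

expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(M, expected)
```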
notice that for any $\theta_1, \theta_2$:

$$T_{\theta_1} \circ T_{\theta_2} = T_{\theta_1 + \theta_2}$$

(rotating by $\theta_2$ and then by $\theta_1$ is the same as rotating by $\theta_1 + \theta_2$)
but we can check this with the matrices themselves. The LHS is:

$$\mathcal{M}(T_{\theta_1})\mathcal{M}(T_{\theta_2}) = \begin{pmatrix} \cos\theta_1 & -\sin\theta_1 \\ \sin\theta_1 & \cos\theta_1 \end{pmatrix} \begin{pmatrix} \cos\theta_2 & -\sin\theta_2 \\ \sin\theta_2 & \cos\theta_2 \end{pmatrix} = \begin{pmatrix} \cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2 & -\cos\theta_1\sin\theta_2 - \sin\theta_1\cos\theta_2 \\ \sin\theta_1\cos\theta_2 + \cos\theta_1\sin\theta_2 & \cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2 \end{pmatrix}$$

and the RHS is:

$$\mathcal{M}(T_{\theta_1 + \theta_2}) = \begin{pmatrix} \cos(\theta_1 + \theta_2) & -\sin(\theta_1 + \theta_2) \\ \sin(\theta_1 + \theta_2) & \cos(\theta_1 + \theta_2) \end{pmatrix}$$
so equating the matrices entry by entry, we get the angle-addition trig identities we've used before!
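And here's a minimal numerical spot-check of that identity (arbitrary sample angles; `rot` is a helper name of my own):

```python
import numpy as np

def rot(theta):
    """The matrix M(T_theta) of a CCW rotation by theta radians."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.3, 1.1  # arbitrary sample angles

# Composing rotations = rotating by the sum of the angles,
# which is exactly the angle-addition identities in matrix form.
assert np.allclose(rot(t1) @ rot(t2), rot(t1 + t2))
```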
An Aside
Think about $\mathcal{M}$. It takes as input some $T \in \mathcal{L}(V, W)$ and outputs a matrix in $\mathbb{F}^{m,n}$.
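To make $\mathcal{M}$ concrete, here's a sketch of computing it: given a linear map and bases for $V$ and $W$, we build the matrix column by column (the function name `matrix_of` and the example map are my own choices):

```python
import numpy as np

def matrix_of(T, basis_V, basis_W):
    """Matrix of the linear map T with respect to the given bases:
    column k holds the coordinates of T(v_k) in the basis of W."""
    W = np.column_stack(basis_W)  # change-of-basis matrix for W
    cols = [np.linalg.solve(W, T(v)) for v in basis_V]
    return np.column_stack(cols)

# Example: T(x, y, z) = (x + y, y - z), from R^3 to R^2,
# with the standard bases on both sides.
T = lambda v: np.array([v[0] + v[1], v[1] - v[2]])
basis_V = [np.eye(3)[:, k] for k in range(3)]
basis_W = [np.eye(2)[:, j] for j in range(2)]

print(matrix_of(T, basis_V, basis_W))
# [[ 1.  1.  0.]
#  [ 0.  1. -1.]]
```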
Theorem
Suppose $V, W$ are finite-dimensional vector spaces, with $\dim V = n$ and $\dim W = m$. Then: $\mathcal{L}(V, W)$ and $\mathbb{F}^{m,n}$ are isomorphic.
This is easy to prove. We just need a bijective linear map that takes in a $T \in \mathcal{L}(V, W)$ and gives us some matrix in $\mathbb{F}^{m,n}$. That's $\mathcal{M}$!
Proof
Fix some basis for $V$, say $v_1, \ldots, v_n$, and a basis for $W$, say $w_1, \ldots, w_m$. Call the former $\beta$ and the latter $\gamma$. We'll show that $\mathcal{M} : \mathcal{L}(V, W) \to \mathbb{F}^{m,n}$ defined by:

$$\mathcal{M}(T) = A, \quad \text{where } Tv_k = \sum_{j=1}^{m} A_{j,k}\, w_j \text{ for } k = 1, \ldots, n$$

is an isomorphism. We already know $\mathcal{M}$ is linear, so we just check injectivity and surjectivity. If $\mathcal{M}(T) = 0$, then $Tv_k = 0$ for every basis vector $v_k$, which forces $T = 0$; so $\mathcal{M}$ has a zero-vector nullspace, showing injectivity. Surjectivity can be shown by noting that for any $A \in \mathbb{F}^{m,n}$, the linear map $T$ defined by $Tv_k = \sum_{j=1}^{m} A_{j,k}\, w_j$ has $\mathcal{M}(T) = A$.

Since $\mathcal{M}$ is an invertible linear map, $\mathcal{L}(V, W)$ and $\mathbb{F}^{m,n}$ are isomorphic.
☐
Therefore:

$$\dim \mathcal{L}(V, W) = \dim \mathbb{F}^{m,n} = mn = (\dim W)(\dim V)$$
Notice that since the set of linear transformations $\mathcal{L}(V, W)$ is finite-dimensional, there's some basis of all linear transformations between $V$ and $W$. Through $\mathcal{M}$, it corresponds to the standard basis of $\mathbb{F}^{m,n}$: the matrices made by setting a $1$ in one entry and $0$'s everywhere else. Our rotation matrices, for instance, are just linear combinations of these, with coefficients $\cos\theta$ and $\pm\sin\theta$.
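As a closing sketch, here's that standard basis built explicitly for $m = n = 2$, together with a rotation matrix written as a combination of the basis matrices (helper names are my own):

```python
import numpy as np

m, n = 2, 2

# Standard basis of F^{m,n}: E[j][k] has a 1 in entry (j, k) and 0 elsewhere.
E = [[np.zeros((m, n)) for _ in range(n)] for _ in range(m)]
for j in range(m):
    for k in range(n):
        E[j][k][j, k] = 1.0

# There are m*n of them, matching dim L(V, W) = (dim W)(dim V) = 4 here.
assert sum(len(row) for row in E) == m * n

# A rotation matrix is a linear combination of these basis matrices,
# with coefficients cos(theta) and +/- sin(theta).
theta = 0.7
R = (np.cos(theta) * E[0][0] - np.sin(theta) * E[0][1]
     + np.sin(theta) * E[1][0] + np.cos(theta) * E[1][1])
assert np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
```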