Where $V$ and $W$ are finite dimensional, $\mathcal{B} = (v_1, \dots, v_n)$ is a basis for $V$, and likewise $\mathcal{C} = (w_1, \dots, w_m)$ is a basis for $W$. We constructed our transformation matrix via:

creating our matrix representation of a map:

$$T(v_j) = a_{1j} w_1 + a_{2j} w_2 + \cdots + a_{mj} w_m$$

namely, the coefficients in the first such equation become the first column of the matrix we get. Note that we get entries $a_{ij}$ where:
- $i$ is the row
- $j$ is the column
- $a_{ij}$ represents how much of $w_i$ we are scaling and adding with the other vectors
We get an $m \times n$ matrix, where $m = \dim W$ and $n = \dim V$. Note that this has to be this way, since the input of the matrix needs to match the number of columns of our matrix, which is $n$, the dimension of our input space. Hence, we have to do the flipping over the diagonal like we did before.
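The construction above can be sketched numerically. This is a minimal sketch, not from the notes: the helper name `matrix_of_map` and the example map are my own, and it assumes numpy is available. Column $j$ of the result holds the coordinates of $T(v_j)$ in the codomain basis.

```python
import numpy as np

def matrix_of_map(T, domain_basis, codomain_basis):
    """Build the m x n matrix of a linear map T.

    Column j holds the coordinates of T(v_j) in the codomain basis,
    found by solving C x = T(v_j), where the columns of C are the
    codomain basis vectors.
    """
    C = np.column_stack(codomain_basis)              # m x m, invertible since a basis
    cols = [np.linalg.solve(C, T(v)) for v in domain_basis]
    return np.column_stack(cols)                     # m x n: n inputs, m outputs

# Hypothetical example: T(x, y, z) = (x + y, 2z) maps R^3 -> R^2,
# so we expect a 2 x 3 matrix (m = dim W = 2, n = dim V = 3).
T = lambda v: np.array([v[0] + v[1], 2 * v[2]])
e3 = [np.eye(3)[:, j] for j in range(3)]             # standard basis of R^3
e2 = [np.eye(2)[:, j] for j in range(2)]             # standard basis of R^2
M = matrix_of_map(T, e3, e2)
print(M.shape)  # (2, 3)
```

Note the shape: the number of columns matches the dimension of the input space, exactly as described above.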
Let's look at an example of a different linear transformation.
Let $T \colon \mathbb{R}^3 \to \mathbb{R}^3$ be defined by reflection across a plane through the origin.
We won't do the proof of additivity and homogeneity itself, but consider the following:
But let's find *a* matrix representation of $T$. Notice that the matrix depends on the construction from the bases given, i.e. $\mathcal{B}$ and $\mathcal{C}$, so we can choose any bases.
Thus, we could choose the standard basis as our basis for $\mathbb{R}^3$. But we'd have to find the reflection of each standard basis vector over the plane, which is a lot of work. Instead, we could just choose the normal vector and two vectors in the plane.

So let's not use the standard basis. Can we find two non-parallel, non-zero vectors on the plane? Let's just choose one vector $u_1$ on the plane. Let's also just choose another vector $u_2$ on the plane. These vectors are on the plane, and clearly non-parallel since they're linearly independent.

Thus, choose the plane's normal vector as our last basis vector, so choose $n$. Notice that our domain and co-domain are the same, so we can use $\mathcal{B} = (u_1, u_2, n)$ for both. Thus, look at the transformations of all our basis vectors:
We know the general geometry of this: the reflection fixes the two vectors $u_1, u_2$ lying in the plane and flips the normal vector $n$, so we can fill our blanks intuitively:

$$T(u_1) = u_1, \qquad T(u_2) = u_2, \qquad T(n) = -n$$

Thus extract the matrix:

$$[T]_{\mathcal{B}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$
Notice that the order here matters! The $j$-th column holds the coordinates of $T$ applied to the $j$-th basis vector, and the row number indicates which basis vector in the range each entry scales. $\mathcal{B}$ is an ordered list, not a set, so the order here is important.
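The notes' specific plane isn't reproduced here, so the sketch below assumes, purely for illustration, the hypothetical plane $x + y + z = 0$ with normal $n = (1, 1, 1)$; the in-plane vectors and variable names are my own choices.

```python
import numpy as np

# Illustrative (hypothetical) plane: x + y + z = 0, normal n = (1, 1, 1).
u1 = np.array([1., -1., 0.])   # lies in the plane
u2 = np.array([1., 0., -1.])   # lies in the plane, not parallel to u1
n  = np.array([1., 1., 1.])    # normal to the plane

# In the basis B = (u1, u2, n), reflection fixes u1 and u2 and negates n:
D = np.diag([1., 1., -1.])

# Change of basis back to standard coordinates: [T]_std = P D P^{-1},
# where the columns of P are the basis vectors.
P = np.column_stack([u1, u2, n])
R = P @ D @ np.linalg.inv(P)

# Sanity checks: plane vectors are fixed, the normal flips.
assert np.allclose(R @ u1, u1)
assert np.allclose(R @ n, -n)
# Agrees with the classical formula R = I - 2 n n^T / (n . n).
assert np.allclose(R, np.eye(3) - 2 * np.outer(n, n) / (n @ n))
```

The easy diagonal matrix lives in the adapted basis; the change of basis $P D P^{-1}$ is what converts it into the (messier) standard-basis matrix.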
One last thing of note is that, had we chosen a different basis, like the standard basis $(e_1, e_2, e_3)$, we could still find a matrix representation of our map; it would just be a different matrix.
However, these matrices do share key properties, such as trace, determinant, and eigenvalues, since they represent the same underlying map. In this way, these matrices are *similar* matrices.
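A quick numerical check of similarity: if $B = P^{-1} A P$ for any invertible change-of-basis matrix $P$, the shared invariants line up. The specific matrices below are made-up examples, not from the notes.

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., -1.]])       # e.g. a reflection matrix in an adapted basis
P = np.array([[1., 1., 1.],
              [-1., 0., 1.],
              [0., -1., 1.]])       # any invertible change-of-basis matrix

B = np.linalg.inv(P) @ A @ P        # similar to A: same map, different basis

# Similar matrices share trace, determinant, and eigenvalues.
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(B)))
```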
Something Else
Consider:

$$S \colon U \to V, \qquad T \colon V \to W$$

We won't prove this, but the composition $T \circ S \colon U \to W$ is a linear map here. So suppose we have some $\mathcal{A}$ as a basis for $U$, $\mathcal{B}$ is a basis for $V$, and $\mathcal{C}$ is a basis for $W$. Let's say we have $\dim U = p$, $\dim V = n$, and $\dim W = m$. We also have the matrix representations of $S$ and $T$. Let's denote the matrices correspondingly: $B$ (an $n \times p$ matrix for $S$) and $A$ (an $m \times n$ matrix for $T$).
Suppose we want to compute the matrix of $T \circ S$. How would we do this? Let's calculate $(T \circ S)(u_j)$ for some basis vector $u_j \in \mathcal{A}$:

$$(T \circ S)(u_j) = T(S(u_j)) = T\left(\sum_{k=1}^{n} b_{kj} v_k\right) = \sum_{k=1}^{n} b_{kj}\, T(v_k) = \sum_{k=1}^{n} b_{kj} \sum_{i=1}^{m} a_{ik} w_i = \sum_{i=1}^{m} \left( \sum_{k=1}^{n} a_{ik} b_{kj} \right) w_i$$
So we can just define the matrix $C$ such that the $(i,j)$-th entry is $c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$. This correspondingly is an $m \times p$ matrix, and this definition shouldn't surprise you, since it is exactly the definition of matrix multiplication: $C = AB$!
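The claim is easy to verify numerically. A minimal sketch with standard bases, where the two maps are arbitrary made-up examples: if $A$ represents $T$ and $B$ represents $S$, then $AB$ represents $T \circ S$.

```python
import numpy as np

B = np.array([[1., 2.],
              [0., 1.],
              [3., 0.]])           # S: R^2 -> R^3  (n x p, with n = 3, p = 2)
A = np.array([[1., 0., 2.],
              [0., 1., 1.]])       # T: R^3 -> R^2  (m x n, with m = 2, n = 3)

S = lambda u: B @ u
T = lambda v: A @ v

C = A @ B                          # m x p matrix representing T o S
# The (i, j) entry of C is sum_k a_ik * b_kj, as in the derivation.

u = np.array([1., -1.])
assert np.allclose(C @ u, T(S(u)))   # applying C = applying S, then T
assert C.shape == (2, 2)             # m x p, as expected
```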