Linear Algebra is the study of linear maps on vector spaces. We'll look at using $\mathbb{R}$ in conjunction with $\mathbb{C}$ when looking at our scalar fields. Then we'll generalize $\mathbb{R}^n$ and $\mathbb{C}^n$ to a general notion of a vector space. Lastly, we'll talk about subspaces.
1.A: $\mathbb{R}^n$ and $\mathbb{C}^n$
Be familiar with the properties of $\mathbb{R}$ and $\mathbb{C}$:
Complex Numbers
A complex number is an ordered pair $(a, b)$, where $a, b \in \mathbb{R}$, but we write this as $a + bi$.
The set of all complex numbers is denoted by $\mathbb{C}$:
$$\mathbb{C} = \{a + bi : a, b \in \mathbb{R}\}$$
Addition and multiplication on $\mathbb{C}$ are defined by:
$$(a + bi) + (c + di) = (a + c) + (b + d)i$$
$$(a + bi)(c + di) = (ac - bd) + (ad + bc)i$$
Here $a, b, c, d \in \mathbb{R}$.
If $a \in \mathbb{R}$, we identify $a + 0i$ with the real number $a$. So $\mathbb{R} \subset \mathbb{C}$. Usually $0 + bi$ is written as just $bi$, and $0 + 1i$ as just $i$. You can verify that $i^2 = -1$ by definition.
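As a quick sanity check of the definitions above, here's a minimal Python sketch (the names `cadd` and `cmul` are purely illustrative, not from the notes) that treats a complex number literally as an ordered pair $(a, b)$:

```python
# A complex number represented literally as an ordered pair (a, b), i.e. a + bi.

def cadd(z, w):
    """(a + bi) + (c + di) = (a + c) + (b + d)i"""
    a, b = z
    c, d = w
    return (a + c, b + d)

def cmul(z, w):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i"""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

i = (0, 1)             # the ordered pair (0, 1), i.e. i
print(cmul(i, i))      # (-1, 0), confirming i^2 = -1
```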
Complex numbers follow these properties:
Properties of Complex Arithmetic
For the following, let $\alpha, \beta, \lambda \in \mathbb{C}$ be arbitrary:
Commutativity: $\alpha + \beta = \beta + \alpha$ and $\alpha\beta = \beta\alpha$
Associativity: $(\alpha + \beta) + \lambda = \alpha + (\beta + \lambda)$ and $(\alpha\beta)\lambda = \alpha(\beta\lambda)$
Identities: $\lambda + 0 = \lambda$ and $\lambda 1 = \lambda$
Additive Inverse: For every $\alpha \in \mathbb{C}$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha + \beta = 0$
Multiplicative Inverse: For every $\alpha \in \mathbb{C}$ with $\alpha \neq 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha\beta = 1$
Distributive Property: $\lambda(\alpha + \beta) = \lambda\alpha + \lambda\beta$
We want to emphasize how to prove one of these properties from the definitions alone:
Multiplicative Associativity
Show that $(\alpha\beta)\lambda = \alpha(\beta\lambda)$ for any $\alpha, \beta, \lambda \in \mathbb{C}$.
Proof
Let $\alpha = a + bi$, $\beta = c + di$, and $\lambda = e + fi$ be arbitrary ($a, b, c, d, e, f \in \mathbb{R}$). Then notice that using the rules of complex multiplication and addition we get the following:
$$(\alpha\beta)\lambda = \big((ac - bd) + (ad + bc)i\big)(e + fi) = \big((ac - bd)e - (ad + bc)f\big) + \big((ac - bd)f + (ad + bc)e\big)i$$
And similarly:
$$\alpha(\beta\lambda) = (a + bi)\big((ce - df) + (cf + de)i\big) = \big(a(ce - df) - b(cf + de)\big) + \big(a(cf + de) + b(ce - df)\big)i$$
Expanding both expressions, each has real part $ace - bde - adf - bcf$ and imaginary part $acf + ade + bce - bdf$; the commutativity, associativity, and distributivity of multiplication and addition in $\mathbb{R}$ thus show that
$$(\alpha\beta)\lambda = \alpha(\beta\lambda)$$
☐
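Using the `cadd`/`cmul` sketch from above, we can also spot-check associativity numerically; this is not a proof, just a sanity check on sample values:

```python
z, w, v = (1, 2), (3, -4), (-5, 6)                  # arbitrary sample pairs
assert cmul(cmul(z, w), v) == cmul(z, cmul(w, v))   # (zw)v == z(wv)
```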
$-\alpha$, subtraction, $1/\alpha$, division
Let $\alpha, \beta \in \mathbb{C}$:
Let $-\alpha$ denote the additive inverse of $\alpha$. So $-\alpha$ is the unique complex number such that:
$$\alpha + (-\alpha) = 0$$
Subtraction on $\mathbb{C}$ is defined by:
$$\beta - \alpha = \beta + (-\alpha)$$
For $\alpha \neq 0$, let $1/\alpha$ denote the multiplicative inverse of $\alpha$. Thus $1/\alpha$ is the unique complex number such that:
$$\alpha(1/\alpha) = 1$$
Division on $\mathbb{C}$ is defined by:
$$\beta / \alpha = \beta(1/\alpha)$$
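The notes don't spell out a formula for $1/\alpha$, but the standard one is $\frac{1}{a + bi} = \frac{a - bi}{a^2 + b^2}$, which you can verify satisfies $\alpha(1/\alpha) = 1$. Continuing the ordered-pair sketch (again with our own illustrative names):

```python
def cinv(z):
    """1/(a + bi) = (a - bi)/(a^2 + b^2); only defined for z != 0."""
    a, b = z
    n = a * a + b * b
    return (a / n, -b / n)

def cdiv(w, z):
    """w / z is defined as w * (1/z)."""
    return cmul(w, cinv(z))

z = (3, 4)
print(cmul(z, cinv(z)))   # (1.0, 0.0), i.e. z * (1/z) = 1
```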
For future reference, to allow many future theorems and proofs to apply to both $\mathbb{R}$ and $\mathbb{C}$, we let $\mathbb{F}$ denote a generic field standing for either $\mathbb{R}$ or $\mathbb{C}$.
Elements of $\mathbb{F}$ are called scalars.
For $\alpha \in \mathbb{F}$ and a positive integer $m$, we define $\alpha^m$ to be the product of $m$ copies of $\alpha$. Clearly $(\alpha^m)^n = \alpha^{mn}$ and $(\alpha\beta)^m = \alpha^m \beta^m$ for all $\alpha, \beta \in \mathbb{F}$ and positive integers $m, n$.
Lists
We define $\mathbb{R}^2$ and $\mathbb{R}^3$ as the following:
The set $\mathbb{R}^2 = \{(x, y) : x, y \in \mathbb{R}\}$ and the set $\mathbb{R}^3 = \{(x, y, z) : x, y, z \in \mathbb{R}\}$.
To generalize these to any $\mathbb{F}^n$, we need to talk about lists:
list, length
Suppose $n$ is a nonnegative integer. A list of length $n$ is an ordered collection of $n$ elements, separated by commas:
$$(x_1, \ldots, x_n)$$
Two lists are equal iff they have the same length and the same elements in the same order.
Note that sometimes we don't specify the length of a list, but every list has finite length by definition; an infinite collection such as $(x_1, x_2, \ldots)$ is not a list.
A list of length 0 is written $(\,)$. Lists are different from sets in that order matters for lists, and repetitions are allowed (whereas with sets that's not the case).
Lists vs. Sets
The lists $(3, 5)$ and $(5, 3)$ aren't equal, but the sets $\{3, 5\}$ and $\{5, 3\}$ are equal.
The lists $(4, 4)$ and $(4, 4, 4)$ aren't equal, but the sets $\{4, 4\}$ and $\{4, 4, 4\}$ are (both equivalent to $\{4\}$).
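Python's tuples and sets happen to mirror this distinction exactly, which makes a convenient illustration:

```python
print((3, 5) == (5, 3))        # False: order matters for lists
print({3, 5} == {5, 3})        # True: order doesn't matter for sets
print((4, 4) == (4, 4, 4))     # False: repetitions matter for lists
print({4, 4} == {4, 4, 4})     # True: both sets equal {4}
```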
Thus we get:
$\mathbb{F}^n$ is the set of all lists of length $n$ of elements of $\mathbb{F}$:
$$\mathbb{F}^n = \{(x_1, \ldots, x_n) : x_j \in \mathbb{F} \text{ for } j = 1, \ldots, n\}$$
For $x = (x_1, \ldots, x_n) \in \mathbb{F}^n$ and $j \in \{1, \ldots, n\}$, we say that $x_j$ is the $j$-th coordinate of $x$.
We cannot geometrically comprehend $\mathbb{R}^n$ when $n > 3$, or $\mathbb{C}^n$ when $n \geq 2$, so we have to consider the algebraic manipulations instead:
Addition in $\mathbb{F}^n$
Addition in $\mathbb{F}^n$ is defined by adding corresponding coordinates:
$$(x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n)$$
Commutativity of addition in $\mathbb{F}^n$
If $x, y \in \mathbb{F}^n$, then $x + y = y + x$.
Proof
Let $x = (x_1, \ldots, x_n), y = (y_1, \ldots, y_n) \in \mathbb{F}^n$ be arbitrary. Then:
$$x + y = (x_1 + y_1, \ldots, x_n + y_n) = (y_1 + x_1, \ldots, y_n + x_n) = y + x$$
where the middle equality holds by the commutativity of addition in $\mathbb{F}$.
☐
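A coordinatewise sketch in Python, using plain tuples for lists (`vadd` is our own illustrative name):

```python
def vadd(x, y):
    """Add corresponding coordinates of two lists of the same length."""
    assert len(x) == len(y)
    return tuple(a + b for a, b in zip(x, y))

x, y = (1, 2, 3, 4), (10, 20, 30, 40)
print(vadd(x, y))                  # (11, 22, 33, 44)
print(vadd(x, y) == vadd(y, x))    # True: commutativity holds coordinatewise
```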
We define $0 = (0, \ldots, 0)$: the list of length $n$ whose entries are all the scalar $0$; whether $0$ means the scalar or the list should be clear from context.
We may want to draw these vectors in $\mathbb{R}^2$, but keep in mind that this is more of a visual aid for what's going on. Normally we won't care about the geometry of, say, 5-dimensional space, but we'll need to use the algebra to actually verify and prove that our intuition is correct in those spaces:
Additive Inverse in $\mathbb{F}^n$
For $x \in \mathbb{F}^n$, the additive inverse of $x$, denoted $-x$, is the vector $-x \in \mathbb{F}^n$ such that:
$$x + (-x) = 0$$
In other words, if $x = (x_1, \ldots, x_n)$ then $-x = (-x_1, \ldots, -x_n)$.
Geometrically, it means flipping the original vector 180 degrees:
Scalar Multiplication in $\mathbb{F}^n$
The product of a number $\lambda$ and a vector in $\mathbb{F}^n$ is computed by multiplying each coordinate of the vector by $\lambda$:
$$\lambda(x_1, \ldots, x_n) = (\lambda x_1, \ldots, \lambda x_n)$$
Here $\lambda \in \mathbb{F}$ and $(x_1, \ldots, x_n) \in \mathbb{F}^n$.
Here $\lambda$ scales the size of the vector by a factor, and if $\lambda$ is negative then it also flips the vector and scales it:
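Continuing the tuple sketch from above, the additive inverse and scalar multiplication are one-liners (`smul` and `vneg` are again our own names):

```python
def smul(lam, x):
    """Multiply each coordinate of x by the scalar lam."""
    return tuple(lam * a for a in x)

def vneg(x):
    """Additive inverse: negate each coordinate."""
    return tuple(-a for a in x)

x = (1, -2, 3)
print(smul(2, x))                # (2, -4, 6)
print(vadd(x, vneg(x)))          # (0, 0, 0): x + (-x) = 0
print(vneg(x) == smul(-1, x))    # True: (-1)x = -x, proved in general below
```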
Fields
A field is a set containing at least two distinct elements called $0$ and $1$, along with operations of addition and multiplication satisfying all the properties listed in Chapter 1 - Vector Spaces#^4af3e2. So then $\mathbb{R}$ and $\mathbb{C}$ are fields, as well as the rational numbers $\mathbb{Q}$. The set $\{0, 1\}$ is also a field with the usual operations, except we define $1 + 1 = 0$ in this set.
1.B: Definition of Vector Space
Addition, Scalar Multiplication
An addition on a set $V$ is a function that assigns an element $u + v \in V$ to each pair of elements $u, v \in V$.
A scalar multiplication on a set $V$ is a function that assigns an element $\lambda v \in V$ to each $\lambda \in \mathbb{F}$ and each $v \in V$.
Vector Space
A vector space is a set $V$ along with an addition on $V$ and a scalar multiplication on $V$ such that the following properties hold:
Commutativity: $u + v = v + u$ for all $u, v \in V$.
Associativity: $(u + v) + w = u + (v + w)$ and $(ab)v = a(bv)$ for all $u, v, w \in V$ and all $a, b \in \mathbb{F}$.
Additive Identity: There exists an element $0 \in V$ such that $v + 0 = v$ for all $v \in V$.
Additive Inverse: For every $v \in V$ there exists $w \in V$ such that $v + w = 0$.
Multiplicative Identity: $1v = v$ for all $v \in V$.
Distributive Properties: $a(u + v) = au + av$ and $(a + b)v = av + bv$ for all $a, b \in \mathbb{F}$ and all $u, v \in V$.
Elements of a vector space are called vectors or points. We say, based on the scalar multiplication operation, that $V$ is a vector space over $\mathbb{F}$. A vector space over $\mathbb{R}$ is a real vector space, and one over $\mathbb{C}$ is a complex vector space.
Example
$\mathbb{F}^\infty$ is defined to be the set of all sequences of elements of $\mathbb{F}$:
$$\mathbb{F}^\infty = \{(x_1, x_2, \ldots) : x_j \in \mathbb{F} \text{ for } j = 1, 2, \ldots\}$$
Addition and scalar multiplication are coordinatewise, as expected. $\mathbb{F}^\infty$ is a vector space, as you can verify.
The book references Lecture 2 - Vector Spaces (cont.)#^86311a as an example. But notice that $\mathbb{F}^n$ and $\mathbb{F}^\infty$ are really similar. They are special cases of the vector space $\mathbb{F}^S$, the set of all functions from a set $S$ to $\mathbb{F}$, since a list of length $n$ of numbers in $\mathbb{F}$ can be thought of as a function from $\{1, \ldots, n\}$ to $\mathbb{F}$, and a sequence of numbers in $\mathbb{F}$ can be thought of as a function from the set of positive integers to $\mathbb{F}$. I.e., $\mathbb{F}^n$ is really just $\mathbb{F}^{\{1, \ldots, n\}}$ and $\mathbb{F}^\infty$ is just $\mathbb{F}^{\{1, 2, \ldots\}}$.
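A small sketch of this function viewpoint, representing a vector in $\mathbb{F}^S$ as a Python dict on the index set $S$ (illustrative only; the dict plays the role of a function here):

```python
# The list (1, 4, 9) in F^3, viewed as a function from {1, 2, 3} to F:
x = {1: 1, 2: 4, 3: 9}

# Addition and scalar multiplication in F^S are defined pointwise:
def fadd(f, g):
    return {s: f[s] + g[s] for s in f}

def fsmul(lam, f):
    return {s: lam * f[s] for s in f}

print(fadd(x, x))     # {1: 2, 2: 8, 3: 18}
print(fsmul(3, x))    # {1: 3, 2: 12, 3: 27}
```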
Unique Additive Identity
A vector space $V$ has a unique additive identity $0$.
Proof
We already know that $0$ exists via the definition. Suppose $0$ and $0'$ are both additive identities for our vector space $V$. Then:
$$0' = 0' + 0 = 0 + 0' = 0$$
The first equality comes from applying the definition of the additive identity (for $0$), and same with the last equality (for $0'$). The middle equality comes from the commutativity of vector addition. Thus, $0' = 0$.
☐
Unique Additive Inverse
Every element of a vector space has a unique additive inverse.
Proof
Suppose $V$ is a vector space, and let $v \in V$ be arbitrary. We know from the definitions that an additive inverse of $v$ exists. Now, consider $w, w' \in V$, and suppose both are additive inverses of $v$. Notice that:
$$w = w + 0 = w + (v + w') = (w + v) + w' = 0 + w' = w'$$
so any two additive inverses of $v$ are equal.
☐
For notation's sake, we say that if $v \in V$, then its additive inverse is just $-v$. Instead of writing $w + (-v)$ we'll just write $w - v$. Further, rather than restating each time that $V$ is a vector space, from now on we implicitly assume that $V$ is a vector space over $\mathbb{F}$.
The number $0$ times a vector
$0v = 0$ for every $v \in V$.
Proof
For $v \in V$, we have:
$$0v = (0 + 0)v = 0v + 0v$$
Add the additive inverse of $0v$ to both sides to get that $0 = 0v$, as expected.
☐
Clearly here we note that the vectors and scalars are getting mixed (the $0$ on the left of $0v = 0$ is a scalar, while the $0$ on the right is a vector), so when it helps we'll denote the vectors with an arrow ($\vec{0}$) while we'll use no arrow for scalars ($0$).
A number times the vector $0$
$a0 = 0$ for all $a \in \mathbb{F}$.
Proof
For $a \in \mathbb{F}$, we have:
$$a0 = a(0 + 0) = a0 + a0$$
Again, add the additive inverse of $a0$ to both sides to get that $0 = a0$, as expected.
☐
The number -1 times a vector
$(-1)v = -v$ for every $v \in V$.
Proof
Let $v \in V$ be arbitrary. Then:
$$v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = 0$$
Thus $(-1)v$ is the additive inverse of $v$, so it's $-v$.
☐
1.C: Subspaces
Subspace
A subset $U \subseteq V$ is called a subspace of $V$ if $U$ is also a vector space (using the same addition and scalar multiplication as on $V$).
For instance, the set $\{(x_1, x_2, 0) : x_1, x_2 \in \mathbb{F}\}$ is a subspace of $\mathbb{F}^3$. We can check whether a subset is a subspace using the following:
Conditions for a subspace
A subset $U \subseteq V$ is a subspace of $V$ iff $U$ satisfies the following three conditions:
Additive Identity: $0 \in U$
Closed Under Addition: $u, w \in U$ implies that $u + w \in U$
Closed Under Scalar Multiplication: $a \in \mathbb{F}$ and $u \in U$ implies that $au \in U$
Proof
If $U$ is a subspace of $V$, then $U$ satisfies the three conditions above (by the definition of vector space). Conversely, suppose $U$ satisfies the three conditions above. The first condition ensures that the additive identity of $V$ is in $U$. The second condition ensures that addition makes sense on $U$. Similarly, the third condition ensures that scalar multiplication makes sense on $U$.
If $u \in U$, then $-u = (-1)u \in U$ by Chapter 1 - Vector Spaces#^21334b and closure under scalar multiplication, so the additive inverse always exists. Commutativity, associativity, distributivity, and so on are all satisfied in $U$ because they are satisfied in $V$.
Therefore, $U$ is a subspace of $V$.
☐
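No program can verify the subspace conditions over an infinite field, but a finite spot-check makes them concrete. Here we test the earlier example $U = \{(x_1, x_2, 0)\}$ inside $\mathbb{R}^3$ on sample vectors, reusing `vadd` and `smul` from the sketches above (`in_U` is our own illustrative predicate):

```python
def in_U(v):
    """Membership test for U = {(x1, x2, 0)}: last coordinate is 0."""
    return v[2] == 0

# Spot-check (not a proof!) of the three subspace conditions:
assert in_U((0, 0, 0))          # additive identity is in U
u, w = (1, 2, 0), (-3, 5, 0)
assert in_U(vadd(u, w))         # closed under addition
assert in_U(smul(7.5, u))       # closed under scalar multiplication
```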
Sums of Subspaces
Sum of Subsets
Suppose $U_1, \ldots, U_m$ are subsets of $V$. The sum of $U_1, \ldots, U_m$, denoted $U_1 + \cdots + U_m$, is the set of all possible sums of elements of $U_1, \ldots, U_m$. More precisely:
$$U_1 + \cdots + U_m = \{u_1 + \cdots + u_m : u_1 \in U_1, \ldots, u_m \in U_m\}$$
Let's look at an example:
Example
Suppose $U$ is the set of all elements of $\mathbb{F}^3$ whose second and third coordinates equal $0$, and $W$ is the set of all elements of $\mathbb{F}^3$ whose first and third coordinates equal $0$:
$$U = \{(x, 0, 0) \in \mathbb{F}^3 : x \in \mathbb{F}\}, \qquad W = \{(0, y, 0) \in \mathbb{F}^3 : y \in \mathbb{F}\}$$
Then:
$$U + W = \{(x, y, 0) \in \mathbb{F}^3 : x, y \in \mathbb{F}\}$$
Example
Suppose that $U = \{(x, x, y, y) \in \mathbb{F}^4 : x, y \in \mathbb{F}\}$ and $W = \{(x, x, x, y) \in \mathbb{F}^4 : x, y \in \mathbb{F}\}$. Then:
$$U + W = \{(x, x, y, z) \in \mathbb{F}^4 : x, y, z \in \mathbb{F}\}$$
As a fact, if $U_1, \ldots, U_m$ are subspaces of $V$, then $U_1 + \cdots + U_m$ is the smallest subspace of $V$ containing $U_1, \ldots, U_m$.
Proof
For clarity, call $S = U_1 + \cdots + U_m$. Clearly $0 \in S$ (take each $u_j = 0$), and $S$ is closed under addition and scalar multiplication. So therefore Chapter 1 - Vector Spaces#^2bacbe implies that $S$ is a subspace of $V$.
Now for size. Clearly each $U_j$ is contained in $S$, since we can choose $u_j$ freely from the corresponding $U_j$ and take the rest of the vectors to be $0$. This justifies that $S$ is large enough to contain every $U_j$.
Conversely, every subspace of $V$ containing $U_1, \ldots, U_m$ contains $S$ (because subspaces must contain all finite sums of their elements), so $S$ is as small as can be (we can't make a smaller subspace without missing some element of the sum).
Therefore, $S = U_1 + \cdots + U_m$ must be the smallest subspace of $V$ containing $U_1, \ldots, U_m$.
☐
Direct Sums
Suppose $U_1, \ldots, U_m$ are subspaces of $V$. Every element of $U_1 + \cdots + U_m$ can be written in the form:
$$u_1 + \cdots + u_m$$
where each $u_j$ is in $U_j$. We create a new definition using this idea:
Direct Sum
Suppose $U_1, \ldots, U_m$ are subspaces of $V$.
The sum $U_1 + \cdots + U_m$ is called a direct sum if each element of $U_1 + \cdots + U_m$ can be written in only one way as a sum $u_1 + \cdots + u_m$, where each $u_j \in U_j$.
If $U_1 + \cdots + U_m$ is a direct sum, then $U_1 \oplus \cdots \oplus U_m$ denotes $U_1 + \cdots + U_m$, where the $\oplus$ notation serves as an indication that this is a direct sum.
This looks like a confusing definition, but the examples really help here:
An Example
Suppose $U$ is the subspace of $\mathbb{F}^3$ of vectors whose last coordinate equals $0$, and $W$ is the subspace of $\mathbb{F}^3$ of vectors whose first two coordinates equal $0$:
$$U = \{(x, y, 0) \in \mathbb{F}^3 : x, y \in \mathbb{F}\}, \qquad W = \{(0, 0, z) \in \mathbb{F}^3 : z \in \mathbb{F}\}$$
Then $\mathbb{F}^3 = U \oplus W$.
A Counterexample
Let $U_1 = \{(x, y, 0) \in \mathbb{F}^3 : x, y \in \mathbb{F}\}$, $U_2 = \{(0, 0, z) \in \mathbb{F}^3 : z \in \mathbb{F}\}$, and $U_3 = \{(0, y, y) \in \mathbb{F}^3 : y \in \mathbb{F}\}$. $U_1 + U_2 + U_3 = \mathbb{F}^3$ because every vector $(x, y, z) \in \mathbb{F}^3$ can be composed of:
$$(x, y, z) = (x, y, 0) + (0, 0, z) + (0, 0, 0)$$
Where the first vector is from $U_1$, the second from $U_2$, and the third from $U_3$. But $\mathbb{F}^3$ isn't the direct sum of these sets, since $(0, 0, 0)$ can be written in two different ways as a sum $u_1 + u_2 + u_3$ with each $u_j \in U_j$. Specifically:
$$(0, 0, 0) = (0, 1, 0) + (0, 0, 1) + (0, -1, -1)$$
or even:
$$(0, 0, 0) = (0, 0, 0) + (0, 0, 0) + (0, 0, 0)$$
Where again the first vector is from $U_1$, the second from $U_2$, and the third from $U_3$.
The definition of direct sum requires that every vector in the sum have a unique representation as an appropriate sum.
This definition can be hard to use to prove things, but luckily there's an easy condition that we can use instead:
Condition for a direct sum
Suppose $U_1, \ldots, U_m$ are subspaces of $V$. Then $U_1 + \cdots + U_m$ is a direct sum iff the only way to write $0$ as a sum $u_1 + \cdots + u_m$, where each $u_j \in U_j$, is by taking each $u_j = 0$.
Lastly, we have an important result that you'll definitely want to use!
Direct sum of two subspaces
Suppose $U$ and $W$ are subspaces of $V$. Then $U + W$ is a direct sum iff $U \cap W = \{0\}$.
Proof
First suppose that $U + W$ is a direct sum. If $v \in U \cap W$, then $0 = v + (-v)$, where $v \in U$ and $-v \in W$. By the unique representation of $0$ as the sum of a vector in $U$ and a vector in $W$, we have $v = 0$. Thus $U \cap W = \{0\}$.
For the other way, suppose $U \cap W = \{0\}$. To prove that $U + W$ is a direct sum, suppose $u \in U$, $w \in W$, and $u + w = 0$. By the condition above, we only need to show that $u = w = 0$. The equation $u + w = 0$ implies that $u = -w \in W$, so then $u \in U \cap W$. Hence $u = 0$, so therefore $w = 0$.
☐
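A numeric illustration of this criterion in $\mathbb{R}^3$, using the example above ($U$ = last coordinate $0$, $W$ = first two coordinates $0$). Since $U \cap W = \{0\}$, every vector splits uniquely, and we can write the decomposition explicitly (`decompose` is our own name, reusing `vadd` from earlier):

```python
def decompose(v):
    """Split v in R^3 as u + w with u in U (last coord 0)
    and w in W (first two coords 0); unique since U ∩ W = {0}."""
    u = (v[0], v[1], 0)
    w = (0, 0, v[2])
    return u, w

v = (7, -2, 5)
u, w = decompose(v)
print(u, w)                 # (7, -2, 0) (0, 0, 5)
print(vadd(u, w) == v)      # True: v = u + w
```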
Note that this only works for two subspaces! If you're considering three or more, the analogous pairwise-intersection condition does not work.
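A standard counterexample (not from these notes, but easy to verify): in $\mathbb{F}^2$, let $U_1 = \{(x, 0)\}$, $U_2 = \{(0, y)\}$, and $U_3 = \{(x, x)\}$. All three pairwise intersections equal $\{0\}$, yet $U_1 + U_2 + U_3$ is not a direct sum, since $(0, 0) = (1, 0) + (0, 1) + (-1, -1)$ writes $0$ as a sum with nonzero terms.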