Chapter 1 - Vector Spaces

Intro

Linear algebra is the study of linear maps on vector spaces. We'll first look at $\mathbb{R}$ and $\mathbb{C}$ as our scalar fields. Then we'll generalize from $\mathbb{R}^n$ and $\mathbb{C}^n$ to the general notion of a vector space. Lastly, we'll talk about subspaces.

1.A: $\mathbb{R}^n$ and $\mathbb{C}^n$

Be familiar with the properties of $\mathbb{R}$ and $\mathbb{C}$:

Complex Numbers

  • A complex number is an ordered pair $(a,b)$, where $a,b\in\mathbb{R}$, but we write this as $a+bi$
  • The set of all complex numbers is denoted by $\mathbb{C}=\{a+bi : a,b\in\mathbb{R}\}$
  • Addition and multiplication on $\mathbb{C}$ are defined by:
    • $(a+bi)+(c+di)=(a+c)+(b+d)i$
    • $(a+bi)(c+di)=(ac-bd)+(ad+bc)i$
    • Here $a,b,c,d\in\mathbb{R}$

If $a\in\mathbb{R}$, we identify $a+0i$ with the real number $a$. So $\mathbb{R}\subseteq\mathbb{C}$. Usually $0+bi$ is written as just $bi$, and $0+1i=i$. You can verify from the definition that $i^2=-1$.
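The arithmetic rules above can be sanity-checked numerically. Below is a minimal sketch representing $a+bi$ as the pair `(a, b)`; the helper names `cadd` and `cmul` are mine, not from the text.

```python
def cadd(x, y):
    """(a+bi) + (c+di) = (a+c) + (b+d)i, with a+bi stored as the pair (a, b)."""
    a, b = x
    c, d = y
    return (a + c, b + d)

def cmul(x, y):
    """(a+bi)(c+di) = (ac-bd) + (ad+bc)i."""
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

# i^2 = -1, i.e. (0+1i)(0+1i) = -1+0i:
i = (0.0, 1.0)
print(cmul(i, i))  # (-1.0, 0.0)
```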

Complex numbers follow these properties:

Properties of Complex Arithmetic

For the following, let $\alpha,\beta,\lambda\in\mathbb{C}$ be arbitrary:

  • Commutativity: $\alpha+\beta=\beta+\alpha$ and $\alpha\beta=\beta\alpha$
  • Associativity: $(\alpha+\beta)+\lambda=\alpha+(\beta+\lambda)$ and $\alpha(\beta\lambda)=(\alpha\beta)\lambda$
  • Identities: $\lambda+0=\lambda$ and $\lambda 1=\lambda$
  • Additive Inverse: for every $\alpha\in\mathbb{C}$, there exists some number $\gamma\in\mathbb{C}$ such that $\alpha+\gamma=0$
  • Multiplicative Inverse: for every $\alpha\in\mathbb{C}$ with $\alpha\neq 0$, there is some number $\lambda\in\mathbb{C}$ such that $\alpha\lambda=1$
  • Distributive Property: $\lambda(\alpha+\beta)=\lambda\alpha+\lambda\beta$

We want to emphasize proving one of these properties from the definitions alone:

Multiplicative Commutativity

Show that $\alpha\beta=\beta\alpha$ for all $\alpha,\beta\in\mathbb{C}$.

Proof
Let $\alpha=a+bi\in\mathbb{C}$ and $\beta=c+di\in\mathbb{C}$ be arbitrary ($a,b,c,d\in\mathbb{R}$). Using the rules of complex multiplication and addition, we get:

$$\alpha\beta=(a+bi)(c+di)=(ac-bd)+(ad+bc)i$$

And similarly:

$$\beta\alpha=(c+di)(a+bi)=(ca-db)+(cb+da)i$$

The commutativity of multiplication and addition on $\mathbb{R}$ shows that $\alpha\beta=\beta\alpha$.

$-\alpha$, subtraction, $1/\alpha$, division

Let $\alpha,\beta\in\mathbb{C}$:

  • Let $-\alpha$ denote the additive inverse of $\alpha$. So $-\alpha$ is the unique complex number such that:
$$\alpha+(-\alpha)=0$$
  • Subtraction on $\mathbb{C}$ is defined by:
$$\beta-\alpha=\beta+(-\alpha)$$
  • For $\alpha\neq 0$, let $1/\alpha$ denote the multiplicative inverse of $\alpha$. Thus $1/\alpha$ is the unique complex number such that:
$$\alpha(1/\alpha)=1$$
  • Division on $\mathbb{C}$ is defined by:
$$\beta/\alpha=\beta(1/\alpha)$$

For future reference, to allow many future theorems and proofs to apply to both $\mathbb{R}$ and $\mathbb{C}$, we let the generic symbol $\mathbb{F}$ stand for either $\mathbb{R}$ or $\mathbb{C}$.

Lists

We define $\mathbb{R}^2$ and $\mathbb{R}^3$ as the following:

$\mathbb{R}^2$, $\mathbb{R}^3$

  • The set $\mathbb{R}^2=\{(x,y):x,y\in\mathbb{R}\}$
  • $\mathbb{R}^3=\{(x,y,z):x,y,z\in\mathbb{R}\}$

To generalize to any $\mathbb{R}^n$, we need to talk about lists:

list, length

Suppose $n\in\mathbb{N}$. A list of length $n$ is an ordered collection of $n$ elements, separated by commas:

$$(x_1,\ldots,x_n)$$

Two lists are equal iff they have the same length and the same elements in the same order.

Note that sometimes we don't specify the length of a list. But an object of infinite length, such as $(x_1,x_2,\ldots)$, is not a list.

A list of length $0$ is $()$. Lists are different from sets in that order matters for lists, and repetitions are allowed (neither of which is the case for sets).

Lists vs. Sets

  • The lists $(3,5)$ and $(5,3)$ aren't equal, but the sets $\{3,5\}$ and $\{5,3\}$ are equal.
  • The lists $(4,4)$ and $(4,4,4)$ aren't equal, but the sets $\{4,4\}$ and $\{4,4,4\}$ are (both equal $\{4\}$).
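Python's tuples and sets mirror this distinction exactly (tuples are ordered with repetitions allowed, sets are not), so the examples above can be checked directly:

```python
# Lists (as tuples): order and length matter.
print((3, 5) == (5, 3))      # False
print((4, 4) == (4, 4, 4))   # False

# Sets: order is irrelevant and repetitions collapse.
print({3, 5} == {5, 3})            # True
print({4, 4} == {4, 4, 4} == {4})  # True
```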

Thus we get:

$\mathbb{F}^n$

$\mathbb{F}^n$ is the set of all lists of length $n$ of elements of $\mathbb{F}$. For $(x_1,\ldots,x_n)\in\mathbb{F}^n$ and $j\in\{1,\ldots,n\}$, we say that $x_j$ is the $j$-th coordinate of $(x_1,\ldots,x_n)$.

We cannot geometrically visualize $\mathbb{R}^n$ when $n\geq 4$, or $\mathbb{C}^n$ when $n\geq 2$, so we have to consider the algebraic manipulations instead:

Addition in $\mathbb{F}^n$

Addition in $\mathbb{F}^n$ is defined by adding corresponding coordinates:

$$(x_1,\ldots,x_n)+(y_1,\ldots,y_n)=(x_1+y_1,\ldots,x_n+y_n)$$

Commutativity of addition in $\mathbb{F}^n$

If $x,y\in\mathbb{F}^n$, then $x+y=y+x$.

Proof
Let $x=(x_1,\ldots,x_n)$ and $y=(y_1,\ldots,y_n)$ in $\mathbb{F}^n$ be arbitrary. Then:

$$x+y=(x_1,\ldots,x_n)+(y_1,\ldots,y_n)=(x_1+y_1,\ldots,x_n+y_n)=(y_1+x_1,\ldots,y_n+x_n)=(y_1,\ldots,y_n)+(x_1,\ldots,x_n)=y+x$$

where the middle equality follows from the commutativity of addition in $\mathbb{F}$.

We define $0=(0,\ldots,0)$, the list of $n$ zeros, in the corresponding $\mathbb{F}^n$.

We may want to draw these vectors in $\mathbb{R}^2$, but keep in mind that this is a visual aid to see what's going on. Normally we won't care about the geometry of, say, 5-dimensional space; instead we'll use the algebra to verify and prove that our intuition is correct in those spaces:

Additive Inverse in $\mathbb{F}^n$

For $x\in\mathbb{F}^n$, the additive inverse of $x$, denoted $-x$, is the vector $-x\in\mathbb{F}^n$ such that:

$$x+(-x)=0$$

In other words, if $x=(x_1,\ldots,x_n)$ then $-x=(-x_1,\ldots,-x_n)$.

Geometrically, it means flipping the original vector 180 degrees.

Scalar Multiplication in $\mathbb{F}^n$

The product of a number $\lambda$ and a vector in $\mathbb{F}^n$ is computed by multiplying each coordinate of the vector by $\lambda$:

$$\lambda(x_1,\ldots,x_n)=(\lambda x_1,\ldots,\lambda x_n)$$

where $\lambda\in\mathbb{F}$ and $(x_1,\ldots,x_n)\in\mathbb{F}^n$.

Here $\lambda$ scales the length of $x$ by a factor; if $\lambda$ is negative, then it also flips the vector's direction as it scales it.
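The three operations on $\mathbb{F}^n$ defined above (coordinate-wise addition, additive inverse, scalar multiplication) can be sketched on tuples; the helper names `vadd`, `vneg`, and `smul` are mine:

```python
def vadd(x, y):
    """Coordinate-wise addition in F^n."""
    return tuple(a + b for a, b in zip(x, y))

def smul(lam, x):
    """Scalar multiplication: multiply each coordinate by lam."""
    return tuple(lam * a for a in x)

def vneg(x):
    """Additive inverse: negate each coordinate, i.e. (-1)x."""
    return smul(-1, x)

x, y = (1, 2, 3), (4, 5, 6)
print(vadd(x, y))        # (5, 7, 9)
print(smul(2, x))        # (2, 4, 6)
print(vadd(x, vneg(x)))  # (0, 0, 0), the additive identity
```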

Fields

A field is a set containing at least two distinct elements called $0$ and $1$, along with operations of addition and multiplication satisfying all the properties listed in Chapter 1 - Vector Spaces#^4af3e2. So $\mathbb{R}$ and $\mathbb{C}$ are fields, as is $\mathbb{Q}$. The set $\{0,1\}$ is also a field with the usual operations, except that in this set we define $1+1=0$.

1.B: Definition of Vector Space

Addition, Scalar Multiplication

  • An addition on a set $V$ is a function that assigns an element $u+v\in V$ to each pair of elements $u,v\in V$.
  • A scalar multiplication on a set $V$ is a function that assigns an element $\lambda v\in V$ to each $\lambda\in\mathbb{F}$ and each $v\in V$.

Vector Space

A vector space is a set $V$ along with an addition on $V$ and a scalar multiplication on $V$ such that the following properties hold:

  • Commutativity: $u+v=v+u$ for all $u,v\in V$.
  • Associativity: $(u+v)+w=u+(v+w)$ and $(ab)v=a(bv)$ for all $u,v,w\in V$ and all $a,b\in\mathbb{F}$.
  • Additive Identity: there exists an element $0\in V$ such that $v+0=v$ for all $v\in V$.
  • Additive Inverse: for every $v\in V$ there exists $w\in V$ such that $v+w=0$.
  • Multiplicative Identity: $1v=v$ for all $v\in V$.
  • Distributive Properties: $a(u+v)=au+av$ and $(a+b)v=av+bv$ for all $a,b\in\mathbb{F}$ and all $u,v\in V$.

Elements of a vector space are called vectors or points. Based on the scalar multiplication operation, we say that $V$ is a vector space over $\mathbb{F}$. A vector space over $\mathbb{R}$ is a real vector space, and one over $\mathbb{C}$ is a complex vector space.

Example

$\mathbb{F}^\infty$ is defined to be the set of all sequences of elements of $\mathbb{F}$:

$$\mathbb{F}^\infty=\{(x_1,x_2,\ldots):x_j\in\mathbb{F}\text{ for all }j\in\mathbb{N}\}$$

Addition and scalar multiplication are defined coordinate-wise, as expected. $\mathbb{F}^\infty$ is a vector space, as you can verify.

The book references Lecture 2 - Vector Spaces (cont.)#^86311a as an example. But notice that $\mathbb{F}^n$ and $\mathbb{F}^\infty$ are really similar. They are special cases of the vector space $\mathbb{F}^S$ (functions from a set $S$ to $\mathbb{F}$), since a list of length $n$ of numbers in $\mathbb{F}$ can be thought of as a function from $\{1,2,\ldots,n\}$ to $\mathbb{F}$, and a sequence of numbers in $\mathbb{F}$ can be thought of as a function from the set of positive integers to $\mathbb{F}$. That is, $\mathbb{F}^n$ is really just $\mathbb{F}^{\{1,2,\ldots,n\}}$ and $\mathbb{F}^\infty$ is just $\mathbb{F}^{\{1,2,\ldots\}}$.

Unique Additive Identity

A vector space $V$ has a unique additive identity $0$.

Proof
We already know that $0$ exists by definition. Suppose $0$ and $0'$ are both additive identities for our vector space $V$. Then:

$$0'=0'+0=0+0'=0$$

The first equality applies the definition of the additive identity $0$, and the last applies the definition of $0'$. The middle equality comes from the commutativity of vector addition. Thus, $0'=0$.

Unique Additive Inverse

Every element $v\in V$ in a vector space $V$ has a unique additive inverse.

Proof
Suppose $V$ is a vector space, and let $v\in V$ be arbitrary. We know from the definition that an additive inverse of $v$ exists. So suppose $w,w'\in V$ are both additive inverses of $v$. Notice that:

$$w=w+0=w+(v+w')=(w+v)+w'=0+w'=w'$$

Thus $w=w'$, so the additive inverse is unique.


For notation's sake, if $v\in V$, we denote its additive inverse by $-v\in V$. Further, instead of writing $w+(-v)$ we'll just write $w-v$. Also, since we keep restating that $V$ is a vector space, from now on we implicitly assume that $V$ is a vector space over $\mathbb{F}$.

The number 0 times a vector

$0v=0$ for every $v\in V$.

Proof
For $v\in V$, we have:

$$0v=(0+0)v=0v+0v$$

Add the additive inverse of $0v$ to both sides to get $0=0v$, as expected.

Clearly the vectors and scalars are getting mixed here: in $0v=0$, the $0$ on the left is a scalar while the $0$ on the right is a vector. When it matters, we'll denote the vector with an arrow ($\vec{0}$) and use no arrow for the scalar ($0$).

A number times the vector 0

$a0=0$ for every $a\in\mathbb{F}$. (Here both $0$s are vectors.)

Proof
For $a\in\mathbb{F}$, we have:

$$a0=a(0+0)=a0+a0$$

Again, add the additive inverse of $a0$ to both sides to get $0=a0$, as expected.

The number -1 times a vector

$(-1)v=-v$ for every $v\in V$.

Proof
Let $v\in V$ be arbitrary. Then:

$$v+(-1)v=1v+(-1)v=(1+(-1))v=0v=0$$

Thus $(-1)v$ is the additive inverse of $v$, so it equals $-v$.

1.C: Subspaces

Subspace

A subset $U\subseteq V$ is called a subspace of $V$ if $U$ is also a vector space (using the same addition and scalar multiplication as on $V$).

For instance, the set $\{(x_1,x_2,0):x_1,x_2\in\mathbb{F}\}\subseteq\mathbb{F}^3$ is a subspace of $\mathbb{F}^3$. We can check whether a subset is a subspace using the following:

Conditions for a subspace

A subset $U\subseteq V$ is a subspace of $V$ iff $U$ satisfies the following three conditions:

  1. Additive Identity: $0\in U$
  2. Closed Under Addition: $u,w\in U$ implies that $u+w\in U$
  3. Closed Under Scalar Multiplication: $a\in\mathbb{F}$ and $u\in U$ implies that $au\in U$

Proof
If $U$ is a subspace of $V$, then $U$ satisfies the three conditions above (by definition). Conversely, suppose $U$ satisfies the three conditions above. The first condition ensures that the additive identity of $V$ is in $U$. The second condition ensures addition makes sense on $U$. Similarly for the third condition and scalar multiplication.

If $u\in U$, then $-u\in U$, since $(-1)u=-u$ by Chapter 1 - Vector Spaces#^21334b and $U$ is closed under scalar multiplication, so the additive inverse always exists in $U$. Commutativity, associativity, distributivity, and so on are all satisfied in $U$ because they are satisfied in $V$.

Therefore, $U$ is a subspace of $V$.
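The three conditions can be spot-checked numerically on sample data. This is only a sketch (`in_U` and the sample vectors are my own choices): a passing check proves nothing, since closure must be shown algebraically for all vectors, but a failing check would disprove subspace-hood.

```python
def in_U(v):
    """Membership test for U = {(x1, x2, 0)} inside F^3, the example above."""
    return len(v) == 3 and v[2] == 0

samples = [(1, 2, 0), (-3, 0.5, 0)]
scalars = [0, 1, -2, 0.5]

assert in_U((0, 0, 0))  # condition 1: additive identity
for u in samples:
    for w in samples:
        assert in_U(tuple(a + b for a, b in zip(u, w)))  # condition 2: closed under +
    for lam in scalars:
        assert in_U(tuple(lam * a for a in u))  # condition 3: closed under scalar mult
print("spot-checks passed")
```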

Sums of Subspaces

Sum of Subsets

Suppose $U_1,\ldots,U_m$ are subsets of $V$. The sum of $U_1,\ldots,U_m$, denoted $U_1+\cdots+U_m$, is the set of all possible sums of elements of $U_1,\ldots,U_m$. More precisely:

$$U_1+\cdots+U_m=\{u_1+\cdots+u_m:u_1\in U_1,\ldots,u_m\in U_m\}$$

Let's look at an example:

Example

Suppose $U$ is the set of all elements of $\mathbb{F}^3$ whose second and third coordinates equal $0$, and $W$ is the set of all elements of $\mathbb{F}^3$ whose first and third coordinates equal $0$:

$$U=\{(x,0,0)\in\mathbb{F}^3:x\in\mathbb{F}\},\quad W=\{(0,y,0)\in\mathbb{F}^3:y\in\mathbb{F}\}$$

Then:

$$U+W=\{(x,y,0):x,y\in\mathbb{F}\}$$
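This first example can be illustrated numerically: any sum of an element of $U$ and an element of $W$ lands in $\{(x,y,0)\}$, and conversely any $(x,y,0)$ is realized. A minimal sketch (helper names are mine):

```python
def u(x):
    """An element (x, 0, 0) of U."""
    return (x, 0, 0)

def w(y):
    """An element (0, y, 0) of W."""
    return (0, y, 0)

def add(a, b):
    """Coordinate-wise addition in F^3."""
    return tuple(p + q for p, q in zip(a, b))

print(add(u(2), w(5)))  # (2, 5, 0) -- third coordinate is always 0
```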
Example

Suppose that $U=\{(x,x,y,y)\in\mathbb{F}^4:x,y\in\mathbb{F}\}$ and $W=\{(x,x,x,y)\in\mathbb{F}^4:x,y\in\mathbb{F}\}$. Then:

$$U+W=\{(x,x,y,z)\in\mathbb{F}^4:x,y,z\in\mathbb{F}\}$$

As a fact, if $U_1,\ldots,U_m$ are subspaces of $V$, then $U_1+\cdots+U_m$ is the smallest subspace of $V$ containing $U_1,\ldots,U_m$.

Proof
For clarity, write $U_c=U_1+\cdots+U_m$. Clearly $0\in U_c$, and $U_c$ is closed under addition and scalar multiplication. Therefore Chapter 1 - Vector Spaces#^2bacbe implies that $U_c$ is a subspace of $V$.

Now for minimality. Each $U_i$ is contained in $U_c$: given any $u_i\in U_i$, choose $u_j=0\in U_j$ for every $j\neq i$, so that $u_i=u_1+\cdots+u_m\in U_c$. Thus $U_c$ is a subspace containing $U_1,\ldots,U_m$.

Conversely, every subspace of $V$ containing $U_1,\ldots,U_m$ contains $U_c$ (because subspaces must contain all finite sums of their elements), so $U_c$ is contained in every such subspace.

Therefore, $U_c$ must be the smallest subspace of $V$ containing $U_1,\ldots,U_m$.

Direct Sums

Suppose $U_1,\ldots,U_m$ are subspaces of $V$. Every element of $U_1+\cdots+U_m$ can be written in the form:

$$u_1+\cdots+u_m$$

where each $u_j$ is in $U_j$. We create a new definition using this idea:

Direct Sum

Suppose $U_1,\ldots,U_m$ are subspaces of $V$.

  • The sum $U_1+\cdots+U_m$ is called a direct sum if each element of $U_1+\cdots+U_m$ can be written in only one way as a sum $u_1+\cdots+u_m$, where each $u_j\in U_j$.
  • If $U_1+\cdots+U_m$ is a direct sum, then $U_1\oplus\cdots\oplus U_m$ denotes $U_1+\cdots+U_m$, where the $\oplus$ notation serves as an indication that this is a direct sum.

This looks like a confusing definition, but the examples really help here:

An Example

Suppose $U$ is the subspace of $\mathbb{F}^3$ of vectors whose last coordinate equals $0$, and $W$ is the subspace of $\mathbb{F}^3$ of vectors whose first two coordinates equal $0$. Then $\mathbb{F}^3=U\oplus W$.

A Counterexample

Let $U_1=\{(x,y,0)\in\mathbb{F}^3:x,y\in\mathbb{F}\}$, $U_2=\{(0,0,z)\in\mathbb{F}^3:z\in\mathbb{F}\}$, and $U_3=\{(0,y,y)\in\mathbb{F}^3:y\in\mathbb{F}\}$. Then $U_1+U_2+U_3=\mathbb{F}^3$, because every vector $(x,y,z)\in\mathbb{F}^3$ can be written as:

$$(x,y,z)=(x,y,0)+(0,0,z)+(0,0,0)$$

where the first vector is from $U_1$, the second from $U_2$, and the third from $U_3$. However, $U_1+U_2+U_3$ is not a direct sum, since $(0,0,0)$ can be written in two different ways as a sum $u_1+u_2+u_3$ with each $u_j\in U_j$. Specifically:

$$(0,0,0)=(0,1,0)+(0,0,1)+(0,-1,-1)$$

or even:

$$(0,0,0)=(0,0,0)+(0,0,0)+(0,0,0)$$

where again the first vector is from $U_1$, the second from $U_2$, and the third from $U_3$.

The definition of direct sum requires that every vector in the sum have a unique representation as an appropriate sum.
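The counterexample above can be checked numerically: two visibly different triples $(u_1,u_2,u_3)$, each with $u_j\in U_j$, sum to the same vector $(0,0,0)$, so the representation is not unique (`add3` is a helper name of mine):

```python
def add3(a, b, c):
    """Sum of three vectors in F^3, coordinate-wise."""
    return tuple(x + y + z for x, y, z in zip(a, b, c))

rep1 = ((0, 1, 0), (0, 0, 1), (0, -1, -1))   # u1 in U1, u2 in U2, u3 in U3
rep2 = ((0, 0, 0), (0, 0, 0), (0, 0, 0))     # the trivial representation

print(add3(*rep1))   # (0, 0, 0)
print(add3(*rep2))   # (0, 0, 0)
print(rep1 == rep2)  # False: two distinct representations of the zero vector
```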

This definition can be hard to use to prove things, but luckily there's an easy condition that we can use instead:

Condition for a direct sum

Suppose $U_1,\ldots,U_m$ are subspaces of $V$. Then $U_1+\cdots+U_m$ is a direct sum iff the only way to write $0$ as a sum $u_1+\cdots+u_m$, where each $u_j\in U_j$, is by taking each $u_j=0$.

I would write the proof, but it's very non-intuitive and the proof can be seen in Year3/Winter2024/MATH306-LinearAlgebraII/2015_Book_LinearAlgebraDoneRight.pdf#pages=23 if you're so curious.

Lastly, we have an important definition that you'll want to definitely use!

Direct sum of two subspaces

Suppose $U$ and $W$ are subspaces of $V$. Then $U+W$ is a direct sum iff $U\cap W=\{0\}$.

Proof
First suppose that $U+W$ is a direct sum. If $v\in U\cap W$, then $0=v+(-v)$, where $v\in U$ and $-v\in W$. By the unique representation of $0$ as the sum of a vector in $U$ and a vector in $W$, we have $v=0$. Thus $U\cap W=\{0\}$.

For the other direction, suppose $U\cap W=\{0\}$. To prove that $U+W$ is a direct sum, suppose $u\in U$, $w\in W$, and $0=u+w$; by the previous result, it suffices to show that $u=w=0$. The equation implies that $u=-w\in W$, so $u\in U\cap W$. Hence $u=0$, and therefore $w=0$ as well.
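For the earlier example subspaces $U=\{(x,0,0)\}$ and $W=\{(0,y,0)\}$ of $\mathbb{F}^3$, this criterion can be illustrated on sample vectors (a sketch only; `in_U`, `in_W`, and the samples are my own choices, and checking samples is an illustration, not a proof):

```python
def in_U(v):
    """Membership in U = {(x, 0, 0)}."""
    return v[1] == 0 and v[2] == 0

def in_W(v):
    """Membership in W = {(0, y, 0)}."""
    return v[0] == 0 and v[2] == 0

samples = [(1, 0, 0), (0, 2, 0), (0, 0, 0), (3, 4, 0)]
both = [v for v in samples if in_U(v) and in_W(v)]
print(both)  # [(0, 0, 0)] -- only the zero vector lies in the intersection
```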

Note that this only works for two subspaces! If you are considering three or more, pairwise intersections equal to $\{0\}$ are not enough to guarantee a direct sum.