HW 1 - Vector Spaces

1.A

# 1

Inverse Complex Number
Suppose $a, b \in \mathbb{R}$, not both equal to $0$. Find $c, d \in \mathbb{R}$ such that:

$$\frac{1}{a+bi} = c + di$$

Proof

I claim that the values:

$$c = \frac{a}{a^2+b^2}, \qquad d = \frac{-b}{a^2+b^2}$$

work. First, let $a+bi$ be arbitrary, where $a$ and $b$ are not both $0$. Then notice that:

$$\frac{1}{a+bi} = \frac{a-bi}{(a+bi)(a-bi)} = \frac{a-bi}{a^2+b^2} = \frac{a}{a^2+b^2} + \frac{-b}{a^2+b^2}i$$

Thus, choose $c = \frac{a}{a^2+b^2} \in \mathbb{R}$ and $d = \frac{-b}{a^2+b^2} \in \mathbb{R}$. ☐
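
As a quick numeric sanity check (not part of the proof), a short Python snippet can confirm the formula on a few arbitrary sample pairs:

```python
# Sanity check: for sample (a, b) pairs with a, b not both zero, verify that
# (a + bi)(c + di) = 1 with c = a/(a^2 + b^2) and d = -b/(a^2 + b^2).
for a, b in [(1.0, 2.0), (3.0, -4.0), (0.0, 5.0), (-2.5, 0.0)]:
    denom = a**2 + b**2
    c, d = a / denom, -b / denom
    product = complex(a, b) * complex(c, d)
    assert abs(product - 1) < 1e-12, (a, b, product)
print("all inverses verified")
```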

# 3

Find two distinct square roots of $i$.

Proof
We are asked to find $\sqrt{i}$, but more specifically, a value $x$ such that $x^2 = i$. Notice first that if we find some $x$ that satisfies this, then $-x$ also satisfies it, since:

$$(-x)^2 = x^2 = i$$

So therefore, let's just consider the $+x$ case. Since $x \in \mathbb{C}$, we can write $x = a + bi$ where $a, b \in \mathbb{R}$. Therefore:

$$x^2 = (a+bi)^2 = a^2 - b^2 + 2abi = i$$

So clearly $2ab = 1$, and we also require that $a^2 - b^2 = 0$. Looking at the latter:

$$a^2 - b^2 = 0 \implies a^2 = b^2 \implies |a| = |b|$$

Now notice the first equation:

$$2ab = 1 \implies ab = \frac{1}{2}$$

Now since $\frac{1}{2} > 0$, either both $a, b$ are negative, or both are positive. This is okay, as if we replace $a, b$ with $-a, -b$ in the definition for $x$, our solution is still correct:

$$x^2 = (-a-bi)^2 = [(-1)(a+bi)]^2 = (a+bi)^2 = i$$

Thus, we can just consider the $+a, +b$ case. Then $|a| = a$ and $|b| = b$, so since $|a| = |b|$, we get $a = b$. Thus, let's just consider the $a$ variable:

$$ab = \frac{1}{2} \implies a \cdot a = \frac{1}{2} \implies a^2 = \frac{1}{2} \implies a = \pm\frac{\sqrt{2}}{2}$$

Therefore, since $a = b$, and accounting for the $-a, -b$ case, our two solutions are:

$$x_1 = \frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i, \qquad x_2 = -\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2}i$$
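
A quick numeric check (outside the proof) confirms that both claimed values really square to $i$:

```python
# Verify both claimed square roots of i: (±(sqrt(2)/2 + sqrt(2)/2 i))^2 = i.
import math

r = math.sqrt(2) / 2
x1 = complex(r, r)
x2 = complex(-r, -r)
assert abs(x1**2 - 1j) < 1e-12
assert abs(x2**2 - 1j) < 1e-12
print("both roots verified")
```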

# 8

Existence of the multiplicative inverse.

For all $\alpha \in \mathbb{C}$ with $\alpha \neq 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha\beta = 1$.

Proof
Let $\alpha \in \mathbb{C}$ be arbitrary, where $\alpha \neq 0$. Then, by definition, $\alpha = a + bi$ for some $a, b \in \mathbb{R}$. Notice that $a$ and $b$ cannot both be $0$, since if that were the case then $\alpha = a + bi = 0$, which contradicts $\alpha \neq 0$.

But from HW 1 - Vector Spaces#1 we know that there exists a number $c + di \in \mathbb{C}$ such that $\frac{1}{a+bi} = c + di$. Choose $\beta = c + di$. Then, replacing $a + bi$ with $\alpha$ and $c + di$ with $\beta$, we get $\frac{1}{\alpha} = \beta$. Multiplying both sides by $\alpha$ gives $1 = \alpha\beta$.

Thus, we can always choose $\beta$ such that $\alpha\beta = 1$. For uniqueness, suppose $\alpha\beta = \alpha\beta' = 1$. Then $\beta = \beta(\alpha\beta') = (\beta\alpha)\beta' = 1\beta' = \beta'$, so $\beta$ is unique. ☐

# 11

Theorem

There doesn't exist a $\lambda \in \mathbb{C}$ such that:

$$\lambda(2-3i, 5+4i, -6+7i) = (12-5i, 7+22i, -32-9i)$$

Proof
Assume for contradiction that there is such a $\lambda \in \mathbb{C}$. Then, by the operation of scalar multiplication:

$$\lambda(2-3i, 5+4i, -6+7i) = (\lambda(2-3i), \lambda(5+4i), \lambda(-6+7i)) = (12-5i, 7+22i, -32-9i)$$

By definition, these vectors are equal iff their components are equal, so in particular:

$$\lambda(2-3i) = 12-5i$$

Since $2-3i \neq 0$, we can divide to solve for $\lambda$:

$$\lambda = \frac{12-5i}{2-3i} = \frac{(12-5i)(2+3i)}{(2-3i)(2+3i)} = \frac{39+26i}{13} = 3+2i$$

But plugging this into the third component gives $\lambda(-6+7i) = (3+2i)(-6+7i) = -32+9i \neq -32-9i$. This is a contradiction, so via proof by contradiction, the theorem holds. ☐
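A small Python check mirrors this argument: solve for $\lambda$ from the first coordinate and watch the third coordinate fail to match.

```python
# lambda is forced by the first coordinate; the third coordinate then disagrees.
lam = (12 - 5j) / (2 - 3j)
assert abs(lam - (3 + 2j)) < 1e-12            # lambda = 3 + 2i

assert abs(lam * (5 + 4j) - (7 + 22j)) < 1e-12  # second coordinate matches
third = lam * (-6 + 7j)
assert abs(third - (-32 - 9j)) > 1              # third does NOT equal -32 - 9i
print("third coordinate is", third)
```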

# 12

Theorem

$(x+y)+z = x+(y+z)$ for all $x, y, z \in \mathbb{F}^n$.

Proof
We use the fact that $\mathbb{F}$ is a field, so addition in $\mathbb{F}$ is associative. Let $x, y, z \in \mathbb{F}^n$ be arbitrary. We may write:

$$x = (x_1, \ldots, x_n), \quad y = (y_1, \ldots, y_n), \quad z = (z_1, \ldots, z_n)$$

Now, consider the following:

$$\begin{align}
(x+y)+z &= [(x_1, \ldots, x_n) + (y_1, \ldots, y_n)] + (z_1, \ldots, z_n) \\
&= (x_1 + y_1, \ldots, x_n + y_n) + (z_1, \ldots, z_n) &&\text{(Vector Addition)} \\
&= ((x_1 + y_1) + z_1, \ldots, (x_n + y_n) + z_n) &&\text{(Vector Addition)} \\
&= (x_1 + (y_1 + z_1), \ldots, x_n + (y_n + z_n)) &&\text{(Associativity of } \mathbb{F}\text{)} \\
&= (x_1, \ldots, x_n) + (y_1 + z_1, \ldots, y_n + z_n) &&\text{(Vector Addition)} \\
&= (x_1, \ldots, x_n) + [(y_1, \ldots, y_n) + (z_1, \ldots, z_n)] &&\text{(Vector Addition)} \\
&= x + (y+z)
\end{align}$$
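
The componentwise argument above can be sanity-checked on a finite sample, modeling vectors in $\mathbb{R}^3$ as tuples with componentwise addition (the sample values are arbitrary):

```python
# Componentwise vector addition on tuples; associativity follows from
# associativity of the underlying field, checked here on one sample.
def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 9)
assert vadd(vadd(x, y), z) == vadd(x, vadd(y, z))
print("associativity holds on the sample")
```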

1.B

# 1

Theorem

$-(-v) = v$ for all $v \in V$.

Proof
Notice that:

$$\begin{align}
(-v) + (-(-v)) &= 0 &&\text{(Additive Inverse)} \\
v + (-v) + (-(-v)) &= v &&\text{(Add } v \text{ to both sides)} \\
0 + (-(-v)) &= v &&\text{(Additive Inverse)} \\
-(-v) &= v &&\text{(Definition of } 0\text{)}
\end{align}$$

# 2

Theorem

Suppose that $a \in \mathbb{F}$ and $v \in V$, and $av = 0$. Then either $a = 0$ or $v = 0$.

Proof
If $a = 0$ already then we are done, so instead suppose that $a \neq 0$. Notice then that, by the field properties of $\mathbb{F}$, the multiplicative inverse $\frac{1}{a}$ exists. Thus:

$$\begin{align}
\frac{1}{a}(av) &= v &&\left(\tfrac{a}{a} = 1\right) \\
\frac{1}{a} \cdot 0 &= v &&\text{(Given } av = 0\text{)} \\
0 &= v &&\text{(Theorem 1.30 from the textbook)}
\end{align}$$

Therefore, we either get $a = 0$, or we get $v = 0$. ☐

# 4

The empty set is not a vector space: a vector space must contain an additive identity $0$, but $\emptyset$ contains no elements at all, so $0 \notin \emptyset$. Thus $\emptyset$ is not a vector space.

# 5

Theorem

The conditions for a vector space $V$ are unchanged if, instead of requiring an additive inverse for every $v \in V$, we instead require that $0v = 0$ for all $v \in V$.

Proof

We prove that the additive inverse exists for all $v \in V$ using only the definition of a vector space (except for the additive inverse's existence itself) and the condition that $0v = 0$.

Let $v \in V$ be arbitrary. Notice that:

$$\begin{align}
0v &= 0 &&\text{(Given)} \\
(1 + (-1))v &= 0 &&(0 = 1 + (-1)) \\
1v + (-1)v &= 0 &&\text{(Distributive Property)}
\end{align}$$

Thus, since $1v = v$, we have $v + (-1)v = 0$, so $(-1)v$ is an additive inverse of $v$, as required. ☐
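
As a finite sanity check (modeling vectors in $\mathbb{R}^3$ as tuples, with arbitrary sample values), the vector $(-1)v$ really does cancel $v$:

```python
# Check that (-1)*v acts as an additive inverse of v for sample vectors in R^3.
def scale(c, v):
    return tuple(c * a for a in v)

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

for v in [(1.0, -2.0, 3.0), (0.0, 0.0, 0.0), (4.5, 4.5, -9.0)]:
    assert add(v, scale(-1, v)) == (0.0, 0.0, 0.0)
print("(-1)v is an additive inverse on the samples")
```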

1.C

# 1

a

The set:

$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 + 2x_2 + 3x_3 = 0\}$$

is a subspace.

  1. $0 = (0,0,0) \in S$.
  2. Let $x, y \in S$ be arbitrary. You may say that $x = (x_1, x_2, x_3)$ such that $x_1 + 2x_2 + 3x_3 = 0$. Similarly, $y = (y_1, y_2, y_3)$ and $y_1 + 2y_2 + 3y_3 = 0$. Notice then that:
$$x + y = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$$
But notice that:
$$\begin{align}
(x_1 + y_1) + 2(x_2 + y_2) + 3(x_3 + y_3) &= (x_1 + 2x_2 + 3x_3) + (y_1 + 2y_2 + 3y_3) \\
&= 0 + 0 \\
&= 0
\end{align}$$
So by definition $x + y \in S$.
  3. Let $\alpha \in \mathbb{F}$ be arbitrary. Notice that $\alpha x = (\alpha x_1, \alpha x_2, \alpha x_3)$, and that:
$$\begin{align}
\alpha x_1 + 2\alpha x_2 + 3\alpha x_3 &= \alpha(x_1 + 2x_2 + 3x_3) \\
&= \alpha(0) \\
&= 0
\end{align}$$
So then by definition again $\alpha x \in S$.

b

The set:

$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 + 2x_2 + 3x_3 = 4\}$$

is *not* a subspace: $\vec{0} = (0,0,0) \notin S$ since $0 + 2\cdot 0 + 3\cdot 0 = 0 \neq 4$.

c

The set:

$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 x_2 x_3 = 0\}$$

is *not* a subspace, since if we consider $v = (1,1,0)$ and $w = (0,0,1)$, then $v + w = (1,1,1) \notin S$.

d

The set:

$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 = 5x_3\}$$

*is* a subspace. Similar to my reasoning for part a, $\vec{0} \in S$, and the set is still closed under vector addition and under scalar multiplication.

# 3

Theorem

The set of differentiable real-valued functions $f$ on the interval $(-4,4)$ such that $f'(-1) = 3f(2)$ is a subspace of $\mathbb{R}^{(-4,4)}$.

Proof
For convenience, denote our set $S$:

  1. $\vec{0}$ in this case is the function $f(x) = 0$ defined on $(-4,4)$. Notice that $f'(x) = 0$ still, so then:

$$f'(-1) = 0 = 3 \cdot 0 = 3f(2)$$

So then $\vec{0} \in S$.
  2. Let $f, g \in S$ be arbitrary, and thus $f'(-1) = 3f(2)$ and likewise for $g$ as well. Notice then that $(f+g)(x) = f(x) + g(x)$, so then:

$$(f+g)'(-1) = f'(-1) + g'(-1) = 3f(2) + 3g(2) = 3(f(2) + g(2)) = 3(f+g)(2)$$

So then $f + g \in S$.
  3. Let $\alpha \in \mathbb{R}$ be arbitrary, and $f \in S$. Then $f'(-1) = 3f(2)$. Thus, since $(\alpha f)(x) = \alpha f(x)$, then:

$$(\alpha f)'(-1) = \alpha f'(-1) = \alpha \cdot 3f(2) = 3\alpha f(2) = 3(\alpha f)(2)$$

So therefore $\alpha f \in S$. Thus, $S$ is a subspace of $\mathbb{R}^{(-4,4)}$. ☐

# 5

Theorem

$\mathbb{R}^2$ is *not* a subspace of $\mathbb{C}^2$.

Proof
Namely, scalar multiplication fails to be closed. Notice that we can have $\alpha = i$, but if we had some vector, say $\vec{x} = (1,1)$, then $\alpha \vec{x} = i(1,1) = (i,i) \notin \mathbb{R}^2$. ☐

# 7

We need to find a nonempty subset $U$ of $\mathbb{R}^2$ such that $U$ is closed under addition and taking additive inverses ($-u \in U$), but $U$ isn't a subspace of $\mathbb{R}^2$. Notice that as a result we'd have $\vec{0} \in U$, since $u + (-u) = \vec{0} \in U$ by definition. Thus, we need to find a way to make $U$ not be closed under scalar multiplication. One way this can happen is to build the coordinates of $U$ from a smaller set, so that scaling can escape it. An example of this is $U = \{(x_1, x_2) : x_1, x_2 \in \mathbb{Q}\}$. Here we see that $\vec{0} \in U$, and $U$ is closed under vector addition (and negation) since $\mathbb{Q}$ is closed under addition and negation. However, $U$ isn't closed under scalar multiplication: as an example, the vector $\vec{u} = (1,1) \in U$, but if $\alpha = \sqrt{2}$, then:

$$\alpha \vec{u} = (\sqrt{2}, \sqrt{2}) \notin U$$

Thus $U$ isn't a subspace.

# 8

We need to give an example of a nonempty subset $U$ of $\mathbb{R}^2$ such that $U$ is closed under scalar multiplication, but $U$ isn't a subspace of $\mathbb{R}^2$. Notice that by closure we have $0\vec{u} = \vec{0} \in U$, so the zero vector is in $U$. Being closed under scalar multiplication rules out sets like $x_2 = x_1 + 1$, or $x_2 = f(x_1)$ where $f$ isn't a linear function; and if we instead had $x_2 = cx_1$ then we just get a subspace, which we don't want. Thus, let's try a different approach. We know a union $U \cup W$ of subspaces may not necessarily be a subspace. Consider $U = \{(x_1, x_2) : x_1, x_2 \in \mathbb{R} \land (x_1 = 0 \lor x_2 = 0)\}$, the union of the two coordinate axes. So for instance $(1,0), (0,1) \in U$ but $(1,1) \notin U$. Notice by definition that $(0,0) \in U$, and better still, $\alpha(x_1, 0) = (\alpha x_1, 0) \in U$ and likewise $\alpha(0, x_2) \in U$, so $U$ is closed under scalar multiplication. But notice that if we have $v = (0,1) \in U$ and $u = (1,0) \in U$, then $u + v = (1,1) \notin U$, so $U$ is not a subspace.

# 11

Theorem

The intersection of every collection of subspaces of $V$ is a subspace of $V$.

Proof
We prove this in a similar way we'd prove any subset is a subspace. Let $\mathcal{C}$ be a collection of subspaces of $V$, and denote their intersection $U_S = \bigcap_{U \in \mathcal{C}} U$.

  1. Notice that $\vec{0} \in U_S$, since every subspace $U \in \mathcal{C}$ has $\vec{0} \in U$ by definition of subspace.
  2. Let $\vec{v}, \vec{u} \in U_S$ be arbitrary. It follows that $\vec{v}, \vec{u}$ lie in every subspace $U \in \mathcal{C}$. But each $U \in \mathcal{C}$ is a subspace, so since $\vec{v}, \vec{u} \in U$, clearly $\vec{v} + \vec{u} \in U$ for each subspace. Thus, since it's true for all subspaces in the collection, $\vec{v} + \vec{u}$ is also in the intersection $U_S$ as required.
  3. Let $\alpha \in \mathbb{F}$ and $\vec{v} \in U_S$ be arbitrary. It follows that $\vec{v}$ lies in every subspace $U \in \mathcal{C}$. But each $U \in \mathcal{C}$ is a subspace, so since $\vec{v} \in U$, clearly $\alpha \vec{v} \in U$ for each subspace. Then, since it's true for all subspaces in the collection, $\alpha \vec{v}$ is also in the intersection $U_S$ as required. ☐

# 12

Theorem

Let $U, W$ be subspaces of $V$. Then $U \cup W$ is a subspace of $V$ iff one of the subspaces is contained in the other (ie: $U \subseteq W$ or $W \subseteq U$).

Proof
Let $U, W$ be arbitrary subspaces of $V$.

  1. ($\rightarrow$) Consider that $U \cup W$ is a subspace of $V$. First, notice that if $W \subseteq U$ then we are done, so suppose that $W \not\subseteq U$. If this is the case, then there exists some element, let's call it $z$, such that $z \in W$ and $z \notin U$. We'll try to prove that instead $U \subseteq W$. Via proof by contradiction, if $U \not\subseteq W$, then there'd be another element $y$ such that $y \in U$ and $y \notin W$. But look! Notice that both $y, z \in U \cup W$. Since $U \cup W$ is a subspace, then clearly $y + z \in U \cup W$. If $y + z \in W$ as case 1, then since $-z \in W$ as well, $(y + z) + (-z) = y \in W$, which is a contradiction! As case 2, we'd consider $y + z \in U$, and since $-y \in U$, then $(y + z) + (-y) = z \in U$, which is also a contradiction. Thus, no matter what, we always get a contradiction, so $U \not\subseteq W$ is false, so $U \subseteq W$. Therefore, either $W \subseteq U$ or $U \subseteq W$ as required.
  2. ($\leftarrow$) Consider that $U \subseteq W$ or $W \subseteq U$. Without loss of generality, say $U \subseteq W$. Then if we consider $U \cup W$, we notice that $U \cup W = W$, which is given as a subspace. So $U \cup W$ is a subspace.
☐

# 15

Theorem

Suppose $U$ is a subspace of $V$. Then $U + U = U$.

Proof
Notice that:

$$U + U = \{u_1 + u_2 : u_1, u_2 \in U\}$$

But since $U$ itself is a vector space (really a subspace), then since $u_1, u_2 \in U$, we have $u_1 + u_2 \in U$ always, so $U + U \subseteq U$. Conversely, any $u \in U$ can be written as $u = u + 0$ with $u, 0 \in U$, so $U \subseteq U + U$. So really:

$$U + U = \{u : u \in U\} = U$$

Thus $U + U = U$. ☐

# 19

It isn't necessarily the case that $U_1 = U_2$, as the following counterexample shows. Consider how $U_1, U_2, W$ are subspaces of $V$. Notice that:

$$U_1 + W = \{u_1 + w : u_1 \in U_1, w \in W\}$$

and similar for $U_2 + W$. It is also given that $U_1 + W = U_2 + W$, so equating both sides says that:

$$U_1 + W = \{u_1 + w : u_1 \in U_1, w \in W\} = \{u_2 + w : u_2 \in U_2, w \in W\} = U_2 + W$$

But this suggests that we should find different subspaces $U_1, U_2, W$ such that $W$ seems to "connect" the two to make the larger vector space $V$, or some relatively large enough vector space. Let $W = V = \mathbb{R}^3$. Further, let $U_1 = \{(x, y, 0) : x, y \in \mathbb{R}\}$ and $U_2 = \{(0, x, y) : x, y \in \mathbb{R}\}$. Notice that here all of $U_1, U_2, W$ are subspaces of $V = \mathbb{R}^3$ (even $W$, because $W \subseteq V$ even if $W = V$), but see that:

$$U_1 + W = V = U_2 + W$$

While still $U_1 \neq U_2$. Thus, this is a valid counterexample to the claim that $U_1 = U_2$.

Note

I could've had $W = \{(x, 0, y) : x, y \in \mathbb{R}\}$, with $U_1 = \{(x,0,0) : x \in \mathbb{R}\}$ and $U_2 = \{(0,0,x) : x \in \mathbb{R}\}$, and the same argument would apply (except here we don't require that $W = V$).

# 20

Suppose

$$U = \{(x,x,y,y) \in \mathbb{F}^4 : x, y \in \mathbb{F}\}$$

I propose that the subspace:

$$W = \{(0, x, 0, y) : x, y \in \mathbb{F}\}$$

will give $\mathbb{F}^4 = U \oplus W$. We prove that this is the case as follows, using reversible (iff) steps only, to allow for an $=$ instead of a loose $\subseteq$. Let $\vec{x} \in \mathbb{F}^4$ be arbitrary, so then:

$$\vec{x} = (x, y, z, w), \qquad x, y, z, w \in \mathbb{F}$$

Notice that we can split the vector $\vec{x}$ into the following:

$$\vec{x} = (x, 0, 0, 0) + (0, y, 0, 0) + (0, 0, z, 0) + (0, 0, 0, w)$$

via the operations imposed on $\mathbb{F}^4$. Now, notice that since $\mathbb{F}$ is a field, clearly $y = x + (y - x)$. Likewise, $w = z + (w - z)$. We can make those replacements as follows:

$$\vec{x} = (x, 0, 0, 0) + (0, x + (y - x), 0, 0) + (0, 0, z, 0) + (0, 0, 0, z + (w - z))$$

And under the operations of vector addition, $\vec{x}$ can be reorganized as:

$$\begin{align}
\vec{x} &= (x, x, 0, 0) + (0, 0, z, z) + (0, y-x, 0, 0) + (0, 0, 0, w-z) \\
&= (x, x, z, z) + (0, y-x, 0, w-z)
\end{align}$$

But notice that $(x,x,z,z) \in U$ and $(0, y-x, 0, w-z) \in W$, so clearly $U + W = \mathbb{F}^4$. Now we show it's a direct sum specifically. Notice that:

$$U \cap W = \{0\}$$

since if some vector $\vec{y} \neq \vec{0}$ were such that $\vec{y} \in U$ and $\vec{y} \in W$, then writing $\vec{y} = (x, y, z, w)$, membership in $W$ forces $x = 0$ and $z = 0$, and membership in $U$ then forces $y = x = 0$ and $w = z = 0$, so $\vec{y} = \vec{0}$ after all, a contradiction. Thus, $U, W$ form a direct sum (via 1.45), so $U \oplus W = \mathbb{F}^4$. ☐

# 23

Proposition

If $U_1, U_2, W$ are subspaces of $V$ such that:

$$V = U_1 \oplus W = U_2 \oplus W$$

then $U_1 = U_2$.

This is a false proposition. Consider $V = \{(x, 0, y) : x, y \in \mathbb{R}\}$. Then let $U_1 = \{(x, 0, 0) : x \in \mathbb{R}\}$, $U_2 = \{(0, 0, y) : y \in \mathbb{R}\}$, and $W = \{(z, 0, z) : z \in \mathbb{R}\}$. Each sum is direct ($U_1 \cap W = U_2 \cap W = \{0\}$) and fills $V$, so these satisfy the hypotheses of the proposition, but clearly $U_1 \neq U_2$.
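
The counterexample can be checked concretely: every $(x, 0, y) \in V$ decomposes through $W$ paired with either $U_1$ or $U_2$ (the helper names `via_u1`/`via_u2` and the sample values are mine, for illustration):

```python
# In V = {(x, 0, y)}: decompose through U_1 (x-axis) + W, or U_2 (z-axis) + W.
def via_u1(x, y):
    return (x - y, 0, 0), (y, 0, y)   # U_1 part, W part

def via_u2(x, y):
    return (0, 0, y - x), (x, 0, x)   # U_2 part, W part

for x, y in [(1, 2), (3, -4), (0, 5)]:
    for split in (via_u1, via_u2):
        a, b = split(x, y)
        assert tuple(p + q for p, q in zip(a, b)) == (x, 0, y)
print("both decompositions recover V on the samples")
```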

# 24

Theorem

A function $f : \mathbb{R} \to \mathbb{R}$ is *even* if $f(-x) = f(x)$ and *odd* if $f(-x) = -f(x)$ for all $x \in \mathbb{R}$. Let $U_e$ denote the set of real-valued even functions on $\mathbb{R}$ and likewise $U_o$ for all the odd functions on $\mathbb{R}$. Then $\mathbb{R}^{\mathbb{R}} = U_e \oplus U_o$.

Proof
We first show that every $f \in \mathbb{R}^{\mathbb{R}}$ can be written as a sum of a single even function and a single odd function. Then we need to show that this combination is unique.

First, let $f \in \mathbb{R}^{\mathbb{R}}$ be an arbitrary function. Then notice that:

$$f(x) = \frac{f(x) + f(-x)}{2} + \frac{f(x) - f(-x)}{2}$$

But notice the function $f_e$ on the left is an even function since:

$$f_e(x) = \frac{f(x) + f(-x)}{2}, \qquad f_e(-x) = \frac{f(-x) + f(x)}{2} = f_e(x)$$

Similarly, the $f_o$ function on the right is odd since:

$$f_o(x) = \frac{f(x) - f(-x)}{2}, \qquad f_o(-x) = \frac{f(-x) - f(x)}{2} = -f_o(x)$$

Thus, we can always write $f$ as a sum of an even and an odd function:

$$f(x) = f_e(x) + f_o(x)$$

Now we show that this combination must be unique. We'll show that $U_e \cap U_o$ is trivial, which by definition 1.45 makes the sum direct. Notice that the zero function $f_0(x) = 0$ is both odd and even (try this yourself), so $f_0 \in U_e \cap U_o$. We now show that no non-zero function lies in $U_e \cap U_o$. Assume via contradiction that a non-zero $f \in U_e \cap U_o$. Then $f(-x) = f(x)$ and $f(-x) = -f(x)$ for all $x$. But notice that if we combine the equations:

$$f(x) = f(-x) = -f(x) \implies 2f(x) = 0 \implies f(x) = 0$$

But this contradicts $f$ not being the zero function, so there mustn't be any non-zero function in the intersection. Thus $U_e \cap U_o = \{f_0\}$, so $U_e + U_o$ is a direct sum.

Therefore, $U_e \oplus U_o = \mathbb{R}^{\mathbb{R}}$. ☐
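
The even/odd split above can be sanity-checked numerically on an arbitrary test function (the cubic below is just a sample choice, not part of the proof):

```python
# Check: f_e(x) = (f(x) + f(-x))/2 is even, f_o(x) = (f(x) - f(-x))/2 is odd,
# and f_e + f_o recovers f, on sample points.
def f(x):
    return x**3 + 2 * x**2 - x + 5   # arbitrary test function

def f_e(x):
    return (f(x) + f(-x)) / 2

def f_o(x):
    return (f(x) - f(-x)) / 2

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(f_e(x) + f_o(x) - f(x)) < 1e-9
    assert abs(f_e(-x) - f_e(x)) < 1e-9    # even
    assert abs(f_o(-x) + f_o(x)) < 1e-9    # odd
print("decomposition verified on the samples")
```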