1.A
# 1
Inverse Complex Number
Suppose $a, b \in \mathbb{R}$, not both equal to $0$. Find $c, d \in \mathbb{R}$ such that:
$$\frac{1}{a + bi} = c + di$$
Proof
I claim that the values
$$c = \frac{a}{a^2 + b^2}, \qquad d = \frac{-b}{a^2 + b^2}$$
work. Let $a + bi$ be arbitrary, with at least one of $a, b \neq 0$. Then notice that:
$$\frac{1}{a + bi} = \frac{a - bi}{(a + bi)(a - bi)} = \frac{a - bi}{a^2 + b^2} = \frac{a}{a^2 + b^2} + \frac{-b}{a^2 + b^2}i$$
Thus, choose $c = \frac{a}{a^2 + b^2} \in \mathbb{R}$ and $d = \frac{-b}{a^2 + b^2} \in \mathbb{R}$.
☐
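As a quick numerical sanity check (my own illustration, not part of the proof), the derived formula can be verified with Python's built-in complex type:

```python
# Check the derived inverse formula: for a + bi with (a, b) != (0, 0),
#   1/(a + bi) = a/(a^2 + b^2) + (-b/(a^2 + b^2)) i.
def inverse(a: float, b: float) -> complex:
    """Return c + di such that (a + bi)(c + di) = 1."""
    denom = a * a + b * b  # a^2 + b^2 > 0 since (a, b) != (0, 0)
    return complex(a / denom, -b / denom)

for a, b in [(1.0, 0.0), (0.0, 2.0), (3.0, -4.0), (-2.5, 1.5)]:
    assert abs(complex(a, b) * inverse(a, b) - 1) < 1e-12
```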
# 3
Find two distinct square roots of $i$.
Proof
We seek a value $x$ such that $x^2 = i$. Notice first that if some $x$ satisfies this, then $-x$ also satisfies it, since:
$$(-x)^2 = x^2 = i$$
So let's first find one root $x$. Suppose $x$ is a complex number, $x \in \mathbb{C}$, so $x = a + bi$ where $a, b \in \mathbb{R}$. Therefore:
$$x^2 = (a + bi)^2 = a^2 - b^2 + 2abi = i$$
So clearly $2ab = 1$, and we also require $a^2 - b^2 = 0$. Looking at the latter:
$$a^2 - b^2 = 0 \Rightarrow a^2 = b^2 \Rightarrow |a| = |b|$$
Now notice the first equation:
$$2ab = 1 \Rightarrow ab = \frac{1}{2}$$
Since $\frac{1}{2} > 0$, either both of $a, b$ are negative or both are positive. This is okay: if we replace $a, b$ with $-a, -b$ in the definition of $x$, our solution is still correct, since:
$$x^2 = (-a - bi)^2 = [(-1)(a + bi)]^2 = (a + bi)^2 = \ldots$$
Thus, we can just consider the $+a, +b$ case. Then $|a| = a$ and $|b| = b$, so since $|a| = |b|$, we get $a = b$. Solving for $a$:
$$ab = \frac{1}{2} \Rightarrow a \cdot a = \frac{1}{2} \Rightarrow a^2 = \frac{1}{2} \Rightarrow a = \pm\frac{\sqrt{2}}{2}$$
Therefore, since $a = b$, taking the positive root (and its negative from the $-a, -b$ case), our two solutions are:
$$x_1 = \frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i, \qquad x_2 = -\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2}i$$
☐
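A quick check of the two roots in Python (an illustration, not part of the proof):

```python
import math

r = math.sqrt(2) / 2
x1 = complex(r, r)    # sqrt(2)/2 + (sqrt(2)/2) i
x2 = -x1              # the second, distinct root

assert abs(x1 ** 2 - 1j) < 1e-12  # x1^2 = i
assert abs(x2 ** 2 - 1j) < 1e-12  # x2^2 = i
assert x1 != x2                   # the roots are distinct
```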
# 8
Existence of the multiplicative inverse.
For all $\alpha \in \mathbb{C}$ with $\alpha \neq 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha\beta = 1$.
Proof
Let $\alpha \in \mathbb{C}$ be arbitrary, where $\alpha \neq 0$. By definition, $\alpha = a + bi$ for some $a, b \in \mathbb{R}$. Notice that $a$ and $b$ cannot both be $0$, since then $\alpha = a + bi = 0$, contradicting $\alpha \neq 0$.
But from HW 1 - Vector Spaces#1 we know that there exists a number $c + di \in \mathbb{C}$ such that $\frac{1}{a + bi} = c + di$. Choose $\beta = c + di$. Replacing $a + bi$ with $\alpha$ and $c + di$ with $\beta$, we get $\frac{1}{\alpha} = \beta$. Multiplying both sides by $\alpha$ gives $1 = \alpha\beta$.
Thus, we can always choose $\beta$ such that $\alpha\beta = 1$. For uniqueness, suppose $\alpha\beta = \alpha\beta' = 1$. Then:
$$\beta = \beta \cdot 1 = \beta(\alpha\beta') = (\beta\alpha)\beta' = 1 \cdot \beta' = \beta'$$
so $\beta$ is unique.
☐
# 11
There does not exist $\lambda \in \mathbb{C}$ such that:
$$\lambda(2 - 3i, 5 + 4i, -6 + 7i) = (12 - 5i, 7 + 22i, -32 - 9i)$$
Proof
Assume for contradiction that there is such a $\lambda \in \mathbb{C}$. Then, by the operation of scalar multiplication:
$$\lambda(2 - 3i, 5 + 4i, -6 + 7i) = (\lambda(2 - 3i), \lambda(5 + 4i), \lambda(-6 + 7i)) = (12 - 5i, 7 + 22i, -32 - 9i)$$
By definition, these vectors are equal iff their components are equal. In particular, the first component gives $\lambda(2 - 3i) = 12 - 5i$, which forces:
$$\lambda = \frac{12 - 5i}{2 - 3i} = \frac{(12 - 5i)(2 + 3i)}{2^2 + 3^2} = \frac{39 + 26i}{13} = 3 + 2i$$
But plugging this $\lambda$ into the third component gives:
$$(3 + 2i)(-6 + 7i) = -18 + 21i - 12i - 14 = -32 + 9i \neq -32 - 9i$$
This is a contradiction, so via proof by contradiction the theorem holds.
☐
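The contradiction can be reproduced numerically (my own check, using Python's complex literals):

```python
# The first component forces lambda = (12 - 5i) / (2 - 3i) = 3 + 2i.
lam = (12 - 5j) / (2 - 3j)
assert abs(lam - (3 + 2j)) < 1e-12

# The second component happens to be consistent with this lambda...
assert abs(lam * (5 + 4j) - (7 + 22j)) < 1e-12

# ...but the third is not: lambda * (-6 + 7i) = -32 + 9i, not -32 - 9i.
assert abs(lam * (-6 + 7j) - (-32 + 9j)) < 1e-12
assert abs(lam * (-6 + 7j) - (-32 - 9j)) > 1
```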
# 12
$(x + y) + z = x + (y + z)$ for all $x, y, z \in \mathbb{F}^n$.
Proof
We use the fact that $\mathbb{F}$ is a field, so addition in $\mathbb{F}$ is associative. Let $x, y, z \in \mathbb{F}^n$ be arbitrary. We may write:
$$x = (x_1, \ldots, x_n), \quad y = (y_1, \ldots, y_n), \quad z = (z_1, \ldots, z_n)$$
Now, consider the following:
$$\begin{align} (x + y) + z &= [(x_1, \ldots, x_n) + (y_1, \ldots, y_n)] + (z_1, \ldots, z_n) \\ &= (x_1 + y_1, \ldots, x_n + y_n) + (z_1, \ldots, z_n) && (\text{Vector Addition}) \\ &= ((x_1 + y_1) + z_1, \ldots, (x_n + y_n) + z_n) && (\text{Vector Addition}) \\ &= (x_1 + (y_1 + z_1), \ldots, x_n + (y_n + z_n)) && (\text{Associativity of } \mathbb{F}) \\ &= (x_1, \ldots, x_n) + (y_1 + z_1, \ldots, y_n + z_n) && (\text{Vector Addition}) \\ &= (x_1, \ldots, x_n) + [(y_1, \ldots, y_n) + (z_1, \ldots, z_n)] && (\text{Vector Addition}) \\ &= x + (y + z) \end{align}$$
☐
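The componentwise argument can be illustrated in Python, with tuples standing in for vectors in $\mathbb{F}^n$ (an illustration of mine; with small integer-valued floats the comparison is exact):

```python
def vadd(u, v):
    """Componentwise vector addition, mirroring the definition on F^n."""
    return tuple(a + b for a, b in zip(u, v))

x, y, z = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)
# (x + y) + z == x + (y + z), because addition of the entries is associative
assert vadd(vadd(x, y), z) == vadd(x, vadd(y, z))
```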
1.B
# 1
$-(-\vec{v}) = \vec{v}$ for every $\vec{v} \in V$.
Proof
Notice that:
$$\begin{align} (-\vec{v}) + (-(-\vec{v})) &= \vec{0} && (\text{Additive Inverse}) \\ \vec{v} + (-\vec{v}) + (-(-\vec{v})) &= \vec{v} && (\text{Add } \vec{v} \text{ to both sides}) \\ \vec{0} + (-(-\vec{v})) &= \vec{v} && (\text{Additive Inverse}) \\ -(-\vec{v}) &= \vec{v} && (\text{Definition of } \vec{0}) \end{align}$$
☐
# 2
Suppose $a \in \mathbb{F}$ and $\vec{v} \in V$, and $a\vec{v} = \vec{0}$. Then either $a = 0$ or $\vec{v} = \vec{0}$.
Proof
If $a = 0$ already then we are done, so instead suppose $a \neq 0$. Notice then that, by the properties of the field $\mathbb{F}$, the multiplicative inverse $\frac{1}{a}$ exists. Thus:
$$\begin{align} \frac{1}{a}(a\vec{v}) &= \vec{v} && \left(\tfrac{1}{a} \cdot a = 1\right) \\ \frac{1}{a}\vec{0} &= \vec{v} && (\text{Given, } a\vec{v} = \vec{0}) \\ \vec{0} &= \vec{v} && (\text{Theorem 1.30 from the Textbook}) \end{align}$$
Therefore, we either get $a = 0$, or we get $\vec{v} = \vec{0}$.
☐
# 4
The empty set $\emptyset$ is not a vector space. It does not contain $\vec{0}$ (it contains no elements at all), yet a vector space must contain an additive identity. Thus, $\emptyset$ is not a vector space.
# 5
The definition of a vector space $V$ is unchanged if, instead of requiring an additive inverse for every $\vec{v} \in V$, we require that $0\vec{v} = \vec{0}$ for all $\vec{v} \in V$.
Proof
We prove that an additive inverse exists for every $\vec{v} \in V$, using only the other conditions of a vector space and the condition that $0\vec{v} = \vec{0}$.
Let $\vec{v} \in V$ be arbitrary. Notice that:
$$\begin{align} 0\vec{v} &= \vec{0} && (\text{Given}) \\ (1 + (-1))\vec{v} &= \vec{0} && (0 = 1 + (-1)) \\ \vec{v} + (-1)\vec{v} &= \vec{0} && (\text{Distributive Property}) \end{align}$$
Thus, we found a vector, $(-1)\vec{v}$, that serves as the additive inverse $-\vec{v}$, so the additive inverse property holds.
☐
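A small illustration of this construction (my own, for $\mathbb{R}^2$): $(-1)\vec{v}$ behaves as the additive inverse of $\vec{v}$.

```python
def scale(a, v):
    """Scalar multiplication on R^2."""
    return tuple(a * x for x in v)

def vadd(u, v):
    """Vector addition on R^2."""
    return tuple(x + y for x, y in zip(u, v))

v = (3.0, -7.0)
# v + (-1)v = (1 + (-1))v = 0v = the zero vector
assert vadd(v, scale(-1.0, v)) == (0.0, 0.0)
```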
1.C
# 1
a
The set:
$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 + 2x_2 + 3x_3 = 0\}$$
is a subspace.
1. $\vec{0} = (0, 0, 0) \in S$, since $0 + 2 \cdot 0 + 3 \cdot 0 = 0$.
2. Let $x, y \in S$ be arbitrary. Say $x = (x_1, x_2, x_3)$ with $x_1 + 2x_2 + 3x_3 = 0$, and similarly $y = (y_1, y_2, y_3)$ with $y_1 + 2y_2 + 3y_3 = 0$. Then $x + y = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$, and notice that:
$$\begin{align} (x_1 + y_1) + 2(x_2 + y_2) + 3(x_3 + y_3) &= (x_1 + 2x_2 + 3x_3) + (y_1 + 2y_2 + 3y_3) \\ &= 0 + 0 \\ &= 0 \end{align}$$
So by definition $x + y \in S$.
3. Let $\alpha \in \mathbb{F}$ be arbitrary. Notice that $\alpha x = (\alpha x_1, \alpha x_2, \alpha x_3)$, and that:
$$\begin{align} \alpha x_1 + 2\alpha x_2 + 3\alpha x_3 &= \alpha(x_1 + 2x_2 + 3x_3) \\ &= \alpha(0) \\ &= 0 \end{align}$$
So then by definition again $\alpha x \in S$.
b
The set:
$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 + 2x_2 + 3x_3 = 4\}$$
is *not* a subspace: $\vec{0} = (0, 0, 0) \notin S$ since $0 + 2 \cdot 0 + 3 \cdot 0 = 0 \neq 4$.
c
The set:
$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 x_2 x_3 = 0\}$$
is *not* a subspace, since if we consider $v = (1, 1, 0) \in S$ and $w = (0, 0, 1) \in S$, then $v + w = (1, 1, 1) \notin S$.
d
The set:
$$S = \{(x_1, x_2, x_3) \in \mathbb{F}^3 : x_1 = 5x_3\}$$
*is* a subspace. Similar to my reasoning for 1.a: $\vec{0} \in S$, and the set is still closed under vector addition and under scalar multiplication.
# 3
The set of differentiable real-valued functions $f$ on the interval $(-4, 4)$ such that $f'(-1) = 3f(2)$ is a subspace of $\mathbb{R}^{(-4,4)}$.
Proof
For convenience, denote our set $S$.
1. $\vec{0}$ in this case is the zero function $f(x) = 0$ on $(-4, 4)$. Notice that $f'(x) = 0$ as well, so:
$$f'(-1) = 0 = 3 \cdot 0 = 3f(2)$$
So then $\vec{0} \in S$.
2. Let $f, g \in S$ be arbitrary, so $f'(-1) = 3f(2)$ and likewise for $g$. Since $(f + g)'(x) = f'(x) + g'(x)$:
$$(f + g)'(-1) = f'(-1) + g'(-1) = 3f(2) + 3g(2) = 3(f(2) + g(2)) = 3(f + g)(2)$$
So then $f + g \in S$.
3. Let $\alpha \in \mathbb{R}$ be arbitrary, and $f \in S$, so $f'(-1) = 3f(2)$. Since $(\alpha f)'(x) = \alpha f'(x)$:
$$(\alpha f)'(-1) = \alpha f'(-1) = \alpha \cdot 3f(2) = 3\alpha f(2) = 3(\alpha f)(2)$$
So therefore $\alpha f \in S$. Thus, $S$ is a subspace of $\mathbb{R}^{(-4,4)}$.
☐
# 5
$\mathbb{R}^2$ is *not* a subspace of $\mathbb{C}^2$.
Proof
Namely, scalar multiplication fails to be closed. We can take $\alpha = i$ and the vector $\vec{x} = (1, 1) \in \mathbb{R}^2$; then $\alpha \vec{x} = i(1, 1) = (i, i) \notin \mathbb{R}^2$.
☐
# 7
We need to find a nonempty subset $U$ of $\mathbb{R}^2$ such that $U$ is closed under addition and under taking additive inverses ($-u \in U$), but $U$ isn't a subspace of $\mathbb{R}^2$. Notice that as a result we'd have $\vec{0} \in U$, since $u + (-u) = \vec{0} \in U$ by definition. Thus, we need to make $U$ fail to be closed under scalar multiplication. One way is to build the coordinates of $U$ from a smaller set that addition cannot escape but scaling can. An example is $U = \{(x_1, x_2) : x_1, x_2 \in \mathbb{Q}\}$. Here $\vec{0} \in U$, and $U$ is closed under vector addition (and additive inverses) because $\mathbb{Q}$ is. However, $U$ isn't closed under scalar multiplication: for example, $\vec{u} = (1, 1) \in U$, but with $\alpha = \sqrt{2}$,
$$\alpha \vec{u} = (\sqrt{2}, \sqrt{2}) \notin U$$
Thus $U$ isn't a subspace.
# 8
We need to give an example of a nonempty subset $U$ of $\mathbb{R}^2$ such that $U$ is closed under scalar multiplication, but $U$ isn't a subspace of $\mathbb{R}^2$. Notice that by taking the scalar $0$ we get $0\vec{u} = \vec{0} \in U$, so the zero vector is in $U$. Graphs like $x_2 = x_1 + 1$, or $x_2 = f(x_1)$ with $f$ nonlinear, fail closure under scalar multiplication, while $x_2 = cx_1$ just gives a subspace, which we don't want. Thus, let's try a different approach. We know a union $U \cup W$ of subspaces need not be a subspace; it fails exactly on sums that mix the two pieces. So consider:
$$U = \{(x_1, x_2) : x_1, x_2 \in \mathbb{R} \land (x_1 = 0 \lor x_2 = 0)\}$$
the union of the two coordinate axes. So for instance $(1, 0), (0, 1) \in U$ but $(1, 1) \notin U$. Notice by definition that $(0, 0) \in U$, and better still, $\alpha(x_1, 0) = (\alpha x_1, 0) \in U$ and likewise $\alpha(0, x_2) \in U$, so $U$ is closed under scalar multiplication. But if we take $v = (0, 1) \in U$ and $u = (1, 0) \in U$, then $u + v = (1, 1) \notin U$, so $U$ is not a subspace.
# 11
The intersection of every collection of subspaces of $V$ is a subspace of $V$.
Proof
We prove this the same way we'd prove any subset is a subspace. Let $\mathcal{C}$ be a collection of subspaces of $V$, and denote the intersection of them all by $U_S$.
1. $\vec{0} \in U_S$, since every subspace $U \in \mathcal{C}$ has $\vec{0} \in U$ by definition of subspace.
2. Let $\vec{v}, \vec{u} \in U_S$ be arbitrary. It follows that $\vec{v}, \vec{u}$ lie in every subspace $U \in \mathcal{C}$. Since each $U$ is a subspace and $\vec{v}, \vec{u} \in U$, clearly $\vec{v} + \vec{u} \in U$ for each subspace. Since this holds for every $U \in \mathcal{C}$, the sum also lies in the intersection: $\vec{v} + \vec{u} \in U_S$, as required.
3. Let $\alpha \in \mathbb{F}$ and $\vec{v} \in U_S$ be arbitrary. It follows that $\vec{v}$ lies in every subspace $U \in \mathcal{C}$. Since each $U$ is a subspace and $\vec{v} \in U$, clearly $\alpha \vec{v} \in U$ for each subspace. Since this holds for every $U \in \mathcal{C}$, we get $\alpha \vec{v} \in U_S$, as required.
☐
# 12
Let $U, W$ be subspaces of $V$. Then $U \cup W$ is a subspace of $V$ iff one of the subspaces is contained in the other (i.e. $U \subseteq W$ or $W \subseteq U$).
Proof
Let $U, W$ be arbitrary subspaces of $V$. We do the ($\rightarrow$) direction first:
1. Suppose $U \cup W$ is a subspace of $V$. First, notice that if $W \subseteq U$ then we are done, so suppose $W \not\subseteq U$. Then there exists some element, let's call it $z$, such that $z \in W$ and $z \notin U$. We'll try to prove that instead $U \subseteq W$, via proof by contradiction: if $U \not\subseteq W$, then there'd be another element $y$ such that $y \in U$ and $y \notin W$. But look! Both $y, z \in U \cup W$. Since $U \cup W$ is a subspace, clearly $y + z \in U \cup W$. If $y + z \in W$ as case 1, then since $z \in W$ as well, $y + z + (-z) \in W$, so $y \in W$, which is a contradiction! As case 2, if $y + z \in U$, then since $y \in U$, we get $y + z + (-y) \in U$, so $z \in U$, which is also a contradiction. Thus, no matter what, we always get a contradiction, so $U \not\subseteq W$ is false, and $U \subseteq W$. Therefore, either $W \subseteq U$ or $U \subseteq W$, as required.
2. Conversely, suppose $U \subseteq W$ or $W \subseteq U$. Without loss of generality, say $U \subseteq W$. Then $U \cup W = W$, which is given as a subspace. So $U \cup W$ is a subspace.
☐
# 15
Suppose $U$ is a subspace of $V$. Then $U + U = U$.
Proof
Notice that:
$$U + U = \{u_1 + u_2 : u_1, u_2 \in U\}$$
Since $U$ is itself a vector space (really a subspace), $u_1 + u_2 \in U$ whenever $u_1, u_2 \in U$, so $U + U \subseteq U$. Conversely, each $u \in U$ can be written as $u = u + \vec{0}$ with $u, \vec{0} \in U$, so $U \subseteq U + U$. So really:
$$U + U = \{u : u \in U\} = U$$
Thus $U + U = U$.
☐
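The identity $U + U = U$ can also be illustrated on a finite example (my own, over the two-element field $\mathbb{F}_2$, where a subspace is a finite set and the sum can be enumerated):

```python
def vadd(u, v):
    """Vector addition in F_2^2 (componentwise addition mod 2)."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# U = {(0,0), (1,1)} is a subspace of F_2^2
U = {(0, 0), (1, 1)}
U_plus_U = {vadd(u1, u2) for u1 in U for u2 in U}
assert U_plus_U == U  # U + U = U
```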
# 19
It isn't necessarily the case that $U_1 = U_2$, as the following counterexample shows. Suppose $U_1, U_2, W$ are subspaces of $V$ with $U_1 + W = U_2 + W$. By definition:
$$U_1 + W = \{u_1 + w : u_1 \in U_1, w \in W\}$$
and similarly for $U_2 + W$, so the given equality says:
$$\{u_1 + w : u_1 \in U_1, w \in W\} = \{u_2 + w : u_2 \in U_2, w \in W\}$$
This suggests we should find different subspaces $U_1, U_2, W$ such that $W$ "connects" the two, making up the larger vector space $V$ or some large enough subspace. Let $W = V = \mathbb{R}^3$. Further, let $U_1 = \{(x, y, 0) : x, y \in \mathbb{R}\}$ and $U_2 = \{(0, x, y) : x, y \in \mathbb{R}\}$. All of $U_1, U_2, W$ are subspaces of $V = \mathbb{R}^3$ (even $W$, because $W \subseteq V$ holds even when $W = V$), but see that:
$$U_1 + W = V = U_2 + W$$
while still $U_1 \neq U_2$. Thus, this is a valid counterexample to the claim that $U_1 = U_2$.
Note: I could've had $W = \{(x, 0, y) : x, y \in \mathbb{R}\}$, with $U_1 = \{(x, 0, 0) : x \in \mathbb{R}\}$ and $U_2 = \{(0, 0, x) : x \in \mathbb{R}\}$, and the same argument would apply (except here we don't require that $W = V$).
# 20
Suppose
$$U = \{(x, x, y, y) \in \mathbb{F}^4 : x, y \in \mathbb{F}\}$$
I propose that the subspace:
$$W = \{(0, x, 0, y) : x, y \in \mathbb{F}\}$$
gives $\mathbb{F}^4 = U \oplus W$. First we show $U + W = \mathbb{F}^4$. Let $\vec{x} \in \mathbb{F}^4$ be arbitrary, so:
$$\vec{x} = (x, y, z, w), \qquad x, y, z, w \in \mathbb{F}$$
Notice that we can split the vector $\vec{x}$ into the following:
$$\vec{x} = (x, 0, 0, 0) + (0, y, 0, 0) + (0, 0, z, 0) + (0, 0, 0, w)$$
via the operations imposed on $\mathbb{F}^4$. Now, since $\mathbb{F}$ is a field, clearly $y = x + (y - x)$, and likewise $w = z + (w - z)$. We can make those replacements as follows:
$$\vec{x} = (x, 0, 0, 0) + (0, x + (y - x), 0, 0) + (0, 0, z, 0) + (0, 0, 0, z + (w - z))$$
And under the operations of vector addition, $\vec{x}$ can be organized as:
$$\begin{align} \vec{x} &= (x, x, 0, 0) + (0, 0, z, z) + (0, y - x, 0, 0) + (0, 0, 0, w - z) \\ &= (x, x, z, z) + (0, y - x, 0, w - z) \end{align}$$
But notice that $(x, x, z, z) \in U$ and $(0, y - x, 0, w - z) \in W$, so at least $U + W = \mathbb{F}^4$. Now we show the sum is direct. Notice that:
$$U \cap W = \{\vec{0}\}$$
since if some vector $\vec{y} = (x, y, z, w)$ with $\vec{y} \neq \vec{0}$ were in both $U$ and $W$, then membership in $W$ forces $x = z = 0$, and membership in $U$ then forces $y = x = 0$ and $w = z = 0$, so $\vec{y} = \vec{0}$ this whole time, a contradiction. Thus, $U + W$ is a direct sum (via 1.45), so $U \oplus W = \mathbb{F}^4$.
# 23
If $U_1, U_2, W$ are subspaces of $V$ such that:
$$V = U_1 \oplus W = U_2 \oplus W$$
then $U_1 = U_2$.
This is a false proposition. Consider $V = \{(x, 0, y) : x, y \in \mathbb{R}\}$. Then let $U_1 = \{(x, 0, 0) : x \in \mathbb{R}\}$, $U_2 = \{(0, 0, y) : y \in \mathbb{R}\}$, and $W = \{(z, 0, z) : z \in \mathbb{R}\}$. One can check that $V = U_1 \oplus W = U_2 \oplus W$, but clearly $U_1 \neq U_2$.
# 24
A function $f: \mathbb{R} \to \mathbb{R}$ is *even* if $f(-x) = f(x)$ and *odd* if $f(-x) = -f(x)$, for all $x \in \mathbb{R}$. Let $U_e$ denote the set of real-valued even functions on $\mathbb{R}$ and likewise $U_o$ the set of odd functions on $\mathbb{R}$. Then $\mathbb{R}^{\mathbb{R}} = U_e \oplus U_o$.
Proof
We first show that any function in $\mathbb{R}^{\mathbb{R}}$ is the sum of a single even function and a single odd function; then we show this decomposition is unique.
First, let $f \in \mathbb{R}^{\mathbb{R}}$ be an arbitrary function. Then notice that:
$$f(x) = \frac{f(x) + f(-x)}{2} + \frac{f(x) - f(-x)}{2}$$
The function $f_e$ on the left is an even function, since:
$$f_e(x) = \frac{f(x) + f(-x)}{2}, \qquad f_e(-x) = \frac{f(-x) + f(x)}{2} = f_e(x)$$
Similarly, the function $f_o$ on the right is odd, since:
$$f_o(x) = \frac{f(x) - f(-x)}{2}, \qquad f_o(-x) = \frac{f(-x) - f(x)}{2} = -f_o(x)$$
Thus, we can always write $f$ as a sum of an even and an odd function:
$$f(x) = f_e(x) + f_o(x)$$
so $U_e + U_o = \mathbb{R}^{\mathbb{R}}$. Now we show this combination is unique, i.e. the sum is direct, via definition 1.45. Notice that the zero function $f_0(x) = 0$ is both odd and even (try this yourself), so $f_0 \in U_e \cap U_o$. We now show that no non-zero function lies in $U_e \cap U_o$. Assume for contradiction that a non-zero $f \in U_e \cap U_o$. Then $f(x) = f(-x)$ and $f(-x) = -f(x)$, and combining the equations:
$$f(x) = f(-x) = -f(x) \Rightarrow 2f(x) = 0 \Rightarrow f(x) = 0$$
But this contradicts $f$ not being the zero function, so there mustn't be any non-zero function in the intersection. Thus $U_e \cap U_o = \{f_0\}$, so the sum $U_e + U_o$ is direct.
Therefore, $U_e \oplus U_o = \mathbb{R}^{\mathbb{R}}$.
☐
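The even/odd decomposition can be checked numerically (an illustration, not part of the proof). For $f = \exp$, the even and odd parts are exactly $\cosh$ and $\sinh$:

```python
import math

f = math.exp

def f_e(x):
    """Even part: (f(x) + f(-x)) / 2."""
    return (f(x) + f(-x)) / 2

def f_o(x):
    """Odd part: (f(x) - f(-x)) / 2."""
    return (f(x) - f(-x)) / 2

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(f_e(x) + f_o(x) - f(x)) < 1e-12  # f = f_e + f_o
    assert f_e(-x) == f_e(x)                    # f_e is even
    assert f_o(-x) == -f_o(x)                   # f_o is odd
    assert abs(f_e(x) - math.cosh(x)) < 1e-12   # here f_e = cosh
    assert abs(f_o(x) - math.sinh(x)) < 1e-12   # and f_o = sinh
```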