Thus, using the orthonormal basis $e_1, \ldots, e_n$ from an earlier example:
$$\varphi(v) = \varphi\bigl(\langle v, e_1 \rangle e_1 + \cdots + \langle v, e_n \rangle e_n\bigr) = \langle v, e_1 \rangle \varphi(e_1) + \cdots + \langle v, e_n \rangle \varphi(e_n),$$
where doing a bit of simplification shows that:
$$\varphi(v) = \bigl\langle v,\ \overline{\varphi(e_1)}\, e_1 + \cdots + \overline{\varphi(e_n)}\, e_n \bigr\rangle,$$
so taking $u = \overline{\varphi(e_1)}\, e_1 + \cdots + \overline{\varphi(e_n)}\, e_n$ gives $\varphi(v) = \langle v, u \rangle$ for every $v \in V$.
Notice that our formula for $u$ is dependent on both our orthonormal basis $e_1, \ldots, e_n$ as well as our chosen $\varphi$. However, by Chapter 6 (cont.) - Finishing Inner Product Spaces#^a857c6, $u$ is uniquely determined by $\varphi$, so the RHS of our definition of $u$ is the same regardless of which orthonormal basis of $V$ is chosen.
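To see this basis-independence concretely, here is a quick numerical sketch (my own illustration, not from the text: the functional $\varphi$ and both orthonormal bases are made up, and since everything is real the conjugates drop out):

```python
# Sketch: u = phi(e_1) e_1 + ... + phi(e_n) e_n gives the same vector for
# every orthonormal basis. The functional phi and the bases are made up.
import numpy as np

a = np.array([2.0, -1.0, 3.0])
phi = lambda v: a @ v                        # a linear functional on R^3

def riesz_vector(basis):
    # Real case, so no conjugates are needed.
    return sum(phi(e) * e for e in basis)

std = list(np.eye(3))                        # the standard orthonormal basis
Q, _ = np.linalg.qr(np.random.randn(3, 3))   # a random orthonormal basis...
other = list(Q.T)                            # ...as a list of vectors

print(riesz_vector(std))    # [ 2. -1.  3.]
print(riesz_vector(other))  # the same vector, up to floating-point error
```

Both calls print the vector $(2, -1, 3)$, which is exactly the $u$ satisfying $\varphi(v) = \langle v, u \rangle$ here.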
6.C: Orthogonal Complements and Minimization Problems
orthogonal complement, $U^{\perp}$
If $U$ is a subset of $V$, then the orthogonal complement of $U$, denoted $U^{\perp}$, is the set of all vectors in $V$ that are orthogonal to every vector in $U$:
$$U^{\perp} = \{ v \in V : \langle v, u \rangle = 0 \text{ for every } u \in U \}.$$
For instance, if $U$ is a line in $\mathbb{R}^3$, then $U^{\perp}$ is the plane containing the origin that is perpendicular to $U$. If $U$ is a plane in $\mathbb{R}^3$, then $U^{\perp}$ is the line containing the origin that is perpendicular to $U$.
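Computationally, if $U$ is the span of finitely many vectors in $\mathbb{R}^n$, then $U^{\perp}$ is just the null space of the matrix whose rows are those vectors. A minimal sketch (the spanning vector is made up):

```python
# Sketch: U^perp as a null space. If the rows of A span U, then
# U^perp = { v : A v = 0 }. The line U below is made up.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0]])  # U = the line spanned by (1, 2, 3) in R^3
B = null_space(A)                # columns: an orthonormal basis of U^perp

print(B.shape)                   # (3, 2): U^perp is a plane through the origin
print(A @ B)                     # ~0: each basis vector is orthogonal to U
```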
Basic properties of the orthogonal complement
(a) If $U$ is a subset of $V$, then $U^{\perp}$ is a subspace of $V$.
(b) $\{0\}^{\perp} = V$.
(c) $V^{\perp} = \{0\}$.
(d) If $U$ is a subset of $V$, then $U \cap U^{\perp} \subseteq \{0\}$.
(e) If $U$ and $W$ are subsets of $V$ and $U \subseteq W$, then $W^{\perp} \subseteq U^{\perp}$.
Proof
(a) Suppose $U$ is a subset of $V$. Then $\langle 0, u \rangle = 0$ for every $u \in U$. So $0 \in U^{\perp}$. Suppose $v, w \in U^{\perp}$. If $u \in U$ then:
$$\langle v + w, u \rangle = \langle v, u \rangle + \langle w, u \rangle = 0 + 0 = 0.$$
Thus $v + w \in U^{\perp}$. If $\lambda \in \mathbb{F}$ and $u \in U$ then:
$$\langle \lambda v, u \rangle = \lambda \langle v, u \rangle = \lambda \cdot 0 = 0.$$
Thus $\lambda v \in U^{\perp}$. Thus $U^{\perp}$ is a subspace of $V$.
(b) Suppose $v \in V$. Then $\langle v, 0 \rangle = 0$, implying that $v \in \{0\}^{\perp}$, so $\{0\}^{\perp} = V$.
(c) Suppose $v \in V^{\perp}$. Then $\langle v, v \rangle = 0$, so then $v = 0$, thus $V^{\perp} = \{0\}$.
(d) Suppose $U$ is a subset of $V$ and $v \in U \cap U^{\perp}$. Then $\langle v, v \rangle = 0$, so $v = 0$, so $U \cap U^{\perp} \subseteq \{0\}$.
(e) Suppose $U$ and $W$ are subsets of $V$ and $U \subseteq W$. Suppose $v \in W^{\perp}$; then $\langle v, u \rangle = 0$ for all $u \in W$. So $\langle v, u \rangle = 0$ for all $u \in U$, so $v \in U^{\perp}$, thus $W^{\perp} \subseteq U^{\perp}$.
☐
Recall that if $U, W$ are subspaces of $V$ then $U + W$ is a direct sum iff $U \cap W = \{0\}$. In a similar manner:
Direct sum of a subspace and its orthogonal complement
Suppose $U$ is a finite-dimensional subspace of $V$. Then:
$$V = U \oplus U^{\perp}.$$
Proof
$U \cap U^{\perp}$ is a subset of $\{0\}$ by the basic properties above, so then the sum is direct. Since $0 \in U \cap U^{\perp}$ then we have equality $U \cap U^{\perp} = \{0\}$. Thus, if we show $V = U + U^{\perp}$ then we are good. Notice that we can always do that, because if $v \in V$ and $e_1, \ldots, e_m$ is an orthonormal basis of $U$ then:
$$v = \underbrace{\langle v, e_1 \rangle e_1 + \cdots + \langle v, e_m \rangle e_m}_{u} + \underbrace{v - \langle v, e_1 \rangle e_1 - \cdots - \langle v, e_m \rangle e_m}_{w};$$
choose $u, w$ as defined above. Clearly $u \in U$ since $e_1, \ldots, e_m$ is an orthonormal list spanning $U$. For each $j \in \{1, \ldots, m\}$ we have:
$$\langle w, e_j \rangle = \langle v, e_j \rangle - \langle v, e_j \rangle = 0,$$
so $w$ is orthogonal to each vector in $\operatorname{span}(e_1, \ldots, e_m) = U$, so $w \in U^{\perp}$. Thus $v = u + w$, showing the sum $V = U + U^{\perp}$. Hence $V = U \oplus U^{\perp}$.
☐
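The proof above is constructive: given an orthonormal basis of $U$, it says exactly how to split any $v$ into $u + w$ with $u \in U$ and $w \in U^{\perp}$. A small numerical sketch of that recipe (the basis and $v$ are made up):

```python
# Sketch of the decomposition from the proof: u = <v,e1> e1 + <v,e2> e2 is in
# U, and w = v - u lands in U^perp. The basis and v are made up.
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])     # orthonormal basis of a plane U in R^3
e2 = np.array([0.0, 1.0, 0.0])
v  = np.array([3.0, -2.0, 5.0])

u = (v @ e1) * e1 + (v @ e2) * e2  # the U-component of v
w = v - u                          # the rest

print(u, w)                        # [ 3. -2.  0.] [0. 0. 5.]
print(w @ e1, w @ e2)              # 0.0 0.0, so w is in U^perp and v = u + w
```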
Dimension of the orthogonal complement
Suppose $V$ is finite-dimensional and $U$ is a subspace of $V$. Then:
$$\dim U^{\perp} = \dim V - \dim U.$$
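A quick numerical sanity check of this dimension count, reusing the null-space view of $U^{\perp}$ from above (the subspace is randomly made up):

```python
# Sketch: dim(U^perp) = dim V - dim U, checked for a random U inside R^7.
import numpy as np
from scipy.linalg import null_space

A = np.random.randn(3, 7)            # rows span U; dim U = 3 almost surely
dim_U = np.linalg.matrix_rank(A)
dim_U_perp = null_space(A).shape[1]  # number of basis vectors of U^perp

print(dim_U, dim_U_perp)             # 3 4
print(dim_U + dim_U_perp == 7)       # True: the dimensions add up to dim V
```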
The orthogonal complement of the orthogonal complement
Suppose $U$ is a finite-dimensional subspace of $V$. Then:
$$U = \left(U^{\perp}\right)^{\perp}.$$
Proof
First we show ($\subseteq$), i.e. $U \subseteq (U^{\perp})^{\perp}$. Suppose $u \in U$. Then $\langle u, w \rangle = 0$ for every $w \in U^{\perp}$. Because $u$ is orthogonal to every vector in $U^{\perp}$, then $u \in (U^{\perp})^{\perp}$.
For the other direction, suppose $v \in (U^{\perp})^{\perp}$. By Chapter 6 (cont.) - Finishing Inner Product Spaces#^8fdc3c, we can write $v = u + w$ where $u \in U$ and $w \in U^{\perp}$. We have $v - u = w \in U^{\perp}$. Because $v \in (U^{\perp})^{\perp}$ and $u \in (U^{\perp})^{\perp}$ (from using $U \subseteq (U^{\perp})^{\perp}$), then $v - u \in (U^{\perp})^{\perp}$ by closure under vector addition. Thus $v - u \in U^{\perp} \cap (U^{\perp})^{\perp}$, so then $v - u$ is orthogonal to itself, so $\langle v - u, v - u \rangle = 0$, so $v - u = 0$, so $v = u \in U$.
☐
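Numerically, taking the orthogonal complement twice should hand back the subspace we started with. A sketch that checks this by comparing orthogonal projectors (the subspace is made up):

```python
# Sketch: (U^perp)^perp = U, checked by computing null spaces twice and
# comparing the orthogonal projectors of the two subspaces.
import numpy as np
from scipy.linalg import null_space

A = np.random.randn(2, 5)             # rows span U, a 2-dim subspace of R^5
B = null_space(A)                     # columns: orthonormal basis of U^perp
C = null_space(B.T)                   # columns: basis of (U^perp)^perp

Q, _ = np.linalg.qr(A.T)              # columns: orthonormal basis of U
print(np.allclose(Q @ Q.T, C @ C.T))  # True: the two subspaces coincide
```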
orthogonal projection, $P_U$
Suppose $U$ is a finite-dimensional subspace of $V$. The orthogonal projection of $V$ onto $U$ is the operator $P_U \in \mathcal{L}(V)$ defined as follows. For $v \in V$, write $v = u + w$ where $u \in U$ and $w \in U^{\perp}$. Then $P_U v = u$.
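In coordinates (real case), if the columns of a matrix $B$ form an orthonormal basis of $U$, then $P_U$ acts as the matrix $B B^{\mathsf{T}}$. A minimal sketch with a made-up subspace:

```python
# Sketch: P_U as the matrix B B^T, where B has orthonormal columns spanning U.
import numpy as np

B, _ = np.linalg.qr(np.random.randn(4, 2))  # orthonormal basis of a 2-dim U
P = B @ B.T                                 # the matrix of P_U on R^4

v = np.random.randn(4)
u, w = P @ v, v - P @ v                     # v = u + w, u in U, w in U^perp

print(np.allclose(P @ P, P))                # True: projecting twice = once
print(np.allclose(B.T @ w, 0))              # True: w really is in U^perp
```

The idempotence check $P^2 = P$ matches the definition: once $v$ has been replaced by its $U$-part, projecting again changes nothing.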
The following problem is really common: given a subspace $U$ of $V$ and a point $v \in V$, find a point $u \in U$ such that $\|v - u\|$ is as small as possible. The next lemma shows that this problem is solved by taking $u = P_U v$.
Minimizing the distance to a subspace
Suppose $U$ is a finite-dimensional subspace of $V$, $v \in V$, and $u \in U$. Then:
$$\|v - P_U v\| \le \|v - u\|.$$
Furthermore, the inequality above is an equality iff $u = P_U v$.
Proof
We have:
$$\begin{aligned} \|v - P_U v\|^2 &\le \|v - P_U v\|^2 + \|P_U v - u\|^2 \\ &= \|(v - P_U v) + (P_U v - u)\|^2 \\ &= \|v - u\|^2, \end{aligned}$$
where the first equality follows from the Pythagorean theorem, since $v - P_U v \in U^{\perp}$ and $P_U v - u \in U$ are orthogonal. Taking square roots gives the desired inequality. Here we only get equality if the top line is an equality, iff $\|P_U v - u\|^2 = 0$, iff $u = P_U v$.
☐
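To watch the lemma work numerically: no point of $U$, however we pick it, should get closer to $v$ than $P_U v$ does. A quick randomized check (the subspace and $v$ are made up):

```python
# Sketch: ||v - P_U v|| <= ||v - u|| for every u in U; we test random u's.
import numpy as np

B, _ = np.linalg.qr(np.random.randn(4, 2))  # orthonormal basis of U (columns)
v = np.random.randn(4)
best = np.linalg.norm(v - B @ (B.T @ v))    # ||v - P_U v||

# Try many random points u = B c of U; none should beat the projection.
dists = [np.linalg.norm(v - B @ np.random.randn(2)) for _ in range(10_000)]
print(best <= min(dists))                   # True
```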
Let's do a cool example! Let's find a polynomial $u$ with real coefficients and degree at most 5 that approximates $\sin x$ as well as possible on the interval $[-\pi, \pi]$, in the sense that:
$$\int_{-\pi}^{\pi} \left|\sin x - u(x)\right|^2 \, dx$$
is as small as possible. We'll compare to the Taylor series approximation. Let $C[-\pi, \pi]$ denote the real inner product space of continuous real-valued functions on $[-\pi, \pi]$ with inner product:
$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\, g(x) \, dx.$$
Let $v \in C[-\pi, \pi]$ be the function defined by $v(x) = \sin x$. Let $U$ denote the subspace of $C[-\pi, \pi]$ consisting of the polynomials with real coefficients and degree at most 5. Our problem can be reformulated as: find $u \in U$ that minimizes $\|v - u\|$, which by the lemma above is solved by $u = P_U v$.
Doing this here (applying Gram-Schmidt to the basis $1, x, x^2, x^3, x^4, x^5$ of $U$ to get an orthonormal basis $e_1, \ldots, e_6$, then computing $u = P_U v = \langle v, e_1 \rangle e_1 + \cdots + \langle v, e_6 \rangle e_6$) shows that $u$ is the function defined by:
$$u(x) \approx 0.987862\, x - 0.155271\, x^3 + 0.00564312\, x^5,$$
with the coefficients rounded,
as a close approximation. And to see how good it is, check out the plot of $u$ together with $\sin x$ on $[-\pi, \pi]$:
Can you see the red sine wave? That's $\sin x$! That's right, this sine wave is really well approximated by the blue curve $u$.
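If you want to reproduce the computation of $u$, here is a sketch in Python (the helper `inner` and the use of scipy's `quad` are my choices, not the text's). It runs Gram-Schmidt on the monomials under the integral inner product, then projects $\sin$ onto the resulting orthonormal basis:

```python
# Sketch: best degree-5 L^2 approximation of sin on [-pi, pi], via
# Gram-Schmidt on 1, x, ..., x^5 followed by orthogonal projection.
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    # <f, g> = integral over [-pi, pi] of f(x) g(x) dx
    return quad(lambda x: f(x) * g(x), -np.pi, np.pi)[0]

monomials = [np.polynomial.Polynomial.basis(k) for k in range(6)]

ortho = []                                    # orthonormal basis of U
for p in monomials:
    for e in ortho:
        p = p - inner(p, e) * e               # subtract components along e
    ortho.append(p / np.sqrt(inner(p, p)))    # normalize

u = sum(inner(np.sin, e) * e for e in ortho)  # u = P_U(sin)
print(u)  # ~ 0.987862 x - 0.155271 x^3 + 0.00564312 x^5 (even coeffs ~ 0)
```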
Compare that with the 5-th degree Taylor polynomial of $\sin x$ at $0$,
$$x - \frac{x^3}{3!} + \frac{x^5}{5!},$$
plotted over the same interval:
Look at how much better our approximation is! That's because Taylor polynomials really are only good near the expansion point $0$. Other than that, they start to really suck.
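For a concrete sense of the gap, compare the worst-case errors of the two approximations over $[-\pi, \pi]$ (a sketch; the coefficients of $u$ are the rounded ones from above):

```python
# Sketch: max |sin x - approx(x)| on [-pi, pi] for both approximations.
import numpy as np

x = np.linspace(-np.pi, np.pi, 100_001)
u      = 0.987862 * x - 0.155271 * x**3 + 0.00564312 * x**5
taylor = x - x**3 / 6 + x**5 / 120

print(np.max(np.abs(np.sin(x) - u)))       # tiny across the whole interval
print(np.max(np.abs(np.sin(x) - taylor)))  # much larger, worst near +-pi
```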