Chapter 6 (cont.) - Finishing Inner Product Spaces

We continue from the discussion of linear functionals via Chapter 6 - Inner Product Spaces#^4afaec.

Linear Functionals on Inner Product Spaces

Recall our definition of a linear functional from 3.F:

linear functional

A linear functional on a vector space $V$ is an element of $\mathcal{L}(V, \mathbb{F})$.

Notice that, for a fixed vector $u \in V$, taking the inner product with $u$ defines such a functional:

$$\varphi : V \to \mathbb{F}, \qquad \varphi(v) = \langle v, u \rangle$$

where $\varphi \in \mathcal{L}(V, \mathbb{F})$. For example, consider $\varphi : \mathcal{P}_2(\mathbb{R}) \to \mathbb{R}$ given by:

$$\varphi(p) = \int_{-1}^{1} p(t)\cos(\pi t)\, dt$$

This is a linear functional, but it is not obvious that there exists $u \in \mathcal{P}_2(\mathbb{R})$ such that for all $p \in \mathcal{P}_2(\mathbb{R})$:

$$\varphi(p) = \langle p, u \rangle$$

because we cannot simply take $u(t) = \cos(\pi t)$: notice that $\cos(\pi t) \notin \mathcal{P}_2(\mathbb{R})$.

Riesz Representation Theorem

Suppose $V$ is finite-dimensional and $\varphi$ is a linear functional on $V$. Then there is a unique vector $u \in V$ such that:

$$\varphi(v) = \langle v, u \rangle$$

for every $v \in V$.

Proof
First we show there exists a vector $u \in V$ such that $\varphi(v) = \langle v, u \rangle$ for every $v \in V$. Let $e_1, \dots, e_n$ be an orthonormal basis of $V$. Then:

$$\varphi(v) = \varphi\left(\sum_{i=1}^n \langle v, e_i \rangle e_i\right) = \sum_{i=1}^n \langle v, e_i \rangle \varphi(e_i) = \left\langle v, \sum_{i=1}^n \overline{\varphi(e_i)}\, e_i \right\rangle$$

where the first equality comes from Chapter 6 - Inner Product Spaces#^ef9c3b (writing $v$ in the orthonormal basis), and the conjugates appear because scalars come out of the second slot conjugated. Thus, setting:

$$u = \sum_{i=1}^n \overline{\varphi(e_i)}\, e_i$$

we get $\varphi(v) = \langle v, u \rangle$ for every $v \in V$, as desired.

Now we prove that only one vector $u \in V$ has the desired behavior. Suppose $u_1, u_2 \in V$ are such that:

$$\langle v, u_1 \rangle = \langle v, u_2 \rangle = \varphi(v)$$

for every $v \in V$. Then:

$$0 = \langle v, u_1 \rangle - \langle v, u_2 \rangle = \langle v, u_1 - u_2 \rangle$$

for every $v \in V$. Taking $v = u_1 - u_2$ shows that $u_1 - u_2 = 0$, so $u_1 = u_2$, proving uniqueness.

As an example, let's find $u \in \mathcal{P}_2(\mathbb{R})$ such that:

$$\int_{-1}^{1} p(t)\cos(\pi t)\, dt = \int_{-1}^{1} p(t)\, u(t)\, dt$$

for all $p \in \mathcal{P}_2(\mathbb{R})$, where the inner product is $\langle p, q \rangle = \int_{-1}^{1} p(t) q(t)\, dt$. The way to do this is to let:

$$\varphi(p) = \int_{-1}^{1} p(t)\cos(\pi t)\, dt$$

which is clearly linear. Now apply Chapter 6 (cont.) - Finishing Inner Product Spaces#^a857c6 via the formula:

$$u = \sum_{i=1}^n \varphi(e_i)\, e_i$$

(the conjugates disappear because we are working over $\mathbb{R}$).

Then, using the orthonormal basis of $\mathcal{P}_2(\mathbb{R})$ from an earlier example:

$$u(x) = \sum_{i=1}^{3} \varphi(e_i)\, e_i = \left(\int_{-1}^{1} \sqrt{\tfrac{1}{2}}\cos(\pi t)\, dt\right)\sqrt{\tfrac{1}{2}} + \left(\int_{-1}^{1} \sqrt{\tfrac{3}{2}}\, t\cos(\pi t)\, dt\right)\sqrt{\tfrac{3}{2}}\, x + \left(\int_{-1}^{1} \sqrt{\tfrac{45}{8}}\left(t^2 - \tfrac{1}{3}\right)\cos(\pi t)\, dt\right)\sqrt{\tfrac{45}{8}}\left(x^2 - \tfrac{1}{3}\right)$$

where a bit of simplification (the first two integrals vanish by symmetry) shows that:

$$u(x) = -\frac{45}{2\pi^2}\left(x^2 - \frac{1}{3}\right)$$

Notice that the formula for $u$ appears to depend on the orthonormal basis $e_1, \dots, e_n$ as well as on $\varphi$. However, by Chapter 6 (cont.) - Finishing Inner Product Spaces#^a857c6, $u$ is uniquely determined by $\varphi$, so the right-hand side of the formula yields the same vector $u$ regardless of which orthonormal basis $e_1, \dots, e_n$ of $V$ is chosen.
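
To make this concrete, here is a small numerical sketch (my own check, not from the text). It assembles $u = \sum_i \varphi(e_i) e_i$ with scipy, assuming the orthonormal basis $e_1 = \sqrt{1/2}$, $e_2 = \sqrt{3/2}\, x$, $e_3 = \sqrt{45/8}\,(x^2 - 1/3)$ used in the example:

```python
# Numerical check of the Riesz example: build u = sum_i phi(e_i) e_i and
# compare against the closed form -45/(2 pi^2) (x^2 - 1/3).
import numpy as np
from scipy.integrate import quad

# Orthonormal basis of P_2(R) under <p, q> = integral_{-1}^{1} p(t) q(t) dt
basis = [
    lambda t: np.sqrt(1 / 2) + 0 * t,
    lambda t: np.sqrt(3 / 2) * t,
    lambda t: np.sqrt(45 / 8) * (t**2 - 1 / 3),
]

def phi(p):
    # phi(p) = integral_{-1}^{1} p(t) cos(pi t) dt
    return quad(lambda t: p(t) * np.cos(np.pi * t), -1, 1)[0]

coeffs = [phi(e) for e in basis]
u = lambda x: sum(c * e(x) for c, e in zip(coeffs, basis))

x = 0.7
print(u(x), -45 / (2 * np.pi**2) * (x**2 - 1 / 3))  # the two values agree
```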

6.C: Orthogonal Complements and Minimization Problems

orthogonal complement, $U^\perp$

If $U$ is a subset of $V$, then the orthogonal complement of $U$, denoted $U^\perp$, is the set of all vectors in $V$ that are orthogonal to every vector in $U$:

$$U^\perp = \{ v \in V : \langle v, u \rangle = 0 \text{ for all } u \in U \}$$

For instance, if $U$ is a line in $\mathbb{R}^3$, then $U^\perp$ is the plane containing the origin that is perpendicular to $U$. If $U$ is a plane in $\mathbb{R}^3$, then $U^\perp$ is the line containing the origin that is perpendicular to $U$.
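
For intuition in coordinates: with the standard inner product on $\mathbb{R}^n$, $U^\perp$ is the null space of any matrix whose rows span $U$. A quick sketch of this (my own illustration, using scipy's `null_space` helper):

```python
# In R^3 with the standard inner product, U^perp is the null space of the
# matrix whose rows span U. Here U = span((1, 2, 3)) is a line, so U^perp
# is the perpendicular plane through the origin.
import numpy as np
from scipy.linalg import null_space

U = np.array([[1.0, 2.0, 3.0]])   # row(s) spanning U
perp = null_space(U)              # columns: orthonormal basis of U^perp

print(perp.shape)   # (3, 2): a 2-dimensional plane through the origin
print(U @ perp)     # ~0: every basis vector of U^perp is orthogonal to U
```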

Basic properties of the orthogonal complement

  • (a) If $U$ is a subset of $V$, then $U^\perp$ is a subspace of $V$.
  • (b) $\{0\}^\perp = V$.
  • (c) $V^\perp = \{0\}$.
  • (d) If $U$ is a subset of $V$, then $U \cap U^\perp \subseteq \{0\}$.
  • (e) If $U, W$ are subsets of $V$ and $U \subseteq W$, then $W^\perp \subseteq U^\perp$.

Proof
(a) Suppose $U$ is a subset of $V$. Then $\langle 0, u \rangle = 0$ for every $u \in U$, so $0 \in U^\perp$. Suppose $v, w \in U^\perp$. If $u \in U$ then:

$$\langle v + w, u \rangle = \langle v, u \rangle + \langle w, u \rangle = 0 + 0 = 0$$

Thus $v + w \in U^\perp$. If $\lambda \in \mathbb{F}$ then:

$$\langle \lambda v, u \rangle = \lambda \langle v, u \rangle = \lambda \cdot 0 = 0$$

Thus $\lambda v \in U^\perp$. Hence $U^\perp$ is a subspace of $V$.

(b) Suppose $v \in V$. Then $\langle v, 0 \rangle = 0$, implying that $v \in \{0\}^\perp$, so $\{0\}^\perp = V$.
(c) Suppose $v \in V^\perp$. Then in particular $\langle v, v \rangle = 0$, so $v = 0$; thus $V^\perp = \{0\}$.
(d) Suppose $U$ is a subset of $V$ and $v \in U \cap U^\perp$. Then $\langle v, v \rangle = 0$, so $v = 0$, giving $U \cap U^\perp \subseteq \{0\}$.
(e) Suppose $U, W$ are subsets of $V$ and $U \subseteq W$. Suppose $v \in W^\perp$; then $\langle v, u \rangle = 0$ for all $u \in W$. In particular $\langle v, u \rangle = 0$ for all $u \in U$, so $v \in U^\perp$. Thus $W^\perp \subseteq U^\perp$.

Recall that if $U, W$ are subspaces of $V$, then the sum $U + W$ is direct (written $U \oplus W$) iff $U \cap W = \{0\}$. In a similar manner:

Direct sum of a subspace and its orthogonal complement

Suppose $U$ is a finite-dimensional subspace of $V$. Then:

$$V = U \oplus U^\perp$$

Proof
By (d) above, $U \cap U^\perp \subseteq \{0\}$; since $0$ lies in both $U$ and $U^\perp$, we have $U \cap U^\perp = \{0\}$. Thus it suffices to show $V = U + U^\perp$. Let $v \in V$ and let $e_1, \dots, e_m$ be an orthonormal basis of $U$. Write:

$$v = \underbrace{\sum_{i=1}^m \langle v, e_i \rangle e_i}_{u} + \underbrace{v - \sum_{i=1}^m \langle v, e_i \rangle e_i}_{w}$$

with $u, w$ defined as marked above. Clearly $u \in U$, since $u$ is a linear combination of $e_1, \dots, e_m$. For each $j = 1, \dots, m$ we have:

$$\langle w, e_j \rangle = \langle v, e_j \rangle - \langle v, e_j \rangle = 0$$

so $w$ is orthogonal to every vector in $\operatorname{span}(e_1, \dots, e_m) = U$, i.e. $w \in U^\perp$. Thus $v = u + w$, showing $V = U + U^\perp$. A coded version of this decomposition is sketched below.
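
The proof is constructive, so it translates directly into code. A sketch (my own, not from the text) that decomposes a vector of $\mathbb{R}^4$ as $v = u + w$, getting an orthonormal basis of a random $U$ from a QR factorization:

```python
# Decompose v = u + w with u in U and w in U^perp, following the proof:
# u = sum_i <v, e_i> e_i, w = v - u.
import numpy as np

rng = np.random.default_rng(1)
E, _ = np.linalg.qr(rng.standard_normal((4, 2)))  # columns e_1, e_2: orthonormal basis of U
v = rng.standard_normal(4)

u = sum(np.dot(v, E[:, i]) * E[:, i] for i in range(E.shape[1]))
w = v - u

print(np.allclose(v, u + w))    # v = u + w
print(np.allclose(E.T @ w, 0))  # <w, e_j> = 0 for every j, so w is in U^perp
```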

Dimension of the orthogonal complement

Suppose $V$ is finite-dimensional and $U$ is a subspace of $V$. Then:

$$\dim(U^\perp) = \dim(V) - \dim(U)$$

Proof
Apply Chapter 3 (cont.) - Products and Quotients of Vector Spaces#^5193d7 to the direct sum $V = U \oplus U^\perp$.

The orthogonal complement of the orthogonal complement

Suppose $U$ is a finite-dimensional subspace of $V$. Then:

$$U = (U^\perp)^\perp$$

Proof
First we show $U \subseteq (U^\perp)^\perp$. Suppose $u \in U$. Then $\langle u, v \rangle = 0$ for every $v \in U^\perp$. Because $u$ is orthogonal to every vector in $U^\perp$, we have $u \in (U^\perp)^\perp$.

For the other direction, suppose $v \in (U^\perp)^\perp$. By Chapter 6 (cont.) - Finishing Inner Product Spaces#^8fdc3c, we can write $v = u + w$ where $u \in U$ and $w \in U^\perp$. Then $v - u = w \in U^\perp$. Because $v \in (U^\perp)^\perp$ and $u \in (U^\perp)^\perp$ (by the inclusion just shown), $w = v - u \in (U^\perp)^\perp$ by closure under addition. Thus $v - u \in U^\perp \cap (U^\perp)^\perp$, so $v - u$ is orthogonal to itself, hence $v - u = 0$, so $v = u \in U$.
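
Numerically, taking the orthogonal complement twice should return the original subspace. A small sketch of this (my own check, reusing the `null_space` idea from earlier):

```python
# Applying "perp" twice returns U: (U^perp)^perp = U.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 1.0]])  # rows span U in R^3
P1 = null_space(A)      # orthonormal basis of U^perp (columns)
P2 = null_space(P1.T)   # orthonormal basis of (U^perp)^perp

# (U^perp)^perp should equal U: each row of A must lie in span(P2)
print(np.allclose(P2 @ (P2.T @ A.T), A.T))  # True
```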

orthogonal projection, $P_U$

Suppose $U$ is a finite-dimensional subspace of $V$. The orthogonal projection of $V$ onto $U$ is the operator $P_U \in \mathcal{L}(V)$ defined as follows: for $v \in V$, write $v = u + w$ where $u \in U$ and $w \in U^\perp$; then $P_U v = u$.

$P_U$ is well defined because the decomposition $v = u + w$ exists and is unique by Chapter 6 (cont.) - Finishing Inner Product Spaces#^8fdc3c.

Properties of the orthogonal projection PU

  • $P_U \in \mathcal{L}(V)$.
  • $P_U u = u$ for all $u \in U$.
  • $P_U w = 0$ for all $w \in U^\perp$.
  • $\operatorname{range}(P_U) = U$.
  • $\operatorname{null}(P_U) = U^\perp$.
  • $v - P_U v \in U^\perp$.
  • $P_U^2 = P_U$.
  • $\|P_U v\| \le \|v\|$.
  • For every orthonormal basis $e_1, \dots, e_m$ of $U$:

$$P_U v = \sum_{i=1}^m \langle v, e_i \rangle e_i$$

Proof
The proofs of these are all very easy, and can be found in 2015_Book_LinearAlgebraDoneRight#page=197. A few of the properties are checked numerically below.
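
Here's a sketch (my own, for $\mathbb{R}^n$ with the standard inner product) implementing $P_U v = \sum_i \langle v, e_i \rangle e_i$ via a QR-derived orthonormal basis and checking two of the properties above:

```python
# P_U via an orthonormal basis of U; check P_U^2 = P_U and ||P_U v|| <= ||v||.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2))   # columns span U in R^5
Q, _ = np.linalg.qr(B)            # columns of Q: orthonormal basis of U

def P_U(v):
    # P_U v = sum_i <v, e_i> e_i, written as matrix products
    return Q @ (Q.T @ v)

v = rng.standard_normal(5)
print(np.allclose(P_U(P_U(v)), P_U(v)))             # P_U^2 = P_U
print(np.linalg.norm(P_U(v)) <= np.linalg.norm(v))  # ||P_U v|| <= ||v||
```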

Minimization Problems

The following problem is really common: given a subspace $U$ of $V$ and a point $v \in V$, find a point $u \in U$ such that $\|v - u\|$ is as small as possible. The next result shows that this problem is solved by taking $u = P_U v$.

Minimizing the distance to a subspace

Suppose $U$ is a finite-dimensional subspace of $V$, $v \in V$, and $u \in U$. Then:

$$\|v - P_U v\| \le \|v - u\|$$

Furthermore, the inequality above is an equality iff $u = P_U v$.

Proof

$$\|v - P_U v\|^2 \le \|v - P_U v\|^2 + \|P_U v - u\|^2 = \|(v - P_U v) + (P_U v - u)\|^2 = \|v - u\|^2$$

The inequality holds because $0 \le \|P_U v - u\|^2$, and the middle equality is the Pythagorean Theorem, which applies because $v - P_U v \in U^\perp$ and $P_U v - u \in U$. Taking square roots gives the result. We get equality exactly when $\|P_U v - u\|^2 = 0$, i.e. iff $u = P_U v$.
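
As a quick numerical sanity check (a sketch, not from the text), we can compare $\|v - P_U v\|$ against $\|v - u\|$ for a few random $u \in U$:

```python
# P_U v should beat any other u in U at minimizing ||v - u||.
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 3)))  # orthonormal basis of U
v = rng.standard_normal(6)
best = Q @ (Q.T @ v)                              # P_U v

for _ in range(5):
    u = Q @ rng.standard_normal(3)                # a random vector in U
    assert np.linalg.norm(v - best) <= np.linalg.norm(v - u)
print("P_U v minimized ||v - u|| over all samples tried")
```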

Let's do a cool example! Let's find a polynomial $u$ with real coefficients and degree at most 5 that approximates $\sin(x)$ as well as possible on the interval $[-\pi, \pi]$, in the sense that:

$$\int_{-\pi}^{\pi} |\sin(x) - u(x)|^2\, dx$$

is as small as possible. We'll compare the result to the Taylor series approximation. Let $C_{\mathbb{R}}[-\pi, \pi]$ denote the real inner product space of continuous real-valued functions on $[-\pi, \pi]$ with inner product:

$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x)\, dx$$

Let $v \in C_{\mathbb{R}}[-\pi, \pi]$ be the function defined by $v(x) = \sin(x)$. Let $U$ denote the subspace of $C_{\mathbb{R}}[-\pi, \pi]$ consisting of the polynomials with real coefficients and degree at most 5. Our problem can be reformulated as:

Find $u \in U$ such that $\|v - u\|$ is as small as possible.

To compute this, apply the Gram-Schmidt Procedure (Chapter 6 - Inner Product Spaces#^d43651), using the inner product above, to the basis $\beta_s = \{1, x, x^2, x^3, x^4, x^5\}$ of $U$, producing an orthonormal basis $e_1, \dots, e_6$ of $U$. Then compute $P_U v$ using the last property in Chapter 6 (cont.) - Finishing Inner Product Spaces#^3861ef, with $m = 6$:

$$P_U v = \sum_{i=1}^{6} \langle v, e_i \rangle e_i$$

Carrying out this computation shows that $P_U v$ is the function $u$ defined (with rounded coefficients) by:

$$u(x) = 0.987862\, x - 0.155271\, x^3 + 0.00564312\, x^5$$

To see how good this approximation is, check it out:

![[desmos-graph(1).png]]

Can you see the red sine wave? That's $\sin(x)$! That's right, the sine wave is almost indistinguishable from the blue $u(x)$ curve.

Compare that with the degree-5 Taylor polynomial $x - \frac{x^3}{3!} + \frac{x^5}{5!}$ over the same interval:

![[desmos-graph(2).png]]

Look at how much better our projection is! Taylor polynomials are only accurate near the center of expansion (here roughly $|x| < 2$); beyond that they degrade badly, while $u(x)$ stays close to $\sin(x)$ across all of $[-\pi, \pi]$.
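
Here's a sketch of the whole pipeline (my own reproduction, not the text's computation): Gram-Schmidt on $\{1, x, \dots, x^5\}$ under the $[-\pi, \pi]$ inner product, then the projection formula. The coefficients should come out close to the ones quoted above:

```python
# Degree-5 least-squares approximation of sin on [-pi, pi] via P_U.
import numpy as np
from numpy.polynomial import Polynomial
from scipy.integrate import quad

def inner(f, g):
    # <f, g> = integral over [-pi, pi] of f(x) g(x) dx
    return quad(lambda x: f(x) * g(x), -np.pi, np.pi)[0]

# Gram-Schmidt on the basis {1, x, ..., x^5} of U
ortho = []
for k in range(6):
    p = Polynomial.basis(k)
    for e in ortho:
        p = p - inner(p, e) * e
    ortho.append(p / np.sqrt(inner(p, p)))

# u = P_U(sin) = sum_i <sin, e_i> e_i
u = sum((inner(np.sin, e) * e for e in ortho), Polynomial([0.0]))
print(u.coef)  # ~[0, 0.987862, 0, -0.155271, 0, 0.00564312]
```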