This lecture is much more about applications of what we've learned so far.
Quadratic Forms
A quadratic form in commuting variables $x_1, \dots, x_n$ is a homogeneous polynomial of degree 2 in those variables.

Here homogeneous just means that every monomial in the polynomial is of degree exactly 2. As an example:

$$q(x_1, x_2) = 2x_1^2 + 6x_1x_2 + 5x_2^2$$

is a quadratic form in the commuting variables $x_1, x_2$. This is homogeneous since each term is of degree exactly 2. If we added a term like $x_1$ or a constant, it wouldn't fit the form, since we need every monomial to be of degree exactly 2 in the $x_i$.
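As a quick sanity check of the definition (a minimal sketch using sympy, not from the lecture; the polynomial is just the example above), we can confirm homogeneity by checking that every monomial has total degree exactly 2:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
q = 2*x1**2 + 6*x1*x2 + 5*x2**2  # the example quadratic form

# Homogeneous of degree 2: every monomial's exponents sum to 2.
degrees = [sum(exps) for exps in sp.Poly(q, x1, x2).monoms()]
print(degrees)                       # [2, 2, 2]
print(all(d == 2 for d in degrees))  # True
```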
If we have a quadratic form, then we can write it as:

$$q(x_1, \dots, x_n) = \sum_{i,j=1}^{n} c_{ij} \, x_i x_j.$$

We don't really need to do the entire summation, as $x_i x_j = x_j x_i$. As such, we may restrict the sum to have $i \le j$. Notice:

$$q(x_1, \dots, x_n) = \sum_{i \le j} c_{ij} \, x_i x_j.$$
You may guess that since $x_i x_j = x_j x_i$ we have some freedom. This will be the case for the matrix representation: a cross-term coefficient $c_{ij}$ (for $i < j$) can be split between the $(i,j)$ and $(j,i)$ entries of a matrix however we like.
As an instance of this, consider:

$$2x_1^2 + 6x_1x_2 + 5x_2^2 = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} 2 & a_{12} \\ a_{21} & 5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

We want to convert the LHS to the matrix representation we noticed before. We get the entries in the matrix by carrying out the matrix multiplication and matching coefficients. This works as long as:

$$a_{12} + a_{21} = 6.$$

We want to make $A$ symmetric, so we can do that by splitting the cross term evenly! Thus take $a_{12} = 3$ and $a_{21} = 3$. Thus:

$$A = \begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix}$$

is a symmetric matrix. Notice in general: the coefficients $c_{ii}$ of the $x_i^2$ terms sit on the diagonal, and each off-diagonal pair of entries satisfies $a_{ij} = a_{ji} = \frac{c_{ij}}{2}$ for $i < j$, so the pair sums back to the cross-term coefficient.
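Numerically (a minimal numpy sketch, not from the lecture; `quad_form` is our own helper), any split with $a_{12} + a_{21} = 6$ evaluates to the same polynomial, and the symmetric split is just the canonical choice:

```python
import numpy as np

def quad_form(A, x):
    # Evaluate the quadratic form x^T A x.
    return x @ A @ x

A_sym = np.array([[2.0, 3.0], [3.0, 5.0]])  # symmetric split: a12 = a21 = 3
A_alt = np.array([[2.0, 6.0], [0.0, 5.0]])  # another valid split: a12 = 6, a21 = 0

x = np.array([1.3, -0.7])
poly = 2*x[0]**2 + 6*x[0]*x[1] + 5*x[1]**2  # evaluate the polynomial directly
print(np.isclose(quad_form(A_sym, x), poly))  # True
print(np.isclose(quad_form(A_alt, x), poly))  # True
```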
Symmetry in $A$
Every quadratic form can be put in the form:

$$q(x) = x^T A x,$$

where $A$ is a symmetric matrix and $x = (x_1, \dots, x_n)^T$.
The operator $T_A$ defined by $T_A(x) = Ax$ must then be normal and self-adjoint (if we restrict to real scalars). Namely, the adjoint $T_A^*$ is defined by $\langle T_A x, y \rangle = \langle x, T_A^* y \rangle$, which here gives $T_A^* = T_{A^T}$. Thus $T_A$ is self-adjoint since $A^T = A$. Using the Spectral Theorem, we must have an orthonormal eigenbasis $v_1, \dots, v_n$ with respect to which the matrix of $T_A$ is diagonal.
If we let:

$$P = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}$$

be the orthogonal matrix whose columns are these orthonormal eigenvectors. Thus:

$$P^T A P = D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}.$$

In particular, we have $A = PDP^T$, or $D = P^T A P$, where $A v_i = \lambda_i v_i$ for each eigenvector.
So given a quadratic form $q(x) = x^T A x$, the change of variables $x = Py$ gives:

$$q(x) = (Py)^T A (Py) = y^T (P^T A P) y = y^T D y = \lambda_1 y_1^2 + \cdots + \lambda_n y_n^2.$$
This is a nice theorem that we just proved:
Theorem
Given a symmetric matrix $A$, there is an orthogonal change of variables $x = Py$ (or $y = P^T x$) that transforms the quadratic form $x^T A x$ into $\lambda_1 y_1^2 + \cdots + \lambda_n y_n^2$, where there are no cross terms. Here the $\lambda_i$ are the eigenvalues of $A$.
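Here is a minimal numerical sketch of the theorem using numpy's `eigh`, which returns the eigenvalues of a symmetric matrix along with an orthogonal matrix of eigenvectors (the matrix below is the symmetric $A$ from the earlier example):

```python
import numpy as np

A = np.array([[2.0, 3.0], [3.0, 5.0]])

# For a symmetric matrix, eigh returns real eigenvalues and an
# orthogonal matrix P whose columns are orthonormal eigenvectors.
eigvals, P = np.linalg.eigh(A)

D = P.T @ A @ P          # should be diagonal, with eigvals on the diagonal
print(np.round(D, 10))

# Check that the change of variables x = P y removes the cross terms:
y = np.array([0.4, -1.1])
x = P @ y
lhs = x @ A @ x
rhs = eigvals @ (y**2)   # λ1 y1² + λ2 y2²
print(np.isclose(lhs, rhs))  # True
```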
Using these!
When you graph level sets of quadratic forms, ellipses, hyperbolas, etc. are the different shapes that can appear, each with a constrained standard form. The theorem says we can make an orthogonal change of variables to eliminate the cross terms and reduce to one of those standard cases.
For instance, try to graph:

$$5x_1^2 - 6x_1x_2 + 5x_2^2 = 8.$$

We want to try to graph this out. Here we have a quadratic form $x^T A x$ with:

$$A = \begin{pmatrix} 5 & -3 \\ -3 & 5 \end{pmatrix}.$$

So $\mathbb{R}^2$ has an orthonormal basis of eigenvectors of $A$. We can go through our Linear Analysis I methods to find our eigenvectors. Doing so yields that $v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ with $\lambda_1 = 2$ and $\lambda_2 = 8$. Notice their dot product is 0, as expected since they are orthogonal (although they are not unit vectors, we can normalize). As such:

$$u_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad u_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
so then define:

$$P = \begin{pmatrix} u_1 & u_2 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix};$$

notice that $P^T = P$ here. So then we introduce our new coordinates:

$$y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = P^T x, \quad \text{equivalently } x = Py.$$
Thus:

$$5x_1^2 - 6x_1x_2 + 5x_2^2 = \lambda_1 y_1^2 + \lambda_2 y_2^2 = 2y_1^2 + 8y_2^2 = 8.$$

So we can graph the ellipse in the $(y_1, y_2)$ coordinate system, noting that it's equivalent to its standard form:

$$\frac{y_1^2}{4} + y_2^2 = 1.$$
Then we just convert from $(y_1, y_2)$ back to $(x_1, x_2)$. Recall that we had:

$$x = Py.$$

Thus we transform the graph from $y$-coordinates to $x$-coordinates by applying $P$: the $y_1$- and $y_2$-axes map to the lines spanned by $u_1$ and $u_2$. Here $P$ reflects across the line $x_2 = x_1$ and then rotates by some angle $\alpha$. Thus, we transform into the new axes: the ellipse has its major axis (semi-axis 2) along $u_1$ and its minor axis (semi-axis 1) along $u_2$.
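To see the picture, here is a short plotting sketch (assuming numpy and matplotlib; not part of the lecture notes): we parametrize the standard-form ellipse in the $y$-coordinates and push it through $x = Py$:

```python
import numpy as np
import matplotlib.pyplot as plt

P = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Parametrize the standard-form ellipse 2y1² + 8y2² = 8,
# i.e. y1 = 2 cos t, y2 = sin t.
t = np.linspace(0, 2*np.pi, 400)
y = np.vstack([2*np.cos(t), np.sin(t)])

x = P @ y  # map back to the original coordinates

plt.plot(x[0], x[1], label="5x1² - 6x1x2 + 5x2² = 8")
plt.plot(y[0], y[1], "--", label="2y1² + 8y2² = 8")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```

The dashed curve is the axis-aligned ellipse in the $y$-coordinates; applying $P$ tilts its axes onto the lines spanned by $u_1$ and $u_2$.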
Taylor Series with Multiple Variables
In Calc IV you learn about the second-derivative test. We now have all the tools from linear algebra to fully explain why it works.
We first look at the extremal values of a quadratic form. For instance, if we have:

$$q(x) = x^T A x,$$

then for any scalar $t$:

$$q(tx) = (tx)^T A (tx) = t^2 \, x^T A x = t^2 q(x).$$
If we know what $q$ does to unit vectors, then we know what it does to all vectors, since $q(x) = \|x\|^2 \, q\!\left(\frac{x}{\|x\|}\right)$ for $x \neq 0$. So we can just look at the unit sphere for these vectors. Suppose $\|x\| = 1$, so $x$ lives there. The question we'll look at (tomorrow) is: what are the extreme values of $q(x) = x^T A x$ when $\|x\| = 1$?
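As a numerical preview of tomorrow's question (our own sketch, not the lecture's proof), sampling unit vectors suggests the extreme values of $q$ on the unit sphere are exactly the smallest and largest eigenvalues of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[5.0, -3.0], [-3.0, 5.0]])  # A from the ellipse example

# Sample many unit vectors and evaluate q(x) = x^T A x on each.
v = rng.normal(size=(2, 100_000))
v /= np.linalg.norm(v, axis=0)          # normalize each column onto the unit circle
q = np.einsum("ij,ik,kj->j", v, A, v)   # q(x) for each column x

print(q.min(), q.max())                 # ≈ 2 and ≈ 8
print(np.linalg.eigvalsh(A))            # [2. 8.]
```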