Jason (jcreed) wrote,

A pretty basic linear algebra question that arose while I was trying to believe the Riesz representation theorem:

Say I've got some real Hilbert space H and a finite-dimensional linear space V, and a continuous linear (and so bounded) map f : H → V. Say G := the orthogonal complement of the kernel of f. I'm pretty sure f restricted to G is a witness that G and rng f are isomorphic, but somehow I can only prove injectivity.

To see injectivity, just consider the preimage of 0 under f. By definition, that's the kernel of f, and its intersection with G is obviously just {0}, because G is its orthogonal complement. So, okay, f|G is injective because the preimage of 0 is {0}.

Why is it surjective onto rng f? The best I can say is that if I pick some x in rng f, say f(y) = x, and y is not in G, then I can find some witness z in ker f that y is not in G, having specifically the property that y * z = n > 0. (Here * is supposed to be the dot product.) Now I can ignore my original y and look at y - (n / (z * z)) z instead. This vector at least has the property that z doesn't show that it's not in G, for z * (y - (n / (z * z)) z) = z * y - n (z * z) / (z * z) = 0. Also f(y - (n / (z * z)) z) = f(y) - (n / (z * z)) f(z) = f(y) = x, because z is in ker f. So we still have something in the preimage of x that stays orthogonally clear of at least one vector in ker f. But we're supposed to stay clear of all of them; I imagine there's some limiting process by which I keep knocking them all off, but I can't remember enough linear algebra to suss it out.
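(Here's a throwaway numerical sanity check of that one projection step, with R^5 standing in for H, a random matrix A standing in for f, and scipy digging up the kernel. All the specific names and numbers here are made up for illustration.)

# One-step projection check: subtracting the z-component keeps us in
# the preimage of x but makes us orthogonal to that particular z.
import numpy as np
from scipy.linalg import null_space

gen = np.random.default_rng(0)
A = gen.standard_normal((2, 5))       # f : R^5 -> R^2, f(v) = A v
Z = null_space(A)                     # columns: orthonormal basis of ker f
z = Z[:, 0]                           # one witness z in ker f

y = gen.standard_normal(5)
n = y @ z                             # n = y * z, generically nonzero
y2 = y - (n / (z @ z)) * z            # the adjusted vector

print(np.allclose(A @ y2, A @ y))     # True: f(y2) = f(y) = x
print(np.isclose(y2 @ z, 0.0))        # True: y2 * z = 0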

Got it: I don't need to repeat this projection process, but I think I need to use the completeness of H to show that doing it once is enough.

Suppose again that f(x) = y and z ∈ ker f is sadly such that the dot product x * z is nonzero. The trick is: choose z to be a unit vector that maximizes the dot product x * z. Now x - (x * z) z ∈ (ker f)⊥, and still f(x - (x * z) z) = y.

We have two questions to address.

1. Why is it the case that x - (x * z) z ∈ (ker f)⊥? If we could find another unit vector w ∈ ker f that had a nonzero dot product with x - (x * z) z, some simple calculations show that w - (w * z) z would also have a nonzero dot product with x - (x * z) z. So we might as well think about w - (w * z) z instead of w, since it's nicely orthogonal to z. Hell, let's just suppose wlog that w is already orthogonal to z. We're going to (towards a contradiction) find a unit vector that has a greater dot product with x than z originally did. Define
n = x * z
m = x * w
And consider the k-indexed family of unit vectors
(z + kw) / ||z + kw||
Obviously if k is 0, we get z, and as k goes to infinity we get w.
Dot this with x and we get
(n + km) / ||z + kw||
Compute the k-derivative of this at k = 0 and we find that it's m. And m is nonzero by assumption (replacing w with -w if necessary, we can take m > 0). So if we just rotate infinitesimally towards w, we get a unit vector whose dot product with x exceeds n, contradicting the assumption that n was maximal.
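(If you don't trust my calculus, sympy does. Since z and w are orthonormal, ||z + kw||^2 = 1 + k^2, so the function of k above is (n + km) / sqrt(1 + k^2), and its derivative at 0 really is m. A tiny sketch:)

# d/dk [ (n + k m) / sqrt(1 + k^2) ] at k = 0 should be m
import sympy as sp

k, n, m = sp.symbols('k n m', real=True)
g = (n + k * m) / sp.sqrt(1 + k**2)
print(sp.diff(g, k).subs(k, 0))       # prints m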

2. Why are we allowed to find a z that maximizes the dot product? The set of values that can come out of dotting x against a unit vector is certainly bounded, by the norm of x (Cauchy-Schwarz). I think Cauchy completeness comes in when we want to say that the supremum is actually achieved: ker f is a closed subspace (f is continuous), so the Hilbert projection theorem, which is where completeness gets used, gives an orthogonal projection p of x onto ker f, and z = p / ||p|| attains the supremum.
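(In finite dimensions the maximizing z is concrete: project x onto ker f and normalize. A quick numpy check that no other unit kernel vector does better; again R^5, a random A, and scipy's null_space are just stand-ins for illustration.)

import numpy as np
from scipy.linalg import null_space

gen = np.random.default_rng(1)
A = gen.standard_normal((2, 5))
Z = null_space(A)                     # orthonormal basis of ker f
x = gen.standard_normal(5)

p = Z @ (Z.T @ x)                     # orthogonal projection of x onto ker f
z = p / np.linalg.norm(p)             # candidate maximizer
best = x @ z                          # equals ||p||, since x * p = p * p

for _ in range(1000):                 # random unit vectors in ker f
    w = Z @ gen.standard_normal(Z.shape[1])
    w /= np.linalg.norm(w)
    assert x @ w <= best + 1e-12      # nobody beats the projection
print(best, np.linalg.norm(p))        # same number twice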
Tags: math