April 30th, 2011


Undo does not imply PutPut

I have to conclude that my brain does very weird things at night while I sleep. The evidence I have consists of
(a) my math/programming productivity drops off rapidly if I get sleep-deprived
(b) I often wake up early in the morning with (not always correct, but potential) solutions to last night's problems almost fully formed

Sometimes they are solutions to problems I wasn't even particularly trying to solve.

For instance, I woke up this morning with my head full of thoughts about lenses for some reason. I hadn't really thought about them for months, but I went through the exercise of seeing that lenses satisfying the GetPut and PutGet laws nicely form a category, but then tragically (and I think this is coming from some flashback to overhearing the tail end of an advisor meeting between Aaron Bohannon and Benjamin Pierce) that category is not obviously cartesian closed. I tried to think of some magic special pair type that would provide a monoidal closed structure, with lens-functions being the lolli, but I didn't get anywhere.

What did spool itself out of my brain after that point was an example, located somewhere in the "Lens Bestiary" on slide 70 of the slides I linked above, showing that there's a property that is a slight (and strict!) weakening of "forgetful" lenses. I did some searching later and found that other people have called this property "Undo", for example on page 10 of "Algebra of bidirectional model synchronization".

A recap of basic lens stuff: a lens from A to B is a pair of functions
get : A → B
put : A × B → A
such that
get(put(a, b)) = b ("PutGet")
put(a, get(a)) = a ("GetPut")
A lens is a way of "viewing" a piece of data of type A as instead being of type B (this view being the "get" function) that is sort of reversible, in that you can change the view and have those changes reflected back into the original data: you take your original data a and the new b that you want, and give them to "put". The "PutGet" axiom says that if you put b in, you'll get b out, and "GetPut" says that putting the unchanged view of a back in changes nothing.
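As a sanity check, here's a minimal Haskell rendering of that recap, with the two laws as predicates testable at particular points (the Lens type and all the names here are my own encoding, not anything from the slides):

data Lens a b = Lens
  { get :: a -> b       -- view the A-data as B-data
  , put :: a -> b -> a  -- push a new B back into the original A
  }

-- the two lens laws, checkable at given points
putGet :: Eq b => Lens a b -> a -> b -> Bool
putGet l a b = get l (put l a b) == b

getPut :: Eq a => Lens a b -> a -> Bool
getPut l a = put l a (get l a) == a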

"Forgetfulness" in the slides above (elsewhere called I think "PutPut"), is a further property that says if you do two puts, only the last one matters.
put(put(a, b1), b2) = put(a, b2) ("PutPut")
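In the Haskell sketch above, that's (again, my own naming):

putPut :: Eq a => Lens a b -> a -> b -> b -> Bool
putPut l a b1 b2 = put l (put l a b1) b2 == put l a b2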

The weakening I had in mind says that if you do a put of some foreign data, and then a put of the original view B-data, you get back your original A-data:
put(put(a, b), get(a)) = a ("Undo")
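And likewise as a testable predicate:

undo :: Eq a => Lens a b -> a -> b -> Bool
undo l a b = put l (put l a b) (get l a) == a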

A counterexample to show that this notion of "Undo" is actually strictly weaker than "forgetful" is as follows:
A = int
B = {0,1,2}
get(a) = a mod 3
put(a, b) = case (b - a) mod 3 of
  0 => a
| 1 => a + 1
| 2 => a - 1
(Here mod always returns a result in {0,1,2}, and A has to be the integers rather than the naturals, since put can push a below zero.)

What's going on here? put's job is to force the new a to have the desired remainder mod 3. If it's already the same as b, leave it alone. If b is one more than a mod 3, we bump a up by one. If b is one less than a mod 3, we bump a down by one.

I imagine the integer a to be like a bead on a helical wire, spiraling up (and down!) to infinity, and b ∈ {0,1,2} is like a huge tall skinny magnet that I can move around the outside of the helix, pulling the bead towards it. If I just move the magnet back and forth, the bead will just move back to where it started, but if I move the magnet through one or more full revolutions around the helix, I can make the bead make progress up or down the wire.
For example I have:
put(put(put(0,1),2),0) = 3
but
put(put(0,1),0) = 0
So two puts aren't always the same as one put (it fails PutPut), but a put immediately followed by a put of the original get has no effect (it satisfies Undo).
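Transcribed into the Haskell sketch from before (with Integer playing the role of A; Haskell's mod on Integer happens to agree with the mathematical convention here), the counterexample and the computations above come out as:

mod3 :: Lens Integer Integer
mod3 = Lens
  { get = \a -> a `mod` 3
  , put = \a b -> case (b - a) `mod` 3 of
      0 -> a
      1 -> a + 1
      _ -> a - 1
  }

-- put mod3 (put mod3 (put mod3 0 1) 2) 0 == 3
-- but  put mod3 (put mod3 0 1) 0 == 0
checks :: Bool
checks = and
  [ getPut mod3 7            -- GetPut holds
  , putGet mod3 7 2          -- PutGet holds
  , undo   mod3 0 1          -- put (put 0 1) (get 0) == 0
  , not (putPut mod3 0 1 2)  -- put (put 0 1) 2 == 2, but put 0 2 == -1
  ]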

(no subject)

Separate math question:
I would like a probability distribution d over functions R^2 → R. Think of randomly generated height-maps for a video game or something. So if we do the standard probability theory thing and let Ω be the underlying probability space, then d is a function Ω → (R^2 → R). I would like:
(1) d is always smooth, i.e. d(ω) ∈ C∞(R^2 → R) for all ω
(2) d is rotation-, translation-, and mirror-invariant, i.e. for any measurable subset S of C∞(R^2 → R) and any isometry π : R^2 → R^2 we have
μ({ω | d(ω) ∈ S}) = μ({ω | d(ω) ∘ π ∈ S})
(where μ is the measure that lives back on Ω)
(3) d is never flat, i.e. only for a set of ω of measure zero is there a point (x, y) at which d(ω) has value 0 and all (first and higher order) derivatives zero.
(4) d is efficiently computable, i.e. for all ω (except perhaps a set of measure zero), the function d(ω) can be computed at any given point in a finite amount of time, assuming a reasonable set of O(1) common operations on real numbers (+, -, /, exp, log, etc., whatever). It is each particular point-sampling computation from R^2 to R that I want to be efficient, although I also tacitly assume the sampling process itself, from Ω to (R^2 → R), is efficient.

Is there such a distribution? I could sample from a 2-D Poisson process (by using a Poisson distribution to ask how many points are in each unit-sized square in the world, and then uniformly distributing that many points within that square) to get a scattering of points in space, and then convolve this with a nice smooth C∞ bump function with compact support to satisfy (1), (2), and (4), but maybe some region might get unlucky and not have a point near enough to it, and (3) would be violated. I could use a bump function without compact support, like a Gaussian or something, and get (1), (2), and (3), but I'd have to sum over the contribution of all the infinitely many Poisson point events to get a single point sample, failing (4). I think Perlin noise is (1), (3), and (4), but I'm pretty sure it is tied to an axis-aligned grid in a way that violates (2). White noise across all space gives (2),(3),(4) without (1).