November 8th, 2009

beartato phd

(no subject)

I woke up this morning with the word "huitlacoche" in my head and I had no idea why. I looked it up on Wikipedia and it turns out it's a weird-ass black, swollen, infected version of corn that some people consider a delicacy.

I was like why do I even know this word.

Then I remembered! I had seen it years ago on Steve don't eat it!, which is great food writing and you should read it all if you haven't already. The your-mom jokes in the Huitlacoche episode are kinda dumb but I like a lot of the other ones and anyhow I take pleasure in reading about another dude eating really gross things so I don't have to.

(no subject)

Have been looking at the literature on Functional Reactive Programming and related things. I am learning that no matter how much I think I've properly digested the whole world of monads and arrows and applicative functors, there's always more to learn.

Generally speaking, the programmers' approach to all of these concepts seems very weird to me compared to the logician/category-theorist take, but McBride and Paterson's "Applicative programming with effects" is a really awesome paper that synthesizes the two points of view pretty well for the case of applicative functors. The punchline is that what a Haskell hacker calls an applicative functor is what a category theorist calls a strong lax monoidal functor. Yes, both strong and lax! The terminology here is terrible: "lax" means that the functor is weaker than "weak", and "strong" isn't the opposite of "weak" (if anything, "strict" is, though since "lax" exists as a concept it's hard to say "strict" is the opposite of "weak", just that it's, ahem, a stronger condition than "weak"); rather, "strong" means that the functor has a "strength".
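For concreteness, here's a little Haskell sketch of the monoidal presentation. The class name Monoidal and its method names are my own invention for illustration (McBride and Paterson spell out the correspondence properly in the paper):

```haskell
import Control.Applicative (liftA2)

-- A lax monoidal functor, concretely: a unit and a pairing map.
-- (This class and its names are illustrative, not standard library.)
class Functor f => Monoidal f where
  unit :: f ()
  pair :: f a -> f b -> f (a, b)

-- Every Applicative gives you such an instance...
instance Monoidal Maybe where
  unit = pure ()
  pair = liftA2 (,)

-- ...and conversely, <*> can be recovered from pair by applying
-- the paired-up function component to the argument component.
ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' ff fa = fmap (\(f, a) -> f a) (pair ff fa)
```

The two presentations are interdefinable; the monoidal one is just the shape a category theorist would recognize first.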

Ok, everybody sufficiently confused now? Good.

The important thing about these strong but sensitive applicative functors is that they represent a notion of effect that is compatible with product types in one direction, and this one-way compatibility is exactly a notion of sequencing of effects. Suppose T is an applicative functor. If I have a piece of data of type TA (a computation returning type A) and a piece of data of type TB (a computation returning B), then I can make a T(A × B) by running the first one and then the second. I can't necessarily go the other way: I might reasonably encounter a computation of type T(A × B) that doesn't naturally decompose into a TA and a TB. This one-way ticket is the laxness. A weak monoidal functor would commute with products up to isomorphism, so that T(A × B) is isomorphic (with suitable coherence diagrams) to TA × TB, but a lax monoidal functor only goes one way. It needs to be a strong lax monoidal functor for the same reason that the monads in functional programming are always strong monads: to be compatible with our expectations about variable contexts.
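In Haskell terms, the lax direction is just liftA2 (,). A sketch (for T = IO this pairing really is "run the first action, then the second"; the list example below shows why the other direction can't exist in general):

```haskell
import Control.Applicative (liftA2)

-- The lax monoidal map: from TA and TB, build T(A x B) by
-- sequencing the two computations.
pairA :: Applicative f => f a -> f b -> f (a, b)
pairA = liftA2 (,)

-- There is no general map the other way. Take f = []: a value of
-- type [(a, b)] such as [(1,'x'),(2,'y')] pairs its components in
-- lockstep, and splitting it into [1,2] and "xy" loses that.
-- Re-pairing gives the full cartesian product instead:
--   pairA [1,2] "xy"  is  [(1,'x'),(1,'y'),(2,'x'),(2,'y')]
```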

But wait, I hear you saying, weren't we told that monads were the ultimate story on a pure functional treatment of effects and sequencing and stuff? Answer: they're one story, yes. The intuitive distinction between monads and applicatives (which the paper above explains rather nicely) is that monads, unlike applicatives, let you use the result of one computation to influence which other computation takes place as the second step. Just look at the Kleisli star's type; it's right there:

TA → (A → TB) → TB

The second argument gets the value of type A before it spits out the final computation. If T were just an applicative functor, the most we could make out of TA and (A → TB) would be TTB. When T is a monad, we have exactly the categorical μ to turn that into TB (this is of course exactly how the Kleisli star is implemented from the categorical data T, η, μ) but when T is merely applicative, TTB is considered a more complicated thingy than TB --- it has two stages of computational dependency, which we're not allowed to erase.
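Here's that last point as a Haskell sketch, with join playing the role of the categorical μ:

```haskell
import Control.Monad (join)

-- Mapping a Kleisli-style arrow over a computation only needs the
-- functor structure, but it stacks the layers: f a becomes f (f b).
stacked :: Functor f => f a -> (a -> f b) -> f (f b)
stacked fa k = fmap k fa

-- When f is a monad, join (the categorical mu) collapses the stack,
-- and the Kleisli star is exactly fmap-then-join.
bind :: Monad m => m a -> (a -> m b) -> m b
bind ma k = join (fmap k ma)
```

For a mere applicative, stacked is as far as you can go; the two layers of f are the two stages of computational dependency that you're not allowed to erase.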