Jason (jcreed) wrote,

Lambda calculus was tougher going today.

The high-level goal is currently to reduce matching to dual interpolation, that is, solving problems of the form

x M_11 ... M_1n ~ N_1
x M_21 ... M_2n ~ N_2
...
x M_m1 ... M_mn ~ N_m

where all the Ms and Ns are closed terms (completely without free variables), and each ~ can, independently of the others, be either == or !=.
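(To fix notation for myself, here's a toy Haskell encoding of such an instance. All the names here are mine, nothing official from class.)

    -- Untyped syntax suffices just to write instances down.
    data Tm = Var String | Lam String Tm | App Tm Tm
      deriving (Eq, Show)

    -- Each row independently asserts == or !=.
    data Sign = Eq | Neq
      deriving (Eq, Show)

    -- One row "x M_i1 ... M_in ~ N_i"; the M's and N are closed.
    data Row = Row { rowArgs :: [Tm], rowSign :: Sign, rowRhs :: Tm }
      deriving (Eq, Show)

    -- A dual interpolation instance: every row constrains the same x.
    type Instance = [Row]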

Somehow we are careening towards a model-theoretic proof of this.

Now, the definition of a model of the STLC hasn't been explicitly given in class, so the attempt to do part of the construction of the free model over a handful of constants left the class rather nonplussed, I think. I didn't remember it myself, so I looked at the textbook just now, and a model is in fact (as my hazy memory suggested) an assignment of subsets of the appropriate function spaces to types, in such a way that "enough" functions still remain to interpret all syntactic terms. I still have to digest the formal meaning of "enough" a little more before I can fully recapitulate the logical-relations shenanigans that went on today.
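(Note to self: one standard way to make "enough" precise is the "environment model" condition, as I remember it from elsewhere; I haven't checked that this is how our textbook phrases it. Every well-typed term must get a denotation satisfying

    [[x]]ρ = ρ(x)
    [[M N]]ρ = ([[M]]ρ) ([[N]]ρ)
    ([[λx:A. M]]ρ) a = [[M]](ρ[x ↦ a])   for every a in the set interpreting A

so in particular the subset assigned to each arrow type must contain every function the λ-clause demands, even though it needn't be the whole function space.)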

Quickly, though, the main idea was as follows (a toy Haskell sketch of the whole pipeline appears after step (4)):

(1) we start with a "sufficiently big" finite set as the interpretation of the type o, and water it diligently and grow from it the usual full-function-spaces model. (We don't know how big "sufficiently" is yet — I think it eventually comes from the unification problem we want to solve.)

(2) we weed out all the elements of this model that are not λ-definable, leaving in only the ones that are.

(3) we observe that by taking things out (suppose we threw out some element k of the model at type A), we might have caused certain elements one type up (e.g., f, g : A → o) to collapse, because perhaps f k != g k was the very last bit of evidence showing that f != g, and we just threw k out of the model, so now extensionality forces us to equate f and g.

...so collapse everything that needs to be collapsed!

(4) We worry about whether the equivalence relation we defined for the above collapse is well-defined in every sense it needs to be, but we prove (somehow, by logical relations) that it is.
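Here's the promised toy sketch of the pipeline, under two big assumptions that I'm making up for illustration: the predicate `kept`, picking out which elements survive the weeding of step (2), is handed to us as an oracle (by Loader's undecidability result for λ-definability we can't actually compute it), and the collapse of step (3) is phrased as the usual extensional partial equivalence relation, which I believe is what step (4)'s logical-relations argument is about.

    import Data.Maybe (fromJust)

    data Ty = O | Arr Ty Ty
      deriving (Eq, Show)

    -- An element of the full model: a base point, or a function
    -- represented as its finite graph.
    data Elem = BasePt Int | Fn [(Elem, Elem)]
      deriving (Eq, Show)

    -- Step (1): all elements of the full function-space model over a
    -- base of size k.  (Sizes explode; only sane for tiny k.)
    elems :: Int -> Ty -> [Elem]
    elems k O         = map BasePt [0 .. k - 1]
    elems k (Arr a b) =
      [ Fn (zip dom outs)
      | outs <- sequence (replicate (length dom) (elems k b)) ]
      where dom = elems k a

    apply :: Elem -> Elem -> Elem
    apply (Fn graph) v = fromJust (lookup v graph)
    apply e          _ = error ("not a function: " ++ show e)

    -- Steps (2)+(3): given the oracle for surviving elements, the
    -- collapse is the logical relation: equality at base type, and at
    -- A → B, f and g are related iff related arguments go to related
    -- results.  Quotienting the survivors by this relation is the
    -- collapse; step (4) is checking that this is well-defined.
    related :: (Ty -> Elem -> Bool) -> Int -> Ty -> Elem -> Elem -> Bool
    related kept _ O x y =
      kept O x && kept O y && x == y
    related kept k ty@(Arr a b) f g =
      kept ty f && kept ty g &&
      and [ related kept k b (apply f x) (apply g y)
          | x <- elems k a, y <- elems k a, related kept k a x y ]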

And so I get the impression that we're going to somehow use the assumption that dual interpolation is decidable to get around the undecidability of λ-definability, and still somehow simply "iterate over" a finite model. That is, if we're looking for something to plug in for x : A such that N(x) = M (at the type of binary trees), we just find a finite model Φ big enough to distinguish M from every other binary tree, and then try to crawl over the elements of Φ's interpretation of A to see if we can find something to substitute for x. Naively we might get fooled and say "yes, this matching problem is solvable" when it ain't, because Φ might have some elements that aren't λ-definable. Thus, I suppose, the reason that we're trying to construct the free model (which is sort of minimal in the sense that the full model is maximal), which has only the λ-definable elements.
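(Continuing the toy sketch above: the naive crawl, with `den` an assumed denotation function for closed terms that I haven't bothered to write out. The spurious-"yes" worry is visible right in the type; nothing stops `candidates` from containing undefinable elements.)

    -- Does candidate c, as a meaning for x, satisfy the row
    -- "x M_1 ... M_n ~ N"?
    satisfies :: (Tm -> Elem) -> Elem -> Row -> Bool
    satisfies den c (Row ms s n) =
      let lhs = foldl apply c (map den ms)
      in case s of
           Eq  -> lhs == den n
           Neq -> lhs /= den n

    -- Crawl over the model's interpretation of x's type.
    naiveSolvable :: (Tm -> Elem) -> [Elem] -> Instance -> Bool
    naiveSolvable den candidates rows =
      any (\c -> all (satisfies den c) rows) candidates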
Tags: lambda calculus, math