### (no subject)

Lambda calculus today was another in a series of stabs at a handful of Stirling's lemmas. I'm still trying to match everything up with the way I'd prefer to think about the "tree-matching" "game", assuming I can get away with it. I think you only really need one kind of look-up table instead of two, one notion of state instead of 3, and 3 rules instead of like 9.

Suppose you're working in a language that has a binary function f and a constant k.

Atomic terms R look like x M1 ... Mn or f R1 R2 or k.

Normal terms M look like λx1 ... xn . R.
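If it helps to make the grammar concrete, here's one hypothetical encoding as tagged tuples in Python (the representation is mine, not Stirling's):

```python
# Atomic terms R:
#   ("var", x, [M1, ..., Mn])  -- a variable applied to normal-term arguments
#   ("f", R1, R2)              -- the binary function f
#   ("k",)                     -- the constant k
# Normal terms M:
#   ("lam", [x1, ..., xn], R)  -- a (possibly empty) lambda prefix over an atomic term

# Example: the normal term λx. f(x, k)
example = ("lam", ["x"], ("f", ("var", "x", []), ("k",)))

def is_atomic(t):
    return t[0] in ("var", "f", "k")

def is_normal(t):
    return t[0] == "lam" and is_atomic(t[2])
```

Note that a term with no lambdas still gets an empty prefix, so "normal" and "atomic" stay syntactically distinct.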

We can think of checking whether a term M is a solution to an interpolation problem x M1 ... Mn = R as being defined by some inference rules that manipulate lookup tables and stuff.

A lookup table θ is either empty, or it's a cons of a lookup table with a new entry (θ,M)/x. Each entry itself has a little lookup table paired with the term that it's thought of as applying to. Morally, it's θM for x, but for now we avoid actually carrying out the substitution θ and hang on to it in "suspended" form.
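One way to realize this concretely (a sketch; the list representation is my own choice) is a newest-first cons-list of entries, each pairing a variable with a suspended closure (θ', M):

```python
# A lookup table θ is a list of entries (x, (θ', M)), newest first:
# extending is cons, and lookup returns the most recent binding.
# The pair (θ', M) is a *suspended* substitution -- morally θ'M --
# but we never actually carry θ' out.

empty = []

def extend(theta, x, closure):
    """Cons the entry closure/x onto θ, without mutating θ."""
    return [(x, closure)] + theta

def lookup(theta, x):
    for (y, closure) in theta:
        if y == x:
            return closure
    return None
```

Because `extend` never mutates, many triples can share one table, which matches the way the rules below thread θ around.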

A triple (θ,R,R') corresponds loosely to what in Stirling's paper is a pair consisting of a position in the solution tree and what he calls a "state"; it is a "winning state" if θR = R'. We can define winning states as follows:

```
                       -------------
                       (θ, k, k) win

       (θ, R1, R1') win      (θ, R2, R2') win
       --------------------------------------
           (θ, f(R1,R2), f(R1',R2')) win

  (θ', λx1...xn.R')/x ∈ θ    (θ' + (θ,M1)/x1 + ... + (θ,Mn)/xn, R', R) win
  -------------------------------------------------------------------------
                          (θ, x M1 ... Mn, R) win
```

To get started, having in hand a putative solution term λx1...xn.R, a list of closed-term arguments M1...Mn, and a right-hand side R', try to prove (where . is the empty lookup table)

((.,M1)/x1 + ... + (.,Mn)/xn, R, R') win
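The three rules plus the start state translate fairly directly into a recursive checker. Here's a sketch in Python, under a hypothetical encoding of my own: atomic terms as tagged tuples `("var", x, args)`, `("f", R1, R2)`, `("k",)`; normal terms as `("lam", [x1,...,xn], R)`; lookup tables as newest-first lists of entries `(x, (θ', M))`:

```python
def lookup(theta, x):
    # θ is a list of entries (x, (θ', M)), newest first.
    for (y, closure) in theta:
        if y == x:
            return closure
    raise KeyError(x)

def win(theta, r, rp):
    """Does the triple (θ, R, R') win?  r, rp atomic; theta a lookup table."""
    # Rule 1: (θ, k, k) win.
    if r == ("k",):
        return rp == ("k",)
    # Rule 2: (θ, f(R1,R2), f(R1',R2')) wins if both component triples win.
    if r[0] == "f":
        return (rp[0] == "f"
                and win(theta, r[1], rp[1])
                and win(theta, r[2], rp[2]))
    # Rule 3 (variable): look up (θ', λx1...xn.R'')/x in θ, extend θ' with
    # the suspended arguments (θ,Mi)/xi, and recurse on the body R''.
    if r[0] == "var":
        _, x, args = r
        theta2, lam = lookup(theta, x)
        _, xs, body = lam
        assert len(xs) == len(args)  # assuming well-typed input
        env = [(xi, (theta, mi)) for xi, mi in zip(xs, args)] + theta2
        return win(env, body, rp)
    return False

def solves(solution, args, rhs):
    """Start state: does λx1...xn.R solve x M1...Mn = R'?"""
    _, xs, body = solution
    start = [(xi, ([], mi)) for xi, mi in zip(xs, args)]
    return win(start, body, rhs)
```

For example, checking that λx. f(x, k) applied to the closed argument k yields f(k, k):

```python
sol = ("lam", ["x"], ("f", ("var", "x", []), ("k",)))
arg = ("lam", [], ("k",))
solves(sol, [arg], ("f", ("k",), ("k",)))   # → True
```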

This third rule does the work of both Stirling's A3 and C4. The polarity phenomenon that probably motivated him to (unnecessarily, I believe) talk about there being *two* substitutions at any given time, and to separate the two major kinds of states, falls out here as the fact that there are two ways to encounter each of these inference rules: one, when the middle item in the triple is a subterm of the *solution term*; two, when it's a subterm of one of the original *argument terms*. Likewise we can consistently "color" lookup tables according to whether the top layer of terms they substitute comes from the solution or the arguments. This is basically the distinction between Stirling's θs and ηs, but it turns out we only need a single θ or η in hand at any given time. Every application of the variable rule above "swaps" the side we're on.