*ahem* well-received.

The theme of the day was continuing to push the general unification problem down to specialer and specialer cases without actual loss of generality. (Statman amusingly abbreviated this "wolg" once, but used the more usual "wlog" in another place, so I can't assume that wolg is his own idiosyncratic way of writing it. Wolg!)

So the general problem looks like

M1 = N1 : t1

...

Mn = Nn : tn

where all terms might involve free variables u1, u2, etc.

The meta-problem is: there is *more than one problem*. Problem number one: there is *more than one variable*. Problem number two: there is *more than one equation*.

Let's deal with the latter problem first. It's easy! We can fake tuples well enough for present purposes by making the definition

t1 * ... * tn = (t1 → ... → tn → o) → o

and looking at the problem

λ f . f M1 ... Mn = λ f . f N1 ... Nn : t1 * ... * tn

**wjl** noted that you can't actually write *projections* from this big product type, but that's fine; we don't need 'em. We just need that tuples are equal iff all their components are, which is true.
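As a quick illustration (my own sketch, not from the lecture), here is the faked tuple encoding in Python, specialized to pairs, with ordinary values standing in for the answer type o:

```python
# A sketch of the encoding t1 * ... * tn = (t1 -> ... -> tn -> o) -> o,
# specialized to pairs: a "tuple" is a function that feeds its components
# to a continuation f, whose result must land in the answer type o.

def pair(x, y):
    return lambda f: f(x, y)

# In the simply-typed setting there is no general projection back out
# (the continuation's result is stuck at type o), but we can still probe
# components whose type happens to be o, which is enough to see that two
# tuples are equal iff their components are.
p = pair(3, 4)
first = p(lambda a, b: a)   # works here only because the component is an o
```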

In passing, remember that from last class (and the last post I made on the topic) we discussed parts of a proof that an equation at any type can be pushed down to the first-order type B = (o → o → o) → o → o of binary trees. So without actually copying and pasting the magic terms from before, let's just say that we have transformed the big general problem that we started with to a single equation

M = N : B

with exactly the same set of free variables u1, ..., un, and the same set of solutions over those variables.

Now for the issue of multiple variables. I didn't really understand what was going on while Statman was saying it, so I'll try to piece it together from what I remember hearing, and some hints from the "textbook" on Blackboard.

Inductively we want to merge variables two at a time, which we can easily extend to merge all of them together.

So here's how to merge two.

Say the two variables are

u : A1 → ... → An → o

v : B1 → ... → Bm → o

We then replace both of them with a variable

w : A1 → ... → An → B1 → ... → Bm → (o → o → o) → o

Make up a new constant z : o, so that every type is inhabited. Say in particular z_{A} is the term of type A that ignores all its arguments and returns z at the end.

Then substitute

λ x1 ... xn . w x1 ... xn z_{B1} ... z_{Bm} (λ a b . a)

for u, and

λ y1 ... ym . w z_{A1} ... z_{An} y1 ... ym (λ a b . b)

for v.
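To make the bookkeeping concrete, here is a small Python sketch (my own, taking n = m = 1 with A1 = B1 = o, so u, v : o → o and w : o → o → (o → o → o) → o) of the two projection substitutions and the pairing term discussed below:

```python
# Merging two variables u, v : o -> o into one
# w : o -> o -> (o -> o -> o) -> o, with ints standing in for o.

Z = 0  # the new constant z : o, so every type is inhabited

def subst_u(w):
    # the substitution  lambda x . w x z (lambda a b . a)  for u
    return lambda x: w(x, Z, lambda a, b: a)

def subst_v(w):
    # the substitution  lambda y . w z y (lambda a b . b)  for v
    return lambda y: w(Z, y, lambda a, b: b)

def pair_w(M, N):
    # the pairing term (*):  lambda x y . lambda p . p (M x) (N y)
    return lambda x, y, p: p(M(x), N(y))
```

Plugging `pair_w(M, N)` into the two substitutions recovers M and N, which is the "projections" behavior claimed below.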

When we get back a solution for w, we find out the values of u and v by applying these same substitutions to it - they act as projections. We must still be sure that every such pair of u,v-values can be achieved by some w. Suppose u, v want to be M, N, respectively. Then

(*) λ x1 ... xn y1 ... ym . λ p . p (M x1 ... xn) (N y1 ... ym)

is the pair we want. The two projections above, when you substitute (*) for w, correctly yield the terms M and N (up to βη, of course).

So now we have a nice equation

M(x) = N(x) : B

in just one variable, at type B. In the near future, I understand, we'll work out how to reduce the (actually less general) case of matching (that is, where N doesn't depend on x) to dual interpolation.