### (no subject)

Got some more hacking done on this paper I've been working on, about the encoding of Pfenning-Davies into FOL that Sean inspired me to figure out.

I realized there's an interesting flaw in the story I always tell myself about the role of Pfenning-Davies as an S4-ish modal logic. The story goes that in classical-land, there's S4, and that's that. The Prophet Kripke spake its semantics, and lo, it was good. But in intuitionistic-land, the unity was shattered: there are at least two ways of making a plausible S4-like logic, and in my head I always call them "Pfenning-Davies" and "Simpson", but the history is of course more involved. I *think*, if I remember right, the bundle of names goes like:

- "Intuitionistic S4": Ewald, Fischer-Servi, Plotkin, Stirling, Simpson
- "Constructive S4": Benton, Bierman, de Paiva, Pfenning, Davies, Alechina, Mendler, Ritter

(I'm not actually familiar with all these names, I'm just pulling some of them out of this paper.) Anyway, the essential distinction is whether □ and ⋄ are really just universal and existential quantifiers over worlds ("Simpson-style") or whether they have an immediate context-clearing effect when synchronously decomposed ("Pfenning-Davies-style").

Here's the flaw in that story:

**the distinction matters for classical logic, too**. But it only shows up if you also have real first-order quantifiers in your language. Let's look at the counterexample that always comes up when distinguishing intuitionistic-Simpson-S4 from constructive-Pfenning-Davies-S4. It's:

⋄A ⇒ □B ⊢ □(A ⇒ B)

In Simpson style, we decompose the box on the right to get a fresh future world, call it "a", and proceed to unpack the implication, getting ⋄A ⇒ □B, A @ a ⊢ B @ a. Then we hit the ⇒L rule, which demands that we prove A at *some* future world of our choosing (which we can, picking a) and that we reason *from* B at some future world of our choosing to obtain B @ a (which we also can). Done.

But in Pfenning-Davies style, our right-hand side is now a positive connective that clears the context of all non-valid things. And our left-hand side is just as negative as it ever was. So we're stuck with two substantial synchronous choices, both of which quickly fuck up everything and ruin the proof attempt. The proof does not go through.
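For concreteness, here's the Simpson-style argument sketched as a labelled-sequent proof tree (LaTeX with the `\infer` macro from `proof.sty`; the relational atom `wRa` and the `P @ a` notation are my own gloss, not taken from any particular paper — read it bottom-up):

```latex
\[
\infer[\Box R\ (a\ \text{fresh})]
  {\Diamond A \Rightarrow \Box B \,@\, w \;\vdash\; \Box(A \Rightarrow B) \,@\, w}
  {\infer[\Rightarrow R]
    {wRa,\; \Diamond A \Rightarrow \Box B \,@\, w \;\vdash\; (A \Rightarrow B) \,@\, a}
    {\infer[\Rightarrow L]
      {wRa,\; \Diamond A \Rightarrow \Box B \,@\, w,\; A \,@\, a \;\vdash\; B \,@\, a}
      {\infer[\Diamond R\ (\text{witness}\ a)]
        {wRa,\; A \,@\, a \;\vdash\; \Diamond A \,@\, w}
        {wRa,\; A \,@\, a \;\vdash\; A \,@\, a}
      &
       \infer[\Box L\ (\text{at}\ a)]
        {wRa,\; \Box B \,@\, w,\; A \,@\, a \;\vdash\; B \,@\, a}
        {wRa,\; B \,@\, a,\; A \,@\, a \;\vdash\; B \,@\, a}}}}
\]
```

Both ⇒L premises close by init, precisely because the relational atom wRa stays available for ⋄R and □L to exploit.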

But *classically*, implication, like all the binary connectives, is ambipolar. We could start with ⇒L, yielding the two goals

⊢ ⋄A, □(A ⇒ B)   and   □B ⊢ □(A ⇒ B)
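The first (interesting) goal can be sketched in a two-sided classical S4 sequent calculus whose □R rule keeps boxed antecedents and ⋄-succedents instead of clearing them (this choice of rule set is my own presentation, not the post's):

```latex
\[
\infer[\Box R]
  {{}\vdash \Diamond A,\; \Box(A \Rightarrow B)}
  {\infer[\Rightarrow R]
    {{}\vdash \Diamond A,\; A \Rightarrow B}
    {\infer[\Diamond R]
      {A \vdash \Diamond A,\; B}
      {\infer[\mathsf{init}]{A \vdash A,\; B}{}}}}
\]
```

The ⋄A in the succedent is what survives □R here; the second goal □B ⊢ □(A ⇒ B) is immediate by □R, ⇒R, □L.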

Note how we didn't clobber the conclusion! For the first goal, we do ⋄R to preserve the A as A poss (which is the same thing as A^{⊥} valid) so it survives □R. For the second goal, we just do the obvious throwing-away of A.

What have we learned? That *one* shibboleth proposition that constructively tells Simpson from Pfenning-Davies yields no information in classical logic, because ⇒ is ambipolar there. But "hmm", you say, "∀ is negative no matter where you live, isn't it?" So maybe we should ruminate on the interchange of □ and ∀ instead of □ and ⇒. How about:

∀x.□A(x) ⊢ □∀x.A(x)

Even classically, if we impose a kind of Pfenning-Davies context-clearing effect with □, this is *not provable*. We're not ready to choose a term on the left yet, and working on the right blows away our hypothesis. But if we're Simpson, and □ is just "for all future worlds...", then universal quantifiers commute, and it's trivially provable.

But this interchange of quantifier and box is the celebrated Barcan Formula! I'd heard of it a long time ago but never really understood how to get any intuition for it. Now I feel much better, realizing that it's the Simpson/Pfenning-Davies dichotomy rearing its head in the context of classical modal logics.
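In Simpson-style terms, the commuting is visible in the first-order translation over worlds (again my notation: wRa for accessibility, P @ a for truth at a, and assuming the domain of x is constant across worlds — the standard Barcan caveat):

```latex
\[
\begin{array}{lcl}
\forall x.\,\Box A(x) \,@\, w &\leadsto& \forall x.\,\forall a.\,(wRa \Rightarrow A(x) \,@\, a)\\
\Box\,\forall x.\,A(x) \,@\, w &\leadsto& \forall a.\,(wRa \Rightarrow \forall x.\,A(x) \,@\, a)
\end{array}
\]
```

The two sides differ only in the order of two universal quantifiers, so the entailment is immediate; only a context-clearing reading of □ blocks it.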

It also makes me realize why the proof of soundness of labelled deduction in these notes from Frank's ATP class is so hard. Because it's false for the first-order extension! That means that, while it's true for the propositional case, the proof probably needs to pull some dirty tricks. I'm reminded of Glivenko's theorem as well...