### (no subject)

Still kind of reeling from "Representing Monads" and the concomitant leveling-up of my understanding of delimited continuations. Deep, fascinating stuff. I poked around in Paul Blain Levy's old work expecting to find him telling me what the focusing story for them is, but I haven't yet found anything that strikes me as really satisfying. I mean, clearly I have a reasonably satisfying sense of what it means for the CPS monad itself to arise from the self-adjointness of negation, but I'm still kind of lost when it comes to how delimited continuations are connected to representation types like (τ → σ1) → σ2 when the two answer types are different, and what those types really "mean" vis-à-vis category theory or focusing or whatever.
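For my own reference, the version of this that I can at least type-check: the ordinary single-answer-type CPS monad next to the indexed, answer-type-modifying variant à la Danvy and Filinski, whose representation type is exactly (τ → σ1) → σ2. A minimal Haskell sketch, with the names `C`, `ret`, `bind` being my own inventions:

```haskell
-- Ordinary CPS monad: one fixed answer type r, i.e. (a -> r) -> r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- Indexed version: the continuation produces an answer of type i, but
-- the whole computation produces an answer of type j -- the
-- answer-type-changing representation (a -> i) -> j.
newtype C i j a = C { runC :: (a -> i) -> j }

ret :: a -> C i i a        -- pure computations leave the answer type alone
ret a = C (\k -> k a)

bind :: C j k a -> (a -> C i j b) -> C i k b  -- answer-type changes compose
bind (C m) f = C (\k -> m (\a -> runC (f a) k))

-- A computation that turns an Int-answer context into a String answer:
example :: C Int String Int
example = bind (ret 3) (\x -> C (\k -> show (k x)))
```

Running `example` against the continuation `(+1)` gives `show (3 + 1)`, i.e. `"4"` — the ambient answer type changed from Int to String mid-computation, which is the thing the plain `Cont r` monad can't express.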

In other news, I still think that bunched logic has a straightforwardly sound semantics in the category of directed graphs, and I have a burning curiosity as to whether it is complete, but, to paraphrase xkcd, all my normal approaches are useless here in the land of semantics.

Particularly because my first guess at a completeness theorem is obviously false! Recall from a previous doodle and semi-formal exposition that I posit that the tensor product of two graphs G1 = (V1, E1) and G2 = (V2, E2) is the graph that has

(1) a set of vertices equal to V1 × V2, the cartesian product of the vertex sets of G1 and G2;

(2) a set of edges isomorphic to the disjoint union of V1 × E2 and E1 × V2. Each element (v1, e : v2 → v2') of V1 × E2 maps to an edge (v1, v2) → (v1, v2') in G1 ⊗ G2, and each element (e : v1 → v1', v2) of E1 × V2 maps to an edge (v1, v2) → (v1', v2) in G1 ⊗ G2.
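For finite graphs this is easy to write down concretely; here's a throwaway Haskell sketch (the `Graph` type and its field names are just my made-up encoding):

```haskell
-- A finite directed graph as a vertex list and an edge list.
data Graph v = Graph { verts :: [v], edges :: [(v, v)] }

-- G1 ⊗ G2: vertices are pairs; each edge pairs one edge of one factor
-- with one vertex of the other, per clauses (1) and (2) above.
tensor :: Graph a -> Graph b -> Graph (a, b)
tensor g h = Graph
  [ (u, w) | u <- verts g, w <- verts h ]                       -- V1 × V2
  ( [ ((u, w), (u, w')) | u <- verts g, (w, w') <- edges h ]    -- V1 × E2
 ++ [ ((u, w), (u', w)) | (u, u') <- edges g, w <- verts h ] )  -- E1 × V2
```

For instance, tensoring a single edge with itself yields the directed square: four vertices and four edges, two "horizontal" and two "vertical".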

Okay, so, in bunched logic, you can't generally prove A ⊢ A ⊗ A. So it should semantically fail, too, right? Except there *is* always a function from ⟦A⟧ (the interpretation of A) to ⟦A ⊗ A⟧. Either ⟦A⟧ is the empty graph, in which case you get a trivial map out of ⟦A⟧, or else it *isn't* empty, in which case you can pick an arbitrary vertex v in ⟦A⟧ and find a trivial way of mapping ⟦A⟧ to the subgraph of ⟦A ⊗ A⟧ that looks like, you know, "v ⊗ A".

Except this prescription is blatantly non-constructive. So is there some kind of completeness theorem that says for every *constructive* graph homomorphism ⟦Γ⟧ → ⟦A⟧ there's a proof of Γ ⊢ A back in BI? How would you even get started proving such a thing? Mumble parametricity mumble relation mumble? I haven't a clue what a syntactic model would look like for this kind of setup. I started trying to formalize a little of this in Agda and got bogged down in dependent-type index tarpits.
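To make the offending map concrete: for finite graphs the case split on emptiness becomes decidable, so the "v ⊗ A" trick can actually be programmed. A self-contained Haskell sketch (again with my own toy list-based graph encoding):

```haskell
-- A finite directed graph as a vertex list and an edge list.
data Graph v = Graph { verts :: [v], edges :: [(v, v)] }

-- The tensor product, repeated here so the sketch stands alone.
tensor :: Graph a -> Graph b -> Graph (a, b)
tensor g h = Graph
  [ (u, w) | u <- verts g, w <- verts h ]
  ( [ ((u, w), (u, w')) | u <- verts g, (w, w') <- edges h ]
 ++ [ ((u, w), (u', w)) | (u, u') <- edges g, w <- verts h ] )

-- The semantic map ⟦A⟧ → ⟦A ⊗ A⟧: decide emptiness of the (finite)
-- vertex list; if nonempty, fix its first vertex v0 and send u to
-- (v0, u), landing in the "v0 ⊗ A" subgraph of A ⊗ A. Each edge
-- u → u' goes to (v0, u) → (v0, u'), which lies in the V1 × E2 part,
-- so this really is a graph homomorphism.
weaken :: Graph a -> Maybe (a -> (a, a))
weaken g = case verts g of
  []       -> Nothing            -- empty graph: the vacuous map suffices
  (v0 : _) -> Just (\u -> (v0, u))
```

The decidable emptiness check is exactly where the finiteness assumption does the work; for an arbitrary object of the category there's no uniform, parametric way to make that choice, which is what leaves room for a constructive completeness theorem.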