Jason (jcreed) wrote,

So my thoughts have turned to the problem of how to do type inference for a language that has a fancy tree-shaped context like that of bunched logic, but a little different: a family of "commas" between contexts, indexed by the set N ∪ { ∞ }, with only one unit shared among them all, and all of them affine. All of them are also associative (each with itself) and commutative, and the nth "comma" is allowed to weaken to the mth if m > n. The 0th comma is the most "tensor-like" (or, from the point of view of BI, the most actually "comma-like"), and the ∞th is the most "&-like" (or "semicolon-like"), but there's a whole spectrum in between.
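To pin down the index-weakening rule, here's a minimal sketch of bunches with indexed commas. Everything here (the class names, the representation, the restriction to shape-preserving weakening that ignores associativity/commutativity reshuffling) is my own illustrative invention, not notation from any paper:

```python
import math

INF = math.inf  # the ∞ index; Python's inf compares correctly against naturals

class Unit:
    """The single unit shared by all the commas."""

class Hyp:
    """A leaf hypothesis, identified by name."""
    def __init__(self, name):
        self.name = name

class Comma:
    """An n-indexed comma joining two sub-bunches, n ∈ N ∪ {∞}."""
    def __init__(self, n, left, right):
        self.n, self.left, self.right = n, left, right

def weakens_to(g, d):
    """Can bunch g become bunch d using only the structural rule that
    the nth comma may weaken to the mth when m > n (plus reflexivity)?
    Shapes must match exactly; assoc/comm rearrangement is not modeled."""
    if isinstance(g, Unit) and isinstance(d, Unit):
        return True
    if isinstance(g, Hyp) and isinstance(d, Hyp):
        return g.name == d.name
    if isinstance(g, Comma) and isinstance(d, Comma):
        return (d.n >= g.n
                and weakens_to(g.left, d.left)
                and weakens_to(g.right, d.right))
    return False
```

So a 0-comma (the most tensor-like) can weaken all the way up to the ∞-comma (the most &-like), but never the other way around.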

Anyone know where the canonical reference is for practical programming with type inference in a BI-based type system?

I haven't tried looking very hard yet, because in the process I turned up (and got distracted by) "Kripke Resource Models of a Dependently-Typed, Bunched λ-Calculus" by Ishtiaq and Pym (non-paywall citeseer link), and on page 7 they claim they can simulate LLF inside RLF somehow, which is quite a puzzler: RLF, as far as I can tell (and, to the extent that I am right about this, much to my chagrin), is not linear logic as the rest of the world understands it, but rather "relevant logic", where you require at least one use of each variable instead of exactly one. My epistemic state about the paper is still such that I dunno whether they are pulling some extra trick to get LLF encoded, but it's quite a surprising assertion.
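The linear/relevant distinction above comes down to how variable-usage counts are policed: linear logic forbids both contraction and weakening (exactly one use), while relevant logic forbids only weakening (at least one use). A toy checker, purely my own sketch for contrast:

```python
def usage_ok(counts, discipline):
    """counts: dict mapping variable name -> number of uses.
    'linear'   = no contraction, no weakening: each count exactly 1.
    'relevant' = contraction allowed, no weakening: each count >= 1."""
    if discipline == "linear":
        return all(c == 1 for c in counts.values())
    if discipline == "relevant":
        return all(c >= 1 for c in counts.values())
    raise ValueError(f"unknown discipline: {discipline}")
```

Which is exactly why an encoding of LLF inside RLF is surprising: every relevantly-well-typed term may duplicate its variables, so something extra would be needed to recover the exactly-once guarantee.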
Tags: llf, papers, work