Jason (jcreed) wrote,

I realized, talking to neelk at icfp, that I have a huge hole in my literature-knowledge about classical LTL and friends --- which is especially tragic because both his "Higher-Order Functional Reactive Programming without Spacetime Leaks" and Cave/Ferreira/Panangaden/Pientka's Fair Reactive Programming are, in differing ways, LTL put through the ol' Curry-Howard grinder. And they both seem like really interesting pieces of work.

I thought I'd start digging into the list of references in Moshe Vardi's slides for "The Rise and Fall of LTL". A bit discouragingly, plenty of them aren't even accessible online, because they're from the 80s. I am so spoiled by usually having been interested in corners of logic that were developed more recently... For instance, in Vardi's "Branching vs. Linear Time: Final Showdown", the reader is referred to "Characterizing finite Kripke structures in propositional temporal logic" for a proof that CTL characterizes bisimulation, but I totally can't find a copy of it that isn't behind a paywall.

Anyway I got distracted by thinking about what the "until" operator really ought to mean in a Kripke structure that isn't linear. Here of course (just like the "linear" in LTL itself) I mean "linear" in the sense that contrasts with "branching", i.e. the possible worlds form a line, so that ◇□A ∧ ◇□B ⊢ ◇□(A ∧ B); not "linear" in the sense of linear logic.

It occurred to me that a form of "A until B" that doesn't guarantee B will actually happen (which makes it different from the "until" in LTL, which lets you encode ◇A as "true until A") could be given semantically as
(A until B) @ t = ∀x ≥ t. (A @ x) ∨ ∃y. t ≤ y ≤ x ∧ (B @ y)

So: for every world x in the future of here (i.e. of t), you've got to either know that A is still true at x, or that the end-condition B actually happened at some point between t and x.
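
Just to pin the clause down, here's a minimal executable sketch of it in Haskell, over a made-up finite frame (the names World, worlds, le, and untilAt are all mine, and the ordering is just a toy linear one):

-- a hypothetical finite frame: a list of worlds and a reflexive,
-- transitive ordering on them
type World = Int

worlds :: [World]
worlds = [0 .. 4]

le :: World -> World -> Bool
le = (<=)   -- a linear toy order, purely for illustration

-- (A until B) @ t = ∀x ≥ t. (A @ x) ∨ ∃y. t ≤ y ≤ x ∧ (B @ y)
untilAt :: (World -> Bool) -> (World -> Bool) -> World -> Bool
untilAt a b t =
  and [ a x || or [ b y | y <- worlds, t `le` y, y `le` x ]
      | x <- worlds, t `le` x ]

Notably, untilAt (const True) (const False) 0 comes out True: nothing ever forces B to happen, which is exactly the weakness relative to LTL's until described above.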

The backwards-looking existential is not over all past worlds, but just the past worlds since t. This suggests that maybe we could incorporate the "scoping" world as an extra parameter in the semantic translation. This lets us break up "until" into smaller connectives. We say
(SA) @ t,s = A @ t,t
(□A) @ t,s = ∀x ≥ t. (A @ x,s)
(◇A) @ t,s = ∃x ≥ t. (A @ x,s)
(◆A) @ t,s = ∃x. s ≤ x ≤ t ∧ (A @ x,s)
(■A) @ t,s = ∀x. s ≤ x ≤ t ⇒ (A @ x,s)
A until□ B = S□(A ∨ ◆B)


I'm labelling that until as until□ because of its □-like nature.

What S does is copy the current world into the "scoping" argument, which is then used as a lower bound by ◆. Other connectives like □ (and also all the usual nonmodal connectives) just leave s alone.
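
In the same spirit, here's a sketch of that two-parameter translation as a little evaluator (reusing the toy worlds and le from the sketch above; the type name Scoped and the operator names are my own inventions):

-- a formula now denotes a predicate on (current world t, scoping world s)
type Scoped = World -> World -> Bool

opS, boxF, diaF, boxP, diaP :: Scoped -> Scoped
opS  a t _ = a t t                                            -- S: copy t into s
boxF a t s = and [ a x s | x <- worlds, t `le` x ]            -- □
diaF a t s = or  [ a x s | x <- worlds, t `le` x ]            -- ◇
boxP a t s = and [ a x s | x <- worlds, s `le` x, x `le` t ]  -- ■
diaP a t s = or  [ a x s | x <- worlds, s `le` x, x `le` t ]  -- ◆

-- A until□ B = S□(A ∨ ◆B)
untilBox :: Scoped -> Scoped -> Scoped
untilBox a b = opS (boxF (\t s -> a t s || diaP b t s))

At the top level the initial choice of s only matters for occurrences of ◆/■ that aren't guarded by an S; taking s to be the start of time seems like the natural default.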

An alternate version of until that is more like diamond is
A until◇ B = S◇(B ∧ ■A)

This one says that there exists a way of achieving B while maintaining A as an invariant until we do. The previous one says A will be true for all paths through the tree we might try before we get to a B.
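
Continuing the sketch, the ◇-flavoured until is:

-- A until◇ B = S◇(B ∧ ■A)
untilDia :: Scoped -> Scoped -> Scoped
untilDia a b = opS (diaF (\t s -> b t s && boxP a t s))

A tiny (made-up) frame where they come apart in the branching direction: take a root 0 below two incomparable worlds 1 and 2, with A true at 0 and 2 and B true only at 2. Then at the root, A until◇ B holds (the branch to 2 achieves B with A maintained along the way), but A until□ B fails (the branch to 1 abandons A without B ever having happened).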

Classically it looks like we'd get that A until□ B would be equivalent to ¬(¬B until◇ ¬A).
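(Pushing the negation through: ¬(¬B until◇ ¬A) = ¬S◇(¬A ∧ ■¬B) = S□(A ∨ ◆B) = A until□ B, using the classical dualities ¬◇ = □¬ and ¬■ = ◆¬.)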

This distinction smells very probably-already-invented. I await comments of the form "yes it totally has been in Author [19XX]" :)

---

Maybe the asymmetry between the forward-looking and backward-looking operators could be healed by keeping track of the "world interval" separately from the current world. Let x ∈ [a,b] abbreviate a ≤ x ≤ b. Then:
(□A) @ s,t,u = ∀x ∈ [t,u]. (A @ t,x,u)
(◇A) @ s,t,u = ∃x ∈ [t,u]. (A @ t,x,u)
(■A) @ s,t,u = ∀x ∈ [s,t]. (A @ s,x,t)
(◆A) @ s,t,u = ∃x ∈ [s,t]. (A @ s,x,t)


This means our erstwhile "S" has been incorporated directly into □ and ◇: they update the lower-bound of the "worlds we are currently interested in" interval from s to what used to be the current world, t. A dual "co-S" has been incorporated into ■ and ◆, since both of them update the upper-bound to t. So our untils become just
A until□ B = □(A ∨ ◆B)
A until◇ B = ◇(B ∧ ■A)

which means we can mechanically come up with temporal duals of these (taking the word "since" from "Back to the Future: Towards a Theory of Timed Regular Languages")
A since■ B = ■(A ∨ ◇B)
A since◆ B = ◆(B ∧ □A)

I think I like this setup a lot better since it transparently satisfies time-reversal symmetry when you also swap s,t,u to u,t,s in the translation.
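
For completeness, here's how the interval-indexed translation might look in the same sketchy Haskell style (same caveats; World, worlds, and le as before, and all the function names are mine):

-- a formula denotes a predicate on a lower bound s, current world t, upper bound u
type Interval = World -> World -> World -> Bool

boxI, diaI, boxPI, diaPI :: Interval -> Interval
boxI  a s t u = and [ a t x u | x <- worlds, t `le` x, x `le` u ]  -- □: new lower bound is t
diaI  a s t u = or  [ a t x u | x <- worlds, t `le` x, x `le` u ]  -- ◇
boxPI a s t u = and [ a s x t | x <- worlds, s `le` x, x `le` t ]  -- ■: new upper bound is t
diaPI a s t u = or  [ a s x t | x <- worlds, s `le` x, x `le` t ]  -- ◆

untilB, untilD, sinceB, sinceD :: Interval -> Interval -> Interval
untilB a b = boxI  (\s t u -> a s t u || diaPI b s t u)  -- □(A ∨ ◆B)
untilD a b = diaI  (\s t u -> b s t u && boxPI a s t u)  -- ◇(B ∧ ■A)
sinceB a b = boxPI (\s t u -> a s t u || diaI  b s t u)  -- ■(A ∨ ◇B)
sinceD a b = diaPI (\s t u -> b s t u && boxI  a s t u)  -- ◆(B ∧ □A)

-- the time-reversal symmetry mentioned above: flip le and swap s with u,
-- and each until turns into the corresponding since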
Tags: logic, temporal