September 16th, 2012


(no subject)

Oh dang the problem of expressing read-only access in linear logic (or as chrisamaphone expressed her preference for naming it, the "contingent transition problem") that she and taus were discussing so fruitfully is really fascinating. It was a topic that came up frequently back in the early stages of my thesis discussions with Frank, since we kind of thought having first-class labels for state might provide a solution. But I think they don't provide as strong a solution as can be hoped for.

My digestion of their thoughts goes as follows (and is meant, incidentally, to repudiate all the comments I left on that post, which I now think are a broken semi-solution):

Suppose c is a positive atom. P and N will stand for general positive and negative propositions. I'd like to add to the logic a propositional notation [c]N, which, when it's on the left of the turnstile, means "you can focus on N as long as c is in the context right now".

Ok, fine, [c]N = c -o N, bam, done. PROBLEM: c is consumed, and N wanted to use that delicious, succulent positive atom. Ok, fine, [c]N = c -o c * N. PROBLEM: switching back to positive polarity means that by the time you're focusing on N again, the c may no longer be around. Like, the beginning (bottom) of my proof might look like this:

N |- P
--------------------------
c, N, c -o 1 |- P
--------------------------
c, c -o c * N, c -o 1 |- P

now I'm able to choose to focus on N --- or wait until later! --- even though c is gone. This is not the intended semantics of [c]N.

Ok, so what I'll try is defining [c]N = c -o {c}N and defining {c}N recursively in N.

{c}(^P) = ^(c * P)
{c}(N1 & N2) = {c}N1 & {c}N2
{c}(top) = top
{c}(P -o N) = ({c}P -o N) & (P -o {c}N)
{c}(vN) = v(c -o N)
{c}(P1 + P2) = {c}P1 + {c}P2
{c}(0) = 0
{c}(P1 * P2) = ({c}P1 * P2) + (P1 * {c}P2)
{c}(1) = 0

This threads the c through the synchronous meat of N down to wherever the shifts are, and whenever a shift happens, it "gives c back". The & in the case for -o (and the + in the case for *) expresses the choice involved in the multiplicative context-split between two contexts: we can send the c (the one that we're eventually going to stick in some context, but not until we find a shift) to either side, our choice.

Then we can extend [c]N to more general positive propositions by saying

[P1 + P2]N = [P1]N & [P2]N
[P1 * P2]N = [P1][P2]N
[0]N = top
[1]N = N
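
Just to convince myself the cases hang together, here's the whole translation as a throwaway Haskell sketch over a little polarized syntax tree. The datatype and function names (Pos, Neg, spliceN/spliceP for {c}(-), bracket for [-](-)) are arbitrary, and the two cases I never wrote down above ({c} on a bare positive atom, and [vN']N) are flagged in the code: the first is a guess by the derivative analogy, the second just errors out.

data Pos                   -- positive propositions
  = PAtom String           -- a positive atom like c
  | Down Neg               -- vN
  | Sum Pos Pos            -- P1 + P2
  | Zero                   -- 0
  | Tensor Pos Pos         -- P1 * P2
  | One                    -- 1
  deriving Show

data Neg                   -- negative propositions
  = Up Pos                 -- ^P
  | With Neg Neg           -- N1 & N2
  | Top                    -- top
  | Lolli Pos Neg          -- P -o N
  deriving Show

-- {c}N: thread c down through N until a shift gives it back.
spliceN :: String -> Neg -> Neg
spliceN c (Up p)       = Up (Tensor (PAtom c) p)                              -- ^(c * P)
spliceN c (With n1 n2) = With (spliceN c n1) (spliceN c n2)
spliceN _ Top          = Top
spliceN c (Lolli p n)  = With (Lolli (spliceP c p) n) (Lolli p (spliceN c n)) -- the & choice

-- {c}P: the same thing on the positive side.
spliceP :: String -> Pos -> Pos
spliceP c (Down n)       = Down (Lolli (PAtom c) n)                           -- v(c -o N)
spliceP c (Sum p1 p2)    = Sum (spliceP c p1) (spliceP c p2)
spliceP _ Zero           = Zero
spliceP c (Tensor p1 p2) = Sum (Tensor (spliceP c p1) p2)
                               (Tensor p1 (spliceP c p2))                     -- the + choice
spliceP _ One            = Zero
spliceP c (PAtom a)           -- not defined above; guessing by the derivative
  | a == c    = One           -- analogy: {c}c = 1
  | otherwise = Zero          -- and {c}a = 0 for a different atom a

-- [P]N for a general positive P.
bracket :: Pos -> Neg -> Neg
bracket (PAtom c)      n = Lolli (PAtom c) (spliceN c n)        -- [c]N = c -o {c}N
bracket (Sum p1 p2)    n = With (bracket p1 n) (bracket p2 n)
bracket (Tensor p1 p2) n = bracket p1 (bracket p2 n)
bracket Zero           _ = Top
bracket One            n = n
bracket (Down _)       _ = error "[vN']N: not defined above"

Sanity check: bracket (PAtom "c") (Up (PAtom "q")) comes out as c -o ^(c * q), which is what I'd expect.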

---

Anyway I am again intrigued whenever something kinda like the product rule for derivatives (aka "Leibniz Law" from first-year calculus, d/dx (fg) = (df/dx) g + f (dg/dx)) shows up in logic (like it did in that graph-logic stuff).

I never had occasion above to define a dual [c]P = c * {c}P, but it almost smells suspiciously (if I'm allowed to imagine {c}P to be "a P with a c-shaped hole" just because {c} is slightly derivative-like, and derivatives of types are types with holes) like "a c together with a P with a c-shaped hole", which is a satisfying pair of things to have.
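
Here's the tiniest concrete instance of that smell, P = c * c, written as throwaway Haskell (all names made up): the one-hole contexts of a pair have exactly the two-summand shape of the * case of {c}, and plugging the c back in is the satisfying pair of things doing its job.

-- One-hole contexts of a pair (c, c): the hole is either on the left (and we
-- remember the right component) or on the right (and we remember the left).
-- Same shape as {c}(P1 * P2) = ({c}P1 * P2) + (P1 * {c}P2) at P1 = P2 = c.
data PairHole c
  = HoleLeft  c
  | HoleRight c
  deriving Show

-- "a c together with a P with a c-shaped hole" gives you back a P,
-- at least for this one tiny P.
plug :: c -> PairHole c -> (c, c)
plug x (HoleLeft  y) = (x, y)
plug x (HoleRight y) = (y, x)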

---

Huh! Indeed by an easy induction I can show that always [c]P |- P and N |- [c]N. This kind of makes sense, I guess? If [c]N on the left means "you can use N as long as you can verify c is present without really consuming it" then [c]N on the right must mean something like "you must make N, but you are provided with the assumption that c is present, without really producing it". So if you already have an N in the context, you should be able to make a [c]N, ignoring the assumption of c's presence.

(no subject)

Anyone know if the following massively-multiplayer game (which seems vaguely like a prisoner's dilemma type of game, but I don't know if it shares many formal properties with it, really) has a canonical name in the literature?

There are N players. Assume N is big, like on the order of a million. The game is parametrized by a threshold T, which is some substantial fraction of N, like N/3 or N/2 or (3/4)N or something like that. The payoffs are controlled by two parameters, D and C. If the number of cooperating players is less than the threshold, the defectors each receive $1, and the cooperators nothing. If the number of cooperating players is at or above the threshold, then the defectors get $D and the cooperators get $C.

Here are some parameter values for (D, C) that are interesting to consider:
1. (1000, 1000)
2. (1001, 1000)
3. (999, 1000)
4. (0, 1000)
5. (10000, 1000)

What do you play?

In (1) and (2) and (5), the rational play is definitely to defect. In (3) and (4) I suppose you have to think about the likelihood that the population is going to make the critical threshold. (5) is interesting to me, because there's an amplified incentive to defect if lots of other people are cooperating --- it's perhaps the most prisoner's-dilemma-like, since you're exploiting the "suckers".
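
Just to have the payoffs laid out mechanically, here's a quick Haskell sketch (the encoding is mine, and I'm reading the threshold as "at least T cooperators"); for each parameter pair it prints the three situations worth staring at: threshold hopelessly missed, threshold comfortably met, and the knife edge where your own move decides it.

-- Payoff to one player, given the parameters (d, c), the threshold t, their
-- own move, and how many of the OTHER players cooperate.
data Move = Cooperate | Defect deriving (Eq, Show)

payoff :: (Int, Int) -> Int -> Move -> Int -> Int
payoff (d, c) t me othersCooperating
  | coops >= t = if me == Defect then d else c   -- threshold reached
  | otherwise  = if me == Defect then 1 else 0   -- threshold missed
  where
    coops = othersCooperating + (if me == Cooperate then 1 else 0)

main :: IO ()
main = do
  let t = 333333   -- say N = 10^6 and T = N/3
      params = [(1000,1000), (1001,1000), (999,1000), (0,1000), (10000,1000)]
      -- hopelessly missed, comfortably met, and the knife edge
      situations = [("missed", 0), ("met", t), ("pivotal", t - 1)]
  mapM_ (\dc ->
          mapM_ (\(label, k) ->
                  putStrLn (show dc ++ " " ++ label
                            ++ ": defect = " ++ show (payoff dc t Defect k)
                            ++ ", cooperate = " ++ show (payoff dc t Cooperate k)))
                situations)
        params

The pivotal row is the only place where cooperating ever beats defecting in (1), (2), and (5), and (as the next paragraph gets at) the chance of landing exactly there is vanishing.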

In all cases, the game is made weirder than the plain prisoner's dilemma (and this is what made me think of it, reading some stuff about voting in elections elsewhere) by the intuitions surrounding the paradox of the heap: "my own vote has a vanishing chance of making a difference so why bother".