Notes from a Medium-Sized Island
Jason

[Sep. 16th, 2012|10:29 am]
Jason

Anyone know if the following massively-multiplayer game (which seems vaguely like a prisoner's dilemma type of game, but I don't know if it shares many formal properties with it, really) has a canonical name in the literature?

There are N players. Assume N is big, like on the order of a million. The game is parametrized by a threshold T, which is some substantial fraction of N, like N/3 or N/2 or (3/4)N or something like that. The payoffs are controlled by two parameters, D and C. If the number of cooperating players is less than the threshold, the defectors each receive $1 and the cooperators receive nothing. If the number of cooperating players meets or exceeds the threshold, then the defectors each get $D and the cooperators each get $C.
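
For concreteness, here's a minimal sketch of the payoff rule in Python. The function name and the choice that "making the threshold" means at-least-T are mine; the setup above doesn't pin down the tie case.

    def payoff(action, num_cooperators, T, D, C):
        # Payoff for one player, given the total number of cooperators
        # (including this player, if it cooperated). action is 'C' or 'D'.
        if num_cooperators >= T:           # threshold met
            return C if action == 'C' else D
        else:                              # threshold missed
            return 0 if action == 'C' else 1

    # e.g. a defector when 600,000 of 1,000,000 players cooperate:
    # payoff('D', 600_000, T=500_000, D=1001, C=1000)  ->  1001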

Here are some parameter values for (D, C) that are interesting to consider:
1. (1000, 1000)
2. (1001, 1000)
3. (999, 1000)
4. (0, 1000)
5. (10000, 1000)

What do you play?

In (1) and (2) and (5), the rational play is definitely to defect. In (3) and (4) I suppose you have to think about the likelihood that the population is going to make the critical threshold. (5) is interesting to me, because there's an amplified incentive to defect if lots of other people are cooperating --- it's perhaps the most prisoner's-dilemma-like, since you're exploiting the "suckers".
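
A rough sketch of that likelihood consideration, under the simplifying assumption that your own choice doesn't change whether the threshold is met (which is exactly the assumption the heap-style worry below pokes at):

    # If the threshold is met with probability p, and you treat your own
    # vote as non-pivotal:
    def expected_payoffs(p, D, C):
        ev_defect = p * D + (1 - p) * 1      # $1 consolation prize below threshold
        ev_cooperate = p * C + (1 - p) * 0
        return ev_defect, ev_cooperate

    # Scenario (4), (D, C) = (0, 1000): cooperating wins whenever p > 1/1001.
    # Scenario (3), (999, 1000): cooperating wins only when p > 1/2.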

In all cases, the game is made weirder than the plain prisoner's dilemma (and this is what made me think of it, reading some stuff about voting in elections elsewhere) by the intuitions surrounding the paradox of the heap: "my own vote has a vanishing chance of making a difference so why bother".

Comments:
From: nightspore
2012-09-16 05:23 pm (UTC)

What have you been reading?

Voting, for me, is the gold standard real world case for evidential decision theory. I vote to give myself evidence others will, evidence that's (I hope) quasi-causal.

This is a great set-up. I'm going to ask Rob Boyd, who'd know.

(Reply) (Thread)
From: jcreed
2012-09-16 09:23 pm (UTC)
Oh just an argument some friends were having over on Plus. Don't have the link right now.

I forget what your position is in the two-box game, despite the many times you've referred to it... do I infer correctly you're a one-boxer, then? I think I am, ultimately, but a philosophically uneasy one-boxer.
(Reply) (Parent) (Thread)
From: platypuslord
2012-09-16 09:46 pm (UTC)
I have a rant about the box game.

Superficially, this game looks like it's a question about game theory: there are boxes and payoffs and utility functions and all that stuff. But the actual question it's asking can't be answered with game theory. Here is a rephrasing of the box game:

"Something called the Predictor claims that it can predict what decision you will make. This claim directly implies that you can model the one-box/two-box decision as one in which the Predictor acts second, even though in reality it acts first. This claim is very hard to believe, but you are supplied with evidence E which supports the claim. Do you believe the Predictor's claim?"

If you do believe the claim (if you assign a greater than 0.1% probability to the claim) then you choose one box. Otherwise you choose two boxes.

So this isn't actually a game theory question; it's a question about philosophy, or psychology, or reality-simulating computronium, or whatever was involved in evidence E.

By the way, I think humans' actions can be predicted with greater than 0.1% accuracy compared to random, so if E is at all compelling then I choose one box.
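
For what it's worth, here's where a figure like 0.1% comes from, assuming the usual Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one (amounts not stated in this thread, so treat this as a sketch):

    # Let q = probability you give to the Predictor's claim being true.
    # If true: one-boxing nets ~$1,000,000, two-boxing ~$1,000.
    # If false: the opaque box is already fixed, so two-boxing is $1,000 better.
    def one_box_advantage(q):
        return q * 999_000 - (1 - q) * 1_000

    # Break-even at q = 1_000 / 1_000_000 = 0.001, i.e. 0.1%.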
(Reply) (Parent) (Thread)
From: eub
2012-09-18 06:38 am (UTC)
Yeah, I agree with your take, that Newcomb's paradox is really a question about whether you believe the setup.
(Reply) (Parent) (Thread)
From: platypuslord
2012-09-16 08:22 pm (UTC)
In cases (1)-(4), I have high confidence that the threshold will be met, because nobody cares about the $1 reward. I play (C), because (i) I don't care about the $1, and (ii) I get a good feeling from cooperating which is worth more than $1.

In case (5), I play (D), and I predict that everyone else will too.
(Reply) (Thread)
From: gwillen
2012-09-17 05:31 pm (UTC)
Wait I'm confused. How can it be rational to defect in (1)? I disagree. I think formally it's neutral: Both C and D are equilibria, with D being stable and C being sort of a 'flat' equilibrium. But in reality C seems the obvious result, no?

I think (2) is pretty interesting, because it most directly matches the prisoners' dilemma in the structure of the specific hypothetical: "if you change from cooperate to defect, you will always gain a dollar." (In this case it's not actually strictly _true_ -- you have to weigh it against the chance that you will be the deciding vote.)

(3) seems to be a case where both D and C are stable equilibria, but in practice everyone will C.

(4) is another bistable case, but like, if you don't frame it as a prisoners' dilemma, it becomes VERY FRAKKING OBVIOUS that there's a D-C symmetry, broken by the fact that the payoff for C is higher, and C is the obvious thing to choose.

I think I must be missing something about your analysis.
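
A quick deviation check for case (1), (D, C) = (1000, 1000), using the payoff sketch from the top of the post, supports this reading:

    N, T, D, C = 1_000_000, 500_000, 1000, 1000

    # All cooperate: a lone defector still leaves N-1 >= T cooperators
    # and gets D = 1000 instead of C = 1000 -- no gain, hence "flat".
    assert payoff('D', N - 1, T, D, C) == payoff('C', N, T, D, C)

    # All defect: a lone cooperator gets $0 instead of $1 -- a strict loss,
    # so all-defect is stable.
    assert payoff('C', 1, T, D, C) < payoff('D', 0, T, D, C)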
(Reply) (Thread)
From: jcreed
2012-09-17 06:50 pm (UTC)
Yeah I guess you're right about (1). All I can say is that everyone defecting is *an* equilibrium.

However, if you take a sledgehammer and posit that everyone's choice is a probability of cooperating in the half-open interval [0,1), then the only equilibrium is zero. The converse sledgehammer of taking the interval (0,1] also has a unique equilibrium, but it seems more unstable in a way I'm not sure how best to describe formally.
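
One way to poke at this numerically (a sketch for case (1), with my own choice of N and T): compute the expected gain from defecting when each of the other players independently cooperates with probability p; the sign of that quantity as p varies is what the equilibrium argument turns on.

    from scipy.stats import binom

    def gain_from_defecting(p, N=1_000_000, T=500_000, D=1000, C=1000):
        # K = number of *other* cooperators among the N-1 other players.
        K = binom(N - 1, p)
        ev_defect = K.sf(T - 1) * D + K.cdf(T - 1) * 1   # >= T others cooperate, or not
        ev_cooperate = K.sf(T - 2) * C                   # >= T-1 others (you supply the T-th)
        return ev_defect - ev_cooperate

    # Roughly: well below T/N the gain is about +$1; above it, defecting gains
    # nothing and risks being the pivotal cooperator, so the gain dips negative.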
(Reply) (Parent) (Thread)
From: gwillen
2012-09-17 06:51 pm (UTC)
Well, I described it as a 'flat' equilibrium. A ball placed there will not roll away, but there's nothing making it roll closer, either.
(Reply) (Parent) (Thread)
From: rjmccall
2012-09-18 06:47 pm (UTC)
I really wish game theory would use a different term than "rational". Defecting is, in fact, a totally ridiculous idea in scenarios 1-4, because gaining $1 is meaningless, especially compared to gaining $1000. The analysis is that, if the threshold is met, I will receive ~$1000, but otherwise I will receive ~$0; clearly my primary course of action should be to ensure that the threshold is met. In all of these games I will cooperate, and I'll just roll my eyes at the asshole defectors who apparently value doing meaninglessly better than me over doing well for themselves. My well-earned feeling of righteous dudgeon is worth far more to me than a buck.

I am quite aware that objections to calling this behavior "rational" are not new and that economists have studied all of these points quite a bit, although a distressing amount of that work takes the tack of explaining the supposed "irrationality".

(5) is the only real question here.
(Reply) (Thread)
From: jcreed
2012-09-18 07:50 pm (UTC)
Okay, sure. Though I could replace --- no, let me say that I should have in the first place replaced --- dollars with utils, and posit that I'm adjusting the material rewards (dollars or whatever) just so, to account for dudgeon, so that you actually have a slight, 1-util preference for the outcome where you defect.

Secondly, I guess I just don't have a problem with calling this "rational" any more than I have a problem with the connotations of "rational" being inappropriate for rational numbers. It's just a definition.
(Reply) (Parent) (Thread)
From: rjmccall
2012-09-18 08:52 pm (UTC)
In math there's no confusion — no trained mathematicians are mistaking "rational number" for a value judgement about number systems. In contrast, you can find a lot of people (many of whom ought to know better, but still) confusing the game theoretic term for a value judgement, which is understandable because it often does seem to approximate true rationality, and indeed seems to have been designed as one idea of what rationality might mean. And then it's much worse coming in a social science which regularly receives some amount of mainstream press coverage, because journalists and the public have no idea of what it even means to be a formal definition, much less that this one with its now-seen-as-jokey-at-least-among-mathematicians name is not infrequently quite divergent from the culturally incredibly potent notions of rationality and irrationality.

I accept your proposal about utils and dudgeon, but... I'm not totally convinced that scaling the material rewards just works. I mean, let's say we inflate the base defection reward to, say, $20, enough that I would actually significantly regret getting absolutely nothing. To maintain the original balance, we inflate the cooperation reward to $20,000, which is actually a pretty large amount of money; to me, it might represent a small but noticeable life difference (say, a vacation or three), whereas to someone middle-class, it's a lot closer to radically life-changing, which $1000 wouldn't have been. My potential regret about not getting $20 is nothing compared to my potential regret for not getting $20,000, and the difference between $20,000, $20,020, and $19,980 is really immaterial. But more importantly, my dudgeon has actually scaled as well — my dudgeon is based on what the defectors cheated us all out of by not cooperating, so it gets proportionately higher when the rewards do. You dirty defectors have actually significantly hurt a lot of people in your mad quest for an extra $20.

And if you don't inflate the cooperation reward, the balance is different. If the cooperation reward is only 50x the incremental defection reward, that is a very different problem, although I still definitely cooperate down to at least 5x.

I mean, I assume that people rationalize defecting here by supposing that N is large enough that they should just assume that the result is a given and optimize within that constraint. In practice, I don't know anything about the wider group, except that (given humanity) there's probably a cohort of people who will follow the same general line of thinking that I do; my reaching a certain conclusion won't cause them to reach the same conclusion, but they will nonetheless do so, and it's to our mutual advantage for that to be "cooperate".
(Reply) (Parent) (Thread)