Suppose n agents (for some odd n) play the following game. On each turn, a subset S of {1, ..., n} is chosen uniformly at random from the 2^n possible subsets. The subset is revealed to all players, and each player votes AYE or NAY. If there are more AYEs than NAYs, every player in S receives 100 points and every player not in S loses 1 point; otherwise, all scores remain unchanged.
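To make the mechanics concrete, here is a minimal simulation sketch. It assumes the naive strategy (each player votes AYE exactly when the bill benefits them); the strategy choice is mine, not part of the game definition.

```python
import random

def play_round(n, scores):
    # Choose a uniformly random subset S of players 0..n-1:
    # each player is included independently with probability 1/2.
    S = {i for i in range(n) if random.random() < 0.5}
    # Naive strategy: each player votes AYE iff the bill benefits them,
    # so the AYE count is just |S|.
    ayes = len(S)
    if ayes > n - ayes:  # strict majority: the bill passes
        for i in range(n):
            scores[i] += 100 if i in S else -1

n = 7
scores = [0] * n
for _ in range(10_000):
    play_round(n, scores)
print(scores)
```

Under naive voting everyone's expected score per round is positive, since the +100 windfalls dwarf the -1 losses whenever a bill passes.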
The question is: when is it rational to deviate from voting AYE on exactly the "bills" that include you as a beneficiary, and NAY on exactly those that don't? I'm certain game theory folks have already spent like 8000 hours thinking about this, especially in the context of the prisoner's dilemma...
At least it's clear that there could be some advantage to making other agents believe that your vote can be swayed by their past behavior. Specifically, if agent X can be convinced that I will always vote AYE on proposals that benefit X (even if they don't benefit me!) whenever X has voted AYE on a proposal that benefits me, then X is more likely to vote AYE on proposals that benefit me (even when they don't benefit X).
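One way to see that such reciprocity can pay is to compute exact expected scores by enumerating all 2^n subsets. The sketch below assumes a hypothetical "pact" between players 0 and 1 who always vote AYE on any bill benefiting either of them, while everyone else votes selfishly; the pact structure and player numbering are my own illustration, not something from the game's definition.

```python
from itertools import product

n = 7
pact = {0, 1}  # hypothetical pair who trade votes

def expected_scores(vote):
    """Exact per-round expected score for each player, averaged
    over all 2^n equally likely subsets S."""
    totals = [0.0] * n
    for bits in product([0, 1], repeat=n):
        S = {i for i in range(n) if bits[i]}
        ayes = sum(vote(i, S) for i in range(n))
        if ayes > n - ayes:  # strict majority: the bill passes
            for i in range(n):
                totals[i] += (100 if i in S else -1) / 2**n
    return totals

# Baseline: everyone votes AYE iff they personally benefit.
selfish = lambda i, S: i in S
# Pact members also vote AYE whenever the bill benefits either of them.
logroll = lambda i, S: (i in S) or (i in pact and bool(pact & S))

base = expected_scores(selfish)
with_pact = expected_scores(logroll)
print("selfish:", base[0], " pact member:", with_pact[0],
      " outsider under pact:", with_pact[2])
```

The pact passes some bills that would otherwise fail (those where one pact member benefits and the AYEs would fall one short), which raises the pact members' expected scores above both the symmetric baseline and the outsiders' scores under the same regime.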
Is there a snappy jargon term for this sort of reasoning: a form of the rationality assumption that involves making assumptions about your fellow agents' predictive capacities?