I’ve written about the Prisoner’s Dilemma before but wanted to revisit the point.

Game theory purports to be a theory of “rational self-interested actors” or “rational maximizers.” These are individuals who are only interested in playing the game to win. All they know about the other player is that they are also a rational maximizer, and *this is necessary for the theory to say anything interesting*.

If the players were not both rational maximizers, then a rational player might need a completely different strategy depending on the nature of their opponent. Game theory would then be a theory of nothing much. It is only as a theory of rational maximizers that it has anything interesting to say at all.

Note that here I am talking about classic game theory, not newfangled modern inventions that study iterated games amongst semi-rational players and ask what the optimal strategy is in such circumstances. I’m talking about the historical foundations of the modern field, not the modern field.

The problem with the classic theory of deterministic symmetrical games is that in such a game all rational maximizers will necessarily choose the same strategy. To claim anything else would be to claim that we can rely on some rational maximizers to behave differently than others. That would only be the case if multiple strategies had identical payoffs, which is not generally true of symmetrical games, and is specifically not true of the Prisoner’s Dilemma.

Again: stochastic game theory, where no agent can be relied upon to be a rational maximizer, is a different animal. In this case, a rational maximizer’s strategy is neither unique nor obvious, so a great deal of the supposed power of game theory goes away.

But in a theory of strict rational maximizers there is no more chance of one rational maximizer in a pairwise game making a different choice than the other than there is of one mass in a physics problem falling down while an otherwise identical mass falls up. Classical physics is a deterministic theory of massive bodies, and as such all massive bodies are predicted to behave in the same way in the same situation.

In a deterministic theory of rational maximizers all rational maximizers are predicted to behave in the same way in the same situation.

It follows from this that the off-diagonal elements of the payoff matrix for a symmetric one-off game between rational maximizers are irrelevant. No rational maximizer would ever consider them, because they know that as a matter of causal necessity *whatever they choose the other actor will choose as well*.
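The diagonal argument can be sketched concretely. Here is a minimal illustration using the conventional Prisoner’s Dilemma payoff values (T=5, R=3, P=1, S=0 — the specific numbers are an assumption, not from the text above; any payoffs satisfying T > R > P > S would do):

```python
# payoff[(my_move, their_move)] -> my payoff, using conventional PD values.
payoff = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: mutual defection
}

# The textbook analysis compares rows: defection dominates cooperation
# no matter what the opponent does.
assert payoff[("D", "C")] > payoff[("C", "C")]
assert payoff[("D", "D")] > payoff[("C", "D")]

# But if two identical rational maximizers must, by symmetry, make the
# same choice, only the diagonal outcomes are reachable, and on the
# diagonal mutual cooperation beats mutual defection.
diagonal = {move: payoff[(move, move)] for move in ("C", "D")}
best = max(diagonal, key=diagonal.get)
print(best)
```

The two analyses disagree precisely because the first one consults the off-diagonal cells, which the symmetry argument says are causally unreachable.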

To claim otherwise is to claim that one actor is a rational maximizer and the other actor is a random number generator, which is not what classical game theory purports to be about.

I’m belabouring this point for a reason: this error of imposing an asymmetric assumption on a symmetric situation is incredibly common, to the point of being our default assumption, and it is more often than not wrong.

To take a trivial example: Patrick Rothfuss’ novella “The Slow Regard of Silent Things” was pretty well received by his first readers, but they all thought no one else would like it *even though they themselves did*.

This is the Prisoner’s Fallacy: the rejection of the idea that the best, most robust, first-order predictor of other people’s behaviour is your own behaviour.

The opposite of this is the Law of Common Humanity: “To first order, They are pretty much like Us.”

The Prisoner’s Fallacy comes to us so naturally that an entire industry of very smart people failed to notice it in the roots of classical game theory. *Of course* the players of a symmetrical game could behave differently! How could this not be?

More interestingly: how could it be? How could we come to impose asymmetry on such a symmetrical situation?

Is it simply because we cannot see from any point of view but our own, and as soon as we think about the problem we project ourselves into the mind of the nominal rational maximizer, and so spontaneously break the symmetry of the problem? Maybe. But we have no warrant to do so.

This is not a small problem. The most extreme case results in the War Puzzle: the question of why anyone would go to war when there are always better alternatives available. The reason seems to be in part that we humans tend to expect others will behave differently than ourselves: *we* would fight back vigorously when attacked, but *they* will capitulate at the sound of the first shot.

Decentering our point of view is hard. There are entire books written on it, and none of them have made much difference to the world. I don’t have any amazingly clever solution. I just wanted to point out how pervasive this error is and how easy it is to make, so much so that I’m sure most people versed in classical game theory will deny the premise of this post and insist that no one ever claimed classical game theory was *just* a theory of rational maximizers, but rather some other theory that explicitly adopted fixes to allow non-rational actors into the mix. That may be, but every popular exposition of game theory I’ve read, as well as the more technical introductions, tends to say things like: game theory is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.”

Yet no theory of intelligent rational decision-makers will admit of the possibility that in a symmetric game there will be anything other than symmetrical behaviour on the part of identical entities.