I wasn’t really looking to write about artificial intelligence yet again. Then this week, the leaders of several prominent companies developing AI warned that it could pose an extinction-level threat. At first, I found this odd. I don’t know about you, but whenever I’m working on a project that I worry may end humanity, I stop. Maybe I’m just a nice guy. But, then I got to thinking about it, and I realized that the development of AI may be similar to a classic economic game: “The Prisoner’s Dilemma.”
If you have ever seen the movie “A Beautiful Mind,” then you have been exposed to this kind of game and to John Nash, the mathematician whose equilibrium concept economists use to solve it. In the movie, Nash is played by Russell Crowe as a graduate student struggling to write his PhD dissertation. (Russell Crowe playing an economist was great for the profession. Rick Moranis is probably more of our archetype.) The plot of the movie centers on Nash’s struggles with his mental health. But, the movie also touches on the development of the concept of “Nash Equilibrium,” which describes the expected behavior of individuals engaging in strategic games. In this post, I want to describe this concept and show how it might apply to the development of AI. Then, I want to come back to the movie to give you a little quiz.
Nash Equilibrium and the Prisoner’s Dilemma
A “strategic game” is any game or situation where the behavior of other players affects your outcome. So, a game like Candy Land is not strategic. No matter what the other players do, your goal is to get to King Kandy’s Castle by drawing lucky cards and hopefully avoiding various pitfalls. On the other hand, Monopoly is strategic. If your opponent already owns Park Place, buying Boardwalk is less valuable to you from a revenue perspective, but it also blocks them from completing the set. So, their behavior will affect your decision.
The Nash Equilibrium describes solutions to these sorts of games. The basic idea can be summed up in a sentence: the Nash Equilibrium of a game is the set of strategies where each player’s action is the best response to the other players’ actions. It’s sort of abstract, so let’s do a classic example: The Prisoner’s Dilemma.
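For anyone who wants the formal version (my notation, not the post’s or the movie’s), a set of strategies is a Nash Equilibrium when no player can gain by changing only their own strategy:

$$u_i\left(s_i^{*}, s_{-i}^{*}\right) \;\ge\; u_i\left(s_i, s_{-i}^{*}\right) \quad \text{for every player } i \text{ and every alternative strategy } s_i,$$

where $u_i$ is player $i$’s payoff, $s_i^{*}$ is player $i$’s equilibrium strategy, and $s_{-i}^{*}$ is everyone else’s.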
The Game
The game begins with two prisoners, arrested for a crime that they committed together. The prisoners are put into separate interrogation rooms, where they have two options. They can confess their crimes to the police, in hope of a lenient sentence. Or, they can stay silent and hope that their partner does too, in which case the police might lack evidence for a strong conviction.
The “payoffs” to the game are as follows. If both prisoners stay silent, then the weak evidence results in just a one-year sentence each. If both prisoners confess, then the state gets strong evidence for conviction, and neither prisoner gets the full benefit of cooperating with the police. So, both go to prison for five years. And, if one confesses and the other doesn’t, then the confessor gets off with no jail time in exchange for their cooperation, and the silent partner goes in for ten years. The grid below illustrates these outcomes.
Figure 1. Prisoner’s Dilemma

Now, you might be tempted to think that both players will remain silent, spending the lowest possible total time in jail at two years (one each). But, that is not the Nash Equilibrium. If Prisoner A thinks Prisoner B will remain silent, then Prisoner A’s best response is to confess and spend no time in jail. If Prisoner A instead thought Prisoner B would confess, then Prisoner A should still confess, so as to serve five years instead of ten. The same is true in reverse for Prisoner B. In other words, no matter what the other side does, it is always best to confess. So, our prisoners both confess, and spend ten total years in jail instead of the minimum two. And, both are worse off than they would be if they could just keep their mouths shut.
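If you want to see that logic run mechanically, here is a minimal Python sketch (my own illustration, not anything from the movie or the textbooks) that checks every strategy pair in the grid for the equilibrium property:

```python
# Best-response check for the Prisoner's Dilemma. The payoff numbers are
# the jail sentences from the grid above, so lower is better for each prisoner.

ACTIONS = ["silent", "confess"]

# years[(a, b)] = (Prisoner A's sentence, Prisoner B's sentence)
years = {
    ("silent", "silent"): (1, 1),
    ("silent", "confess"): (10, 0),
    ("confess", "silent"): (0, 10),
    ("confess", "confess"): (5, 5),
}

def is_nash_equilibrium(a, b):
    """True if neither prisoner can shorten their own sentence by
    unilaterally switching strategies."""
    a_stuck = all(years[(a, b)][0] <= years[(alt, b)][0] for alt in ACTIONS)
    b_stuck = all(years[(a, b)][1] <= years[(a, alt)][1] for alt in ACTIONS)
    return a_stuck and b_stuck

for a in ACTIONS:
    for b in ACTIONS:
        if is_nash_equilibrium(a, b):
            print(f"Nash Equilibrium: A {a}, B {b}, sentences = {years[(a, b)]}")

# Prints only: Nash Equilibrium: A confess, B confess, sentences = (5, 5)
```

The only pair that survives the check is confess/confess, even though silent/silent would leave both prisoners better off.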
Is AI Development a Prisoner’s Dilemma?
So, what can this kind of strategic interaction tell us about AI? Imagine you have two players, AI Developer A and AI Developer B. They both have two actions: 1) develop AI aggressively; or 2) develop AI safely. The payoffs here are highly hypothetical and purely illustrative, but let’s imagine the following. If both develop AI safely, then they both reap the business benefits of AI without societal disruption. Let’s label this as a value of 100 each, for a total value of 200.
On the other hand, if one company develops AI safely and the other aggressively, then the safe company loses the business competition while still suffering whatever society-wide disruption occurs. So, the safe company gets nothing. The aggressive developer wins the competition, but also lives with the disruption, getting 150. So, the society-wide total gain is 150, lower than in the safe/safe option.
Finally, if both aggressively pursue AI, both get some profit but also live with the potential disruption. Perhaps they get 50 each, for a society-wide total of 100. This game would look like the grid below.
Figure 2. AI Developer’s Dilemma

You can see that this sets up exactly the same pressures as the Prisoner’s Dilemma. If both play it safe, there is no disruption and there are big industry-wide profits. But, if Developer A really thinks Developer B will play it safe, it should be aggressive and grab the bigger profits, disruptions be damned. And, if one firm thinks its competitor will be aggressive, it might as well be aggressive too. Otherwise, it will earn no profit at all while still dealing with the fallout. In this game, both developers and society as a whole lose out relative to the safe outcome. But, the safe outcome is not really achievable.
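And if you run the same sort of best-response check on the hypothetical payoffs above (again, these are my made-up illustrative numbers, not real estimates of anything), aggressive/aggressive comes out as the lone equilibrium:

```python
# Best-response check for the AI Developer's Dilemma. Here higher is better,
# since the numbers represent profits rather than jail sentences.

ACTIONS = ["safe", "aggressive"]

# payoffs[(a, b)] = (Developer A's payoff, Developer B's payoff)
payoffs = {
    ("safe", "safe"): (100, 100),
    ("safe", "aggressive"): (0, 150),
    ("aggressive", "safe"): (150, 0),
    ("aggressive", "aggressive"): (50, 50),
}

def is_nash_equilibrium(a, b):
    """True if neither developer can raise their own payoff by
    unilaterally switching strategies."""
    a_stuck = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in ACTIONS)
    b_stuck = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in ACTIONS)
    return a_stuck and b_stuck

for a in ACTIONS:
    for b in ACTIONS:
        if is_nash_equilibrium(a, b):
            print(f"Nash Equilibrium: A {a}, B {b}, payoffs = {payoffs[(a, b)]}")

# Prints only: Nash Equilibrium: A aggressive, B aggressive, payoffs = (50, 50)
```

Swap in different numbers and the equilibrium can change; the dilemma only bites when being aggressive beats playing it safe no matter what the rival does.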
A Time When Government Can Actually Help
Government intervention in the economy is a controversial topic among our citizenry and among economists. Sometimes, well-intended government intervention can backfire. Other times, well-targeted government programs can improve outcomes in ways that pay for themselves. If you believe that AI development may be a sort of Prisoner’s Dilemma, then you should believe in government regulation of this industry. More on that in a second. But first, a quiz. Watch the clip below (perhaps a bit dated and sexist) from “A Beautiful Mind,” and then answer the multiple-choice question to see how well you paid attention.
OK, so, what is the Nash Equilibrium to this scene?
Did you give it a guess?
Given the preferences laid out in the movie, I think that the solution it shows, with each guy going after a brunette, is wrong. If I’m a guy who wants to date a blonde (to my wife: I am not!), and I know that all my friends are going after the brunettes, then I am going after the blonde. Everyone else will reason the same way. So, the Nash Equilibrium is for everyone to go after the blonde. And, no one has any incentive to switch to a brunette because, as Russell Crowe says in the clip, the brunettes will just reject a guy who picked them second, since no one likes being second choice.
Which gets me to what annoys me about the clip. At the end, Crowe excitedly says that Adam Smith is incomplete. He says “the best result will come with everyone doing what is best for themselves and for the group.” That last phrase, “and for the group,” is the part that Adam Smith allegedly missed. Only Nash’s Equilibrium doesn’t say this at all. What it says is that each person does what is best for themselves in response to what the others do. Sometimes that may be good for the group; other times, as in the Prisoner’s Dilemma, not so much.
This idea that individual self-interest is always best for the group is very wrong. It’s also a very dangerous idea. In the case of something like artificial intelligence, a bunch of companies may have every incentive to go too far. This is exactly the sort of situation where the government needs to step in and say “you have to be safe.” And, in this case, it won’t only be good for society; it will actually be better for the companies too. So, when I vote in 2024, I won’t be voting for anyone who doesn’t have a clear plan of action for this emerging issue.
I think the AI situation is more similar to the international naval arms races of 1890 to 1940. Attempts to restrain the players led to cheating. How is it possible to enforce agreements with China, Russia, North Korea, and Iran? Unilateral disarmament has never been prudent policy.
Interesting point, Richard. I guess I have a vision of restrictions on things like voice and image impersonation. But, of course, they could be gotten around somehow. I just hope we start thinking about it sooner rather than later.