In 1950, John Nash proved that every finite game has a natural resting point: an equilibrium. Understanding where it falls, and how to move it, is the key to designing systems where individual choices serve the common good.
Two players choose simultaneously. Alice picks a row. Bob picks a column. Where their choices intersect, each receives a payoff.
The first number in each cell is Alice's payoff. The second is Bob's. Both want to maximize their own number. The question: where do they end up?
A best response is the move that earns you the highest payoff given what the other player does.
If Bob plays Left, Alice compares her payoffs in that column and picks the highest. If Alice plays Up, Bob compares his payoffs in that row and picks the highest. The underlined payoffs mark each player's best responses.
A Nash equilibrium is a cell where both players are simultaneously playing their best response. Neither can improve by switching alone.
At (Up, Left), Alice earns 2 — better than the 0 she'd get by switching to Down. Bob earns 2 — better than 0 from switching to Right. Both are content. Balance.
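The definition can be checked mechanically: a cell is an equilibrium exactly when each payoff survives the comparison against a unilateral switch. A minimal sketch in Python, where only the cells the text describes (2 for each player at (Up, Left), 0 after a unilateral switch) come from the example above; the remaining payoffs are illustrative assumptions:

```python
def pure_equilibria(A, B):
    """Pure-strategy Nash equilibria of a 2x2 game.

    A[r][c] is the row player's (Alice's) payoff, B[r][c] the column
    player's (Bob's). A cell is an equilibrium when neither player can
    improve by unilaterally switching their own strategy.
    """
    return [(r, c) for r in range(2) for c in range(2)
            if A[r][c] >= A[1 - r][c]      # Alice can't gain by switching rows
            and B[r][c] >= B[r][1 - c]]    # Bob can't gain by switching columns

# Rows: Up, Down. Columns: Left, Right. Off-diagonal values are illustrative.
A = [[2, 1],
     [0, 0]]
B = [[2, 0],
     [1, 0]]
print(pure_equilibria(A, B))  # [(0, 0)] -> (Up, Left)
```

The check is just the best-response comparison from the previous step, applied to both players at once.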
In the Stag Hunt, two hunters choose: cooperate to catch a stag (big reward), or play it safe and catch a hare alone. If both cooperate, both earn 4 — the best possible outcome.
And here's the key: (Stag, Stag) is a Nash equilibrium. Neither hunter gains by switching to Hare. Self-interest and the common good are aligned.
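Running the same equilibrium check on the Stag Hunt confirms this. The text only specifies the payoff 4 for mutual cooperation; the hare payoffs below follow the textbook convention (a lone hare hunter earns 3 no matter what the other does), so treat them as an assumption. Note that the sketch also surfaces the game's second, safer equilibrium, (Hare, Hare), which is why coordination matters here:

```python
def pure_equilibria(A, B):
    """Cells where neither player gains by a unilateral switch."""
    return [(r, c) for r in range(2) for c in range(2)
            if A[r][c] >= A[1 - r][c] and B[r][c] >= B[r][1 - c]]

# Rows/columns: Stag, Hare. Hare payoffs are the textbook convention.
A = [[4, 0],
     [3, 3]]
B = [[4, 3],
     [0, 3]]
print(pure_equilibria(A, B))  # [(0, 0), (1, 1)]: (Stag, Stag) and (Hare, Hare)
```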
The Prisoner's Dilemma is the classic counterexample. Defection dominates: both defect, both earn 1, even though cooperation would yield 3 each.
But this isn't a flaw in the players — it's a flaw in the game. The equilibrium is a diagnostic tool: it shows you exactly where the rules need fixing.
Add a penalty for defection — a fine, a reputation cost, a contract clause. Now the payoffs shift. Suddenly (Cooperate, Cooperate) becomes the equilibrium.
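A sketch of that shift, assuming the canonical Prisoner's Dilemma payoffs (the text gives only the 1s and 3s; the temptation payoff 5 and the fine of 3 are illustrative assumptions):

```python
def pure_equilibria(A, B):
    """Cells where neither player gains by a unilateral switch."""
    return [(r, c) for r in range(2) for c in range(2)
            if A[r][c] >= A[1 - r][c] and B[r][c] >= B[r][1 - c]]

C, D = 0, 1  # strategy indices: Cooperate, Defect
A = [[3, 0],  # row player's payoffs: rows = Cooperate, Defect
     [5, 1]]
B = [[3, 5],  # column player's payoffs
     [0, 1]]

def with_penalty(A, B, fine):
    """Subtract `fine` from a player's payoff whenever that player defects."""
    A2 = [[A[r][c] - (fine if r == D else 0) for c in range(2)] for r in range(2)]
    B2 = [[B[r][c] - (fine if c == D else 0) for c in range(2)] for r in range(2)]
    return A2, B2

print(pure_equilibria(A, B))                    # [(1, 1)]: mutual defection
print(pure_equilibria(*with_penalty(A, B, 3)))  # [(0, 0)]: mutual cooperation
```

Same players, same self-interest; only the payoffs changed, and the equilibrium moved with them.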
This is mechanism design: engineering the rules so that self-interest naturally produces the outcome everyone wants. Institutions, norms, and contracts all reshape games this way.
In Bach or Stravinsky, a couple wants to attend a concert together but prefers different composers. There are two equilibria: both go to Bach, or both go to Stravinsky.
Both outcomes are good — the players just need to coordinate. Signals, conventions, and communication solve this: the equilibrium concept tells us what to aim for, culture tells us which one.
In Matching Pennies, no fixed choice works — any pattern is exploitable. The solution: mix. Each player randomizes 50-50, making the other indifferent.
This isn't chaos — it's fairness. The mixed equilibrium guarantees neither player can be systematically exploited. Penalty kicks, audit schedules, and poker bluffs all rely on this principle.
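The 50-50 mix isn't guessed; it falls out of an indifference condition. Each player randomizes so that the other earns the same expected payoff from either pure strategy. A sketch for 2×2 games with an interior mixed equilibrium, as Matching Pennies has:

```python
def mixed_equilibrium_2x2(A, B):
    """Interior mixed equilibrium of a 2x2 game via indifference:
    each player mixes so the *other* player's two pure strategies
    yield equal expected payoff (assumes such an interior mix exists)."""
    # p: probability the row player uses row 0, making the column player indifferent.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # q: probability the column player uses column 0, making the row player indifferent.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching Pennies: row wins on a match, column wins on a mismatch.
A = [[1, -1],
     [-1, 1]]
B = [[-1, 1],
     [1, -1]]
print(mixed_equilibrium_2x2(A, B))  # (0.5, 0.5): both mix 50-50
```

Because the pennies game is symmetric, both probabilities come out to exactly one half; asymmetric payoffs would tilt the mix accordingly.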
In 1950, John Nash proved that every finite game has at least one equilibrium. No matter how tangled the conflict, a stable resolution always exists.
A 27-page doctoral thesis. A guarantee that balance is always reachable. The tool that launched mechanism design, market engineering, and a Nobel Prize (1994).
Adam Smith intuited in 1776 that self-interest could serve the public good. Nash made it rigorous. His fixed-point proof showed that in any finite game, the space of strategies always contains a point where every player is already doing their best — a resting point that emerges naturally from individual optimization.
This turned a philosophical claim into an engineering tool. If you can model an interaction as a game, you can find its equilibrium; and if you don't like where it lands, you can redesign the rules until the equilibrium sits where you want it.
Every finite game has at least one equilibrium, but the character varies. Cooperation, competition, coordination: the structure of the payoffs determines the quality of the outcome.
Payoffs are (Row, Col). Best responses are underlined. Green-bordered cells are pure-strategy Nash equilibria.
This insight powers real systems. The FCC spectrum auctions allocate airwaves efficiently using mechanism design, the theory recognized by the 2007 Nobel to Hurwicz, Maskin, and Myerson. Kidney exchange programs match donors to recipients so that self-interested hospitals still participate. School choice algorithms give families an incentive to report their true preferences, and the resulting equilibrium IS the fair assignment.
The deepest lesson: when self-interest and the common good diverge, the answer isn't to wish people were less selfish — it's to build better games.
Edit payoffs to create any 2×2 game. Equilibria are computed in real time.