
Combinatorial game theory is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Research in this field has primarily focused on two-player games in which a position evolves through alternating moves, each governed by well-defined rules, with the aim of achieving a specific winning condition. Unlike economic game theory, combinatorial game theory generally avoids the study of games of chance or games involving imperfect information, preferring instead games in which the current state and the full set of available moves are always known to both players.[1] However, as mathematical techniques develop, the scope of analyzable games expands, and the boundaries of the field continue to evolve.[2] Authors typically define the term "game" at the outset of academic papers, with definitions tailored to the specific game under analysis rather than reflecting the field’s full scope.
Combinatorial games include well-known examples such as chess, checkers, and Go, which are considered complex and non-trivial, as well as simpler, "solved" games like tic-tac-toe. Some combinatorial games, such as infinite chess, may feature an unbounded playing area. In the context of combinatorial game theory, the structure of such games is typically modeled using a game tree. The field also encompasses single-player puzzles like Sudoku, and zero-player automata such as Conway's Game of Life—although these are sometimes more accurately categorized as mathematical puzzles or automata, given that the strictest definitions of "game" imply the involvement of multiple participants.[3]
A key concept in combinatorial game theory is that of the solved game. For instance, tic-tac-toe is solved in that optimal play by both participants always results in a draw. Determining such outcomes for more complex games is significantly more difficult. Notably, in 2007, checkers was announced to be weakly solved, with perfect play by both sides leading to a draw; however, this result required a computer-assisted proof.[4] Many real-world games remain too complex for complete analysis, though combinatorial methods have shown some success in the study of Go endgames. In combinatorial game theory, analyzing a position means finding the best sequence of moves for both players until the game ends, but this becomes extremely difficult for anything more complex than simple games.
It is useful to distinguish between combinatorial "mathgames"—games of primary interest to mathematicians and scientists for theoretical exploration—and "playgames," which are more widely played for entertainment and competition.[5] Some games, such as Nim, straddle both categories. Nim played a foundational role in the development of combinatorial game theory and was among the earliest games to be programmed on a computer.[6] Tic-tac-toe continues to be used in teaching fundamental concepts of game AI design to computer science students.[7]
Combinatorial game theory contrasts with "traditional" or "economic" game theory, which, although it can address sequential play, often incorporates elements of probability and incomplete information. While economic game theory employs utility theory and equilibrium concepts, combinatorial game theory is primarily concerned with two-player perfect-information games and has pioneered novel techniques for analyzing game trees, such as through the use of surreal numbers, which represent a subset of all two-player perfect-information games. The types of games studied in this field are of particular interest in areas such as artificial intelligence, especially for tasks in automated planning and scheduling. However, there is a distinction in emphasis: while economic game theory tends to focus on practical algorithms—such as the alpha–beta pruning strategy commonly taught in AI courses—combinatorial game theory places greater weight on theoretical results, including the analysis of game complexity and the existence of optimal strategies through methods like the strategy-stealing argument.
Combinatorial game theory arose in relation to the theory of impartial games, in which any play available to one player must be available to the other as well. One such game is Nim, which can be solved completely. Nim is an impartial game for two players, subject to the normal play condition, which means that a player who cannot move loses. In the 1930s, the Sprague–Grundy theorem showed that all impartial games are equivalent to heaps in Nim, thus showing that major unifications are possible in games considered at a combinatorial level, in which detailed strategies matter, not just pay-offs.
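Nim's complete solution rests on the binary digital sum of the heap sizes (the "nim-sum", computed by bitwise XOR): under normal play, the player to move can win exactly when this sum is nonzero. A minimal Python sketch of the strategy (function names are illustrative, not from any standard library):

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """XOR of all heap sizes; nonzero means the player to move can win."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) restoring nim-sum 0, or None if none exists."""
    s = nim_sum(heaps)
    if s == 0:
        return None  # every move hands the opponent a winning position
    for i, h in enumerate(heaps):
        if h ^ s < h:  # shrinking this heap to h XOR s zeroes the nim-sum
            return i, h ^ s
```

For heaps (3, 4, 5) the nim-sum is 2, and reducing the first heap to 1 leaves (1, 4, 5), whose nim-sum is 0, so the opponent is lost with best play.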
In the 1960s, Elwyn R. Berlekamp, John H. Conway and Richard K. Guy jointly introduced the theory of a partisan game, in which the requirement that a play available to one player be available to both is relaxed. Their results were published in their book Winning Ways for your Mathematical Plays in 1982. However, the first work published on the subject was Conway's 1976 book On Numbers and Games, also known as ONAG, which introduced the concept of surreal numbers and the generalization to games. On Numbers and Games was also a fruit of the collaboration between Berlekamp, Conway, and Guy.
Combinatorial games are generally, by convention, put into a form where one player wins when the other has no moves remaining. It is easy to convert any finite game with only two possible results into an equivalent one where this convention applies. One of the most important concepts in the theory of combinatorial games is that of the sum of two games, which is a game where each player may choose to move either in one game or the other at any point in the game, and a player wins when his opponent has no move in either game. This way of combining games leads to a rich and powerful mathematical structure.
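For impartial games this sum is easy to state computationally: a move in g + h is a move in g or a move in h, and under normal play a position is a first-player win exactly when some move leads to a first-player loss. A small illustrative sketch (representing a game simply as the tuple of its options; the function names are this example's own):

```python
def nim_heap(n):
    """A Nim heap of size n: its options are the heaps of size 0..n-1."""
    return tuple(nim_heap(k) for k in range(n))

def game_sum(g, h):
    """Options of g + h: move in g, leaving h untouched, or vice versa."""
    return (tuple(game_sum(g2, h) for g2 in g)
            + tuple(game_sum(g, h2) for h2 in h))

def first_player_wins(g):
    """Normal play: the mover wins iff some move leaves the opponent losing."""
    return any(not first_player_wins(option) for option in g)
```

A single heap of one object is a first-player win, but the sum of two such heaps is a first-player loss: the second player can mirror every move in the other component.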
Conway stated in On Numbers and Games that the inspiration for the theory of partisan games was based on his observation of the play in Go endgames, which can often be decomposed into sums of simpler endgames isolated from each other in different parts of the board.
The introductory text Winning Ways introduced a large number of games, but the following were used as motivating examples for the introductory theory:
The classic game Go was influential in early combinatorial game theory, and Berlekamp and Wolfe subsequently developed an endgame and temperature theory for it (see references). Armed with this, they were able to construct plausible Go endgame positions from which they could give expert Go players a choice of sides and then defeat them either way.
Another game studied in the context of combinatorial game theory is chess. In 1953, Alan Turing wrote of the game, "If one can explain quite unambiguously in English, with the aid of mathematical symbols if required, how a calculation is to be done, then it is always possible to programme any digital computer to do that calculation, provided the storage capacity is adequate."[8] In a 1950 paper, Claude Shannon estimated the lower bound of the game-tree complexity of chess to be 10^120, and today this is referred to as the Shannon number.[9] Chess remains unsolved, although extensive study, including work involving the use of supercomputers, has created chess endgame tablebases, which show the result of perfect play for all endgames with seven pieces or fewer. Infinite chess has an even greater combinatorial complexity than chess (unless only limited endgames, or composed positions with a small number of pieces, are being studied).
A game, in its simplest terms, is a list of possible "moves" that two players, called left and right, can make. The game position resulting from any move can be considered to be another game. This idea of viewing games in terms of their possible moves to other games leads to a recursive mathematical definition of games that is standard in combinatorial game theory. In this definition, each game has the notation {L|R}. L is the set of game positions that the left player can move to, and R is the set of game positions that the right player can move to; each position in L and R is defined as a game using the same notation.
Using Domineering as an example, label each of the sixteen boxes of the four-by-four board by A1 for the upper leftmost square, C2 for the third box from the left on the second row from the top, and so on. We use e.g. (D3, D4) to stand for the game position in which a vertical domino has been placed in the bottom right corner. Then, the initial position can be described in combinatorial game theory notation as
In standard Cross-Cram play, the players alternate turns, but this alternation is handled implicitly by the definitions of combinatorial game theory rather than being encoded within the game states.
The above game describes a scenario in which there is only one move left for either player, and if either player makes that move, that player wins. (An irrelevant open square at C3 has been omitted from the diagram.) The {|} in each player's move list (corresponding to the single leftover square after the move) is called the zero game, and can actually be abbreviated 0. In the zero game, neither player has any valid moves; thus, the player whose turn it is when the zero game comes up automatically loses.
The type of game in the diagram above also has a simple name; it is called the star game, which can also be abbreviated ∗. In the star game, the only valid move leads to the zero game, which means that the player who moves first in the star game automatically wins.
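The {L|R} definition together with the normal-play convention can be checked mechanically. A minimal sketch (the class and function names here are illustrative) represents a game by its two option sets and asks whether the player moving first wins:

```python
class Game:
    """A combinatorial game {L|R}: Left's options and Right's options."""
    def __init__(self, left, right):
        self.left = left    # games Left can move to
        self.right = right  # games Right can move to

def left_wins_moving_first(g):
    # Left wins iff some Left option is a loss for Right moving first
    return any(not right_wins_moving_first(gl) for gl in g.left)

def right_wins_moving_first(g):
    # Right wins iff some Right option is a loss for Left moving first
    return any(not left_wins_moving_first(gr) for gr in g.right)

zero = Game([], [])          # {|}: the player to move has no options and loses
star = Game([zero], [zero])  # {0|0}: either player can move to the zero game
```

Whichever player moves first in the zero game loses, while whichever player moves first in star wins, matching the descriptions above.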
Numbers represent the number of free moves, or the move advantage of a particular player. By convention positive numbers represent an advantage for Left, while negative numbers represent an advantage for Right. They are defined recursively with 0 being the base case.
The zero game is a loss for the first player.
The sum of number games behaves like the integers, for example 3 + −2 = 1.
Any game number is in the class of the surreal numbers.
Star, written as ∗ or {0|0}, is a first-player win since either player must (if first to move in the game) move to a zero game, and therefore win.
The game ∗ is neither positive nor negative; it and all other games in which the first player wins (regardless of which side the player is on) are said to be fuzzy or confused with 0; symbolically, we write ∗ || 0.
The game ∗n is notation for {0, ∗, …, ∗(n−1) | 0, ∗, …, ∗(n−1)}, which is also representative of normal-play Nim with a single heap of n objects. (Note that ∗0 = 0 and ∗1 = ∗.)
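The claim above that number games add like the integers (for example 3 + (−2) = 1) can be verified directly from the recursive definitions of sums and of Conway's comparison g ≤ h. A brute-force sketch under those standard definitions, representing each game as a pair (left options, right options); the function names are this example's own:

```python
def integer(n):
    """The integer game n: positive n is {n-1 | }, negative n is { | n+1}."""
    if n == 0:
        return ((), ())
    if n > 0:
        return ((integer(n - 1),), ())
    return ((), (integer(n + 1),))

def add(g, h):
    """Sum of games: move in one component, leaving the other untouched."""
    left = tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0])
    right = tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1])
    return (left, right)

def le(g, h):
    """Conway's order: g <= h iff no left option of g is >= h
    and no right option of h is <= g."""
    return (all(not le(h, gl) for gl in g[0])
            and all(not le(hr, g) for hr in h[1]))

def eq(g, h):
    """Two games are equal in value when each is <= the other."""
    return le(g, h) and le(h, g)
```

With these definitions, eq(add(integer(3), integer(-2)), integer(1)) holds, confirming 3 + (−2) = 1 at the level of games rather than by appeal to ordinary arithmetic.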
Up, written as ↑, is a position in combinatorial game theory.[10] In standard notation, ↑ = {0|∗}. Its negative is called down.
Up is strictly positive (↑ > 0), and down is strictly negative (↓ < 0), but both are infinitesimal. Up and down are defined in Winning Ways for your Mathematical Plays.
Consider the game {1|−1}. Both moves in this game are an advantage for the player who makes them, so the game is said to be "hot"; it is greater than any number less than −1, less than any number greater than 1, and fuzzy with any number in between. It is written as ±1. Hot games of the form ±n, for some number game n, constitute the subclass of switch games. Switch games can be added to numbers, or multiplied by positive ones, in the expected fashion; for example, 4 ± 1 = {5|3}.
An impartial game is one where, at every position of the game, the same moves are available to both players. For instance, Nim is impartial, as any set of objects that can be removed by one player can be removed by the other. However, domineering is not impartial, because one player places horizontal dominoes and the other places vertical ones. Likewise, checkers is not impartial, since the players own different colored pieces. For any ordinal number, one can define an impartial game generalizing Nim in which, on each move, either player may replace the number with any smaller ordinal number; the games defined in this way are known as nimbers. The Sprague–Grundy theorem states that every impartial game under the normal play convention is equivalent to a nimber.
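The Sprague–Grundy theorem is applied in practice by computing Grundy numbers with the "mex" (minimum excludant) recursion: the Grundy number of a position is the smallest non-negative integer not among the Grundy numbers of its options, and the position is then equivalent to the nimber of that value. A sketch for a subtraction game in which a move removes 1, 2, or 3 objects from a heap (this rule set is a hypothetical example chosen for illustration):

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in the set."""
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(n, subtract=(1, 2, 3)):
    """Grundy number of a heap of n objects in the subtraction game."""
    return mex({grundy(n - k, subtract) for k in subtract if k <= n})
```

Here the Grundy numbers cycle 0, 1, 2, 3 with period four, so a heap of 4 is equivalent to the nimber 0 (a loss for the player to move) and a heap of 5 is equivalent to ∗.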
The "smallest" nimbers – the simplest and least under the usual ordering of the ordinals – are 0 and ∗.