Decision theory or the theory of rational choice is a branch of probability, economics, and analytic philosophy that uses expected utility and probability to model how individuals would behave rationally under uncertainty.[1][2] It differs from the cognitive and behavioral sciences in that it is mainly prescriptive and concerned with identifying optimal decisions for a rational agent, rather than describing how people actually make decisions. Despite this, the field is important to the study of real human behavior by social scientists, as it lays the foundations to mathematically model and analyze individuals in fields such as sociology, economics, criminology, cognitive science, moral philosophy and political science.
The roots of decision theory lie in probability theory, developed by Blaise Pascal and Pierre de Fermat in the 17th century and later refined by others such as Christiaan Huygens. These developments provided a framework for understanding risk and uncertainty, which are central to decision-making.
In the 18th century, Daniel Bernoulli introduced the concept of "expected utility" in the context of gambling, which was later formalized by John von Neumann and Oskar Morgenstern in the 1940s. Their work on game theory and expected utility theory helped establish a rational basis for decision-making under uncertainty.
After World War II, decision theory expanded into economics, particularly through the work of economists such as Milton Friedman, who applied it to market behavior and consumer choice theory. This era also saw the development of Bayesian decision theory, which incorporates Bayesian probability into decision-making models.
By the late 20th century, scholars such as Daniel Kahneman and Amos Tversky challenged the assumptions of rational decision-making. Their work in behavioral economics highlighted cognitive biases and heuristics that influence real-world decisions, leading to the development of prospect theory, which modified expected utility theory by accounting for psychological factors.
Normative decision theory is concerned with identifying optimal decisions, where optimality is often determined by considering an ideal decision maker who is able to calculate with perfect accuracy and is in some sense fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis and is aimed at finding tools, methodologies, and software (decision support systems) to help people make better decisions.[3][4]
In contrast, descriptive decision theory is concerned with describing observed behaviors, often under the assumption that those making decisions are behaving under some consistent rules. These rules may, for instance, have a procedural framework (e.g. Amos Tversky's elimination by aspects model) or an axiomatic framework (e.g. stochastic transitivity axioms), reconciling the von Neumann–Morgenstern axioms with behavioral violations of the expected utility hypothesis, or they may explicitly give a functional form for time-inconsistent utility functions (e.g. Laibson's quasi-hyperbolic discounting).[3][4]
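As an illustration of such a functional form, quasi-hyperbolic ("beta-delta") discounting weights a payoff received k periods in the future as follows (a standard textbook statement of the model, using the conventional parameter names rather than anything specific to the sources cited here):

```latex
D(k) =
\begin{cases}
1, & k = 0,\\[2pt]
\beta\,\delta^{k}, & k \ge 1,
\end{cases}
\qquad 0 < \beta \le 1,\; 0 < \delta < 1.
```

With β < 1 every future period is discounted by an extra factor relative to the present, which produces the preference reversals over time that a single exponential discount factor D(k) = δ^k cannot.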
Prescriptive decision theory is concerned with the predictions about behavior that positive decision theory produces, in order to allow further tests of the kind of decision-making that occurs in practice. In recent decades there has also been increasing interest in "behavioral decision theory", contributing to a re-evaluation of what useful decision-making requires.[5][6]
The area of choice under uncertainty represents the heart of decision theory. Known from the 17th century (Blaise Pascal invoked it in his famous wager, which is contained in his Pensées, published in 1670), the idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probabilities resulting from each course of action, and multiply the two to give an "expected value", or the average expectation for an outcome; the action to be chosen should be the one that gives rise to the highest total expected value. In 1738, Daniel Bernoulli published an influential paper entitled Exposition of a New Theory on the Measurement of Risk, in which he uses the St. Petersburg paradox to show that expected value theory must be normatively wrong. He gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St. Petersburg in winter. In his solution, he defines a utility function and computes expected utility rather than expected financial value.[7]
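A minimal sketch of the contrast between the two rules, using a made-up insurance choice loosely patterned on Bernoulli's merchant example (the wealth levels, loss probability, premium, and the logarithmic utility function are illustrative assumptions, not figures from his paper):

```python
import math

# Hypothetical cargo shipment: all numbers below are illustrative only.
wealth = 3000          # merchant's other wealth
cargo_value = 10000    # value of the cargo if it arrives safely
p_loss = 0.05          # assumed probability the ship is lost
premium = 800          # assumed insurance premium

def expected_value(outcomes):
    """Expected monetary value: sum of probability * payoff."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, u=math.log):
    """Expected utility with a concave (here logarithmic) utility function."""
    return sum(p * u(x) for p, x in outcomes)

# Final wealth under each action, as (probability, amount) pairs.
uninsured = [(1 - p_loss, wealth + cargo_value), (p_loss, wealth)]
insured   = [(1.0, wealth + cargo_value - premium)]

print("EV uninsured:", expected_value(uninsured))
print("EV insured:  ", expected_value(insured))
print("EU uninsured:", expected_utility(uninsured))
print("EU insured:  ", expected_utility(insured))
```

With these assumed numbers the uninsured option has the higher expected monetary value, while the concave utility function makes insuring the higher-expected-utility choice, which is the kind of divergence Bernoulli's argument turns on.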
In the 20th century, interest was reignited by Abraham Wald's 1939 paper pointing out that the two central procedures of sampling-distribution-based statistical theory, namely hypothesis testing and parameter estimation, are special cases of the general decision problem.[8] Wald's paper renewed and synthesized many concepts of statistical theory, including loss functions, risk functions, admissible decision rules, antecedent distributions, Bayesian procedures, and minimax procedures. The phrase "decision theory" itself was used in 1950 by E. L. Lehmann.[9]
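These concepts fit together as follows in the standard textbook formulation of statistical decision theory (the notation here is conventional rather than Wald's own): a decision rule δ maps data X to an action and is evaluated through its risk,

```latex
R(\theta, \delta) = \mathbb{E}_{\theta}\!\left[ L\big(\theta, \delta(X)\big) \right],
\qquad
\delta_{\text{minimax}} \in \arg\min_{\delta}\, \sup_{\theta} R(\theta, \delta),
\qquad
\delta_{\pi} \in \arg\min_{\delta} \int R(\theta, \delta)\, \pi(\theta)\, \mathrm{d}\theta,
```

where L is a loss function, the minimax rule protects against the worst-case parameter value, and the Bayes rule δ_π minimizes average risk under a prior π; a rule is admissible if no other rule has risk no larger for every θ and strictly smaller for some θ.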
The revival of subjective probability theory, from the work of Frank Ramsey, Bruno de Finetti, Leonard Savage and others, extended the scope of expected utility theory to situations where subjective probabilities can be used. At the time, von Neumann and Morgenstern's theory of expected utility[10] proved that expected utility maximization followed from basic postulates about rational behavior.
The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization (the Allais paradox and the Ellsberg paradox).[11] The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. It describes a way by which people make decisions when all of the outcomes carry a risk.[12] Kahneman and Tversky found three regularities in actual human decision-making: "losses loom larger than gains"; people focus more on changes in their utility states than on absolute utilities; and the estimation of subjective probabilities is severely biased by anchoring.
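The first two regularities are commonly summarized with a value function defined over gains and losses relative to a reference point; a frequently quoted parametric form (the power specification from Tversky and Kahneman's later cumulative prospect theory, given here purely as an illustration) is

```latex
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\[2pt]
-\lambda\,(-x)^{\beta}, & x < 0,
\end{cases}
\qquad \lambda > 1,\; 0 < \alpha,\beta \le 1,
```

where λ > 1 expresses loss aversion ("losses loom larger than gains") and the argument x is a change from the reference point rather than an absolute level of wealth.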
Intertemporal choice is concerned with the kind of choice where different actions lead to outcomes that are realized at different stages over time.[13] It is also described as cost-benefit decision making, since it involves choices between rewards that vary in magnitude and time of arrival.[14] If someone received a windfall of several thousand dollars, they could spend it on an expensive holiday, giving them immediate pleasure, or they could invest it in a pension scheme, giving them an income at some time in the future. What is the optimal thing to do? The answer depends partly on factors such as the expected rates of interest and inflation, the person's life expectancy, and their confidence in the pensions industry. However, even with all those factors taken into account, human behavior again deviates greatly from the predictions of prescriptive decision theory, leading to alternative models in which, for example, objective interest rates are replaced by subjective discount rates.
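A small sketch of the trade-off just described, comparing spending a windfall now against a deferred pension income stream at different subjective discount rates (all amounts, dates, and rates are invented for illustration):

```python
def present_value(cashflows, rate):
    """Discount a list of (years_from_now, amount) pairs back to today."""
    return sum(amount / (1 + rate) ** t for t, amount in cashflows)

windfall = 5000                                # hypothetical windfall, enjoyed today
pension = [(t, 900) for t in range(30, 50)]    # assumed 900/year, 30-49 years from now

for rate in (0.02, 0.05, 0.15):                # low, moderate, steep subjective discount rates
    pv = present_value(pension, rate)
    choice = "invest in the pension" if pv > windfall else "spend it now"
    print(f"discount rate {rate:.0%}: pension worth {pv:7.0f} today -> {choice}")
```

At the low rate the deferred income dominates, while the steep subjective discount rate that descriptive models attribute to many real decision-makers makes immediate spending look better.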

Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. The analysis of such social decisions is more often treated under the label of game theory, rather than decision theory, though it involves the same mathematical methods. In the emerging field of socio-cognitive engineering, research is especially focused on the different types of distributed decision-making in human organizations, in normal and abnormal/emergency/crisis situations.[15]
Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity, or the complexity of the organization that has to make them. Individuals making decisions are limited in resources (i.e. time and intelligence) and are therefore boundedly rational; the issue is thus not so much the deviation between real and optimal behavior as the difficulty of determining the optimal behavior in the first place. Decisions are also affected by whether options are framed together or separately; this is known as the distinction bias.

Heuristics are procedures for making a decision without working out the consequences of every option. Heuristics decrease the amount of evaluative thinking required for decisions, focusing on some aspects of the decision while ignoring others.[16] While quicker than step-by-step processing, heuristic thinking is also more likely to involve fallacies or inaccuracies.[17]
One example of a common and erroneous thought process that arises through heuristic thinking is the gambler's fallacy: believing that an isolated random event is affected by previous isolated random events. For example, if flips of a fair coin give repeated tails, the coin still has the same probability (i.e., 0.5) of tails on future flips, though intuitively it might seem that heads becomes more likely.[18] In the long run, heads and tails should occur equally often; people commit the gambler's fallacy when they use this heuristic to predict that a result of heads is "due" after a run of tails.[19] Another example is that decision-makers may be biased towards preferring moderate alternatives to extreme ones. The compromise effect operates under a mindset that the most moderate option carries the most benefit. In an incomplete information scenario, as in most daily decisions, the moderate option will look more appealing than either extreme, independent of the context, based only on the fact that it has characteristics that can be found at either extreme.[20]
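A quick simulation of the coin-flip point (an illustrative sketch, not an experiment from the cited literature): even conditioning on having just seen three tails in a row, the next flip comes up heads about half the time.

```python
import random

random.seed(0)
n_flips = 200_000
flips = [random.random() < 0.5 for _ in range(n_flips)]   # True = heads, False = tails

# Look at the flip that immediately follows every run of three tails.
after_three_tails = [
    flips[i + 3]
    for i in range(n_flips - 3)
    if not flips[i] and not flips[i + 1] and not flips[i + 2]
]

print(len(after_three_tails), "runs of three tails observed")
print("P(heads on the next flip) ≈",
      sum(after_three_tails) / len(after_three_tails))    # close to 0.5, not "due"
```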
A highly controversial issue is whether one can replace the use of probability in decision theory with something else.
Advocates for the use of probability theory point to results such as Cox's theorem, which derives the probability axioms from qualitative requirements on rational degrees of belief; Dutch book arguments, which show that betting odds violating those axioms can be exploited for a sure loss; and the complete class theorems, which show that admissible decision rules are equivalent to (limits of) Bayesian decision rules.
The proponents of fuzzy logic, possibility theory, Dempster–Shafer theory, and info-gap decision theory maintain that probability is only one of many alternatives and point to many examples where non-standard alternatives have been implemented with apparent success. Notably, probabilistic decision theory can sometimes be sensitive to assumptions about the probabilities of various events, whereas non-probabilistic rules, such as minimax, are robust in that they do not make such assumptions.
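A toy illustration of this sensitivity point, with an invented two-action, two-state payoff table (stated as maximin over payoffs, which is the minimax rule applied to losses): a small change in the assumed probability flips the expected-value recommendation, while the probability-free rule does not move.

```python
# Payoffs (utilities) for two hypothetical actions under two states of the world.
payoffs = {
    "risky": {"good": 100, "bad": -80},
    "safe":  {"good": 20,  "bad": 10},
}

def expected_value(action, p_good):
    return p_good * payoffs[action]["good"] + (1 - p_good) * payoffs[action]["bad"]

def maximin_choice():
    # Pick the action whose worst-case payoff is largest (no probabilities used).
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

for p_good in (0.60, 0.50):   # a small change in the assumed probability of the good state
    best = max(payoffs, key=lambda a: expected_value(a, p_good))
    print(f"p(good)={p_good}: expected-value choice = {best}")

print("maximin choice (probability-free):", maximin_choice())
```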
A general criticism of decision theory based on a fixed universe of possibilities is that it considers the "known unknowns", not the "unknown unknowns":[21] it focuses on expected variations, not on unforeseen events, which some argue have outsized impact and must be considered – significant events may be "outside model". This line of argument, called the ludic fallacy, holds that there are inevitable imperfections in modeling the real world by particular models, and that unquestioning reliance on models blinds one to their limits.