The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds on this postulate to model aggregate social behaviour.
The expected utility hypothesis states that an agent chooses between risky prospects by comparing expected utility values, i.e., the weighted sums obtained by multiplying the utility value of each payoff by its probability. The formula for expected utility is

$U(p) = \sum_{i} p_i \, u(x_i),$

where $p_i$ is the probability that the outcome indexed by $i$ with payoff $x_i$ is realized, and the function $u$ expresses the utility of each respective payoff.[1] Graphically, the curvature of the $u$ function captures the agent's risk attitude.
For example, imagine you're offered a choice between receiving $50 for sure, or flipping a coin to win $100 if heads, and nothing if tails. Although both options have the same average payoff ($50), many people choose the guaranteed $50 because they value the certainty of the smaller reward more than the possibility of a larger one, reflecting risk-averse preferences.
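To make the intuition concrete, here is a minimal numerical sketch of this example (the square-root utility is an illustrative assumption, not specified by the source): with a concave utility function, the sure $50 yields higher expected utility than the coin flip, even though both options have the same expected payoff.

```python
import numpy as np

# A minimal sketch; u(x) = sqrt(x) is an assumed concave (risk-averse) utility.
u = np.sqrt

eu_sure = u(50.0)                          # utility of the guaranteed $50 (~7.07)
eu_gamble = 0.5 * u(100.0) + 0.5 * u(0.0)  # expected utility of the coin flip (5.0)

print(eu_sure > eu_gamble)  # True: the certain option wins despite equal means
```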
Standard utility functions represent ordinal preferences. The expected utility hypothesis imposes limitations on the utility function and makes utility cardinal (though still not comparable across individuals).
Although the expected utility hypothesis is a commonly accepted assumption in theories underlying economic modeling, it has frequently been found to be inconsistent with the empirical results of experimental psychology. Psychologists and economists have been developing new theories to explain these inconsistencies for many years.[2] These include prospect theory, rank-dependent expected utility, cumulative prospect theory, and bounded rationality.
Nicolaus Bernoulli described the St. Petersburg paradox (involving infinite expected values) in 1713, prompting two Swiss mathematicians, Gabriel Cramer and Daniel Bernoulli, to develop expected utility theory as a solution. Daniel Bernoulli's paper was the first formalization of marginal utility, which has broad application in economics in addition to expected utility theory. He used this concept to formalize the idea that the same amount of additional money is less useful to an already wealthy person than it is to a poor person. The theory can also describe realistic scenarios (where expected values are finite) more accurately than expected value alone. He proposed that a nonlinear function of the utility of an outcome should be used instead of the expected value of an outcome, accounting for risk aversion, where the risk premium is higher for low-probability events than the difference between the payout level of a particular outcome and its expected value. Bernoulli further proposed that the goal of the gambler was not to maximize his expected gain but to maximize the logarithm of his gain instead.[citation needed]
The concept of expected utility was further developed by William Playfair, an eighteenth-century political writer who frequently addressed economic issues. In his 1785 pamphlet The Increase of Manufactures, Commerce and Finance, a criticism of Britain's usury laws, Playfair presented what he argued was the calculus investors made prior to committing funds to a project. Playfair said investors estimated the potential gains and potential losses, and then assessed the probability of each. This was, in effect, a verbal rendition of an expected utility equation. Playfair argued that, if government limited the potential gains of a successful project, it would discourage investment in general, causing the national economy to under-perform.[3]
Daniel Bernoulli drew attention to psychological and behavioral components behind the individual's decision-making process and proposed that the utility of wealth exhibits diminishing marginal utility. For example, an extra dollar or an additional good is perceived as less valuable as someone gets wealthier. In other words, the desirability of a financial gain depends on both the gain itself and the person's existing wealth. Bernoulli suggested that people maximize "moral expectation" rather than expected monetary value, making a clear distinction between expected value and expected utility: instead of weighting monetary outcomes by their probabilities, he weighted the utilities of the outcomes by their probabilities. He showed that such a utility function yields a finite expected utility even when the expected monetary value is infinite.[4]
In 1926, Frank Ramsey introduced Ramsey's Representation Theorem. This representation theorem for expected utility assumes that preferences are defined over a set of bets where each option has a different yield. Ramsey believed that we should always make decisions so as to receive the best expected outcome according to our personal preferences. This implies that if we can understand an individual's priorities and preferences, we can anticipate their choices.[5] In this model, he defined numerical utilities for each option to exploit the richness of the space of prices. The outcomes of the options are mutually exclusive. For example, if you study, you cannot see your friends, but you will get a good grade in your course. In this scenario, analyzing personal preferences and beliefs allows us to predict which option a person will choose (e.g., someone who prioritizes their social life over academic results will go out with their friends). Assuming that a person's decisions are rational, according to this theorem we should be able to infer their beliefs and utilities just by looking at the choices they make (a claim that is, in fact, incorrect). Ramsey defines a proposition as "ethically neutral" when its two possible outcomes have equal value. In other words, if probability can be defined in terms of preference, an agent must assign probability 1/2 to each outcome of an ethically neutral proposition in order to be indifferent between both options.[6] Using such propositions, Ramsey shows that subjective probabilities and utilities can both be derived from observed preferences over bets.
In the 1950s, Leonard Jimmie Savage, an American statistician, derived a framework for comprehending expected utility. Savage's framework involved proving that expected utility could be used to make an optimal choice among several acts through seven axioms.[8] In his book The Foundations of Statistics, Savage integrated a normative account of decision making under risk (when probabilities are known) and under uncertainty (when probabilities are not objectively known). Savage concluded that people have neutral attitudes towards uncertainty and that observation is enough to predict the probabilities of uncertain events.[9] A crucial methodological aspect of Savage's framework is its focus on observable choices; cognitive processes and other psychological aspects of decision-making matter only to the extent that they directly impact choice.
The theory of subjective expected utility combines two concepts: first, a personal utility function, and second, a personal probability distribution (usually based on Bayesian probability theory). This theoretical model has been known for its clear and elegant structure and is considered by some researchers to be "the most brilliant axiomatic theory of utility ever developed."[10] Instead of assuming the probability of an event, Savage defines it in terms of preferences over acts. Savage used the states (something a person doesn't control) to calculate the probability of an event. On the other hand, he used utility and intrinsic preferences to predict the event's outcome. Savage assumed that each act and state were sufficient to determine an outcome uniquely. However, this assumption breaks down in cases where an individual does not have enough information about the event.
Additionally, he believed that outcomes must have the same utility regardless of state. Therefore, it is essential to identify correctly which statements count as outcomes. For example, if someone says, "I got the job," this affirmation is not considered an outcome, since the utility of the statement differs for each person depending on intrinsic factors such as financial necessity or judgment about the company. Therefore, no state alone can rule out the performance of an act; only when the state and the act are evaluated simultaneously is it possible to determine an outcome with certainty.[11]
Savage's representation theorem (Savage, 1954): A preference relation $\succsim$ satisfies axioms P1–P7 if and only if there is a finitely additive probability measure $P$ and a function $u : C \to \mathbb{R}$ such that for every pair of acts $f$ and $g$,[11]

$f \succsim g \iff \int_{\Omega} u(f(\omega))\,dP \geq \int_{\Omega} u(g(\omega))\,dP.$

If and only if all the axioms are satisfied can this information be used to reduce the uncertainty about events that are outside one's control. Additionally, the theorem ranks outcomes according to a utility function that reflects personal preferences.
The key ingredients in Savage's theory are:

- states: features of the world that the decision-maker does not control;
- consequences (outcomes): what the decision-maker ultimately cares about;
- acts: assignments of a consequence to each state, over which the agent's preferences are defined.
There are four axioms of the expected utility theory that define a rational decision maker: completeness; transitivity; independence of irrelevant alternatives; and continuity.[12]
Completeness assumes that an individual has well-defined preferences and can always decide between any two alternatives.
This means that for any two lotteries $A$ and $B$, the individual either prefers $A$ to $B$, prefers $B$ to $A$, or is indifferent between $A$ and $B$.
Transitivity assumes that, as an individual decides according to the completeness axiom, the individual also decides consistently: if the individual prefers $A$ to $B$ and $B$ to $C$, then the individual prefers $A$ to $C$.
Independence of irrelevant alternatives pertains to well-defined preferences as well. It assumes that two gambles mixed with an irrelevant third one will maintain the same order of preference as when the two are presented independently of the third one. The independence axiom is the most controversial.[citation needed]
Continuity assumes that when there are three lotteries ($A$, $B$, and $C$) and the individual prefers $A$ to $B$ and $B$ to $C$, there should be a possible combination of $A$ and $C$ such that the individual is indifferent between this mix and the lottery $B$.
If all these axioms are satisfied, then the individual is rational. A utility function can represent the preferences, i.e., one can assign numbers (utilities) to each outcome of the lottery such that choosing the best lottery according to the preference amounts to choosing the lottery with the highest expected utility. This result is the von Neumann–Morgenstern utility representation theorem.
In other words, if an individual's behavior always satisfies the above axioms, then there is a utility function such that the individual will choose one gamble over another if and only if the expected utility of one exceeds that of the other. The expected utility of any gamble may be expressed as a linear combination of the utilities of the outcomes, with the weights being the respective probabilities. Utility functions are also normally continuous functions. Such utility functions are also called von Neumann–Morgenstern (vNM). This is a central theme of the expected utility hypothesis in which an individual chooses not the highest expected value but rather the highest expected utility. The expected utility-maximizing individual makes decisions rationally based on the theory's axioms.
The von Neumann–Morgenstern formulation is important in the application of set theory to economics because it was developed shortly after the Hicks–Allen "ordinal revolution" of the 1930s, and it revived the idea of cardinal utility in economic theory.[citation needed] However, while in this context the utility function is cardinal, in that implied behavior would be altered by a nonlinear monotonic transformation of utility, the expected utility function is ordinal because any monotonic increasing transformation of expected utility gives the same behavior.
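The cardinal/ordinal distinction can be checked numerically. The sketch below (lottery payoffs and utility choices are illustrative assumptions) verifies that a positive affine transformation of a vNM utility function preserves the expected utility ranking of two gambles, while a nonlinear monotonic transformation can reverse it.

```python
import numpy as np

# Risky lottery A vs. a sure payoff B (illustrative numbers).
outcomes_A, probs_A = np.array([0.0, 100.0]), np.array([0.5, 0.5])
outcomes_B, probs_B = np.array([45.0]), np.array([1.0])

def expected_utility(f, outcomes, probs):
    return np.dot(probs, f(outcomes))

u = np.sqrt                       # a concave vNM utility
v = lambda x: 3.0 * u(x) + 7.0    # positive affine transformation of u
w = lambda x: u(x) ** 3           # nonlinear monotonic transformation of u

for f in (u, v, w):
    print(expected_utility(f, outcomes_A, probs_A) >
          expected_utility(f, outcomes_B, probs_B))
# u and v agree (False, False): affine transforms leave behavior unchanged.
# w disagrees (True): a nonlinear monotonic transform alters implied behavior.
```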
The utility function $u(w) = \log(w)$ was originally suggested by Bernoulli (see above). It has relative risk aversion constant and equal to one and is still sometimes assumed in economic analyses. The utility function

$u(w) = -e^{-aw}$

exhibits constant absolute risk aversion and, for this reason, is often avoided, although it has the advantage of offering substantial mathematical tractability when asset returns are normally distributed. Note that, as per the affine transformation property alluded to above, the utility function $K - e^{-aw}$ gives the same preference orderings as does $-e^{-aw}$; thus it is irrelevant that the values of $-e^{-aw}$ and its expected value are always negative: what matters for preference ordering is which of two gambles gives the higher expected utility, not the numerical values of those expected utilities.
The class of constant relative risk aversion utility functions contains three categories. Bernoulli's utility function

$u(w) = \log(w)$

has relative risk aversion equal to 1. The functions

$u(w) = w^{\alpha}$

for $\alpha \in (0,1)$ have relative risk aversion equal to $1 - \alpha$. And the functions

$u(w) = -w^{\alpha}$

for $\alpha < 0$ have relative risk aversion equal to $1 - \alpha$.
See also the discussion of utility functions having hyperbolic absolute risk aversion (HARA).
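As a quick numerical check (the specific $\alpha$ values and wealth levels are illustrative), the finite-difference sketch below confirms that each family above has the stated constant relative risk aversion $-w\,u''(w)/u'(w)$, independent of wealth.

```python
import numpy as np

def rra(u, w, h=1e-4):
    """Relative risk aversion -w*u''(w)/u'(w) via central finite differences."""
    u1 = (u(w + h) - u(w - h)) / (2 * h)           # u'(w)
    u2 = (u(w + h) - 2 * u(w) + u(w - h)) / h**2   # u''(w)
    return -w * u2 / u1

for w in (1.0, 5.0, 50.0):
    print(rra(np.log, w),                  # ~1.0  (Bernoulli's log utility)
          rra(lambda x: x ** 0.3, w),      # ~0.7  (= 1 - alpha, alpha = 0.3)
          rra(lambda x: -(x ** -2.0), w))  # ~3.0  (= 1 - alpha, alpha = -2)
```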
When the entity $x$ whose value affects a person's utility takes on one of a set of discrete values, the formula for expected utility, which is assumed to be maximized, is

$\operatorname{E}[u(x)] = p_1 \cdot u(x_1) + p_2 \cdot u(x_2) + \cdots,$

where the left side is the subjective valuation of the gamble as a whole, $x_i$ is the $i$th possible outcome, $u(x_i)$ is its valuation, and $p_i$ is its probability. There could be either a finite set of possible values $x_i$, in which case the right side of this equation has a finite number of terms, or there could be an infinite set of discrete values, in which case the right side has an infinite number of terms.
When $x$ can take on any of a continuous range of values, the expected utility is given by

$\operatorname{E}[u(x)] = \int_{-\infty}^{\infty} u(x) f(x)\, dx,$

where $f(x)$ is the probability density function of $x$.
The certainty equivalent, which is the fixed amount that would make a person indifferent to it versus the outcome distribution, is given by

$\mathrm{CE} = u^{-1}\!\left(\operatorname{E}[u(x)]\right).$
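For instance (a sketch with assumed numbers and an assumed square-root utility), the certainty equivalent of the introduction's coin-flip gamble can be computed directly from this formula:

```python
import numpy as np

# CE = u^{-1}(E[u(x)]) for a 50/50 gamble on $100 vs. $0, with u(x) = sqrt(x).
outcomes = np.array([0.0, 100.0])
probs = np.array([0.5, 0.5])

u = np.sqrt
u_inv = lambda y: y ** 2  # inverse of sqrt on [0, inf)

expected_utility = np.dot(probs, u(outcomes))  # 0.5*0 + 0.5*10 = 5.0
ce = u_inv(expected_utility)

print(ce)  # 25.0: a risk-averse agent values the gamble below its $50 mean
```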
Often, people refer to "risk" as a potentially quantifiable entity. In the context of mean-variance analysis, variance is used as a risk measure for portfolio return; however, this is only valid if returns are normally distributed or otherwise jointly elliptically distributed,[13][14][15] or in the unlikely case in which the utility function has a quadratic form. However, David E. Bell proposed a measure of risk that follows naturally from a certain class of von Neumann–Morgenstern utility functions.[16] Let the utility of wealth be given by

$u(w) = w - b e^{-aw}$

for individual-specific positive parameters $a$ and $b$. Then, the expected utility is given by

$\operatorname{E}[u(w)] = \operatorname{E}[w] - b \operatorname{E}[e^{-aw}].$

Thus the risk measure is $\operatorname{E}[e^{-aw}]$, which differs between two individuals if they have different values of the parameter $a$, allowing different people to disagree about the degree of risk associated with any given portfolio. Individuals sharing a given risk measure (based on a given value of $a$) may choose different portfolios because they may have different values of $b$. See also Entropic risk measure.
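A rough Monte Carlo sketch of this risk measure (all parameter values are illustrative, and the utility form is the one reconstructed above): two portfolios with equal mean returns but different variances receive different values of $\operatorname{E}[e^{-aw}]$.

```python
import numpy as np

rng = np.random.default_rng(0)

a = 0.05  # individual-specific risk parameter (assumed value)
safe  = rng.normal(loc=100.0, scale=5.0,  size=1_000_000)  # low-variance wealth
risky = rng.normal(loc=100.0, scale=20.0, size=1_000_000)  # high-variance wealth

for wealth in (safe, risky):
    risk = np.exp(-a * wealth).mean()  # Bell's risk measure E[exp(-a*w)]
    print(wealth.mean(), risk)
# Equal means, but the higher-variance portfolio shows the larger risk measure.
```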
For general utility functions, however, expected utility analysis does not permit the expression of preferences to be separated into two parameters, one representing the expected value of the variable in question and the other representing its risk.
The expected utility theory takes into account that individuals may be risk-averse, meaning that the individual would refuse a fair gamble (a fair gamble has an expected value of zero). Risk aversion implies that their utility functions are concave and exhibit diminishing marginal utility of wealth. The risk attitude is directly related to the curvature of the utility function: risk-neutral individuals have linear utility functions, risk-seeking individuals have convex utility functions, and risk-averse individuals have concave utility functions. The degree of risk aversion can be measured by the curvature of the utility function.
Since risk attitudes are unchanged under affine transformations of $u$, the second derivative $u''$ is not an adequate measure of the risk aversion of a utility function. Instead, it needs to be normalized. This leads to the definition of the Arrow–Pratt[17][18] measure of absolute risk aversion:

$\mathit{ARA}(w) = -\frac{u''(w)}{u'(w)},$

where $w$ is wealth.

The Arrow–Pratt measure of relative risk aversion is:

$\mathit{RRA}(w) = -\frac{w\, u''(w)}{u'(w)}.$
Special classes of utility functions are the CRRA (constant relative risk aversion) functions, where $\mathit{RRA}(w)$ is constant, and the CARA (constant absolute risk aversion) functions, where $\mathit{ARA}(w)$ is constant. These functions are often used in economics to simplify the analysis.
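The sketch below uses symbolic differentiation to verify the Arrow–Pratt measures for standard CARA and CRRA functional forms (the specific forms checked are the ones discussed above; this is an illustration, not a derivation from the source).

```python
import sympy as sp

w, a, alpha = sp.symbols('w a alpha', positive=True)

def ara(u):  # absolute risk aversion: -u''(w)/u'(w)
    return sp.simplify(-sp.diff(u, w, 2) / sp.diff(u, w))

def rra(u):  # relative risk aversion: -w*u''(w)/u'(w)
    return sp.simplify(-w * sp.diff(u, w, 2) / sp.diff(u, w))

print(ara(-sp.exp(-a * w)))  # a         : CARA utility, constant in w
print(rra(sp.log(w)))        # 1         : Bernoulli's log utility
print(rra(w ** alpha))       # 1 - alpha : the CRRA power family
```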
A decision that maximizes expected utility also maximizes the probability of the decision's consequences being preferable to some uncertain threshold.[19] In the absence of uncertainty about the threshold, expected utility maximization simplifies to maximizing the probability of achieving some fixed target. If the uncertainty is uniformly distributed, then expected utility maximization becomes expected value maximization. Intermediate cases lead to increasing risk aversion above some fixed threshold and increasing risk seeking below a fixed threshold.
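This equivalence is easy to simulate. In the sketch below (distributions and numbers are illustrative assumptions), the utility function is taken to be the CDF of an independent, normally distributed threshold $T$; the expected utility of a gamble then coincides with the probability that its payoff clears the threshold.

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(1)

mu_T, sigma_T = 50.0, 10.0  # uncertain aspiration threshold T ~ N(50, 10^2)
u = lambda x: 0.5 * (1 + erf((x - mu_T) / (sigma_T * sqrt(2))))  # CDF of T

payoffs = np.array([0.0, 100.0])  # the coin-flip gamble again
probs = np.array([0.5, 0.5])

eu = np.dot(probs, [u(x) for x in payoffs])       # E[u(X)]
T = rng.normal(mu_T, sigma_T, size=1_000_000)
X = rng.choice(payoffs, p=probs, size=1_000_000)
print(eu, np.mean(X >= T))  # the two numbers agree up to Monte Carlo noise
```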
The St. Petersburg paradox presented by Nicolaus Bernoulli illustrates that decision-making based on the expected value of monetary payoffs leads to absurd conclusions.[20] When a probability distribution has an infinite expected value, a person who only cares about the expected value of a gamble would pay an arbitrarily large finite amount to take it. The thought experiment places no upper bound on the potential rewards from very-low-probability events. In the hypothetical setup, a person flips a coin repeatedly, and the number of consecutive times the coin lands on heads determines the participant's prize. The prize is doubled every time the coin comes up heads (probability 1/2), and the game ends when the coin comes up tails. A player who only cares about expected payoff value should be willing to pay any finite amount of money to play, because this entry cost will always be less than the expected, infinite value of the game. However, in reality people do not do this: only a few participants were willing to pay a maximum of $25 to enter the game, because many were risk averse and unwilling to bet on a very small possibility at a very high price.[21]
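The game is straightforward to simulate. The sketch below (the payoff scheme follows the description above; using Bernoulli's log utility is an assumption) contrasts the unbounded sample mean of the monetary prize with the modest, convergent mean of its logarithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def play_once():
    """Prize starts at $2 and doubles for each consecutive head; tails ends it."""
    prize = 2.0
    while rng.random() < 0.5:  # heads with probability 1/2
        prize *= 2.0
    return prize

prizes = np.array([play_once() for _ in range(200_000)])
print(prizes.mean())          # unstable and grows with the sample: E[prize] is infinite
print(np.log(prizes).mean())  # converges to 2*ln(2) ~ 1.386: E[log(prize)] is finite
```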
In the early days of the calculus of probability, classic utilitarians believed that the option with the greatest utility would produce more pleasure or happiness for the agent and, therefore, must be chosen.[22] The main problem with the expected value theory is that there might not be a unique correct way to quantify utility or to identify the best trade-offs. For example, some of the trade-offs may be intangible or qualitative. Rather than monetary incentives, other desirable ends can also be included in utility, such as pleasure, knowledge, and friendship. Originally, the consumer's total utility was the sum of independent utilities of the goods. However, the expected value theory was dropped, as it was considered too static and deterministic.[4] The classic counterexample to the expected value theory (where everyone makes the same "correct" choice) is the St. Petersburg paradox.[4]
In empirical applications, several violations of expected utility theory have proven systematic, and these falsifications have deepened our understanding of how people actually decide. In 1979, Daniel Kahneman and Amos Tversky presented their prospect theory, which showed empirically how the preferences of individuals are inconsistent across presentations of the same choice, depending on the framing of the choices, i.e., how they are presented.[23]
Like any mathematical model, expected utility theory simplifies reality. The mathematical correctness of expected utility theory and the salience of its primitive concepts do not guarantee that it is a reliable guide to human behavior or optimal practice. Its mathematical clarity, however, has helped scientists design experiments to test its adequacy and to distinguish systematic departures from its predictions. This has led to the field of behavioral finance, which has produced generalizations of expected utility theory to account for the empirical facts.
Other critics argue that applying expected utility to economic and policy decisions has engendered inappropriate valuations, particularly when monetary units are used to scale the utility of nonmonetary outcomes, such as deaths.[24]
Psychologists have discovered systematic violations of probability calculations and behavior by humans. This has been evidenced by examples such as the Monty Hall problem, where it was demonstrated that people do not revise their degrees of belief in line with experienced probabilities, and that probabilities cannot be applied to single cases. On the other hand, in updating probability distributions using evidence, a standard method uses conditional probability, namely the rule of Bayes. An experiment on belief revision has suggested that humans change their beliefs faster when using Bayesian methods than when using informal judgment.[25]
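The Monty Hall problem itself can be settled by a short simulation (a sketch; door labels and sample size are arbitrary): switching wins exactly when the initial pick was wrong, i.e., two-thirds of the time, contrary to many people's informal judgment of one-half.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
prize = rng.integers(0, 3, size=n)   # door hiding the prize
choice = rng.integers(0, 3, size=n)  # contestant's initial pick

# After the host opens a goat door, switching wins iff the initial pick was wrong.
print(np.mean(choice == prize))  # "stay" wins ~1/3
print(np.mean(choice != prize))  # "switch" wins ~2/3
```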
According to these empirical results, decision theory has given almost no recognition to the distinction between the problem of justifying its theoretical claims about the properties of rational belief and desire and the problem of describing how people actually decide. One of the main reasons is that people's basic tastes and preferences regarding losses cannot be represented by utility, as they change under different scenarios.[26]
Behavioral finance has produced several generalized expected utility theories to account for instances where people's choices deviate from those predicted by expected utility theory. These deviations are described as "irrational" because they can depend on the way the problem is presented, not on the actual costs, rewards, or probabilities involved. Particular theories, including prospect theory, rank-dependent expected utility, and cumulative prospect theory, are considered insufficient to predict preferences and expected utility.[27] Additionally, experiments have shown systematic violations of, and motivated generalizations of, the results of Savage and von Neumann–Morgenstern, because preferences and utility functions constructed under different contexts differ significantly. This is demonstrated in the contrast of individual preferences under insurance and lottery contexts, which shows the degree of indeterminacy of the expected utility theory.
In practice, there will be many situations where the probabilities are unknown, and one operates under uncertainty. In economics, Knightian uncertainty or ambiguity may occur. Thus, one must make assumptions about the probabilities, but the expected values of various decisions can be very sensitive to those assumptions. This is particularly problematic when the expectation is dominated by rare extreme events, as in a long-tailed distribution. Alternative decision techniques are robust to uncertainty about the probability of outcomes, either not depending on probabilities of outcomes and requiring only scenario analysis (as in minimax or minimax regret), or being less sensitive to assumptions.
Bayesian approaches to probability treat it as a degree of belief. Thus, they do not distinguish between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e., as distributions whose parameters are drawn from a higher-level distribution (hyperpriors).
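A minimal sketch of this hierarchical approach (the Beta hyperprior, payoffs, and utility are illustrative assumptions): rather than fixing the win probability p, draw p from a higher-level distribution and average the expected utility over that belief.

```python
import numpy as np

rng = np.random.default_rng(4)

u = np.sqrt                          # an assumed concave utility
payoff_win, payoff_lose = 100.0, 0.0

# Higher-level distribution over the unknown win probability p.
p = rng.beta(2.0, 2.0, size=100_000)

eu = p * u(payoff_win) + (1.0 - p) * u(payoff_lose)
print(eu.mean())  # expected utility averaged over the uncertainty about p
```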
Starting with studies such as Lichtenstein & Slovic (1971), it was discovered that subjects sometimes exhibit preference reversals with regard to their certainty equivalents of different lotteries. Specifically, when eliciting certainty equivalents, subjects tend to value "p bets" (lotteries with a high chance of winning a low prize) lower than "$ bets" (lotteries with a small chance of winning a large prize). When subjects are asked which lotteries they prefer in direct comparison, however, they frequently prefer the "p bets" over the "$ bets".[28] Many studies have examined this "preference reversal", from both an experimental (e.g., Plott & Grether, 1979)[29] and theoretical (e.g., Holt, 1986)[30] standpoint, indicating that this behavior can be brought into accordance with neoclassical economic theory under specific assumptions.
Three components in the field of psychology are seen as crucial to developing a more accurate descriptive theory of decision under risk.[26][31]