Stanford Encyclopedia of Philosophy

Dynamic Choice

First published Mon Oct 15, 2007; substantive revision Mon May 12, 2025

Sometimes a series of choices do not serve one’s concerns well even though each choice in the series seems perfectly well suited to serving one’s concerns. In such cases, one has a dynamic choice problem. Otherwise put, one has a problem related to the fact that one’s choices are spread out over time. There is a growing philosophical literature, which crosses over into psychology and economics, on the obstacles to effective dynamic choice. This literature examines the challenging choice situations and problematic preference structures that can prompt dynamic choice problems. It also proposes solutions to such problems. Increasingly, familiar but potentially puzzling phenomena—including, for example, self-destructive addictive behavior and dangerous environmental destruction—have been illuminated by dynamic choice theory. This suggests that the philosophical and practical significance of dynamic choice theory is quite broad.

1. Challenging Choice Situations, Problematic Preference Structures, and Dynamic Choice Problems

Agents often lack some information about the consequences of each available option that they face in a choice situation (with the choice made under some risk or uncertainty about the outcome of that choice). But, even where such a lack of information is not at issue, effective choice over time can be extremely difficult given certain challenging choice situations or problematic preference structures, such as the ones described below. As will become apparent, these choice situations or preference structures can prompt a series of decisions that serve one’s large-scale, ongoing concerns very badly. (Of course, if, due perhaps to some substantial transformation(s), one is so fragmented over time that one has no large-scale, ongoing concerns to which one is persistently accountable, then inconsistency in one’s choices over time may be inevitable; but my primary interest here is in the philosophically puzzling cases of dynamic choice in which an agent remains accountable to certain large-scale, ongoing concerns that are nonetheless poorly served by her choices over time.)

1.1 Incommensurable Alternatives

Let us first consider situations that involve choosing between incommensurable alternatives. According to the now familiar use of the term “incommensurable” popularized by philosophers such as Joseph Raz and John Broome (see, for example, Raz 1986 and Broome 2000), two alternatives are incommensurable if neither alternative is better than the other, nor are the two alternatives equally good. (Although this entry sticks with this standard use, it is worth noting that some philosophers, such as, for example, Ruth Chang (2017), use the term “incommensurable” in a different way and thus favor a different label for options that are here described as incommensurable, such as “in equipoise”; see Chang 2013.)

It might seem as though the idea of incommensurable alternatives does not really make sense. For if the value of an alternative (to a particular agent) is neither higher nor lower than the value of another alternative, then the values of the two alternatives must, it seems, be equal. But this assumes that there is a common measure that one can use to express and rank the value of every alternative; and, if there are incommensurable alternatives, then this assumption is mistaken.

Now consider the following: If all alternatives were commensurable, then whenever one faced two alternatives neither of which was better than the other, slightly improving one of the alternatives would, it seems, ‘break the tie’ and render one alternative, namely the improved alternative, superior. But there seem to be cases in which there are two alternatives such that (i) neither alternative is better than the other and (ii) this feature is not changed by slightly improving one of the alternatives. Consider, for example, the following case: For Kay, neither of the following alternatives is better than the other:

(A1)
going on a six-day beach vacation with her children
(A2)
taking a two-month oil-painting course.

Furthermore, although the alternative

(A1+)
going on a seven-day beach vacation with her children

is a slight improvement on A1, A1+ is not better than A2. This scenario seems possible, and if it is, then we have a case of incommensurable alternatives. For, in this case, A1 is not better than A2, A2 is not better than A1, and yet A1 and A2 are not equally good. If A1 and A2 were equally good, then an improvement on A1, such as A1+, would be better than A2. But, for Kay, A1+ is not better than A2.

It is often supposed that incommensurable alternatives must be incomparable. But things are complicated once it is recognized that there is conceptual room for two alternatives that are not comparable as one better than the other or as equally good (and so are incommensurable according to the conception of incommensurability identified above) to be comparable as ‘in the same league’ or ‘on a par,’ and thus not altogether incomparable, as would be the case if there were no positive relation connecting the overall value of each option (see Chang 2002). For the purposes of this discussion, the question of whether incommensurable alternatives are invariably incomparable can be put aside, since the dynamic choice problem that will be discussed in relation to incommensurability applies regardless of whether the incommensurable options at issue are incomparable or are instead comparable as on a par.

Although there is still some controversy concerning the possibility of incommensurable alternatives (compare, for a sense of the issues, Raz 1997 and Regan 1997), there is widespread agreement that we often treat alternatives as incommensurable. Practically speaking, determining the value of two very different alternatives in terms of a common measure, even if this is possible, may be too taxing. It is thus often natural to treat two alternatives as though they are neither equal nor one better than the other.

The existence or appearance of incommensurable alternatives can give rise to dynamic choice problems. Consider Abraham’s case, as described by John Broome in his work on incommensurability:

God tells Abraham to take his son Isaac to the mountain, and there sacrifice him. Abraham has to decide whether or not to obey. Let us assume this is one of those choices where the alternatives are incommensur[able]. The option of obeying will show submission to God, but the option of disobeying will save Isaac’s life. Submitting to God and saving the life of one’s son are such different values that they cannot be weighed determinately against each other; that is the assumption. Neither option is better than the other, yet we also cannot say that they are equally good. (Broome 2001, 114)

Given that the options of submitting to God and saving Isaac are incommensurable (and even if they were only incommensurable as far as a reasonable person could tell), Abraham’s deciding to submit to God seems rationally permissible. So it is easy to see how Abraham’s situation might prompt him to set out for the mountain in order to sacrifice Isaac. But it is also easy to see how, once at the foot of the mountain, Abraham might decide to turn back. For, even though, as Broome puts it, “turning back at the foot of the mountain is definitely worse than never having set out at all” since “trust between father and son [has already been] badly damaged” (2001, 115), the option of saving Isaac by turning back and the option of submitting to God and sacrificing Isaac may be incommensurable. This becomes apparent if one recalls Kay’s case and labels Abraham’s above-mentioned options as follows:

(B1)
saving Isaac by turning back at the foot of the mountain
(B1+)
saving Isaac by refusing to set out for the mountain
(B2)
submitting to God and sacrificing Isaac.

Even though B1+ is better than B1, both B1+ and B1 may be incommensurable with B2. But if B1 is incommensurable with B2, then Abraham could, once at the foot of the mountain, easily decide to opt for B1 over B2. Given that B1 is worse than B1+, Abraham could thus easily end up with an outcome that is worse than another that was available to him, even if each of his choices makes sense given the value of the alternatives he faces.

The moral, in general terms, is that in cases of incommensurability (or cases in which it is tempting to treat two alternatives as incommensurable), decisions that seem individually defensible can, when combined, result in a series of decisions that fit together very poorly relative to the agent’s large-scale, ongoing concerns. Contributions concerning the possibility, significance, and implications of incommensurability—both for individuals and for public policy—continue to grow quickly and gather interest, with recent debates appearing in multiple new collections on the topic. (See, for example, Andersson & Herlitz 2021 and Eyal & Herlitz 2024.)

1.2 Time-Biased Preferences

Another source of dynamic choice problems is present-biased preferences. Like other animals, humans give more weight to present satisfaction than to future satisfaction. In other words, we discount future utility. Insofar as one discounts future utility, one prefers, other things equal, to get a reward sooner rather than later; relatedly, the closer one gets to a future reward, the more the reward is valued. If we map the value (to a particular agent) of a given future reward as a function of time, we get a discount curve, such as in Figure 1:

Figure 1

Figure 1. The discounted value of a reward gradually increases as t, the time at which the reward will be available, approaches.

Research in experimental psychology (see, for example, Kirby &Herrnstein 1995, Millar & Navarick 1984, Solnick et al. 1980, andAinslie 2001) suggests that, given how animals, including humans,discount future utility, there are plenty of cases in which thediscount curves from two rewards, one a small reward and the other alarger later reward, cross, as in Figure 2:

Figure 2

Figure 2. Two crossing discount curves, one tracking the discounted value of a small reward that will be available at t1 and the other tracking the discounted value of a large reward that will be available at t2.

In such cases, the agent’s discounting of future utility induces a preference reversal with respect to the two possible rewards. When neither reward is imminent, before the discount curves cross, the agent consistently prefers the larger later reward over the smaller earlier reward. But when the opportunity to accept the small reward is sufficiently close, the discounted value of the small reward catches up with and then overtakes the discounted value of the larger later reward. As the discount curves cross, the agent’s preferences reverse and she prefers the small reward over the larger later reward.
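The crossing curves in Figure 2 can be illustrated numerically. The sketch below uses a hyperbolic discount function of the form A/(1 + kD), a common model in the experimental literature cited above; the particular reward sizes, delays, and discount rate are hypothetical choices made for illustration, not figures from any of the cited studies.

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Discounted present value of a reward available `delay` time units from now."""
    return amount / (1 + k * delay)

# Hypothetical rewards: a small reward available at t=2, a larger one at t=4.
small, t_small = 40, 2
large, t_large = 100, 4

for now in [0.0, 1.0, 1.8]:
    v_small = hyperbolic_value(small, t_small - now)
    v_large = hyperbolic_value(large, t_large - now)
    choice = "small" if v_small > v_large else "large"
    print(f"t={now}: small={v_small:.1f}, large={v_large:.1f} -> prefers {choice}")
```

Run with these numbers, the agent prefers the larger later reward at t = 0.0 and t = 1.0, but the preference reverses by t = 1.8, once the small reward is sufficiently imminent.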

Discounting-induced preference reversals make consistent and efficient choice over time a challenge. An agent subject to discounting-induced preference reversals can easily find herself performing a series of actions that she planned against and will soon regret. Consider the agent who wants to save for a decent retirement but, as each opportunity to save approaches, prefers to spend her potential retirement contribution on just one more trivial indulgence before finally tightening her belt for the sake of the future satisfaction she feels is essential to her well-being. Although this agent consistently plans to save for her retirement (such saving figures among her large-scale, ongoing concerns), her plans can be consistently thwarted by her discounting-induced preference reversals. Her life may thus end up looking very different from the sort of life she wanted.

Interestingly, in addition to giving more weight to present satisfaction than to future satisfaction, human beings also seem to give more weight to future satisfaction than to past satisfaction (Greene et al. 2022). Relatedly, human beings seem to discount past pain more than future pain. Suppose, to appeal to a variation on Derek Parfit’s famous thought experiment (1984, 165–6), your situation is such that either you’ve already suffered a perfectly safe but terribly painful ten-hour medical procedure yesterday or else you will suffer a perfectly safe but terribly painful nine-hour medical procedure tomorrow. (You don’t know which situation you’re in because amnesia is administered right after the procedure and you’ve just woken up in the hospital confused about whether you’re recovering from the procedure or being prepped for it.) Wouldn’t you prefer to be in the former situation? Intuitively, it seems like the prevailing and rational response would be “most definitely!” But there is some concern that this form of future bias, in which past rewards or costs are discounted more than future rewards or costs, can lead to trouble (Dougherty 2011; Greene and Sullivan 2015). For example, Preston Greene and Meghan Sullivan (2015) argue that it can be a recipe for a life of “meager returns” and/or regret. Their reasoning is quite elaborate, but the following simple illustration and somewhat extemporized analysis will hopefully provide a glimpse into some of the interesting philosophical issues at stake. Consider Massimo, who thoroughly enjoys massages and who can choose between a longer massage early on or a shorter massage later. If Massimo is future biased, then, with some variation on the length and timing of the massages, he can easily find himself faced with the following dilemma: if he opts for getting a longer massage early on, he will, sometime after getting the longer massage and before the shorter massage would have been available, regret accepting a pleasure, now past, that could have still been in the future (even if diminished); if, alternatively, he opts for getting a shorter massage later on (thus avoiding regret of the preceding sort), he will face a life of “meager returns,” in which less pleasure later is, potentially routinely, chosen over more pleasure earlier (a scenario that can itself generate regret and/or concern, particularly once both massage times are past, or if one recognizes, even as one is gladly awaiting a lesser pleasure after giving up a greater pleasure that would now have been in the past, that, insofar as the same sort of choice has arisen repeatedly and will continue to arise repeatedly, repeated choices for less pleasure later make for a life that is both retrospectively and prospectively much less appealing than repeated choices for more pleasure early on).

1.3 Intransitive Preferences

An agent’s preference structure need not be changing over time for it to prompt dynamic choice problems. Such problems can also be prompted by preferences that are stable but intransitive. One’s preferences count as transitive if they satisfy the following condition: for all x, y, and z, if one prefers x to y, and y to z, then one also prefers x to z. If one’s preferences over a set of options do not satisfy this condition, then these preferences count as intransitive. When one’s preferences over a set of options are intransitive, then one cannot rank the options from most preferred to least preferred. This holds even if one’s preferences over the options are complete, in the sense that all the options are ranked with respect to one another. Suppose, for example, that one prefers job A to job B, job B to job C, but job C to job A. In this case, one’s complete preferences over the set {job A, job B, job C} form a preference loop, which can be represented as follows:

job A > job B > job C > job A

Figure 3.

where x > y is to be read as x is preferred to y.

Could an agent really have intransitive preferences? Work in experimental and theoretical economics (see, for example, Tversky 1969) suggests that intransitive preferences exist and may be quite common. (For a recent taxonomy of examples of intransitive preferences, see Bovens 2022.) Consideration of the following situation might help make it clear how intransitive preferences can arise (whether or not they are rational). Suppose Jay can accept one of three jobs: job A is very stimulating but low-paying; job B is somewhat stimulating and pays decently; job C is not stimulating but pays very well. Given this situation, one can imagine Jay having the following preferences: He prefers job A over job B because the difference between having a low-paying job and a decently-paying job is not significant enough to make Jay want to pass up a very stimulating job. Similarly, he prefers job B over job C because the difference between having a decently-paying job and a high-paying job is not significant enough to make Jay want to pass up a stimulating job. But he prefers job C over job A because the difference between having a high-paying job and a low-paying job is significant enough to make Jay want to pass up even a very stimulating job.
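The transitivity condition stated above lends itself to a mechanical check. The minimal sketch below encodes Jay’s pairwise preferences as ordered pairs; the encoding and the function name are illustrative inventions, not anything standard.

```python
from itertools import permutations

# Jay's pairwise preferences: (x, y) means x is preferred to y.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def is_transitive(prefers, options):
    """True iff whenever x is preferred to y and y to z, x is also preferred to z."""
    return all(
        (x, z) in prefers
        for x, y, z in permutations(options, 3)
        if (x, y) in prefers and (y, z) in prefers
    )

print(is_transitive(prefers, ["A", "B", "C"]))  # False: Jay's loop fails the condition
```

Replacing (C, A) with (A, C) in the preference set yields a transitive ranking, and the check returns True.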

Given the famous “money pump argument,” developed by Donald Davidson, J. McKinsey, and Patrick Suppes (1955), it is clear that intransitive preferences can be problematic. Like Dutch book arguments regarding betting, in which the rationality of an agent is put into question because the agent is susceptible to having a book made against her (i.e., to accepting a series of bets which are such that she is bound to lose more than she can gain), the money-pump argument is concerned with agents who are vulnerable to making a combination of choices that lead to a sure loss. According to the money-pump argument, intransitive preferences are irrational because they can prompt an agent to accept a series of trade offers that leaves the agent with the same option he began with, but with less money. Here is a case of the relevant sort. Suppose that Alex has the following intransitive preferences: he prefers owning a computer of type A to owning a computer of type B, owning a computer of type B to owning a computer of type C, and owning a computer of type C to owning a computer of type A. Suppose also that Alex owns a computer of type C and a hundred dollars in spending money. Suppose finally that, given his preferences between different computer types, Alex prefers (i) owning a computer of type B and one less dollar of spending money over owning a computer of type C, (ii) owning a computer of type A and one less dollar of spending money over owning a computer of type B, and (iii) owning a computer of type C and one less dollar of spending money over owning a computer of type A. Then a series of unanticipated trade opportunities can spell trouble for Alex. In particular, given the opportunity to trade his current (type C) computer and a dollar for a computer of type B, Alex’s preferences will prompt him to make the trade. Given the further opportunity to trade his current (type B) computer and a dollar for a computer of type A, Alex’s preferences will prompt him to trade again.
And given the opportunity to trade his current (type A) computer and a dollar for a computer of type C, Alex’s preferences will prompt him to make a third trade. But this series of trades leaves Alex with the type of computer he started off with and only 97 dollars. And, given that unexpected trading opportunities may keep popping up, Alex’s situation may continue to deteriorate. Even though he values his spending money, his preferences make him susceptible to being used as a ‘money pump.’ Moreover, interesting variations on the basic money pump argument show that an agent with intransitive preferences like those just considered is susceptible to being money-pumped even if he shows foresight and correctly anticipates his upcoming trading opportunities. See, for example, Rabinowicz 2000 and Dougherty 2014. For a recent in-depth defense of the significance of money-pump arguments, see Gustafsson 2022.
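Alex’s three trades can be simulated directly. The sketch below follows the dollar amounts and trade sequence of the example; the data structures and function are illustrative inventions.

```python
# (x, y): Alex prefers a type-x computer, even at a dollar's cost, to a type-y one.
cyclic_prefs = {("A", "B"), ("B", "C"), ("C", "A")}

def run_money_pump(start_type, money, offers, prefers, price=1):
    """Accept each offered trade iff the offered type is preferred to the current one."""
    current = start_type
    for offered in offers:
        if (offered, current) in prefers and money >= price:
            current, money = offered, money - price
    return current, money

# Three unanticipated offers: a B for his C, then an A, then a C again.
computer, cash = run_money_pump("C", 100, ["B", "A", "C"], cyclic_prefs)
print(computer, cash)  # C 97: same computer type as before, three dollars poorer
```

Each trade looks like an improvement by Alex’s own pairwise lights, yet the sequence returns him to where he started, minus three dollars, and nothing stops the cycle from repeating.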

Even if he does not serve as a money pump, an agent with intransitive preferences can get into a great deal of trouble. To see this, consider Warren Quinn’s “puzzle of the self-torturer” (1993): Suppose someone—who, for reasons that will become apparent, Quinn calls the self-torturer—has a special electric device attached to him. The device has 1001 settings: 0, 1, 2, 3, …, 1000 and works as follows: moving up a setting raises, by a tiny increment, the amount of electric current applied to the self-torturer’s body. The increments in current are so small that the self-torturer cannot tell the difference between adjacent settings. He can, however, tell the difference between settings that are far apart. And, in fact, there are settings at which the self-torturer would experience excruciating pain. Once a week, the self-torturer can compare all the different settings. He must then go back to the setting he was at and decide if he wants to move up a setting. If he does so, he gets $10,000, but he can never permanently return to a lower setting. Like most of us, the self-torturer would like to increase his fortune but also cares about feeling well. Since the self-torturer cannot feel any difference in comfort between adjacent settings but gets $10,000 at each advance, he prefers, for any two consecutive settings s and s+1, stopping at s+1 to stopping at s. But, since he does not want to live in excruciating pain, even for a great fortune, he also prefers stopping at a low setting, such as 0, over stopping at a high setting, such as 1000.

Given his preferences, the self-torturer cannot rank the setting options he faces from most preferred to least preferred. More specifically, his preferences incorporate the following preference loop:

setting 0 < setting 1 < ... < setting 999 < setting 1000 < setting 0

Figure 4.

Relatedly, the self-torturer’s preferences over the available setting options are intransitive. If his preferences were transitive, then, given that he prefers setting 2 to setting 1 and setting 1 to setting 0, he would prefer setting 2 to setting 0. Given that he also prefers setting 3 to setting 2, he would (assuming transitivity) prefer setting 3 to setting 0. Given that he also prefers setting 4 to setting 3, he would (assuming transitivity) prefer setting 4 to setting 0. Continuing with this line of reasoning leads to the conclusion that he would, if his preferences were transitive, prefer setting 1000 to setting 0. Since he does not prefer setting 1000 to setting 0, his preferences are intransitive. And this intransitivity can lead the self-torturer down a terrible path. In particular, if, each week, the self-torturer follows his preference over the pair of settings he must choose between, he will end up in a situation that he finds completely unacceptable. This is quite disturbing, particularly once one realizes that, although the situation of the self-torturer is pure science fiction, the self-torturer is not really alone in his predicament. As Quinn stresses, “most of us are like him in one way or another. [For example, most of us] like to eat but also care about our appearance. Just one more bite will give us pleasure and won’t make us look fatter; but very many bites will” (Quinn 1993, 199).
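The self-torturer’s week-by-week slide can be modeled with a toy simulation. Quinn’s puzzle specifies only that adjacent settings are indistinguishable while distant ones are not; the particular pain function and discrimination threshold below are hypothetical stand-ins for that structure.

```python
# Toy model: pain grows with the setting, but the per-step increase
# is below the self-torturer's discrimination threshold.
def pain(setting):
    return 0.1 * setting           # hypothetical: pain at setting 1000 is 100

THRESHOLD = 0.5                    # smallest felt pain difference (hypothetical)
PAYMENT = 10_000

def prefers_next_setting(s):
    """Pairwise choice: advance iff the extra pain is imperceptible."""
    return pain(s + 1) - pain(s) < THRESHOLD

setting, fortune = 0, 0
while setting < 1000 and prefers_next_setting(setting):
    setting += 1
    fortune += PAYMENT

print(setting, fortune)  # 1000 10000000: each step looked fine in isolation
```

Each pairwise choice favors advancing, so the simulation runs all the way to setting 1000 and a $10,000,000 fortune; yet pain(1000) − pain(0) = 100 is all too perceptible, which is why the self-torturer prefers stopping at 0 over ending up here.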

Given the money pump argument and the puzzle of the self-torturer, we can, it seems, conclude that although intransitive preferences are sometimes understandable, acting on them can be far from sensible. (Note, however, that, as Duncan MacIntosh (2010) suggests, the notion of “an unacceptable situation” plays an important role here and the question of how to cash out this notion stands in need of additional attention. For a recent attempt at addressing this issue, see Andreou 2023, chapter 3, wherein instrumental rationality is portrayed as accountable to “subjective appraisal responses” that go beyond the agent’s preferences and that sometimes allow some outcomes in a “preference loop” to figure as (rationally) acceptable and others to figure as (rationally) unacceptable.)

1.4 Vague Goals and other Challenging Wholes

Like intransitive preferences, vague goals or projects can prompt dynamic choice problems even if the agent’s preference structure is not changing over time. Indeed, some have suggested that the deep source of the self-torturer’s problem, and what prompts his intransitive preferences, is that his goal of avoiding extreme pain is vague in the sense that, in the situation described, avoiding extreme pain requires engaging in a multitude of goal-directed actions that are not individually necessary or sufficient for the achievement of the goal and that are thus dispensable and perhaps even dominated if considered individually (Tenenbaum and Raffman 2012). It may be helpful to consider a more familiar example of a vague goal or project, such as that of writing a good book. As Sergio Tenenbaum and Diana Raffman explain, this project may be characterizable as follows (2012, 99–100):

  1. Its completion requires the successful execution of many momentary actions.
  2. For each momentary action in which you execute the project, failure to execute that action would not have prevented you from writing the book.
  3. On many occasions when you execute the project, there is something else that you would prefer to be doing, given how unlikely it is that executing the project at this time would make a difference to the success of your writing the book.
  4. Had you failed to execute the project every time you would have preferred to be doing something else, you would not have written the book.
  5. You prefer executing the project at every momentary choice situation in which you could work on the project, over not writing the book at all.

It is not difficult to see how, in a case like this, seemingly rational “local” decisions can lead one off course.

Tenenbaum and Raffman’s discussion of the pursuit of vague goals is interestingly related to Luca Ferrero’s suggestion that many activities are “made up of momentary actions that relate in non-local ways that span over the entire length of the activities” and “require the agent’s continuous appreciation of the structure and outcome of the extended activities taken as a whole” (2009, 406). Ferrero focuses on activities that have a narrative dimension, in that “the unfolding of the characteristic temporal structure of …[the] activities can be fully and perspicuously described solely by a narrative” (412–3), but the pursuit of vague goals also seems to fit Ferrero’s initial description, as well as his idea that activities of the relevant sort involve the “paradigmatic operation” of the “diachronic will” (406). In all such activities, relentless guidance by “proximal concerns” interferes with what is required by “the activity’s global structure” (406).

1.5 Autonomous Benefit Cases

The discussions in the preceding three sections suggest that, when it comes to serving one’s concerns well, the ability to choose counter-preferentially may be quite helpful. This point is reinforced by the possibility of autonomous benefit cases.

In autonomous benefit cases, one benefits from forming a certain intention but not from carrying out the associated action. The autonomous benefit cases that have figured most prominently in the literature on dynamic choice are those in which carrying out the action associated with the beneficial intention is detrimental rather than just unrewarding. Among the most famous autonomous benefit cases is Gregory Kavka’s “toxin puzzle” (1983). In Kavka’s invented case,

an eccentric billionaire…places before you a vial of toxin…[and provides you with the following information:] If you drink [the toxin], [it] will make you painfully ill for a day, but will not threaten your life or have any lasting effects…. The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon…. You need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed…. [The] arrangement of…external incentives is ruled out, as are such alternative gimmicks as hiring a hypnotist to implant the intention… (Kavka 1983, 33–4)

Part of what is interesting about this case is that, even though most people would gladly drink the toxin for a million dollars, getting the million dollars is not that easy. This is because one does not get the million dollars for drinking the toxin. Indeed, one does not get anything but a day of illness for drinking the toxin. As Kavka explains, by the time the toxin is to be consumed, one either already has the million in one’s account or not; and drinking the toxin will not get one any (additional) funds. Assuming one has no desire to be ill for nothing, drinking the toxin seems to involve acting counter-preferentially—and this is, if not impossible, at least no easy feat. So, given a clear understanding of the situation, one is likely to find it difficult, if not impossible, to form the intention to drink the toxin. Presumably, one cannot form the intention to drink the toxin if one is confident that one will not drink it. If only one could somehow rely on the cooperation of one’s future self, one could then genuinely form the intention to drink the toxin and thus get the million—a wonderful result from the perspective of both one’s current and one’s future self. But, alas, one’s future self will, it seems, have no reason to drink the toxin when the time for doing so arrives.

Here again we have a situation in which doing well by oneself is not easy.

2. Solving Dynamic Choice Problems

Given how much trouble dynamic choice problems can cause, it is natural to wonder whether and how they can be solved. Various solutions of varying scope have been proposed in the literature on dynamic choice. The first three subsections that follow focus on ideas regarding the practical issue of dealing with dynamic choice problems. The fourth subsection focuses on attempts at resolving the theoretical puzzles concerning rational choice raised by various dynamic choice problems.

2.1 Rational Irrationality

Two strategies that we can sometimes use to solve (in the sense of practically deal with) dynamic choice problems are suggested in Kavka’s description of the toxin puzzle. One strategy is to use gimmicks that cause one to reason or choose in a way that does not accord with one’s preferences. The other strategy involves the arrangement of external incentives. Although such maneuvers are ruled out in Kavka’s case, they can prove useful in less restrictive cases. This subsection considers the former strategy and the next subsection considers the latter strategy.

If one accepts the common assumption that causing oneself to reason or choose in a way that does not accord with one’s preferences involves rendering oneself irrational, the former strategy can be thought of as aiming at rationally-induced irrationality. A fanciful but clear illustration of this strategy is presented in Derek Parfit’s work (1984). In Parfit’s example (which is labeled Schelling’s Answer to Armed Robbery because it draws on Thomas Schelling’s view that “it is not a universal advantage in situations of conflict to be inalienably and manifestly rational in decision and motivation” (Schelling 1960, 18)), a robber breaks into someone’s house and orders the owner, call him Moe, to open the safe in which he hoards his gold. The robber threatens to shoot Moe’s children unless Moe complies. But Moe realizes that both he and his children will probably be shot even if he complies, since the robber will want to get rid of them so that they cannot record his getaway car information and get it to the police (who will be arriving from the nearest town in about 15 minutes in response to Moe’s call, which was prompted by the first signs of the break-in). Fortunately, Moe has a special drug at hand that, if consumed, causes one to be irrational for a brief period. Recognizing that this drug is his only real hope, Moe consumes the drug and immediately loses his wits. He begins “reeling about the room” saying things like “Go ahead. I love my children. So please kill them” (Parfit 1984, 13). Given Moe’s current state, the robber cannot do anything that will induce Moe to open the safe. There is no point in killing Moe or his children. The only sensible thing to do now is to hurry off before the police arrive.

Given that consuming irrationality drugs and even hiring hypnotists are normally not feasible solutions to our dynamic choice problems, the possibility of rationally inducing irrationality may seem practically irrelevant. But it may be that we often benefit from the non-conscious employment of what is more or less a version of this strategy. We sometimes, for example, engage in self-deception or indulge irrational fears or superstitions when it is convenient to do so. Many of us might, in toxin-type cases, be naturally prone to dwell on and indulge superstitious fears, like the fear that one will somehow be jinxed if one manages to get the million dollars but then does not drink the toxin. Given this fear, one might be quite confident that one will drink the toxin if one gets the million; and so it might be quite easy for one to form the intention to drink the toxin. Although this is not a solution to the toxin puzzle that one can consciously plan on using (nor is it one that resolves the theoretical issues raised by the case), it may nonetheless often help us effectively cope with toxin-type cases. (For a clear and compact discussion concerning self-deception, “motivationally biased belief,” and “motivated irrationality” more generally, see, for example, Mele 2004.)

2.2 The Arrangement of External Incentives

The other above-mentioned strategy that is often useful for dealing with certain dynamic choice problems is the arrangement of external incentives that make it worthwhile for one’s future self to cooperate with one’s current plans. This strategy can be particularly useful in dealing with discounting-induced preference reversals. Consider again the agent who wants to save for a decent retirement but, as each opportunity to save approaches, prefers to spend her potential retirement contribution on just one more trivial indulgence before finally tightening her belt for the sake of the future satisfaction she feels is essential to her well-being. If this agent’s plans are consistently thwarted by her discounting-induced preference reversals, she might come to the conclusion that she will never manage to save for a decent retirement if she doesn’t supplement her plans with incentives that will prevent the preference reversals that are causing her so much trouble. If she is lucky, she may find an existing precommitment device that she can take advantage of. Suppose, for example, that she can sign up for a program at work that, starting in a month, automatically deposits a portion of her pay into a retirement fund. If she cannot remove deposited funds without a significant penalty, and if she must provide a month’s notice to discontinue her participation in the program, signing up for the program might change the cost-and-reward structure of spending her potential retirement contributions on trivial indulgences enough to make this option consistently dispreferred. If no ready-made precommitment device is available, she might be able to create a suitable one herself. If, for example, she is highly averse to breaking promises, she might be able to solve her problem by simply promising a concerned friend that she will henceforth deposit a certain percentage of her pay into a retirement fund.
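The mechanics of a discounting-induced preference reversal can be made vivid with a bit of arithmetic. The sketch below assumes a hyperbolic discount function of the form v/(1 + k·d), the general shape standardly discussed in the discounting literature; the reward sizes, delays, and discount rate are made-up illustrative values, not a model of any particular agent.

```python
# Hyperbolic discounting: the present value of a reward of size v that
# arrives in d days, with discount rate k (all values illustrative).
def present_value(v, d, k=1.0):
    return v / (1 + k * d)

# A trivial indulgence available soon vs. a larger payoff ten days later.
small, large = 10, 30        # undiscounted reward sizes
delay_large = 10             # the large reward lags the small one by 10 days

# Viewed from thirty days out, the larger, later reward looks better...
far = 30
assert present_value(large, far + delay_large) > present_value(small, far)

# ...but when the indulgence is imminent, the ranking reverses.
assert present_value(small, 0) > present_value(large, delay_large)
```

Because the curve v/(1 + k·d) falls off steeply at short delays, rankings made from a distance flip as the nearer reward approaches; a precommitment device works by changing the payoffs so that no such flip occurs.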

In some cases, one might not be confident that one can arrange for external incentives that will get one’s future self to voluntarily cooperate with one’s current plans. One might therefore favor the related but more extreme strategy of making sure that one’s future self does not have the power to thwart one’s current plans. Rather than simply making cooperation more worthwhile (and thus, in a sense, more compelling), this strategy involves arranging for the use of force (which compels in a stronger sense of the term). A fictional but particularly famous employment of the strategy (which is discussed in, for example, Elster 1984) is its employment by Odysseus in Homer’s Odyssey. Because he longed to hear the enchanting singing of the Sirens, but feared that he would thereby be lured into danger, Odysseus instructed his companions to tie him to the mast of his ship and to resist his (anticipated) attempts at freeing himself from the requested bonds.

2.3 Symbolic Utility

Another strategy for dealing with certain dynamic choice problems—this one proposed by Robert Nozick (1993)—is the strategy of investing actions with symbolic utility (or value) and then allowing oneself to be influenced not only by the causal significance of one’s actions, but also by their symbolic significance. According to Nozick, “actions and outcomes can symbolize still further events … [and] draw upon themselves the emotional meaning (and utility…) of these other events” (26). If “we impute to actions… utilities coordinate with what they symbolize, and we strive to realize (or avoid) them as we would strive for what they stand for” (32), our choices will differ from what they would be if we considered only the causal significance of our actions. Consider, for example, the case of the self-torturer. Suppose the self-torturer has moved up ten settings in ten weeks. He is still in a very comfortable range, but he is starting to worry about ending up at a high setting that would leave him in excruciating pain. It occurs to him that he should quit while he is ahead, and he begins to symbolically associate moving up a setting at the next opportunity with moving up a setting at every upcoming opportunity. By the time the next opportunity to move up a setting comes around, the extremely negative symbolic significance of this potential action steers him away from performing the action. For a structurally similar but more down-to-earth example, consider someone who really enjoys eating but is averse to putting on a considerable amount of weight because of issues associated with joint pain. If this individual comes to symbolically associate having an extra helping with overeating in general and thus with putting on a considerable amount of weight, he may be averse to having the extra helping, even if, in causal terms, what he does in this particular case is negligible.
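One way to see the structure of the proposal is as simple arithmetic in which an action’s total utility combines its causal and symbolic components. The additive rule and all the numbers below are illustrative assumptions of this sketch, not Nozick’s own formalism.

```python
# Choice with symbolic utility: the agent weighs what an act causes
# together with what it symbolizes (illustrative numbers throughout).
def total_utility(causal, symbolic):
    return causal + symbolic

causal_gain = 5          # enjoyment of this one extra helping
symbolic_penalty = -20   # the act symbolizes "overeating in general"

# On causal significance alone, the extra helping would be chosen...
assert causal_gain > 0

# ...but with the symbolic utility imputed, declining is preferred.
assert total_utility(0, 0) > total_utility(causal_gain, symbolic_penalty)
```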

2.4 Plans and Resoluteness

The three strategies discussed so far suggest that, to cope with dynamic choice problems, one must either mess with one’s rationality or else somehow change the payoffs associated with the options one will face. Some philosophers—including, for example, Michael Bratman (1999; 2006), David Gauthier (1986; 1994), and Edward McClennen (1990; 1997)—have, however, suggested that the rational agent will not need to resort to such gimmicks as often as one might think—a good thing, since making the necessary arrangements can require a heavy investment of time, energy, and/or money. The key to their arguments is the idea that adopting plans can affect what it is rational for one to do even when the plans do not affect the payoffs associated with the options one will face; relatedly, their arguments incorporate the idea that rationality at least sometimes calls for resolutely sticking to a plan even if the plan disallows an action that would fit as well as or better with one’s preferences than the action required by the plan. (For some interesting discussion relating resoluteness, one’s current options, and the options one will face, see Portmore 2019.) For Bratman, Gauthier, and McClennen, being resolute is not simply useful in coping with dynamic choice problems. Rather, it figures as part of a conception of rationality that resolves the theoretical puzzles concerning rationality and choice over time posed by various dynamic choice problems. In particular, it figures as part of a conception of rationality whose dictates provide intuitively sensible guidance not only in simple situations but also in challenging dynamic choice situations. (Significantly, in some of his more recent work, Bratman (2014; 2018) distances himself from the idea that rational resoluteness involves acting contrary to one’s current preferences by suggesting that when rationality calls for sticking to a plan even if this is not called for by one’s current preferences, there may be “rational pressure” to change one’s current preferences.)

We are, as Michael Bratman (1983; 1987) stresses, planning creatures. Our reasoning is structured by our plans, which enable us to achieve complex personal and social goals. To benefit from planning, one must take plans seriously. For Bratman, this involves, among other things, (i) recognizing a general rational pressure favoring sticking to one’s plan so long as there is no problem with the plan (Bratman 2006, section 8), and (ii) “taking seriously how one will see matters at the conclusion of one’s plan, or at appropriate stages along the way, in the case of plans or policies that are ongoing” (1999, 86). In accordance with these proposed requirements, Bratman (1999) concludes that rationality at least sometimes calls for sticking to a plan even if this is not called for by one’s current preferences. Moreover, although this conception of rationality requires that one sometimes resist one’s current preferences, it is taken to prompt more sensible choices in challenging dynamic choice situations than do conceptions of rationality whose dictates do not take plans seriously.

The significance of the first requirement is easy to see. If there is a general rational pressure favoring sticking to one’s plan so long as there is no problem with it, then a rational agent who takes plans seriously will not get into the sort of trouble Broome imagines Abraham might get into. When faced with incommensurable alternatives, the rational agent who takes plans seriously will adopt a plan and then stick to it even if his preferences are consistent with pursuing an alternative course of action.

What about the significance of the second requirement? For Bratman, if one is concerned about how one will see matters at the conclusion of one’s plan or at appropriate stages along the way, then one will, other things equal, avoid adjusting one’s plan in ways that one will regret in the future. So Bratman’s planning conception of rationality includes a “no-regret condition.” And, according to Bratman, given this condition, his conception of rationality gives intuitively plausible guidance in cases of temptation like the case of the self-torturer or the retirement contribution case. In particular, it implies that, in such cases, the rational planner will adopt a plan and refrain from adjusting it. For in both sorts of cases, if the simple fact that one’s preferences favor adjusting one’s plan leads one to adjust it, one is bound to end up, via repeated adjustments of one’s plan, in the situation one finds unacceptable. One is thus bound to experience future regret. And, while Bratman allows that regret can sometimes be misguided—which is why he does not present avoiding regret as an exceptionless imperative—there are not, for Bratman, any special considerations that would make regret misguided if one gave in to temptation in cases like the case of the self-torturer or the retirement contribution case.
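The trap of repeated adjustments can be made concrete with a toy model of the self-torturer. Preferences are specified pairwise rather than by a single utility function, which is what allows them to be intransitive; the perception threshold, per-step payment, and pain scale below are all made-up assumptions of this sketch, not Quinn’s original parameters.

```python
# A toy self-torturer with intransitive pairwise preferences
# (all parameters illustrative).
PERCEPTION_THRESHOLD = 1.0   # pain differences below this are imperceptible
PAY_PER_STEP = 10_000        # payment for each advance of the dial

def pain(setting):
    return 0.5 * setting     # real discomfort accrues, 0.5 units per setting

def prefers(a, b):
    """Is state a = (setting, money) preferred to state b?"""
    (sa, ma), (sb, mb) = a, b
    felt = pain(sa) - pain(sb)
    if abs(felt) < PERCEPTION_THRESHOLD:
        felt = 0.0           # imperceptible differences carry no weight
    # When a pain difference is felt, it swamps any monetary difference.
    return (ma - mb) > felt * 100_000

# Each single advance is preferred to staying put...
for s in range(1000):
    assert prefers((s + 1, (s + 1) * PAY_PER_STEP), (s, s * PAY_PER_STEP))

# ...yet the endpoint is dispreferred to never having started: the
# repeated adjustments lead somewhere the agent finds unacceptable.
assert prefers((0, 0), (1000, 1000 * PAY_PER_STEP))
```

Because each local comparison discards imperceptible pain differences while the global comparison does not, no single adjustment ever looks like a mistake, yet the chain of adjustments ends in regret, which is the pattern Bratman’s no-regret condition is meant to block.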

Based on their own reasoning concerning rational resoluteness, Gauthier (1994) and McClennen (1990; 1997) argue that rational resoluteness can help an agent do well in autonomous benefit cases like the toxin case. They maintain that being rational is not a matter of always choosing the action that best serves one’s concerns. Rather, it is a matter of acting in accordance with the deliberative procedure that best serves one’s concerns. Now it might seem as though the deliberative procedure that best serves one’s concerns must be the deliberative procedure that calls for always choosing the action that best serves one’s concerns. But autonomous benefit cases like the toxin case suggest that this is not quite right. For the deliberative procedure that calls for always choosing the action that best serves one’s concerns does not serve one’s concerns well in autonomous benefit cases. More specifically, someone who reasons in accordance with this deliberative procedure does worse in autonomous benefit cases than someone who is willing to resolutely stick to a prior plan that he did well to adopt. Accordingly, Gauthier and McClennen deny that the best deliberative procedure requires one to always choose the action that best serves one’s concerns; in their view, the best deliberative procedure requires some resoluteness. Relatedly, they see drinking the toxin in accordance with a prior plan to drink the toxin as rational, indeed as rationally required, given that one did well to adopt the plan; so rationality helps one benefit, rather than hindering one from benefiting, in autonomous benefit cases like the toxin case.

Note that, while there is widespread agreement that a plausible conception of rationality will imply that the self-torturer should resist the temptation to keep advancing one more setting, there is no widespread agreement that a plausible conception of rationality will imply that it is rational to drink the toxin. For those who find the idea that it is rational to drink the toxin completely counter-intuitive, its emergence figures as a problematic, rather than welcome, implication of Gauthier’s and McClennen’s views concerning rational resoluteness.

If Bratman and/or Gauthier and McClennen are on the right track—and this is, of course, a big if—then (some form of) resoluteness may often be the key to keeping oneself out of potential dynamic choice traps. It may also be the key to resolving various puzzles concerning rationality and dynamic choice. (For a recent in-depth discussion of commitment and resoluteness in rational choice, see Andreou 2022.)

In an interesting critique of planning solutions to cases of temptation, Tenenbaum and Raffman (2012) challenge the purported centrality of resoluteness. They suggest that, in cases of temptation, instrumental rationality may not require planning and resoluteness, but simply exercising “sufficiently many” “permissions” to do something other than what “would be best at a given moment” when this is required by a “rationally innocent” goal or project. For instance, “suppose you take a break from writing an important memo and start surfing the web. Surely surfing for one additional second will not prevent you from completing the memo, but if you surf for long enough you won’t have time to finish it” (110). Instrumental rationality requires that you stop surfing at an acceptable point. But this need not involve stopping at a point determined by a prior plan. Whether or not you have a plan to stop at time t, and whether or not you resolutely adhere to such a plan, need not be of crucial importance. What matters is that, ultimately, you stop in good time by exercising, at one or more points, the rational permission to do something other than what would be best at that moment with an eye to achieving the rationally innocent goal of completing the important memo. Tenenbaum (2020) develops a theory of instrumental rationality that illuminates and accommodates the need to exercise rational permissions of the sort just described.

3. Some Familiar Phenomena Illuminated by Dynamic Choice Theory

Although dynamic choice problems are often presented with the help of fanciful thought experiments, their interest is not strictly theoretical. As this section highlights, they can wreak havoc in our real lives, supporting phenomena such as self-destructive addictive behavior and dangerous environmental destruction. In some cases, these phenomena can be understood in terms of procrastination (Andreou 2007), which seems to be, by its very nature, a dynamic choice problem (Stroud 2010).

According to the most familiar model of self-destructive addictive behavior, such behavior results from cravings that limit “the scope for volitional control of behavior” and can in some cases be irresistible, “overwhelm[ing] decision making altogether” (Loewenstein 1999, 235–6). But, as we know from dynamic choice theory, self-destructive behavior need not be compelled. It can also be supported by challenging choice situations and problematic preference structures that prompt dynamic choice problems. Reflection on this point has led to new ideas concerning possible sources of self-destructive addictive behavior. For example, George Ainslie (2001) has developed the view that addictive habits such as smoking—which can, it seems, flourish even in the absence of irresistible craving—are often supported by discounting-induced preference reversals. Given the possibility of discounting-induced preference reversals, even someone who cares deeply about having a healthy future, and who therefore does not want to be a heavy smoker, can easily find herself smoking cigarette after cigarette, where this figures as a series of indulgences that she plans against and then regrets.

Reflection on dynamic choice theory has also led to new ideas in environmental philosophy. For example, Chrisoula Andreou (2006) argues that, although dangerous environmental destruction is usually analyzed as resulting from interpersonal conflicts of interest, such destruction can flourish even in the absence of such conflicts. In particular, where individually negligible effects are involved, as is the case among “creeping environmental problems” such as pollution, “an agent, whether it be an individual or a unified collective, can be led down a course of destruction simply as a result of following its informed and perfectly understandable but intransitive preferences” (Andreou 2006, 96). Notice, for example, that if a unified collective values a healthy community, but also values luxuries whose production or use promotes a carcinogenic environment, it can find itself in a situation that is structurally similar to the situation of the self-torturer. Like the self-torturer, such a collective must cope with the fact that while one more day, and perhaps even one more month, of indulgence can provide great rewards without bringing about any significant alterations in (physical or psychic) health, “sustained indulgence is far from innocuous” (Andreou 2006, 101).

Clearly, success in achieving a long-term goal can require showing some restraint along the way; but it is tempting to put off showing restraint and to favor a bit more indulgence over embarking on the challenging doings or omissions that will serve the valued long-term goal. Here, as in many other contexts, procrastination figures as a serious threat.

Though both philosophically intriguing and practically significant, procrastination has only recently received substantial attention as an important topic of philosophical debate. (Much of the debate can be found in a 2010 collection of papers on the topic edited by Chrisoula Andreou and Mark D. White.) It has perhaps been assumed that procrastination is just a form of weakness of will and so, although there has been little explicit discussion of procrastination, most of the philosophical work necessary for understanding procrastination has already been done. But, as Sarah Stroud (2010) has argued, this assumption is problematic, since there are cases of procrastination that do not fit with the traditional conception of weakness of will, which casts the agent as acting against her better judgment, or with the influential revisionary conception of weakness of will due to Richard Holton (1999), which casts the agent as acting irresolutely. Although the well-developed literature on weakness of will is an important resource in the study of procrastination, there is a lot more philosophical work that needs to be done, and the modeling work that seems to be most promising focuses heavily on the fact that procrastination is a problem faced by agents whose choices are spread out over time.

4. Concluding Remarks

When one performs a series of actions that do not serve one’s concerns well, it is natural to feel regret and frustration. Why, it might be wondered, is one doing so badly by oneself? Self-loathing, compulsion, or simple ignorance might in some cases explain the situation; but, oftentimes, none of these things seems to be at the root of the problem. For, in many cases, one’s steps along a disadvantageous course seem voluntary, motivated by the prospect of some benefit, and performed in light of a correct understanding of the consequences of each step taken. As we have seen, dynamic choice theory makes it clear how such cases are possible.

Although an agent with a dynamic choice problem can often be described as insufficiently resolute, she is normally guided by her preferences or her evaluation of the options she faces. As such, she is not, in general, properly described as simply out of control. Still, the control she exhibits is inadequate with respect to the task of effectively governing her (temporally-extended) self. So her problem is, at least in part, a problem of effective self-governance over time. Accordingly, some work on choice over time (e.g., Velleman 2000; Bratman 2012) includes discussion of effective self-governance over time and explores the connection between the requirements for effective self-governance over time and the requirements for rational choice over time (sometimes referred to as the requirements of diachronic rationality). Some big questions in this area include the following: To what extent does self-governance over time (or at least effective self-governance over time) require cross-temporal coherence in the form of a presumption in favor of prior intentions? To what extent does diachronic rationality require self-governance over time? And to what extent does diachronic rationality require cross-temporal coherence in the form of a presumption in favor of prior intentions? My own view is that it is ensuring the avoidance of self-defeating behavior, rather than ensuring self-governance over time, that is rationally required, and so diachronic rationality requires a presumption in favor of prior intentions only when this is necessary for avoiding self-defeating behavior (Andreou 2023, chapter 2). But debate on this topic has not been very extensive, and further exploration of the topic is certainly in order.

Bibliography

  • Ainslie, George, 1999. “The Dangers of Willpower,” in Getting Hooked, Jon Elster and Ole-Jørgen Skog (eds.), Cambridge: Cambridge University Press, pp. 65–92.
  • –––, 2001. Breakdown of Will, Cambridge: Cambridge University Press.
  • Andersson, Henrik and Anders Herlitz, 2021. Value Incommensurability: Ethics, Risk, and Decision-Making, New York: Routledge.
  • Andreou, Chrisoula, 2005. “Incommensurable Alternatives and Rational Choice,” Ratio, 18(3): 249–61.
  • –––, 2005. “Going from Bad (or Not So Bad) to Worse: On Harmful Addictions and Habits,” American Philosophical Quarterly, 42(4): 323–31.
  • –––, 2006. “Environmental Damage and the Puzzle of the Self-Torturer,” Philosophy & Public Affairs, 34(1): 95–108.
  • –––, 2007. “There Are Preferences and Then There Are Preferences,” in Economics and the Mind, Barbara Montero and Mark D. White (eds.), New York: Routledge, pp. 115–126.
  • –––, 2007. “Understanding Procrastination,” Journal for the Theory of Social Behaviour, 37(2): 183–93.
  • –––, 2022. “Commitment and Resoluteness in Rational Choice,” in Elements in Decision Theory and Philosophy, Martin Peterson (ed.), Cambridge: Cambridge University Press.
  • –––, 2023. Choosing Well: The Good, the Bad, and the Trivial, New York: Oxford University Press.
  • Andreou, Chrisoula and Mark D. White (eds.), 2010. The Thief of Time: Philosophical Essays on Procrastination, Oxford: Oxford University Press.
  • Bratman, Michael, 1983. “Taking Plans Seriously,” Social Theory and Practice, 9: 271–87.
  • –––, 1987. Intentions, Plans, and Practical Reason, Cambridge, MA: Harvard University Press.
  • –––, 1999. “Toxin, Temptation, and the Stability of Intention,” in Faces of Intention, Cambridge: Cambridge University Press, pp. 58–90.
  • –––, 2006. “Temptation Revisited,” in Structures of Agency, Oxford: Oxford University Press, pp. 257–282.
  • –––, 2012. “Time, Rationality, and Self-Governance,” Philosophical Issues, 22: 73–88.
  • –––, 2014. “Temptation and the Agent’s Standpoint,” Inquiry, 57: 293–310.
  • –––, 2018. Planning, Time, and Self-Governance, New York: Oxford University Press.
  • Broome, John, 2000. “Incommensurable Values,” in Well-Being and Morality: Essays in Honour of James Griffin, Roger Crisp and Brad Hooker (eds.), Oxford: Oxford University Press, pp. 21–38.
  • –––, 2001. “Are Intentions Reasons? And How Should We Cope with Incommensurable Values?” in Practical Rationality and Preference, Christopher W. Morris and Arthur Ripstein (eds.), Cambridge: Cambridge University Press, pp. 98–120.
  • Bovens, Luc, 2022. “Four Structures of Intransitive Preferences,” in Routledge Handbook of Philosophy, Politics, and Economics, C. M. Melenovsky (ed.), New York: Routledge, pp. 81–93.
  • Chang, Ruth (ed.), 1997. Incommensurability, Incomparability, and Practical Reason, Cambridge, MA: Harvard University Press.
  • –––, 2002. “The Possibility of Parity,” Ethics, 112: 659–88.
  • –––, 2013. “Grounding Practical Normativity: Going Hybrid,” Philosophical Studies, 164: 163–187.
  • –––, 2017. “Hard Choices,” Journal of the American Philosophical Association, 3(1): 1–21.
  • Davidson, Donald, McKinsey, J. and Suppes, Patrick, 1955. “Outlines of a Formal Theory of Value,” Philosophy of Science, 22: 140–60.
  • Dougherty, Tom, 2011. “On Whether to Prefer Pain to Pass,” Ethics, 121: 521–37.
  • –––, 2014. “A Deluxe Money Pump,” Thought, 3: 21–29.
  • Elster, Jon, 1984. Ulysses and the Sirens, Cambridge: Cambridge University Press.
  • –––, 2000. Ulysses Unbound, Cambridge: Cambridge University Press.
  • Elster, Jon and Ole-Jørgen Skog (eds.), 1999. Getting Hooked, Cambridge: Cambridge University Press.
  • Eyal, Nir and Anders Herlitz (eds.), 2024. Philosophical Studies, Special Issue: Incommensurability and Population-Level Bioethics, 181(12): 3219–3519.
  • Ferrero, Luca, 2009. “What Good is a Diachronic Will?,” Philosophical Studies, 144: 403–30.
  • Gauthier, David, 1986. Morals by Agreement, Oxford: Clarendon Press.
  • –––, 1994. “Assure and Threaten,” Ethics, 104(4): 690–716.
  • Greene, Preston and Meghan Sullivan, 2015. “Against Time Bias,” Ethics, 125: 947–70.
  • Greene, Preston, Andrew J. Latham, Kristie Miller, and James Norton, 2022. “How Much Do We Discount Past Pleasures?” American Philosophical Quarterly, 59: 367–376.
  • Gustafsson, Johan E., 2022. Money-Pump Arguments, in Elements in Decision Theory and Philosophy, Martin Peterson (ed.), Cambridge: Cambridge University Press.
  • Holton, Richard, 1999. “Intention and Weakness of Will,” Journal of Philosophy, 96: 241–62.
  • Kavka, Gregory S., 1983. “The Toxin Puzzle,” Analysis, 43: 33–6.
  • Kirby, Kris N. and R. J. Herrnstein, 1995. “Preference Reversals Due to Myopic Discounting of Delayed Reward,” Psychological Science, 6: 83–89.
  • Loewenstein, George and Jon Elster (eds.), 1992. Choice Over Time, New York: Russell Sage Foundation.
  • Loewenstein, George, Daniel Read, and Roy Baumeister (eds.), 2003. Time and Decision, New York: Russell Sage Foundation.
  • MacIntosh, Duncan, 2010. “Intransitive Preferences, Vagueness, and the Structure of Procrastination,” in The Thief of Time: Philosophical Essays on Procrastination, Chrisoula Andreou and Mark D. White (eds.), Oxford: Oxford University Press, pp. 68–86.
  • Mele, Alfred, 2004. “Motivated Irrationality,” in The Oxford Handbook of Rationality, Oxford: Oxford University Press, pp. 240–256.
  • McClennen, Edward, 1990. Rationality and Dynamic Choice, Cambridge: Cambridge University Press.
  • –––, 1997. “Pragmatic Rationality and Rules,” Philosophy and Public Affairs, 26(3): 210–58.
  • Millar, Andrew and Douglas J. Navarick, 1984. “Self-Control and Choice in Humans: Effects of Video Game Playing as a Positive Reinforcer,” Learning and Motivation, 15: 203–218.
  • Nozick, Robert, 1993. The Nature of Rationality, Princeton: Princeton University Press.
  • Parfit, Derek, 1984. Reasons and Persons, Oxford: Clarendon Press.
  • Portmore, Douglas W., 2019. Opting for the Best: Oughts and Options, New York: Oxford University Press.
  • Quinn, Warren, 1993. “The Puzzle of the Self-Torturer,” in Morality and Action, Cambridge: Cambridge University Press, pp. 198–209.
  • Rabinowicz, Wlodek, 2000. “Money Pump with Foresight,” in M. J. Almeida (ed.), Imperceptible Harms and Benefits (Library of Ethics and Applied Philosophy: 8), Dordrecht, London: Kluwer Academic, pp. 123–154.
  • Ramsey, Frank P., 1926. “Truth and Probability,” in The Foundations of Mathematics and other Logical Essays, R. B. Braithwaite (ed.), London: Routledge & Kegan Paul, 1931, pp. 156–198.
  • Raz, Joseph, 1986. The Morality of Freedom, Oxford: Clarendon Press.
  • –––, 1997. “Incommensurability and Agency,” in Incommensurability, Incomparability, and Practical Reason, Ruth Chang (ed.), Cambridge, MA: Harvard University Press, pp. 110–128.
  • Regan, Donald, 1997. “Value, Comparability, and Choice,” in Incommensurability, Incomparability, and Practical Reason, Ruth Chang (ed.), Cambridge, MA: Harvard University Press, pp. 129–150.
  • Schelling, Thomas C., 1960. The Strategy of Conflict, Cambridge, MA: Harvard University Press.
  • Solnick, Jay V., Catherine H. Kannenberg, David A. Eckerman, and Marcus B. Waller, 1980. “An Experimental Analysis of Impulsivity and Impulse Control in Humans,” Learning and Motivation, 11: 61–77.
  • Stroud, Sarah, 2010. “Is Procrastination Weakness of Will?” in The Thief of Time: Philosophical Essays on Procrastination, Chrisoula Andreou and Mark D. White (eds.), Oxford: Oxford University Press, pp. 51–67.
  • Tenenbaum, Sergio, 2020. Rational Powers in Action, New York: Oxford University Press.
  • Tenenbaum, Sergio and Diana Raffman, 2012. “Vague Projects and the Puzzle of the Self-Torturer,” Ethics, 123: 86–112.
  • Tversky, Amos, 1969. “Intransitivity of Preferences,” Psychological Review, 76: 31–48.
  • Velleman, David, 2000. “Deciding How to Decide,” in The Possibility of Practical Reason, Oxford: Clarendon Press, pp. 221–243.

Acknowledgments

I am grateful to the University of Utah Tanner Humanities Center for a mini-grant that supported my research for this entry.

Copyright © 2025 by
Chrisoula Andreou <c.andreou@utah.edu>
