Temporal difference learning

From Wikipedia, the free encyclopedia
Computer programming concept

Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.[1]

While Monte Carlo methods only adjust their estimates once the outcome is known, TD methods adjust predictions to match later, more-accurate predictions about the future, before the outcome is known.[2] This is a form of bootstrapping, as illustrated with the following example:

Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives.[2]

Temporal difference methods are related to the temporal difference model of animal learning.[3][4][5][6][7]

Mathematical formulation

The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy $\pi$. Let $V^{\pi}$ denote the state value function of the MDP with states $(S_t)_{t\in\mathbb{N}}$, rewards $(R_t)_{t\in\mathbb{N}}$ and discount rate[8] $\gamma$ under the policy $\pi$:[9]

$$V^{\pi}(s) = E_{a\sim\pi}\left\{\sum_{t=0}^{\infty}\gamma^{t}R_{t+1}\,\Bigg|\,S_{0}=s\right\}.$$

We drop the action from the notation for convenience. $V^{\pi}$ satisfies the Hamilton–Jacobi–Bellman equation:

$$V^{\pi}(s) = E_{\pi}\{R_{1} + \gamma V^{\pi}(S_{1}) \mid S_{0}=s\},$$

so $R_{1} + \gamma V^{\pi}(S_{1})$ is an unbiased estimate for $V^{\pi}(s)$. This observation motivates the following algorithm for estimating $V^{\pi}$.

The algorithm starts by initializing a table $V(s)$ arbitrarily, with one value for each state of the MDP. A positive learning rate $\alpha$ is chosen.

We then repeatedly evaluate the policy $\pi$, obtain a reward $r$, and update the value function for the current state using the rule:[10]

$$V(S_{t}) \leftarrow (1-\alpha)\,V(S_{t}) + \underbrace{\alpha}_{\text{learning rate}}\bigl[\overbrace{R_{t+1} + \gamma V(S_{t+1})}^{\text{the TD target}}\bigr]$$

where $S_{t}$ and $S_{t+1}$ are the current and next states, respectively. The value $R_{t+1} + \gamma V(S_{t+1})$ is known as the TD target, and $R_{t+1} + \gamma V(S_{t+1}) - V(S_{t})$ is known as the TD error.
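The update rule above can be sketched as a short program. The following is a minimal illustration, not drawn from the article's sources: the toy environment, episode count, and constants are assumptions chosen to make the example self-contained.

```python
def td0_evaluate(num_states, step, alpha=0.1, gamma=0.9, episodes=500):
    """Tabular TD(0) policy evaluation.

    `step(s)` samples one transition under the fixed policy and returns
    (next_state, reward, done); V is the table of state-value estimates.
    """
    V = [0.0] * num_states
    for _ in range(episodes):
        s, done = 0, False            # start each episode in state 0
        while not done:
            s_next, r, done = step(s)
            target = r + (0.0 if done else gamma * V[s_next])  # the TD target
            V[s] += alpha * (target - V[s])   # move V(s) toward the target
            s = s_next
    return V

# Toy 3-state chain 0 -> 1 -> 2 (terminal) with reward 1 on the final step;
# with gamma = 0.9 the true values are V(0) = 0.9 and V(1) = 1.0.
V = td0_evaluate(num_states=3, step=lambda s: (s + 1, float(s == 1), s == 1))
```

Note that $V(S_t) \leftarrow (1-\alpha)V(S_t) + \alpha\,[\text{target}]$ and $V(S_t) \leftarrow V(S_t) + \alpha\,(\text{target} - V(S_t))$ are the same update written two ways; the increment form used in the code makes the TD error explicit.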

TD-Lambda

TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel.[11] This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players.[12]

The lambda ($\lambda$) parameter refers to the trace decay parameter, with $0 \leqslant \lambda \leqslant 1$. Higher settings lead to longer-lasting traces; that is, a larger proportion of credit from a reward can be given to more distant states and actions when $\lambda$ is higher, with $\lambda = 1$ producing learning equivalent to Monte Carlo RL algorithms.[13]
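One common realization of this idea is tabular TD($\lambda$) with accumulating eligibility traces. The sketch below is illustrative only (the toy environment and constants are assumptions), not Sutton's original formulation verbatim:

```python
def td_lambda_evaluate(num_states, step, alpha=0.1, gamma=0.9, lam=0.8,
                       episodes=500):
    """Tabular TD(lambda) policy evaluation with accumulating traces."""
    V = [0.0] * num_states
    for _ in range(episodes):
        e = [0.0] * num_states        # eligibility traces, reset per episode
        s, done = 0, False
        while not done:
            s_next, r, done = step(s)
            delta = r + (0.0 if done else gamma * V[s_next]) - V[s]  # TD error
            e[s] += 1.0               # the visited state becomes eligible
            for i in range(num_states):
                V[i] += alpha * delta * e[i]  # credit every eligible state
                e[i] *= gamma * lam           # traces decay by gamma * lambda
            s = s_next
    return V

# Same toy chain as for TD(0): 0 -> 1 -> 2 (terminal), reward 1 at the end.
V = td_lambda_evaluate(num_states=3,
                       step=lambda s: (s + 1, float(s == 1), s == 1))
```

With $\lambda = 0$ the traces vanish after one step and the update reduces to TD(0); with $\lambda = 1$, credit from each reward reaches every earlier state of the episode, matching the Monte Carlo behavior described above.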

In neuroscience

The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appears to mimic the error function in the algorithm.[3][4][5][6][7] The error function reports the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.

Dopamine cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice.[14] Initially the dopamine cells increased firing rates when the monkey received juice, indicating a difference between expected and actual rewards. Over time this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate for the dopamine cells decreased below normal activation when the expected reward was not produced. This closely mimics how the error function in TD is used for reinforcement learning.
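The backward migration of the firing burst described above can be reproduced with a small TD(0) simulation. The trial structure, constants, and the choice to hold the pre-cue baseline value at zero (modelling an unpredictable cue onset) are illustrative assumptions, not details taken from the cited experiment:

```python
def td_conditioning(trials, alpha=0.3, gamma=1.0, T=6):
    """TD(0) over a trial of T time steps: cue at t = 0, reward at t = T - 1.

    Returns the TD-error trace of the last trial, the model analogue of the
    dopamine response.  The pre-cue baseline value is fixed at 0, so cue
    onset itself carries a prediction error once the cue predicts reward.
    """
    V = [0.0] * T
    for _ in range(trials):
        deltas = [gamma * V[0]]       # cue onset: transition from baseline 0
        for t in range(T):
            r = 1.0 if t == T - 1 else 0.0
            v_next = V[t + 1] if t + 1 < T else 0.0
            delta = r + gamma * v_next - V[t]   # prediction error at step t
            V[t] += alpha * delta
            deltas.append(delta)
    return deltas

early = td_conditioning(trials=1)    # error peaks at reward delivery
late = td_conditioning(trials=300)   # error has migrated back to the cue
```

After one trial the error trace peaks at reward delivery; after many trials it has moved to cue onset and the fully predicted reward produces no error, mirroring the recordings.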

The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research.[15][16] It has also been used to study conditions such as schizophrenia or the consequences of pharmacological manipulations of dopamine on learning.[17]

Notes

  1. ^ Sutton & Barto (2018), p. 133.
  2. ^ a b Sutton, Richard S. (1 August 1988). "Learning to predict by the methods of temporal differences". Machine Learning. 3 (1): 9–44. doi:10.1007/BF00115009. ISSN 1573-0565. S2CID 207771194.
  3. ^ a b Schultz, W.; Dayan, P.; Montague, P. R. (1997). "A neural substrate of prediction and reward". Science. 275 (5306): 1593–1599. CiteSeerX 10.1.1.133.6176. doi:10.1126/science.275.5306.1593. PMID 9054347. S2CID 220093382.
  4. ^ a b Montague, P. R.; Dayan, P.; Sejnowski, T. J. (1996-03-01). "A framework for mesencephalic dopamine systems based on predictive Hebbian learning" (PDF). The Journal of Neuroscience. 16 (5): 1936–1947. doi:10.1523/JNEUROSCI.16-05-01936.1996. ISSN 0270-6474. PMC 6578666. PMID 8774460.
  5. ^ a b Montague, P. R.; Dayan, P.; Nowlan, S. J.; Pouget, A.; Sejnowski, T. J. (1993). "Using aperiodic reinforcement for directed self-organization" (PDF). Advances in Neural Information Processing Systems. 5: 969–976.
  6. ^ a b Montague, P. R.; Sejnowski, T. J. (1994). "The predictive brain: temporal coincidence and temporal order in synaptic learning mechanisms". Learning & Memory. 1 (1): 1–33. doi:10.1101/lm.1.1.1. ISSN 1072-0502. PMID 10467583. S2CID 44560099.
  7. ^ a b Sejnowski, T. J.; Dayan, P.; Montague, P. R. (1995). "Predictive Hebbian learning". Proceedings of the Eighth Annual Conference on Computational Learning Theory – COLT '95. pp. 15–18. doi:10.1145/225298.225300. ISBN 0897917235. S2CID 1709691.
  8. ^ The discount rate parameter allows for a time preference toward more immediate rewards, and away from distant future rewards.
  9. ^ Sutton & Barto (2018), p. 134.
  10. ^ Sutton & Barto (2018), p. 135.
  11. ^ Sutton & Barto (2018), p. 130?.
  12. ^ Tesauro (1995).
  13. ^ Sutton & Barto (2018), p. 175.
  14. ^ Schultz, W. (1998). "Predictive reward signal of dopamine neurons". Journal of Neurophysiology. 80 (1): 1–27. CiteSeerX 10.1.1.408.5994. doi:10.1152/jn.1998.80.1.1. PMID 9658025. S2CID 52857162.
  15. ^ Dayan, P. (2001). "Motivated reinforcement learning" (PDF). Advances in Neural Information Processing Systems. 14. MIT Press: 11–18. Archived from the original (PDF) on 2012-05-25. Retrieved 2009-03-03.
  16. ^ Tobia, M. J.; et al. (2016). "Altered behavioral and neural responsiveness to counterfactual gains in the elderly". Cognitive, Affective, & Behavioral Neuroscience. 16 (3): 457–472. doi:10.3758/s13415-016-0406-7. PMID 26864879. S2CID 11299945.
  17. ^ Smith, A.; Li, M.; Becker, S.; Kapur, S. (2006). "Dopamine, prediction error, and associative learning: a model-based account". Network: Computation in Neural Systems. 17 (1): 61–84. doi:10.1080/09548980500361624. PMID 16613795. S2CID 991839.

Works cited

  Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press.
  Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Temporal_difference_learning&oldid=1327393303"