Markov decision process

From Wikipedia, the free encyclopedia
Mathematical model for sequential decision making under uncertainty

A Markov decision process (MDP), also called a stochastic dynamic program or stochastic control problem, is a model for sequential decision making when outcomes are uncertain.[1]

Originating from operations research in the 1950s,[2][3] MDPs have since gained recognition in a variety of fields, including ecology, economics, healthcare, telecommunications and reinforcement learning.[4] Reinforcement learning utilizes the MDP framework to model the interaction between a learning agent and its environment. In this framework, the interaction is characterized by states, actions, and rewards. The MDP framework is designed to provide a simplified representation of key elements of artificial intelligence challenges. These elements encompass the understanding of cause and effect, the management of uncertainty and nondeterminism, and the pursuit of explicit goals.[4]

The name comes from its connection to Markov chains, a concept developed by the Russian mathematician Andrey Markov. The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty.

Definition

Example of a simple MDP with three states (green circles) and two actions (orange circles), with two rewards (orange arrows)

A Markov decision process is a 4-tuple $(S, A, P_a, R_a)$, where:

  • $S$ is a set of states called the state space;
  • $A$ is a set of actions called the action space;
  • $P_a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the probability that taking action $a$ in state $s$ at time $t$ leads to state $s'$ at time $t+1$;
  • $R_a(s, s')$ is the immediate reward (or expected immediate reward) received after transitioning from state $s$ to state $s'$ due to action $a$.

A policy function $\pi$ is a (potentially probabilistic) mapping from the state space $S$ to the action space $A$.

Optimization objective


The goal in a Markov decision process is to find a good "policy" for the decision maker: a function $\pi$ that specifies the action $\pi(s)$ that the decision maker will choose when in state $s$. Once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain (since the action chosen in state $s$ is completely determined by $\pi(s)$).
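To make this concrete, the following sketch (illustrative names only, not from the article) stores a tiny MDP as nested Python dictionaries and shows how fixing a deterministic policy collapses it into an ordinary Markov chain:

```python
# A minimal sketch: P[a][s][s_next] gives the transition probability of
# action a from state s to state s_next; pi maps each state to one action.
P = {
    "wait": {"low": {"low": 0.9, "high": 0.1}, "high": {"low": 0.2, "high": 0.8}},
    "act":  {"low": {"low": 0.4, "high": 0.6}, "high": {"low": 0.5, "high": 0.5}},
}
pi = {"low": "act", "high": "wait"}  # one action chosen per state

# Fixing the policy yields a Markov chain: each state keeps only the
# transition distribution of the action the policy selects there.
chain = {s: P[pi[s]][s] for s in pi}
print(chain)  # {'low': {'low': 0.4, 'high': 0.6}, 'high': {'low': 0.2, 'high': 0.8}}
```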

The objective is to choose a policy $\pi$ that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:

$$E\left[\sum _{t=0}^{\infty }\gamma ^{t}R_{a_{t}}(s_{t},s_{t+1})\right]$$

where the actions are given by the policy, $a_{t}=\pi (s_{t})$, the expectation is taken over $s_{t+1}\sim P_{a_{t}}(s_{t},s_{t+1})$, and $\gamma$ is the discount factor satisfying $0\leq \gamma \leq 1$, which is usually close to $1$ (for example, $\gamma =1/(1+r)$ for some discount rate $r$). A lower discount factor motivates the decision maker to favor taking actions early rather than postponing them indefinitely.

Another possible, but closely related, objective that is commonly used is the $H$-step return. In this case, instead of using a discount factor $\gamma$, the agent is interested only in the first $H$ steps of the process, with each reward having the same weight.

$$E\left[\sum _{t=0}^{H-1}R_{a_{t}}(s_{t},s_{t+1})\right]$$

where the actions are again given by the policy, $a_{t}=\pi (s_{t})$, the expectation is taken over $s_{t+1}\sim P_{a_{t}}(s_{t},s_{t+1})$, and $H$ is the time horizon. Compared to the previous objective, this one is more commonly used in learning theory.
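As an illustration of the two objectives, the sketch below estimates both the discounted return (truncated at $T$ steps) and the $H$-step return of a policy by averaging rollouts; the sampler `step(s, a)`, its toy dynamics, and all parameter values are assumptions of this sketch, not part of the article:

```python
import random

def step(s, a):
    """Toy sampler (illustrative only): two states 0/1, reward 1 while in state 1."""
    p_stay = 0.8 if a == "stay" else 0.3
    s_next = s if random.random() < p_stay else 1 - s
    return s_next, float(s_next == 1)

def estimate_returns(step, pi, s0, gamma=0.95, H=20, episodes=1000, T=200):
    """Monte Carlo estimates of the discounted return (truncated at T steps)
    and the undiscounted H-step return, for a policy pi and sampler step(s, a) -> (s', r)."""
    disc_total, hstep_total = 0.0, 0.0
    for _ in range(episodes):
        s, disc, hstep = s0, 0.0, 0.0
        for t in range(T):
            a = pi(s)                 # a_t = pi(s_t)
            s, r = step(s, a)         # s_{t+1} ~ P_{a_t}(s_t, .), reward R_{a_t}
            disc += (gamma ** t) * r  # discounted objective
            if t < H:
                hstep += r            # H-step objective
        disc_total += disc
        hstep_total += hstep
    return disc_total / episodes, hstep_total / episodes

print(estimate_returns(step, pi=lambda s: "stay" if s == 1 else "switch", s0=0))
```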

A policy that maximizes the function above is called an optimal policy and is usually denoted $\pi ^{*}$. A particular MDP may have multiple distinct optimal policies. Because of the Markov property, it can be shown that the optimal policy is a function of the current state, as assumed above.

Simulator models


In many cases, it is difficult to represent the transition probability distributions, $P_{a}(s,s')$, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often called episodes, may be produced.

Another form of simulator is a generative model, a single-step simulator that can generate samples of the next state and reward given any state and action.[5] (Note that this is a different meaning from the term generative model in the context of statistical classification.) In algorithms that are expressed using pseudocode, $G$ is often used to represent a generative model. For example, the expression $s',r\gets G(s,a)$ might denote the action of sampling from the generative model, where $s$ and $a$ are the current state and action, and $s'$ and $r$ are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory.

These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models through regression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, the dynamic programming algorithms described in the next section require an explicit model, and Monte Carlo tree search requires a generative model (or an episodic simulator that can be copied at any state), whereas most reinforcement learning algorithms require only an episodic simulator.
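This hierarchy can be sketched in a few lines of Python; the class and method names are illustrative assumptions of this sketch, not a standard API:

```python
import random

class ExplicitMDP:
    """Explicit model: full access to P_a(s, s') and R_a(s, s') as nested dicts."""
    def __init__(self, P, R, start_state):
        self.P, self.R, self.start_state = P, R, start_state

    def generative(self, s, a):
        """Derive a generative model G(s, a) -> (s', r) by sampling P_a(s, .)."""
        states, probs = zip(*self.P[a][s].items())
        s_next = random.choices(states, weights=probs)[0]
        return s_next, self.R[a][s][s_next]

class EpisodicSimulator:
    """Derive an episodic simulator by repeatedly applying a generative model."""
    def __init__(self, model):
        self.model, self.s = model, model.start_state

    def reset(self):
        self.s = self.model.start_state
        return self.s

    def step(self, a):
        self.s, r = self.model.generative(self.s, a)
        return self.s, r
```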

Example

Pole balancing example (rendering of the environment from the OpenAI Gym benchmark)

An example of an MDP is the pole-balancing model, which comes from classic control theory.

In this example, the state describes the position and velocity of the cart together with the angle and angular velocity of the pole, the actions are the forces that can be applied to the cart (for example, pushing it to the left or to the right), and a reward is received for every time step in which the pole remains balanced.
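The pole-balancing environment can be interacted with as an episodic simulator through the Gymnasium package (the maintained fork of OpenAI Gym); the random policy below is only a placeholder, and the Gymnasium API version is an assumption of this sketch:

```python
# Requires: pip install gymnasium   (gymnasium >= 0.28 API assumed)
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)           # state: cart position/velocity, pole angle/velocity
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder policy: push left or right at random
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward            # +1 for every step the pole stays up
    done = terminated or truncated
env.close()
print("episode return:", episode_return)
```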

Algorithms


Solutions for MDPs with finite state and action spaces may be found through a variety of methods such as dynamic programming. The algorithms in this section apply to MDPs with finite state and action spaces and explicitly given transition probabilities and reward functions, but the basic concepts may be extended to handle other problem classes, for example using function approximation. Also, some processes with countably infinite state and action spaces can be exactly reduced to ones with finite state and action spaces.[6]

The standard family of algorithms to calculate optimal policies for finite state and action MDPs requires storage for two arrays indexed by state: value $V$, which contains real values, and policy $\pi$, which contains actions. At the end of the algorithm, $\pi$ will contain the solution and $V(s)$ will contain the discounted sum of the rewards to be earned (on average) by following that solution from state $s$.

The algorithm has two steps, (1) a value update and (2) a policy update, which are repeated in some order for all the states until no further changes take place. Both recursively update a new estimation of the optimal policy and state value using an older estimation of those values.

$$V(s):=\sum _{s'}P_{\pi (s)}(s,s')\left(R_{\pi (s)}(s,s')+\gamma V(s')\right)$$
$$\pi (s):=\operatorname {argmax} _{a}\left\{\sum _{s'}P_{a}(s,s')\left(R_{a}(s,s')+\gamma V(s')\right)\right\}$$

Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.[7]
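A direct transcription of the two update steps might look as follows; this is a sketch assuming the explicit model is stored as nested dictionaries `P[a][s][s_next]` and `R[a][s][s_next]`, with every action available in every state:

```python
def value_update(s, V, P, R, pi, gamma):
    """Value update: recompute V(s) under the current policy pi."""
    a = pi[s]
    return sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in P[a][s])

def policy_update(s, V, P, R, gamma):
    """Policy update: make pi(s) greedy with respect to the current V."""
    return max(P, key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2])
                                    for s2 in P[a][s]))
```

Sweeping these two functions over all states in any order that excludes no state permanently, as described above, eventually yields an optimal $V$ and $\pi$.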

Notable variants


Value iteration


In value iteration (Bellman 1957), which is also called backward induction, the $\pi$ function is not used; instead, the value of $\pi(s)$ is calculated within $V(s)$ whenever it is needed. Substituting the calculation of $\pi(s)$ into the calculation of $V(s)$ gives the combined step:

$$V_{i+1}(s):=\max _{a}\left\{\sum _{s'}P_{a}(s,s')\left(R_{a}(s,s')+\gamma V_{i}(s')\right)\right\},$$

where $i$ is the iteration number. Value iteration starts at $i=0$, with $V_{0}$ as a guess of the value function. It then iterates, repeatedly computing $V_{i+1}$ for all states $s$, until $V$ converges with the left-hand side equal to the right-hand side (which is the Bellman equation for this problem). Lloyd Shapley's 1953 paper on stochastic games included the value iteration method for MDPs as a special case,[8] but this was recognized only later.[9]
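A compact sketch of value iteration over the same dictionary representation used earlier; stopping when the largest change falls below a tolerance is a common practical criterion and an assumption of this sketch:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """V_{i+1}(s) = max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma * V_i(s')).
    Assumes every action in P is available in every state."""
    states = list(next(iter(P.values())).keys())
    V = {s: 0.0 for s in states}          # V_0: initial guess
    while True:
        V_new = {
            s: max(sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in P[a][s])
                   for a in P)
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
```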

Policy iteration


In policy iteration (Howard 1960), the value update is repeated until it converges for the current policy (policy evaluation), and then the policy update is performed once (policy improvement); the two phases then alternate. (Policy iteration was invented by Howard to optimize Sears catalogue mailing, which he had been optimizing using value iteration.[10])

Instead of iterating the value update until convergence, policy evaluation may be formulated and solved as a set of linear equations, obtained by treating the value update as an equality rather than an assignment, i.e. requiring $V(s)=\sum _{s'}P_{\pi (s)}(s,s')\left(R_{\pi (s)}(s,s')+\gamma V(s')\right)$ to hold simultaneously for all states $s$. Thus, iterating the value update to convergence can be interpreted as solving these linear equations by relaxation.

This variant has the advantage that there is a definite stopping condition: when the array $\pi$ does not change in the course of applying the policy update to all states, the algorithm is complete.

Policy iteration is usually slower than value iteration for a large number of possible states.
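The linear-equation view of policy evaluation can be sketched with NumPy: for a fixed policy, $(I-\gamma P_{\pi })V=r_{\pi }$, where $r_{\pi }(s)=\sum _{s'}P_{\pi (s)}(s,s')R_{\pi (s)}(s,s')$, is a linear system that can be solved directly instead of iterating the value update. The array layout used below is an assumption of this sketch:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """P: array of shape (A, S, S) with P[a, s, s']; R: same shape with R_a(s, s').
    Alternates exact policy evaluation (linear solve) with a greedy policy update."""
    n_actions, n_states, _ = P.shape
    pi = np.zeros(n_states, dtype=int)                       # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = r_pi
        P_pi = P[pi, np.arange(n_states)]                    # (S, S)
        r_pi = np.sum(P_pi * R[pi, np.arange(n_states)], axis=1)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy with respect to V
        Q = np.sum(P * (R + gamma * V[None, None, :]), axis=2)   # (A, S)
        pi_new = np.argmax(Q, axis=0)
        if np.array_equal(pi_new, pi):
            return pi, V
        pi = pi_new
```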

Modified policy iteration


In modified policy iteration (van Nunen 1976; Puterman & Shin 1978), the value update is repeated only a few times rather than to convergence, and then the policy update is performed once; the two phases then alternate.[11][12]

Prioritized sweeping


In this variant, the steps are preferentially applied to states which are in some way important – whether based on the algorithm (there were large changes in $V$ or $\pi$ around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
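A minimal sketch of one such prioritization, driven by the size of the Bellman error and reusing the array layout of the earlier policy-iteration sketch; this is an illustrative variant under those assumptions, not a fixed standard algorithm:

```python
import heapq
import numpy as np

def prioritized_sweeping_values(P, R, gamma=0.9, tol=1e-6):
    """Update states in order of decreasing Bellman error; when V(s) changes,
    re-queue the predecessors of s, whose own errors may have grown.
    P, R have shape (A, S, S)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)

    def backup(s):  # combined value/policy step at state s
        return np.max(np.sum(P[:, s, :] * (R[:, s, :] + gamma * V), axis=1))

    heap = [(-abs(backup(s) - V[s]), s) for s in range(n_states)]
    heapq.heapify(heap)
    while heap:
        neg_err, s = heapq.heappop(heap)
        if -neg_err < tol:
            continue
        V[s] = backup(s)
        # Predecessors of s: states that can reach s under some action.
        preds = np.nonzero(P[:, :, s].sum(axis=0) > 0)[0]
        for p in preds:
            err = abs(backup(p) - V[p])
            if err > tol:
                heapq.heappush(heap, (-err, int(p)))
    return V
```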

Computational complexity


Algorithms for finding optimal policies with time complexity polynomial in the size of the problem representation exist for finite MDPs. Thus, decision problems based on MDPs are in the computational complexity class P.[13] However, due to the curse of dimensionality, the size of the problem representation is often exponential in the number of state and action variables, limiting exact solution techniques to problems that have a compact representation. In practice, online planning techniques such as Monte Carlo tree search can find useful solutions in larger problems, and, in theory, it is possible to construct online planning algorithms that can find an arbitrarily near-optimal policy with no computational complexity dependence on the size of the state space.[14]

Extensions and generalizations


A Markov decision process is a stochastic game with only one player.

Partial observability

Main article: Partially observable Markov decision process

The solution above assumes that the state $s$ is known when the action is to be taken; otherwise $\pi(s)$ cannot be calculated. When this assumption does not hold, the problem is called a partially observable Markov decision process, or POMDP.

Constrained Markov decision processes


Constrained Markov decision processes (CMDPs) are extensions of Markov decision processes (MDPs). There are three fundamental differences between MDPs and CMDPs.[15]

  • There are multiple costs incurred after applying an action instead of one.
  • CMDPs are solved with linear programs only, and dynamic programming does not work.
  • The final policy depends on the starting state.

The method of Lagrange multipliers applies to CMDPs, and many Lagrangian-based algorithms have been developed, including:

  • Natural policy gradient primal-dual method.[16]

There are a number of applications for CMDPs; they have recently been used in motion planning scenarios in robotics.[17]

Continuous-time Markov decision process


In discrete-time Markov decision processes, decisions are made at discrete time intervals. In continuous-time Markov decision processes, however, decisions can be made at any time the decision maker chooses. Compared with discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., system dynamics defined by ordinary differential equations (ODEs). These kinds of applications arise in queueing systems, epidemic processes, and population processes.

As in discrete-time Markov decision processes, in continuous-time Markov decision processes the agent aims to find the optimal policy that maximizes the expected cumulative reward. The only difference from the standard case is that, due to the continuous nature of the time variable, the sum is replaced by an integral:

$$\max \operatorname {E} _{\pi }\left[\left.\int _{0}^{\infty }\gamma ^{t}r(s(t),\pi (s(t)))\,dt\;\right|s_{0}\right]$$

where $0\leq \gamma <1$.

Discrete space: Linear programming formulation


If the state space and action space are finite, we can use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking multiple actions: it is better to take an action only at the time when the system is transitioning from the current state to another state. Under some conditions,[18] if our optimal value function $V^{*}$ is independent of the state $i$, we will have the following inequality:

$$g\geq R(i,a)+\sum _{j\in S}q(j\mid i,a)h(j)\quad \forall i\in S{\text{ and }}a\in A(i)$$

If there exists a function $h$ satisfying the above inequality, then ${\bar {V}}^{*}$ will be the smallest $g$ for which such a function exists. In order to find ${\bar {V}}^{*}$, we can use the following linear programming model:

  • Primal linear program (P-LP):

$$\begin{aligned}{\text{Minimize}}\quad &g\\{\text{s.t.}}\quad &g-\sum _{j\in S}q(j\mid i,a)h(j)\geq R(i,a)\quad \forall i\in S,\,a\in A(i)\end{aligned}$$

  • Dual linear program (D-LP):

$$\begin{aligned}{\text{Maximize}}\quad &\sum _{i\in S}\sum _{a\in A(i)}R(i,a)y(i,a)\\{\text{s.t.}}\quad &\sum _{i\in S}\sum _{a\in A(i)}q(j\mid i,a)y(i,a)=0\quad \forall j\in S,\\&\sum _{i\in S}\sum _{a\in A(i)}y(i,a)=1,\\&y(i,a)\geq 0\qquad \forall a\in A(i){\text{ and }}\forall i\in S\end{aligned}$$

$y(i,a)$ is a feasible solution to the D-LP if $y(i,a)$ is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution $y^{*}(i,a)$ to the D-LP is said to be an optimal solution if

$$\sum _{i\in S}\sum _{a\in A(i)}R(i,a)y^{*}(i,a)\geq \sum _{i\in S}\sum _{a\in A(i)}R(i,a)y(i,a)$$

for all feasible solutions $y(i,a)$ to the D-LP. Once we have found the optimal solution $y^{*}(i,a)$, we can use it to establish the optimal policies.
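For finite $S$ and $A(i)$, the D-LP above can be handed to an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog and assumes the rates $q(j\mid i,a)$ and rewards $R(i,a)$ are given as NumPy arrays with every action available in every state; this encoding is an assumption of the sketch, not taken from the article:

```python
import numpy as np
from scipy.optimize import linprog

def solve_dlp(q, R):
    """q[i, a, j] = transition rate q(j | i, a); R[i, a] = reward.
    Returns y*(i, a) maximizing sum_{i,a} R(i,a) y(i,a) subject to the D-LP constraints."""
    n_states, n_actions, _ = q.shape
    c = -R.reshape(-1)                                    # linprog minimizes, so negate
    # For every j: sum_{i,a} q(j|i,a) y(i,a) = 0 ...
    A_eq = np.transpose(q, (2, 0, 1)).reshape(n_states, -1)
    b_eq = np.zeros(n_states)
    # ... plus the normalization sum_{i,a} y(i,a) = 1.
    A_eq = np.vstack([A_eq, np.ones((1, n_states * n_actions))])
    b_eq = np.append(b_eq, 1.0)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))  # y(i,a) >= 0
    return res.x.reshape(n_states, n_actions)             # y*(i, a)
```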

Continuous space: Hamilton–Jacobi–Bellman equation


In continuous-time MDPs, if the state space and action space are continuous, the optimal criterion can be found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem:

$$\begin{aligned}V(s(0),0)={}&\max _{a(t)=\pi (s(t))}\int _{0}^{T}r(s(t),a(t))\,dt+D[s(T)]\\{\text{s.t.}}\quad &{\frac {ds(t)}{dt}}=f[t,s(t),a(t)]\end{aligned}$$

Here $D(\cdot )$ is the terminal reward function, $s(t)$ is the system state vector, $a(t)$ is the system control vector we try to find, and $f(\cdot )$ describes how the state vector changes over time. The Hamilton–Jacobi–Bellman equation is as follows:

$$0=\max _{a}\left(r(t,s,a)+{\frac {\partial V(t,s)}{\partial s}}f(t,s,a)\right)$$

We can solve this equation to find the optimal control $a(t)$, which gives us the optimal value function $V^{*}$.

Reinforcement learning

Main article: Reinforcement learning

Reinforcement learning is an interdisciplinary area of machine learning and optimal control whose main objective is finding an approximately optimal policy for MDPs where the transition probabilities and rewards are unknown.[19]

Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities, which are otherwise needed to perform policy iteration. In this setting, transition probabilities and rewards must be learned from experience, i.e. by letting an agent interact with the MDP for a given number of steps. Both on a theoretical and on a practical level, effort is put into maximizing the sample efficiency, i.e. minimizing the number of samples needed to learn a policy whose performance is $\varepsilon$-close to the optimal one (due to the stochastic nature of the process, learning the optimal policy with a finite number of samples is, in general, impossible).

Reinforcement Learning for discrete MDPs


For the purpose of this section, it is useful to define a further function, which corresponds to taking the action $a$ and then continuing optimally (or according to whatever policy one currently has):

$$Q(s,a)=\sum _{s'}P_{a}(s,s')\left(R_{a}(s,s')+\gamma V(s')\right).$$

While this function is also unknown, experience during learning is based on $(s,a)$ pairs (together with the outcome $s'$; that is, "I was in state $s$ and I tried doing $a$, and $s'$ happened"). Thus, one has an array $Q$ and uses experience to update it directly. This is known as Q-learning.
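A tabular Q-learning sketch follows; the step size, exploration schedule, and episode lengths are illustrative choices, and `env` is assumed to expose an episodic reset/step interface like the simulators sketched earlier:

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=5000, alpha=0.1, gamma=0.99, eps=0.1, T=200):
    """Tabular Q-learning: update Q(s, a) from observed (s, a, r, s') transitions only,
    without ever using the transition probabilities P_a explicitly."""
    Q = defaultdict(float)                     # Q[(s, a)], initialized to zero
    for _ in range(episodes):
        s = env.reset()
        for _ in range(T):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s2, r = env.step(a)
            # move Q(s, a) toward r + gamma * max_a' Q(s', a')
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```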

Other scopes


Learning automata

Main article: Learning automata

Another application of MDPs in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. The first detailed learning automata paper is the survey by Narendra and Thathachar (1974), in which learning automata were described explicitly as finite-state automata.[20] Similar to reinforcement learning, a learning automata algorithm also has the advantage of solving the problem when the probabilities or rewards are unknown. The difference between learning automata and Q-learning is that the former technique omits the memory of Q-values and instead updates the action probabilities directly to find the learning result. Learning automata is a learning scheme with a rigorous proof of convergence.[21]

In learning automata theory, a stochastic automaton consists of:

  • a set x of possible inputs,
  • a set Φ = { Φ1, ..., Φs } of possible internal states,
  • a set α = { α1, ..., αr } of possible outputs, or actions, with r ≤ s,
  • an initial state probability vector p(0) = ⟨ p1(0), ..., ps(0) ⟩,
  • a computable function A which after each time step t generates p(t + 1) from p(t), the current input, and the current state, and
  • a function G: Φ → α which generates the output at each time step.

The states of such an automaton correspond to the states of a "discrete-state discrete-parameter Markov process".[22] At each time step t = 0, 1, 2, 3, ..., the automaton reads an input from its environment, updates p(t) to p(t + 1) using A, randomly chooses a successor state according to the probabilities p(t + 1), and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton.[21]
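As a concrete and commonly cited instance of directly updating action probabilities, the linear reward–inaction scheme increases the probability of an action that was just rewarded and leaves the vector unchanged otherwise. The Bernoulli-reward environment and all parameter values below are assumptions of this sketch:

```python
import random

def linear_reward_inaction(p, chosen, rewarded, lam=0.05):
    """One L_{R-I} update of the action-probability vector p:
    if the chosen action was rewarded, move probability mass toward it."""
    if not rewarded:
        return p                                   # inaction on penalty
    return [pi + lam * (1.0 - pi) if i == chosen else (1.0 - lam) * pi
            for i, pi in enumerate(p)]

def run_automaton(reward_prob, steps=10000, lam=0.05):
    """reward_prob[i] = probability that the environment rewards action i."""
    p = [1.0 / len(reward_prob)] * len(reward_prob)
    for _ in range(steps):
        a = random.choices(range(len(p)), weights=p)[0]
        rewarded = random.random() < reward_prob[a]
        p = linear_reward_inaction(p, a, rewarded, lam)
    return p                                       # tends to concentrate on the best action

print(run_automaton([0.2, 0.8, 0.5]))
```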

Category theoretic interpretation


Other than the rewards, a Markov decision process $(S,A,P)$ can be understood in terms of category theory. Namely, let $\mathcal{A}$ denote the free monoid with generating set $A$. Let $\mathbf{Dist}$ denote the Kleisli category of the Giry monad. Then a functor ${\mathcal {A}}\to \mathbf {Dist}$ encodes both the set $S$ of states and the probability function $P$.

In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result $({\mathcal {C}},F:{\mathcal {C}}\to \mathbf {Dist})$ a context-dependent Markov decision process, because moving from one object to another in $\mathcal{C}$ changes the set of available actions and the set of possible states.

Alternative notations


The terminology and notation for MDPs are not entirely settled. There are two main streams: one focuses on maximization problems from contexts like economics, using the terms action, reward, and value, and calling the discount factor β or γ, while the other focuses on minimization problems from engineering and navigation, using the terms control, cost, and cost-to-go, and calling the discount factor α. In addition, the notation for the transition probability varies.

in this article | alternative | comment
action $a$ | control $u$ |
reward $R$ | cost $g$ | $g$ is the negative of $R$
value $V$ | cost-to-go $J$ | $J$ is the negative of $V$
policy $\pi$ | policy $\mu$ |
discounting factor $\gamma$ | discounting factor $\alpha$ |
transition probability $P_{a}(s,s')$ | transition probability $p_{ss'}(a)$ |

In addition, the transition probability is sometimes written $\Pr(s,a,s')$, $\Pr(s'\mid s,a)$ or, rarely, $p_{s's}(a)$.



References

  1. ^ Puterman, Martin L. (1994). Markov decision processes: discrete stochastic dynamic programming. Wiley series in probability and mathematical statistics. Applied probability and statistics section. New York: Wiley. ISBN 978-0-471-61977-2.
  2. ^ Schneider, S.; Wagner, D. H. (1957-02-26). "Error detection in redundant systems". Papers presented at the February 26-28, 1957, western joint computer conference: Techniques for reliability on - IRE-AIEE-ACM '57 (Western). New York, NY, USA: Association for Computing Machinery. pp. 115–121. doi:10.1145/1455567.1455587. ISBN 978-1-4503-7861-1.
  3. ^ Bellman, Richard (1958-09-01). "Dynamic programming and stochastic control processes". Information and Control. 1 (3): 228–239. doi:10.1016/S0019-9958(58)80003-0. ISSN 0019-9958.
  4. ^ a b Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2nd ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6.
  5. ^ Kearns, Michael; Mansour, Yishay; Ng, Andrew (2002). "A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes". Machine Learning. 49 (2/3): 193–208. doi:10.1023/A:1017932429737.
  6. ^ Wrobel, A. (1984). "On Markovian decision models with a finite skeleton". Zeitschrift für Operations Research. 28 (1): 17–27. doi:10.1007/bf01919083. S2CID 2545336.
  7. ^ Reinforcement Learning: Theory and Python Implementation. Beijing: China Machine Press. 2019. p. 44. ISBN 9787111631774.
  8. ^ Shapley, Lloyd (1953). "Stochastic Games". Proceedings of the National Academy of Sciences of the United States of America. 39 (10): 1095–1100. Bibcode:1953PNAS...39.1095S. doi:10.1073/pnas.39.10.1095. PMC 1063912. PMID 16589380.
  9. ^ Kallenberg, Lodewijk (2002). "Finite state and action MDPs". In Feinberg, Eugene A.; Shwartz, Adam (eds.). Handbook of Markov decision processes: methods and applications. Springer. ISBN 978-0-7923-7459-6.
  10. ^ Howard 2002, "Comments on the Origin and Application of Markov Decision Processes".
  11. ^ Puterman, M. L.; Shin, M. C. (1978). "Modified Policy Iteration Algorithms for Discounted Markov Decision Problems". Management Science. 24 (11): 1127–1137. doi:10.1287/mnsc.24.11.1127.
  12. ^ van Nunen, J. A. E. E. (1976). "A set of successive approximation methods for discounted Markovian decision problems". Zeitschrift für Operations Research. 20 (5): 203–208. doi:10.1007/bf01920264. S2CID 5167748.
  13. ^ Papadimitriou, Christos; Tsitsiklis, John (1987). "The Complexity of Markov Decision Processes". Mathematics of Operations Research. 12 (3): 441–450. doi:10.1287/moor.12.3.441. hdl:1721.1/2893. Retrieved November 2, 2023.
  14. ^ Kearns, Michael; Mansour, Yishay; Ng, Andrew (November 2002). "A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes". Machine Learning. 49 (2/3): 193–208. doi:10.1023/A:1017932429737.
  15. ^ Altman, Eitan (1999). Constrained Markov decision processes. Vol. 7. CRC Press.
  16. ^ Ding, Dongsheng; Zhang, Kaiqing; Jovanovic, Mihailo; Basar, Tamer (2020). Natural policy gradient primal-dual method for constrained Markov decision processes. Advances in Neural Information Processing Systems.
  17. ^ Feyzabadi, S.; Carpin, S. (18–22 Aug 2014). "Risk-aware path planning using hierarchical constrained Markov Decision Processes". Automation Science and Engineering (CASE). IEEE International Conference. pp. 297–303.
  18. ^ Continuous-Time Markov Decision Processes. Stochastic Modelling and Applied Probability. Vol. 62. 2009. doi:10.1007/978-3-642-02547-1. ISBN 978-3-642-02546-4.
  19. ^ Shoham, Y.; Powers, R.; Grenager, T. (2003). "Multi-agent reinforcement learning: a critical survey" (PDF). Technical Report, Stanford University: 1–13. Retrieved 2018-12-12.
  20. ^ Narendra, K. S.; Thathachar, M. A. L. (1974). "Learning Automata – A Survey". IEEE Transactions on Systems, Man, and Cybernetics. SMC-4 (4): 323–334. CiteSeerX 10.1.1.295.2280. doi:10.1109/TSMC.1974.5408453. ISSN 0018-9472.
  21. ^ a b Narendra, Kumpati S.; Thathachar, Mandayam A. L. (1989). Learning automata: An introduction. Prentice Hall. ISBN 9780134855585.
  22. ^ Narendra & Thathachar 1974, p. 325 left.
