POMDPs.jl

MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.

This package provides a core interface for working with Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). The POMDPTools package acts as a "standard library" for the POMDPs.jl interface, providing implementations of commonly used components such as policies, belief updaters, distributions, and simulators.

Our goal is to provide a common programming vocabulary for:

  1. Expressing problems as MDPs and POMDPs.
  2. Writing solver software.
  3. Running simulations efficiently.

POMDPs.jl also integrates with other ecosystems.

For a detailed introduction, check out our Julia Academy course! For help, please post in the GitHub Discussions tab. We welcome contributions from anyone! See CONTRIBUTING.md for information about contributing.

Installation

POMDPs.jl and associated solver packages can be installed using Julia's package manager. For example, to install POMDPs.jl and the QMDP solver package, type the following in the Julia REPL:

```julia
using Pkg; Pkg.add("POMDPs"); Pkg.add("QMDP")
```
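Equivalently, you can press `]` at the REPL prompt to enter the package manager mode and run `add POMDPs QMDP`.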

Quick Start

To run a simple simulation of the classic Tiger POMDP using a policy created by the QMDP solver, you can use the following code (note that POMDPs.jl is not limited to discrete problems with explicitly-defined distributions like this):

```julia
using POMDPs, QuickPOMDPs, POMDPTools, QMDP

m = QuickPOMDP(
    states = ["left", "right"],
    actions = ["left", "right", "listen"],
    observations = ["left", "right"],
    initialstate = Uniform(["left", "right"]),
    discount = 0.95,

    transition = function (s, a)
        if a == "listen"
            return Deterministic(s) # tiger stays behind the same door
        else # a door is opened
            return Uniform(["left", "right"]) # reset
        end
    end,

    observation = function (s, a, sp)
        if a == "listen"
            if sp == "left"
                return SparseCat(["left", "right"], [0.85, 0.15]) # sparse categorical distribution
            else
                return SparseCat(["right", "left"], [0.85, 0.15])
            end
        else
            return Uniform(["left", "right"])
        end
    end,

    reward = function (s, a)
        if a == "listen"
            return -1.0
        elseif s == a # the tiger was found
            return -100.0
        else # the tiger was escaped
            return 10.0
        end
    end
)

solver = QMDPSolver()
policy = solve(solver, m)

rsum = 0.0
for (s, b, a, o, r) in stepthrough(m, policy, "s,b,a,o,r", max_steps=10)
    println("s: $s, b: $([s => pdf(b, s) for s in states(m)]), a: $a, o: $o")
    global rsum += r
end
println("Undiscounted reward was $rsum.")
```
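Beyond stepping through a simulation, POMDPTools also provides simulators for policy evaluation. Below is a minimal sketch using its `RolloutSimulator` to estimate the discounted value of the policy above; it assumes the `m` and `policy` objects from the Quick Start, and `max_steps=100` is an illustrative choice:

```julia
using POMDPs, POMDPTools

# run one rollout and return the sum of discounted rewards
sim = RolloutSimulator(max_steps=100)
r = simulate(sim, m, policy)
println("Discounted reward from one rollout: $r")
```

Averaging the result over many rollouts gives a Monte Carlo estimate of the policy's value.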

For more examples, including examples with visualizations, see the Examples and Gallery of POMDPs.jl Problems sections of the documentation.

Documentation and Tutorials

In addition to the above-mentioned Julia Academy course, detailed documentation and examples can be found here.


Supported Packages

Many packages use the POMDPs.jl interface, including MDP and POMDP solvers, support tools, and extensions to the POMDPs.jl interface. POMDPs.jl and all packages in the JuliaPOMDP project are fully supported on Linux. OSX and Windows are supported for all native solvers*, and most non-native solvers should work, but may require additional configuration.

Tools:

POMDPs.jl itself contains only the core interface for communicating about problem definitions; these packages contain implementations of commonly used components (a usage sketch follows the list):

  • POMDPTools (hosted in this repository)
  • ParticleFilters
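As one example of these components, POMDPTools provides `DiscreteUpdater`, an exact Bayesian belief updater for discrete problems. The following minimal sketch assumes the Tiger model `m` from the Quick Start above:

```julia
using POMDPs, POMDPTools

up = DiscreteUpdater(m)                    # exact discrete Bayesian filter
b = initialize_belief(up, initialstate(m)) # uniform prior over "left" and "right"
b = update(up, b, "listen", "left")        # belief after listening and hearing "left"
println([s => pdf(b, s) for s in states(m)])
```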

Implemented Models:

Many models have been implemented using the POMDPs.jl interface for various projects. This list contains a few commonly used models; a usage sketch follows the list:

  • POMDPModels
  • LaserTag
  • RockSample
  • TagPOMDPProblem
  • DroneSurveillance
  • ContinuumWorld
  • VDPTag2
  • RoombaPOMDPs (Roomba Localization)
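Because these models implement the POMDPs.jl interface, they plug directly into the simulation tools. A minimal sketch, using the predefined `TigerPOMDP` from POMDPModels and a baseline `RandomPolicy` from POMDPTools (the `max_steps` value is illustrative):

```julia
using POMDPs, POMDPModels, POMDPTools

m = TigerPOMDP()                      # classic Tiger problem, predefined in POMDPModels
policy = RandomPolicy(m)              # uniform-random baseline policy
hr = HistoryRecorder(max_steps=20)
h = simulate(hr, m, policy)           # records the full state-action-observation history
println("Discounted reward: $(discounted_reward(h))")
```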

MDP solvers:

| Package | Online/Offline | Continuous States - Actions | Rating³ |
|---|---|---|---|
| DiscreteValueIteration | Offline | N-N | ★★★★★ |
| LocalApproximationValueIteration | Offline | Y-N | ★★ |
| GlobalApproximationValueIteration | Offline | Y-N | ★★ |
| MCTS (Monte Carlo Tree Search) | Online | Y (DPW) - Y (DPW) | ★★★★ |
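As an example of the offline solvers in the table above, DiscreteValueIteration can solve any discrete MDP defined through the interface. A minimal sketch, assuming POMDPModels' `SimpleGridWorld` as the example problem and illustrative solver parameters:

```julia
using POMDPs, POMDPModels, DiscreteValueIteration

mdp = SimpleGridWorld()                 # small discrete MDP from POMDPModels
solver = ValueIterationSolver(max_iterations=100, belres=1e-6)
policy = solve(solver, mdp)             # offline: computes a value function over all states
s = first(states(mdp))
println("Best action at $s: $(action(policy, s))")
```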

POMDP solvers:

| Package | Online/Offline | Continuous States-Actions-Observations | Rating³ |
|---|---|---|---|
| QMDP (suboptimal) | Offline | N-N-N | ★★★★★ |
| FIB (suboptimal) | Offline | N-N-N | ★★ |
| BeliefGridValueIteration | Offline | N-N-N | ★★ |
| SARSOP* | Offline | N-N-N | ★★★★ |
| NativeSARSOP | Offline | N-N-N | ★★★★ |
| ParticleFilterTrees (SparsePFT, PFT-DPW) | Online | Y-Y²-Y | ★★★ |
| BasicPOMCP | Online | Y-N-N¹ | ★★★★ |
| ARDESPOT | Online | Y-N-N¹ | ★★★★ |
| AdaOPS | Online | Y-N-Y | ★★★★ |
| MCVI | Offline | Y-N-Y | ★★ |
| POMDPSolve* | Offline | N-N-N | ★★★ |
| IncrementalPruning | Offline | N-N-N | ★★★ |
| POMCPOW | Online | Y-Y²-Y | ★★★ |
| AEMS | Online | N-N-N | ★★ |
| PointBasedValueIteration | Offline | N-N-N | ★★ |

¹ Will run, but will not converge to the optimal solution.

² Will run, but convergence to the optimal solution is not proven, and it will likely not work well on multidimensional action spaces. See also https://github.com/michaelhlim/VOOTreeSearch.jl.
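In contrast to the offline QMDP example in the Quick Start, the online solvers in the table above search for a good action from the current belief at each step. A minimal sketch using BasicPOMCP, assuming the Tiger model `m` from the Quick Start and an illustrative `tree_queries` budget:

```julia
using POMDPs, POMDPTools, BasicPOMCP

solver = POMCPSolver(tree_queries=1000)  # Monte Carlo tree search over beliefs
planner = solve(solver, m)               # returns an online planner; search runs at action time

for (s, a, o, r) in stepthrough(m, planner, "s,a,o,r", max_steps=10)
    println("s: $s, a: $a, o: $o, r: $r")
end
```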

Reinforcement Learning:

| Package | Continuous States | Continuous Actions | Rating³ |
|---|---|---|---|
| TabularTDLearning | N | N | ★★ |
| DeepQLearning | Y¹ | N | ★★★ |

¹ For POMDPs, the observation is used instead of the state as input to the policy.

³ Subjective rating; file an issue if you believe a rating should be changed:

  • ★★★★★: Reliably computes a solution for every problem.
  • ★★★★: Works well for most problems; may require some configuration or may not support every part of the interface.
  • ★★★: May work well, but could require difficult or significant configuration.
  • ★★: Not recently used (condition unknown); may not conform to the interface exactly or may have package compatibility issues.
  • ★: Not known to run.

Performance Benchmarks:

  • DESPOT

* These packages require non-Julia dependencies.

Citing POMDPs

If POMDPs is useful in your research and you would like to acknowledge it, please cite this paper:

```bibtex
@article{egorov2017pomdps,
  author  = {Maxim Egorov and Zachary N. Sunberg and Edward Balaban and Tim A. Wheeler and Jayesh K. Gupta and Mykel J. Kochenderfer},
  title   = {{POMDP}s.jl: A Framework for Sequential Decision Making under Uncertainty},
  journal = {Journal of Machine Learning Research},
  year    = {2017},
  volume  = {18},
  number  = {26},
  pages   = {1-5},
  url     = {http://jmlr.org/papers/v18/16-300.html}
}
```
