Explicit Explore-Exploit Algorithms in Continuous State Spaces

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)


Authors

Mikael Henaff

Abstract

We present a new model-based algorithm for reinforcement learning (RL) which consists of explicit exploration and exploitation phases, and is applicable in large or infinite state spaces. The algorithm maintains a set of dynamics models consistent with current experience and explores by finding policies which induce high disagreement between their state predictions. It then exploits using the refined set of models or experience gathered during exploration. We show that under realizability and optimal planning assumptions, our algorithm provably finds a near-optimal policy with a number of samples that is polynomial in a structural complexity measure which we show to be low in several natural settings. We then give a practical approximation using neural networks and demonstrate its performance and sample efficiency in practice.
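To make the exploration mechanism in the abstract concrete, here is a minimal sketch of disagreement-based exploration with an ensemble of dynamics models. This is not the paper's exact algorithm: the model class, ensemble size, and the variance-based bonus below are illustrative assumptions, with a toy linear model standing in for the neural networks the paper uses.

```python
import numpy as np

class LinearDynamicsModel:
    """Toy dynamics model: predicts the next state as a linear function
    of the current state and action (a stand-in for a neural network)."""

    def __init__(self, state_dim, action_dim, rng):
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])


def disagreement_bonus(models, state, action):
    """Exploration signal: mean per-dimension variance of the ensemble's
    next-state predictions. High disagreement marks regions of the state
    space where the models are uncertain and exploration is informative."""
    preds = np.stack([m.predict(state, action) for m in models])
    return preds.var(axis=0).mean()


# Usage sketch: an explore phase would optimize a policy to maximize this
# bonus; an exploit phase would then plan against the refined model set.
rng = np.random.default_rng(0)
models = [LinearDynamicsModel(state_dim=4, action_dim=2, rng=rng) for _ in range(5)]
s, a = rng.normal(size=4), rng.normal(size=2)
print(disagreement_bonus(models, s, a))
```

Under this reading, the explore phase drives the agent toward (state, action) pairs where the model set still disagrees, which shrinks the set of dynamics models consistent with experience before the exploit phase plans against it.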


Name Change Policy

Requests for name changes in the electronic proceedings will be accepted with no questions asked. However, name changes may cause bibliographic tracking issues. Authors are asked to consider this carefully and discuss it with their co-authors prior to requesting a name change in the electronic proceedings.

Use the "Report an Issue" link to request a name change.

