Computer Science > Machine Learning

arXiv:2401.06604 (cs)
[Submitted on 12 Jan 2024 (v1), last revised 18 Mar 2024 (this version, v3)]

Title: Identifying Policy Gradient Subspaces

Abstract: Policy gradient methods hold great potential for solving complex continuous control tasks. Still, their training efficiency can be improved by exploiting structure within the optimization problem. Recent work indicates that supervised learning can be accelerated by leveraging the fact that gradients lie in a low-dimensional and slowly-changing subspace. In this paper, we conduct a thorough evaluation of this phenomenon for two popular deep policy gradient methods on various simulated benchmark tasks. Our results demonstrate the existence of such gradient subspaces despite the continuously changing data distribution inherent to reinforcement learning. These findings reveal promising directions for future work on more efficient reinforcement learning, e.g., through improving parameter-space exploration or enabling second-order optimization.
Comments: Published as a conference paper at ICLR 2024
Subjects: Machine Learning (cs.LG)
ACM classes: I.2.6
Cite as: arXiv:2401.06604 [cs.LG]
 (or arXiv:2401.06604v3 [cs.LG] for this version)
 https://doi.org/10.48550/arXiv.2401.06604
arXiv-issued DOI via DataCite

Submission history

From: Jan Schneider
[v1] Fri, 12 Jan 2024 14:40:55 UTC (2,030 KB)
[v2] Mon, 15 Jan 2024 14:39:10 UTC (2,030 KB)
[v3] Mon, 18 Mar 2024 09:51:00 UTC (2,208 KB)
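
The abstract reports that policy gradients concentrate in a low-dimensional, slowly changing subspace, but it does not spell out how such a subspace is measured. The sketch below is an illustrative assumption, not the authors' procedure: it stacks flattened gradient vectors collected over training steps and checks what fraction of each gradient's norm is captured by the top-k principal directions of that batch. The function name, k, and the synthetic data are hypothetical.

# Illustrative sketch (assumption, not the paper's method): measure how much of
# each gradient lies in the span of the top-k principal directions of a batch
# of gradients, via an SVD. Requires only NumPy.
import numpy as np

def subspace_fraction(grad_matrix, k):
    """For each row (one flattened gradient), return the fraction of its squared
    norm contained in the span of the top-k right singular vectors of grad_matrix."""
    _, _, vt = np.linalg.svd(grad_matrix, full_matrices=False)
    basis = vt[:k]                                   # (k, num_params), orthonormal rows
    projected = grad_matrix @ basis.T @ basis        # projection onto the subspace
    return (np.linalg.norm(projected, axis=1) ** 2
            / np.linalg.norm(grad_matrix, axis=1) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for policy gradients: 200 gradients in a 1000-dim
    # parameter space, mostly confined to a 10-dim subspace plus small noise.
    grads = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 1000))
    grads += 0.05 * rng.normal(size=(200, 1000))
    print(subspace_fraction(grads, k=10).mean())     # close to 1.0

In an actual reinforcement-learning run, the rows of grad_matrix would be flattened policy-network gradients gathered across update steps; a mean fraction near 1 for small k would indicate a low-dimensional gradient subspace of the kind the paper studies.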