Cartpole

Reinforcement Learning solution to OpenAI's CartPole environment.

Check out the corresponding Medium article: Cartpole - Introduction to Reinforcement Learning (DQN - Deep Q-Learning)

About

A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. (source)
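For readers new to the environment, here is a minimal interaction sketch with a random policy. It assumes the classic gym API (pre-0.26, where `reset()` returns only the observation and `step()` returns a 4-tuple); it is not the repository's training loop:

```python
import gym

env = gym.make("CartPole-v0")
state = env.reset()          # classic gym API: reset() returns the observation only
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()         # random action: 0 = push left, 1 = push right
    state, reward, done, _ = env.step(action)  # classic 4-tuple step API
    total_reward += reward
print("Episode reward:", total_reward)
env.close()
```

A random policy like this typically balances the pole for only a couple dozen timesteps, which is the baseline the DQN below improves on.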

DQN

Standard DQN with experience replay; a minimal agent sketch follows the hyperparameter list below.

Hyperparameters:

  • GAMMA = 0.95
  • LEARNING_RATE = 0.001
  • MEMORY_SIZE = 1000000
  • BATCH_SIZE = 20
  • EXPLORATION_MAX = 1.0
  • EXPLORATION_MIN = 0.01
  • EXPLORATION_DECAY = 0.995
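As a sketch of how these hyperparameters could drive a replay-based DQN agent (class and method names here are illustrative, not necessarily the repository's; states are assumed to be numpy arrays of shape (1, 4)):

```python
import random
from collections import deque

import numpy as np

GAMMA = 0.95
MEMORY_SIZE = 1000000
BATCH_SIZE = 20
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.01
EXPLORATION_DECAY = 0.995

class DQNSolver:
    def __init__(self, model, action_space):
        self.model = model                       # Keras Q-network (see the model sketch below)
        self.action_space = action_space         # 2 actions for CartPole
        self.memory = deque(maxlen=MEMORY_SIZE)  # experience replay buffer
        self.exploration_rate = EXPLORATION_MAX

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        # Epsilon-greedy: take a random action with probability exploration_rate.
        if np.random.rand() < self.exploration_rate:
            return random.randrange(self.action_space)
        return int(np.argmax(self.model.predict(state)[0]))

    def experience_replay(self):
        if len(self.memory) < BATCH_SIZE:
            return
        for state, action, reward, next_state, done in random.sample(self.memory, BATCH_SIZE):
            # Bellman target: r for terminal transitions,
            # r + gamma * max_a' Q(s', a') otherwise.
            q_update = reward
            if not done:
                q_update = reward + GAMMA * np.amax(self.model.predict(next_state)[0])
            q_values = self.model.predict(state)
            q_values[0][action] = q_update
            self.model.fit(state, q_values, verbose=0)
        # Decay exploration toward its floor after each replay step.
        self.exploration_rate = max(EXPLORATION_MIN,
                                    self.exploration_rate * EXPLORATION_DECAY)
```

Sampling transitions at random from the replay buffer breaks the temporal correlation between consecutive steps, which is what makes this update stable for a small network.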

Model structure (see the sketch after this list):

  1. Dense layer - input: 4, output: 24, activation: relu
  2. Dense layer - input: 24, output: 24, activation: relu
  3. Dense layer - input: 24, output: 2, activation: linear
  • MSE loss function
  • Adam optimizer
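A sketch of this network, assuming the tensorflow.keras API (older standalone Keras spells the learning-rate argument `lr` instead of `learning_rate`):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

LEARNING_RATE = 0.001

model = Sequential([
    Dense(24, input_shape=(4,), activation="relu"),  # 4 inputs: cart position/velocity, pole angle/velocity
    Dense(24, activation="relu"),
    Dense(2, activation="linear"),                   # one Q-value per action: push left, push right
])
model.compile(loss="mse", optimizer=Adam(learning_rate=LEARNING_RATE))
```

The final layer is linear because its outputs are unbounded Q-value estimates, not probabilities.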

Performance

CartPole-v0 defines "solving" as getting an average reward of 195.0 over 100 consecutive trials. (source)
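A minimal sketch of that stopping criterion (the scores container and helper name are illustrative):

```python
from collections import deque

scores = deque(maxlen=100)  # total reward of each of the last 100 episodes

def is_solved(scores):
    # Solved once 100 episodes have been recorded and their mean reward is >= 195.0.
    return len(scores) == 100 and sum(scores) / len(scores) >= 195.0

# After each episode: scores.append(episode_reward), then check is_solved(scores).
```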

[Example trial gif]

[Example trial chart]

[Solved trials chart]

Author

Greg (Grzegorz) Surma

PORTFOLIO

GITHUB

BLOG

