gsurma/cartpole

OpenAI's CartPole environment solver.
Reinforcement learning solution of OpenAI's CartPole environment.

Check out the corresponding Medium article: Cartpole - Introduction to Reinforcement Learning (DQN - Deep Q-Learning)
A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. (source)
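The termination rule above can be sketched as a small predicate. This is a minimal illustration of the description, not the environment's actual implementation; the constant names are ours.

```python
import math

# Episode-termination rule from the description above (illustrative):
# the pole tilts more than 15 degrees from vertical, or the cart
# moves more than 2.4 units from the center of the track.
ANGLE_LIMIT_RAD = math.radians(15)
POSITION_LIMIT = 2.4

def episode_done(cart_position, pole_angle):
    """Return True when the episode should end."""
    return abs(pole_angle) > ANGLE_LIMIT_RAD or abs(cart_position) > POSITION_LIMIT
```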
Standard DQN with Experience Replay.
- GAMMA = 0.95
- LEARNING_RATE = 0.001
- MEMORY_SIZE = 1000000
- BATCH_SIZE = 20
- EXPLORATION_MAX = 1.0
- EXPLORATION_MIN = 0.01
- EXPLORATION_DECAY = 0.995
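The replay memory and epsilon-greedy exploration schedule implied by these hyperparameters can be sketched as follows. This is an illustrative outline, not the repository's exact code; the class and method names are ours.

```python
import random
from collections import deque

# Hyperparameters from the list above.
MEMORY_SIZE = 1_000_000
BATCH_SIZE = 20
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.01
EXPLORATION_DECAY = 0.995

class ReplayMemory:
    """Fixed-size buffer of transitions plus a decaying exploration rate."""

    def __init__(self):
        self.buffer = deque(maxlen=MEMORY_SIZE)  # oldest transitions drop out
        self.epsilon = EXPLORATION_MAX

    def remember(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self):
        # Uniformly sample a training batch once enough transitions exist.
        if len(self.buffer) < BATCH_SIZE:
            return []
        return random.sample(self.buffer, BATCH_SIZE)

    def decay_exploration(self):
        # Multiplicative epsilon decay, floored at EXPLORATION_MIN.
        self.epsilon = max(EXPLORATION_MIN, self.epsilon * EXPLORATION_DECAY)
```

Uniform sampling from the buffer breaks the temporal correlation between consecutive transitions, which stabilizes Q-learning with a neural network.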
- Dense layer - input: 4, output: 24, activation: relu
- Dense layer - input: 24, output: 24, activation: relu
- Dense layer - input: 24, output: 2, activation: linear
- MSE loss function
- Adam optimizer
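A forward pass through the 4 → 24 → 24 → 2 architecture above can be sketched in plain NumPy. The weights here are random placeholders just to show the shapes; in the actual agent they are learned with the MSE loss and Adam optimizer.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Placeholder weights for the 4 -> 24 -> 24 -> 2 network described above.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 24)), np.zeros(24)
W2, b2 = rng.normal(size=(24, 24)), np.zeros(24)
W3, b3 = rng.normal(size=(24, 2)), np.zeros(2)

def q_values(state):
    """Map a 4-dimensional state to Q-values for the 2 actions."""
    h1 = relu(state @ W1 + b1)   # hidden layer 1, relu
    h2 = relu(h1 @ W2 + b2)      # hidden layer 2, relu
    return h2 @ W3 + b3          # output layer, linear

q = q_values(np.zeros(4))  # one Q-value per action: shape (2,)
```

The 4-dimensional input matches CartPole's observation (cart position, cart velocity, pole angle, pole angular velocity), and the 2 outputs are the Q-values for pushing left or right.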
CartPole-v0 defines "solving" as getting an average reward of 195.0 over 100 consecutive trials. (source)
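The solving criterion is just a moving average over the last 100 episodes; a minimal check might look like this (the function name and threshold constants are ours):

```python
# "Solved" per CartPole-v0: average reward of at least 195.0
# over the 100 most recent consecutive episodes.
SOLVED_AVG = 195.0
WINDOW = 100

def is_solved(episode_rewards):
    """episode_rewards: list of total rewards, one per finished episode."""
    if len(episode_rewards) < WINDOW:
        return False
    recent = episode_rewards[-WINDOW:]
    return sum(recent) / WINDOW >= SOLVED_AVG
```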
Greg (Grzegorz) Surma