Evolutionary strategies (deep neuroevolution) in PyTorch using MPI

This implementation was made to be as simple and efficient as possible.
A reference implementation can be found here (in TensorFlow, using Redis).
It is based on two papers from Uber AI Labs: here and here.

Implementation

This was made for use on an HPC cluster using MPI via mpi4py, although it can also be run on a single machine. For efficiency, each generation every process scatters only the positive fitness, negative fitness, and noise index per evaluated policy to all other processes. The noise itself is placed in a block of shared memory on each node for fast access and a low memory footprint.
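As a rough illustration of that communication pattern (a hedged sketch, not the repository's actual code; sizes and variable names are assumptions), each process maps a node-local shared-memory noise block once, and thereafter only a few numbers per evaluated policy travel between processes each generation:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
node = comm.Split_type(MPI.COMM_TYPE_SHARED)  # one shared-memory communicator per node

n_noise = 1_000_000  # size of the shared noise block (assumption)
itemsize = np.dtype(np.float32).itemsize
nbytes = n_noise * itemsize if node.rank == 0 else 0
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=node)
buf, _ = win.Shared_query(0)  # every rank on the node maps the same block
noise = np.ndarray(buffer=buf, dtype=np.float32, shape=(n_noise,))
if node.rank == 0:
    noise[:] = np.random.RandomState(0).randn(n_noise)
node.Barrier()

# per generation: evaluate a mirrored pair of perturbations locally, then share
# only three numbers per policy with every other process
pos_fit, neg_fit, noise_idx = 1.0, -0.5, 42  # placeholder results
results = comm.allgather((pos_fit, neg_fit, noise_idx))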

How to run

  • create the conda environment: conda env create -n es_env -f env.yml
  • example usages: simple_example.py, obj.py, nsra.py
  • example configs are in config/

conda activate es_env
mpirun -np {num_procs} python simple_example.py configs/simple_conf.json

Make sure that you insert the line below before you create your neural network, as the initial creation sets the initial parameters, which must be deterministic across all MPI processes:

torch.random.manual_seed({seed})
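For example, a minimal sketch of the required ordering on every MPI process (the seed value and network below are placeholders, not the repository's own policy class):

import torch
import torch.nn as nn

torch.random.manual_seed(123)  # same seed on every process, before any network is built
net = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
# every process now holds identical initial parameters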

General info

  • In order to define a policy, create a src.nn.nn.BaseNet (a simple extension of a torch.nn.Module) and pass it to a Policy along with an src.nn.optimizers.Optimizer and a float value for the noise standard deviation. An example of this can be seen in simple_example.py.
  • If you wish to share the noise using shared memory and MPI, instantiate the NoiseTable using NoiseTable.create_shared(...). Otherwise, if you wish to use your own method of sharing noise or to run sequentially, simply create the noise table using its constructor and pass your noise to it like this: NoiseTable(my_noise, n_params). A construction sketch follows this list.
  • NoiseTable.create_shared(...) will throw an error if fewer than 2 MPI processes are used.
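A rough construction sketch based only on the calls named above (the import path, parameter count, and noise size are assumptions; see simple_example.py for the real usage):

import numpy as np
from src.core.noisetable import NoiseTable  # import path is an assumption; check the repository layout

n_params = 10_000  # number of policy parameters (placeholder)

# single process / custom sharing: build the table from noise you provide
my_noise = np.random.RandomState(0).randn(1_000_000).astype(np.float32)
table = NoiseTable(my_noise, n_params)

# with 2 or more MPI processes, let the table allocate node-local shared memory instead:
# table = NoiseTable.create_shared(...)  # arguments elided; see simple_example.py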
