wsjeon/DistributedTensorFlowExample

Asynchronous learning example working inside localhost.


Dependencies

  • Ubuntu 16.04
  • tmux
  • TensorFlow >=1.0.0
  • Basic knowledge of tmux (e.g., shortcut keys) is required.

How to run

  • For users with a single GPU, run the following command:
$ bash run_single_gpu.sh

In run_single_gpu.sh, you can increase the number of workers by modifying num_workers.
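A launcher of this kind typically derives one localhost address per process from num_workers before starting the parameter server and workers. The sketch below shows that derivation only; the port numbers and variable names are illustrative assumptions, not taken from run_single_gpu.sh.

```shell
# Illustrative only: build host lists for one parameter server and
# num_workers workers, all on localhost.
num_workers=3
ps_hosts="localhost:2222"            # parameter server address (assumed port)
worker_hosts=""
for i in $(seq 0 $((num_workers - 1))); do
  port=$((2223 + i))                 # give each worker its own port
  worker_hosts="${worker_hosts:+$worker_hosts,}localhost:$port"
done
echo "$ps_hosts"
echo "$worker_hosts"
```

Increasing num_workers then only changes how many worker addresses are generated and how many processes the script launches.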

  • For users with multiple GPUs, run the following command:
$ bash run_multi_gpu.sh

In run_multi_gpu.sh, you can increase the number of workers by modifying num_workers. Note that the size of GPU_ID in run_multi_gpu.sh must equal num_workers. For example, if num_workers is equal to 2, GPU_ID might be (0 1), (2 4), and so on.
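The size constraint between GPU_ID and num_workers can be checked and applied per worker roughly as follows. This is a hedged sketch with illustrative names (GPU_ID is written here as a plain list; in run_multi_gpu.sh it is a bash array such as (0 1)), not the actual script logic.

```shell
# Illustrative only: GPU_ID must have one entry per worker; each worker
# is then pinned to its own GPU.
num_workers=2
GPU_ID="0 1"                         # in run_multi_gpu.sh: GPU_ID=(0 1)

set -- $GPU_ID
if [ "$#" -ne "$num_workers" ]; then
  echo "error: GPU_ID must list exactly $num_workers GPUs" >&2
  exit 1
fi

i=0
assignments=""
for gpu in $GPU_ID; do
  # A real launcher would use this to prefix each worker's command, e.g.
  #   CUDA_VISIBLE_DEVICES=$gpu python worker.py --task_index=$i
  assignments="$assignments worker$i:gpu$gpu"
  i=$((i + 1))
done
echo "$assignments"
```

Pinning each worker to a distinct GPU via CUDA_VISIBLE_DEVICES is why the two sizes must match: a mismatch leaves a worker without a GPU or a GPU without a worker.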

References

  • The network architecture is adapted from @ischlag, though not exactly the same; for example, the TensorFlow graph differs slightly.
  • Some functions and ideas come from OpenAI's universe-starter-agent; however, the original code does not support GPU usage.
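The asynchronous scheme these references share — several workers applying updates to shared parameters as soon as their own gradients are ready, without synchronizing steps with each other — can be illustrated with a minimal TensorFlow-free sketch. All names here are illustrative, and the "gradients" are constants rather than real ones.

```python
import threading

# Shared "parameter server": a single scalar parameter.
param = {"w": 0.0}
lock = threading.Lock()  # guards each individual update, not whole steps

def worker(grads, lr=0.1):
    # Each worker pushes its updates independently of the others
    # (asynchronous SGD): no barrier between workers.
    for g in grads:
        with lock:
            param["w"] -= lr * g

# Two workers, each with five dummy gradients of 1.0.
threads = [threading.Thread(target=worker, args=([1.0] * 5,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(round(param["w"], 6))  # 2 workers * 5 grads * 0.1 each => -1.0
```

In the repository's setting the shared dict corresponds to variables hosted on the parameter-server task, and each thread corresponds to a worker process on its own GPU.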

