Add support for multithreaded training in the neural net example #2454
Enhance the neural net example with multi-GPU training in parallel.
The updated example looks good. I have made some CMake changes.
Cool, thanks. Feel free to merge/modify your way.
I ran it on multiple devices (two: GTX 1060, AMD Spectre R7) and it works without any issues. This change demonstrates how to use multiple devices, but not so much how to distribute training data across them. How difficult would such an example be to implement?
That item would also allow comparing the speed of multi-GPU versus single-GPU execution.