dmlc/minerva

Note: this repository was archived by the owner on Oct 15, 2019 and is now read-only.

Minerva: a fast and flexible tool for deep learning on multi-GPU. It provides an ndarray programming interface, just like NumPy. Python and C++ bindings are both available. The resulting code can run on CPU or GPU. Multi-GPU support is easy.
Latest News

  • We've removed many of Minerva's dependencies and made it easier to build. In most cases, all you need is:

    ./build.sh

    Please see the wiki page for more information.

  • Minerva's tutorial and API documents are released!

  • Minerva has migrated to dmlc, where you can find many awesome machine learning repositories!

  • Minerva now uses cudnn_v2. Please download and use the new library.

  • Minerva now supports the latest version of Caffe's network configuration protobuf format. If you are using an older version, errors may occur. Please use the tool to upgrade the configuration file.

Overview

Minerva is a fast and flexible tool for deep learning. It provides an NDArray programming interface, just like NumPy. Python and C++ bindings are both available. The resulting code can run on CPU or GPU. Multi-GPU support is easy; please refer to the examples to see how a multi-GPU setting is used.

Quick try

After building and installing Minerva and the owl package (the Python binding) as described in Install Minerva, run ./run_owl_shell.sh in Minerva's root directory and enter:

    >>> x = owl.ones([10, 5])
    >>> y = owl.ones([10, 5])
    >>> z = x + y
    >>> z.to_numpy()

The result will be a 10x5 array filled with the value 2. Minerva supports many NumPy-style ndarray operations. Please see the API document for more information.
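As a quick sanity check, you can compare the result against NumPy. This is a minimal sketch that uses only the calls shown above (owl.ones, the + operator, and to_numpy):

    import numpy as np
    import owl

    x = owl.ones([10, 5])
    y = owl.ones([10, 5])
    z = x + y

    # to_numpy() materializes the result as a NumPy array.
    result = z.to_numpy()
    assert np.allclose(result, 2.0)  # every entry should be 2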

Features

  • N-D array programming interface and easy integration with numpy

    >>> import numpy as np
    >>> x = np.array([1, 2, 3])
    >>> y = owl.from_numpy(x)
    >>> y += 1
    >>> y.to_numpy()
    array([ 2.,  3.,  4.], dtype=float32)

    More is in the API cheatsheet.

  • Automatic parallel execution

    >>> x = owl.zeros([256, 128])
    >>> y = owl.randn([1024, 32], 0.0, 0.01)

    The above x and y will be executed concurrently. How is this achieved?

    See Feature Highlight: Data-flow and lazy evaluation, and the sketch after this list.

  • Multi-GPU, multi-CPU support:

    >>> owl.set_device(gpu0)
    >>> x = owl.zeros([256, 128])
    >>> owl.set_device(gpu1)
    >>> y = owl.randn([1024, 32], 0.0, 0.01)

    The above x and y will be executed on two cards simultaneously. How is this achieved?

    See Feature Highlight: Multi GPU Training, and the sketch after this list.
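Putting the two features together, below is a minimal sketch of a two-GPU computation. It assumes the device handles gpu0 and gpu1 are created with owl.create_gpu_device as in the tutorial (an assumption; see the wiki if your version differs); the to_numpy() calls at the end force evaluation of the lazily built dataflow graph:

    import owl

    # Assumption: owl.create_gpu_device(i) returns a handle for the i-th GPU,
    # as in the Minerva tutorial.
    gpu0 = owl.create_gpu_device(0)
    gpu1 = owl.create_gpu_device(1)

    owl.set_device(gpu0)
    x = owl.zeros([256, 128])             # enqueued on GPU 0, returns immediately
    owl.set_device(gpu1)
    y = owl.randn([1024, 32], 0.0, 0.01)  # enqueued on GPU 1, runs concurrently

    # Nothing has necessarily been computed yet: owl records operations in a
    # dataflow graph and evaluates them lazily. Requesting the data blocks
    # until the corresponding value is ready.
    print(x.to_numpy().sum())
    print(y.to_numpy().std())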

Tutorial and Documents

  • Tutorials and high-level concepts can be found in our wiki page
  • A step-by-step walkthrough of the MNIST example can be found here
  • We also built a tool to directly read Caffe's configure file and train. See the document.
  • API documents can be found here

Performance

We will keep updating the latest performance we could achieve in this section.

Training speed

Training speed (images/second):

               AlexNet   VGGNet   GoogLeNet
    1 card      189.63    14.37       82.47
    2 cards     371.01    29.58      160.53
    4 cards     632.09    50.26      309.27
  • The performance is measured on a machine with 4 GTX Titan cards (see the snippet below for the implied speedups).
  • On each card, we load a minibatch size of 256, 24, and 120 for AlexNet, VGGNet, and GoogLeNet respectively. Therefore, the total minibatch size increases with the number of cards (for example, training AlexNet on 4 cards uses a minibatch size of 1024).
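To make the scaling concrete, the following snippet computes the speedup factors implied by the table above (numbers copied from the table). Since the per-card minibatch size is fixed, this is weak scaling, and the speedups are close to linear:

    # Images/second, copied from the training-speed table above.
    speed = {
        "AlexNet":   {1: 189.63, 2: 371.01, 4: 632.09},
        "VGGNet":    {1: 14.37,  2: 29.58,  4: 50.26},
        "GoogLeNet": {1: 82.47,  2: 160.53, 4: 309.27},
    }

    for net, s in speed.items():
        for cards in (2, 4):
            print(f"{net}: {cards}-card speedup = {s[cards] / s[1]:.2f}x")
    # AlexNet: 2-card speedup = 1.96x, 4-card speedup = 3.33x, etc.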

An end-to-end training

We also provide some end-to-end training code in the owl package, which can load Caffe's model files and perform training. Note that Minerva is not the same tool as Caffe, and we are not focusing on this part of the logic; in fact, we implemented these just to show off Minerva's powerful and flexible programming interface (a Caffe-like network trainer takes around 700~800 lines of Python code; a minimal sketch of such a training loop follows the error curve below). Here is the training error over time compared with Caffe. Note that Minerva can finish GoogLeNet training in less than four days with four GPU cards.

(Figure: training error curve over time, Minerva vs. Caffe.)
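To give a feel for why a Caffe-like trainer fits in a few hundred lines, here is a deliberately tiny sketch of the core training loop. It is plain NumPy code for one fully connected softmax layer, not Minerva's actual trainer; the owl version replaces the NumPy arrays with owl ndarrays so the same updates run on GPU:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # Toy data: 512 samples, 784 features, 10 classes (MNIST-like).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((512, 784)).astype(np.float32)
    labels = rng.integers(0, 10, size=512)
    Y = np.eye(10, dtype=np.float32)[labels]      # one-hot targets

    W = 0.01 * rng.standard_normal((784, 10)).astype(np.float32)
    b = np.zeros(10, dtype=np.float32)
    lr, batch = 0.1, 64

    for epoch in range(5):
        for i in range(0, len(X), batch):
            xb, yb = X[i:i + batch], Y[i:i + batch]
            probs = softmax(xb @ W + b)           # forward pass
            grad = (probs - yb) / len(xb)         # softmax + cross-entropy gradient
            W -= lr * (xb.T @ grad)               # SGD update for weights
            b -= lr * grad.sum(axis=0)            # SGD update for bias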

Testing error rate

We trained several models from scratch with Minerva to demonstrate its correctness. The following table shows the error rates of the different networks under different testing settings.

                          AlexNet   VGGNet   GoogLeNet
    single view top-1      41.6%     31.6%     32.7%
    multi view top-1       39.7%     30.1%     31.3%
    single view top-5      18.8%     11.4%     11.8%
    multi view top-5       17.5%     10.8%     11.0%
  • AlexNet is trained with the solver, except that we didn't use multi-group convolution.
  • GoogLeNet is trained with the quick_solver.
  • We didn't train VGGNet from scratch; we just transformed the model into Minerva's format and tested it.

The models can be found at the following links: AlexNet, GoogLeNet, VGGNet.

You can download the trained models and try them on your own machine using the net_tester script.
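For reference, "single view" in the table above means one center crop per image, while "multi view" averages the network's class scores over several crops before measuring the error; top-1/top-5 check whether the true label is the best, or among the five best, predictions. A minimal NumPy sketch of the metric (the scores arrays here are hypothetical and not tied to net_tester's actual interface):

    import numpy as np

    def topk_error(scores, labels, k):
        # scores: (n_images, n_classes); labels: (n_images,) integer labels.
        topk = np.argsort(scores, axis=1)[:, -k:]       # k highest-scoring classes
        hit = (topk == labels[:, None]).any(axis=1)
        return 1.0 - hit.mean()

    def multiview_topk_error(view_scores, labels, k):
        # view_scores: (n_views, n_images, n_classes); average over views first.
        return topk_error(view_scores.mean(axis=0), labels, k)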

Next Plan

  • Get rid of the Boost library dependency by using Cython. (DONE)
  • Large-scale LSTM example using Minerva.
  • Easy support for user-defined new operations.

License and support

Minerva is provided under the Apache v2 open-source license.

You can use the "issues" tab on GitHub to report bugs. For non-bug issues, please send an email to minerva-support@googlegroups.com. You can also subscribe to the discussion group: https://groups.google.com/forum/#!forum/minerva-support.

Wiki

For more information on how to install, use, or contribute to Minerva, please visit our wiki page: https://github.com/minerva-developers/minerva/wiki
