deepo


PLEASE NOTE, THE DEEP LEARNING FRAMEWORK WAR IS OVER, THIS PROJECT IS NO LONGER BEING MAINTAINED.


Deepo is an open framework to assemble specialized docker images for deep learning research without pain. It provides a “lego set” of dozens of standard components for preparing deep learning tools and a framework for assembling them into custom docker images.

At the core of Deepo is a Dockerfile generator that

  • allows you to customize your deep learning environment with Lego-like modules
    • define your environment in a single command line,
    • then deepo will generate Dockerfiles with best practices
    • and do all the configuration for you
  • automatically resolves the dependencies for you
    • deepo knows which combos (CUDA/cuDNN/Python/PyTorch/TensorFlow, ..., tons of dependencies) are compatible
    • and will pick the right versions for you
    • and arrange the sequence of installation procedures using topological sorting (see the sketch after this list)
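
To make the dependency resolution concrete, here is a minimal, purely illustrative sketch of topological sorting over module dependencies. The module names and the DEPENDS map below are made up for the example; this is not deepo's actual generator code.

```python
# Illustrative only: order installation steps so that every module is installed
# after the modules it depends on (a topological sort of the dependency graph).
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map, NOT deepo's real module list.
DEPENDS = {
    "cuda": [],
    "python": ["cuda"],
    "theano": ["python"],
    "lasagne": ["theano"],
    "pytorch": ["python"],
}

def install_order(requested):
    """Return an installation order covering `requested` and all of its dependencies."""
    ts = TopologicalSorter()
    seen, stack = set(), list(requested)
    while stack:  # collect the transitive dependency closure
        module = stack.pop()
        if module in seen:
            continue
        seen.add(module)
        deps = DEPENDS.get(module, [])
        ts.add(module, *deps)  # deps must be installed before module
        stack.extend(deps)
    return list(ts.static_order())  # dependencies first

print(install_order(["pytorch", "lasagne"]))
# e.g. ['cuda', 'python', 'pytorch', 'theano', 'lasagne']
```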

We also prepare a series of pre-built docker images, covering both the all-in-one image and single-framework images, in GPU and CPU-only variants (see the tags listed below).




Quick Start (GPU version)

Step 1. Install Docker and nvidia-docker.

Step 2. Obtain the all-in-one image from Docker Hub:

docker pull ufoym/deepo

Users in China who suffer from slow speeds when pulling the image from the public Docker registry can pull deepo images from the China registry mirror by specifying the full path, including the registry, in the docker pull command, for example:

docker pull registry.docker-cn.com/ufoym/deepo

Now you can try this command:

docker run --gpus all --rm ufoym/deepo nvidia-smi

This should work and enable Deepo to use the GPU from inside a docker container. If it does not work, search the issues section of the nvidia-docker GitHub repository -- many solutions are already documented. To get an interactive shell in a container that will not be automatically deleted after you exit, run:

docker run --gpus all -it ufoym/deepo bash
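
Once inside the container, you can also confirm GPU visibility from Python. This is an optional sanity check, assuming the all-in-one image (which bundles PyTorch):

```python
# Run inside the container to confirm that the GPU exposed via --gpus all is visible.
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))
```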

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g.

docker run --gpus all -it -v /host/data:/data -v /host/config:/config ufoym/deepo bash

This will make /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using wrong data.
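
For instance, anything a script writes under /data inside the container lands in /host/data on the host and therefore outlives the container. A small illustrative snippet (the paths and file names are hypothetical):

```python
# Inside the container: /data is backed by /host/data on the host,
# so files written here persist after the container exits.
from pathlib import Path

results = Path("/data/results")
results.mkdir(parents=True, exist_ok=True)
(results / "notes.txt").write_text("experiment notes\n")  # appears at /host/data/results/notes.txt
```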

Please note that some frameworks (e.g. PyTorch) use shared memory to share data between processes, so if multiprocessing is used, the default shared-memory segment size the container runs with may not be enough. You should increase the shared memory size either with the --ipc=host or --shm-size command-line options to docker run.

docker run --gpus all -it --ipc=host ufoym/deepo bash
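
The snippet below sketches the kind of workload this flag matters for: PyTorch DataLoader worker processes pass batches through shared memory, so with the container's default /dev/shm size a loader with num_workers > 0 can fail. The dataset here is synthetic and purely illustrative.

```python
# DataLoader workers (num_workers > 0) exchange batches via shared memory;
# inside a container with a small /dev/shm this can crash unless you use
# --ipc=host or a larger --shm-size.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(512, 3, 224, 224), torch.randint(0, 10, (512,)))
loader = DataLoader(dataset, batch_size=64, num_workers=4)

for images, labels in loader:
    print(images.shape, labels.shape)  # a real training step would go here
    break
```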

Quick Start (CPU version)

Step 1. Install Docker.

Step 2. Obtain the all-in-one CPU image from Docker Hub:

docker pull ufoym/deepo:cpu

Now you can try this command:

docker run -it ufoym/deepo:cpu bash

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g.

docker run -it -v /host/data:/data -v /host/config:/config ufoym/deepo:cpu bash

This will make /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using wrong data.

Please note that some frameworks (e.g. PyTorch) use shared memory to share data between processes, so if multiprocessing is used, the default shared-memory segment size the container runs with may not be enough. You should increase the shared memory size either with the --ipc=host or --shm-size command-line options to docker run.

docker run -it --ipc=host ufoym/deepo:cpu bash

You are now ready to begin your journey.

$ python

>>> import tensorflow
>>> import sonnet
>>> import torch
>>> import keras
>>> import mxnet
>>> import cntk
>>> import chainer
>>> import theano
>>> import lasagne
>>> import caffe
>>> import paddle

$ caffe --version

caffe version 1.0.0

$ darknet

usage: darknet <function>
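
Beyond interactive imports, a quick way to confirm what is installed is to print each framework's version. This is a hypothetical sanity-check script, assuming the all-in-one image (trim the import list for slimmer tags):

```python
# Print the version of each bundled framework (all-in-one image assumed).
import tensorflow
import torch
import keras
import mxnet
import chainer
import theano
import paddle

for module in (tensorflow, torch, keras, mxnet, chainer, theano, paddle):
    print(module.__name__, getattr(module, "__version__", "unknown"))
```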

Note that docker pull ufoym/deepo mentioned in Quick Start will give you a standard image containing all available deep learning frameworks. You can customize your own environment as well.

If you prefer a specific framework rather than an all-in-one image, just append a tag with the name of the framework. Take tensorflow for example:

docker pull ufoym/deepo:tensorflow

To run Jupyter Lab from the all-in-one image:

Step 1. Pull the all-in-one image

docker pull ufoym/deepo

Step 2. Run the image

docker run --gpus all -it -p 8888:8888 -v /home/u:/root --ipc=host ufoym/deepo jupyter lab --no-browser --ip=0.0.0.0 --allow-root --LabApp.allow_origin='*' --LabApp.root_dir='/root'

To build your own customized image with the Dockerfile generator:

Step 1. Prepare the generator

git clone https://github.com/ufoym/deepo.git
cd deepo/generator

Step 2. Generate your customized Dockerfile

For example, if you like pytorch and lasagne, then

python generate.py Dockerfile pytorch lasagne

or, with CUDA 11.1 and cuDNN 8:

python generate.py Dockerfile pytorch lasagne --cuda-ver 11.1 --cudnn-ver 8

This should generate a Dockerfile that contains everything needed to build pytorch and lasagne. Note that the generator handles automatic dependency processing and topologically sorts the installation list, so you don't need to worry about missing dependencies or the list order.

You can also specify the version of Python:

python generate.py Dockerfile pytorch lasagne python==3.6

Step 3. Build your Dockerfile

docker build -t my/deepo .

This may take several minutes as it compiles a few libraries from scratch.

Comparison to alternative deep learning docker images:

|               | modern-deep-learning | dl-docker | jupyter-deeplearning | Deepo         |
|---------------|----------------------|-----------|----------------------|---------------|
| ubuntu        | 16.04                | 14.04     | 14.04                | 18.04         |
| cuda          | X                    | 8.0       | 6.5-8.0              | 8.0-10.2/None |
| cudnn         | X                    | v5        | v2-5                 | v7            |
| onnx          | X                    | X         | X                    | O             |
| theano        | X                    | O         | O                    | O             |
| tensorflow    | O                    | O         | O                    | O             |
| sonnet        | X                    | X         | X                    | O             |
| pytorch       | X                    | X         | X                    | O             |
| keras         | O                    | O         | O                    | O             |
| lasagne       | X                    | O         | O                    | O             |
| mxnet         | X                    | X         | X                    | O             |
| cntk          | X                    | X         | X                    | O             |
| chainer       | X                    | X         | X                    | O             |
| caffe         | O                    | O         | O                    | O             |
| caffe2        | X                    | X         | X                    | O             |
| torch         | X                    | O         | O                    | O             |
| darknet       | X                    | X         | X                    | O             |
| paddlepaddle  | X                    | X         | X                    | O             |
Available tags:

|             | CUDA 11.3 / Python 3.8                             | CPU-only / Python 3.8                |
|-------------|----------------------------------------------------|--------------------------------------|
| all-in-one  | latest, all, all-py38, py38-cu113, all-py38-cu113  | all-py38-cpu, all-cpu, py38-cpu, cpu |
| TensorFlow  | tensorflow-py38-cu113, tensorflow-py38, tensorflow | tensorflow-py38-cpu, tensorflow-cpu  |
| PyTorch     | pytorch-py38-cu113, pytorch-py38, pytorch          | pytorch-py38-cpu, pytorch-cpu        |
| Keras       | keras-py38-cu113, keras-py38, keras                | keras-py38-cpu, keras-cpu            |
| MXNet       | mxnet-py38-cu113, mxnet-py38, mxnet                | mxnet-py38-cpu, mxnet-cpu            |
| Chainer     | chainer-py38-cu113, chainer-py38, chainer          | chainer-py38-cpu, chainer-cpu        |
| Darknet     | darknet-cu113, darknet                             | darknet-cpu                          |
| paddle      | paddle-cu113, paddle                               | paddle-cpu                           |
Tags for older CUDA/Python combinations (CUDA 11.3 / 11.1 / 10.1 / 10.0 / 9.0 with Python 3.6, CUDA 9.0 with Python 2.7, and CPU-only with Python 3.6 or 2.7); the CUDA and Python versions are encoded in each tag name:

| Framework               | Tags |
|-------------------------|------|
| all-in-one              | py36-cu113, all-py36-cu113, py36-cu111, all-py36-cu111, py36-cu101, all-py36-cu101, py36-cu100, all-py36-cu100, py36-cu90, all-py36-cu90, all-py27-cu90, all-py27, py27-cu90, all-py27-cpu, py27-cpu |
| all-in-one with jupyter | all-jupyter-py36-cu90, all-py27-jupyter, py27-jupyter, all-py27-jupyter-cpu, py27-jupyter-cpu |
| Theano                  | theano-py36-cu113, theano-py36-cu111, theano-py36-cu101, theano-py36-cu100, theano-py36-cu90, theano-py27-cu90, theano-py27, theano-py27-cpu |
| TensorFlow              | tensorflow-py36-cu113, tensorflow-py36-cu111, tensorflow-py36-cu101, tensorflow-py36-cu100, tensorflow-py36-cu90, tensorflow-py27-cu90, tensorflow-py27, tensorflow-py27-cpu |
| Sonnet                  | sonnet-py36-cu113, sonnet-py36-cu111, sonnet-py36-cu101, sonnet-py36-cu100, sonnet-py36-cu90, sonnet-py27-cu90, sonnet-py27, sonnet-py27-cpu |
| PyTorch                 | pytorch-py36-cu113, pytorch-py36-cu111, pytorch-py36-cu101, pytorch-py36-cu100, pytorch-py36-cu90, pytorch-py27-cu90, pytorch-py27, pytorch-py27-cpu |
| Keras                   | keras-py36-cu113, keras-py36-cu111, keras-py36-cu101, keras-py36-cu100, keras-py36-cu90, keras-py27-cu90, keras-py27, keras-py27-cpu |
| Lasagne                 | lasagne-py36-cu113, lasagne-py36-cu111, lasagne-py36-cu101, lasagne-py36-cu100, lasagne-py36-cu90, lasagne-py27-cu90, lasagne-py27, lasagne-py27-cpu |
| MXNet                   | mxnet-py36-cu113, mxnet-py36-cu111, mxnet-py36-cu101, mxnet-py36-cu100, mxnet-py36-cu90, mxnet-py27-cu90, mxnet-py27, mxnet-py27-cpu |
| CNTK                    | cntk-py36-cu113, cntk-py36-cu111, cntk-py36-cu101, cntk-py36-cu100, cntk-py36-cu90, cntk-py27-cu90, cntk-py27, cntk-py27-cpu |
| Chainer                 | chainer-py36-cu113, chainer-py36-cu111, chainer-py36-cu101, chainer-py36-cu100, chainer-py36-cu90, chainer-py27-cu90, chainer-py27, chainer-py27-cpu |
| Caffe                   | caffe-py36-cu113, caffe-py36-cu111, caffe-py36-cu101, caffe-py36-cu100, caffe-py36-cu90, caffe-py27-cu90, caffe-py27, caffe-py27-cpu |
| Caffe2                  | caffe2-py36-cu90, caffe2-py36, caffe2, caffe2-py27-cu90, caffe2-py27, caffe2-py36-cpu, caffe2-cpu, caffe2-py27-cpu |
| Torch                   | torch-cu113, torch-cu111, torch-cu101, torch-cu100, torch-cu90, torch, torch-cpu |
| Darknet                 | darknet-cu113, darknet-cu111, darknet-cu101, darknet-cu100, darknet-cu90, darknet, darknet-cpu |
Citation:

@misc{ming2017deepo,
    author = {Ming Yang},
    title = {Deepo: set up deep learning environment in a single command line.},
    year = {2017},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/ufoym/deepo}}
}

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

Deepo is MIT licensed.

