MIM Installs OpenMMLab Packages
MIM provides a unified interface for launching and installing OpenMMLab projects and their extensions, and managing the OpenMMLab model zoo.
Package Management
You can use MIM to manage OpenMMLab codebases and install or uninstall them conveniently.
Model Management
You can use MIM to manage the OpenMMLab model zoo, e.g., download checkpoints by name, or search for checkpoints that meet specific criteria.
Unified Entrypoint for Scripts
You can execute any script provided by all OpenMMLab codebases with unified commands. Training, testing and inference become easier than ever. Besides, you can use the `gridsearch` command for vanilla hyper-parameter search.
This project is released under the Apache 2.0 license.
v0.1.1 was released on 13/6/2021.
You can use `.mimrc` for customization. We now support customizing the default values of each sub-command. Please refer to customization.md for details.
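As a purely illustrative sketch of the idea of per-sub-command defaults, a `.mimrc` might override options you would otherwise pass on every invocation. The section and option names below are hypothetical; customization.md documents the real format:

```ini
; hypothetical .mimrc sketch -- see customization.md for the actual format
[train]
gpus = 8
partition = partition_name
```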
We provide some examples of how to build custom projects based on OpenMMLab codebases and MIM in MIM-Example. Without worrying about copying code and scripts from existing codebases, users can focus on developing new components, and MIM helps integrate and run the new project.
Please refer to installation.md for installation.
1. install
command
```bash
# install latest version of mmcv-full
> mim install mmcv-full  # wheel
# install 1.5.0
> mim install mmcv-full==1.5.0
# install latest version of mmcls
> mim install mmcls
# install master branch
> mim install git+https://github.com/open-mmlab/mmclassification.git
# install local repo
> git clone https://github.com/open-mmlab/mmclassification.git
> cd mmclassification
> mim install .
# install extension based on OpenMMLab
> mim install git+https://github.com/xxx/mmcls-project.git
```
api
```python
from mim import install

# install mmcv
install('mmcv-full')

# installing mmcls will automatically install mmcv if it is not installed
install('mmcls')

# install extension based on OpenMMLab
install('git+https://github.com/xxx/mmcls-project.git')
```
2. uninstall
command
```bash
# uninstall mmcv
> mim uninstall mmcv-full
# uninstall mmcls
> mim uninstall mmcls
```
api
```python
from mim import uninstall

# uninstall mmcv
uninstall('mmcv-full')
# uninstall mmcls
uninstall('mmcls')
```
3. list
command
```bash
> mim list
> mim list --all
```
api
```python
from mim import list_package

list_package()
list_package(True)
```
4. search
command
```bash
> mim search mmcls
> mim search mmcls==0.23.0 --remote
> mim search mmcls --config resnet18_8xb16_cifar10
> mim search mmcls --model resnet
> mim search mmcls --dataset cifar-10
> mim search mmcls --valid-field
> mim search mmcls --condition 'batch_size>45,epochs>100'
> mim search mmcls --condition 'batch_size>45 epochs>100'
> mim search mmcls --condition '128<batch_size<=256'
> mim search mmcls --sort batch_size epochs
> mim search mmcls --field epochs batch_size weight
> mim search mmcls --exclude-field weight paper
```
api
```python
from mim import get_model_info

get_model_info('mmcls')
get_model_info('mmcls==0.23.0', local=False)
get_model_info('mmcls', models=['resnet'])
get_model_info('mmcls', training_datasets=['cifar-10'])
get_model_info('mmcls', filter_conditions='batch_size>45,epochs>100')
get_model_info('mmcls', filter_conditions='batch_size>45 epochs>100')
get_model_info('mmcls', filter_conditions='128<batch_size<=256')
get_model_info('mmcls', sorted_fields=['batch_size', 'epochs'])
get_model_info('mmcls', shown_fields=['epochs', 'batch_size', 'weight'])
```
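The `--condition` strings above combine clauses with either a comma or a space, both meaning AND. As a self-contained illustration of that syntax (a sketch of the idea, not MIM's actual parser; the chained form `128<batch_size<=256` is omitted for brevity):

```python
import operator
import re

# Comparison operators supported by this sketch; multi-character
# operators must come first so the regex alternation matches them.
_OPS = {'<=': operator.le, '>=': operator.ge, '==': operator.eq,
        '<': operator.lt, '>': operator.gt}

def matches(record, condition):
    """Return True if `record` (a dict of numeric fields) satisfies
    every clause in `condition`, e.g. 'batch_size>45,epochs>100'.
    Clauses may be separated by commas or whitespace (both mean AND)."""
    for clause in re.split(r'[,\s]+', condition.strip()):
        m = re.fullmatch(r'(\w+)(<=|>=|==|<|>)([\d.]+)', clause)
        if not m:
            raise ValueError(f'cannot parse clause: {clause!r}')
        field, op, value = m.groups()
        if not _OPS[op](record[field], float(value)):
            return False
    return True

print(matches({'batch_size': 64, 'epochs': 120}, 'batch_size>45,epochs>100'))  # True
print(matches({'batch_size': 32, 'epochs': 120}, 'batch_size>45 epochs>100'))  # False
```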
5. download
command
```bash
> mim download mmcls --config resnet18_8xb16_cifar10
> mim download mmcls --config resnet18_8xb16_cifar10 --dest .
```
api
```python
from mim import download

download('mmcls', ['resnet18_8xb16_cifar10'])
download('mmcls', ['resnet18_8xb16_cifar10'], dest_root='.')
```
6. train
command
```bash
# Train models on a single server with CPU by setting `gpus` to 0 and
# 'launcher' to 'none' (if applicable). The training script of the
# corresponding codebase will fail if it doesn't support CPU training.
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0

# Train models on a single server with one GPU
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1

# Train models on a single server with 4 GPUs and pytorch distributed
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
    --launcher pytorch

# Train models on a slurm HPC with one 8-GPU node
> mim train mmcls resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
    --gpus-per-node 8 --partition partition_name --work-dir tmp

# Print help messages of sub-command train
> mim train -h

# Print help messages of sub-command train and the training script of mmcls
> mim train mmcls -h
```
api
```python
from mim import train

train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=0,
      other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=1,
      other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=4,
      launcher='pytorch', other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=8,
      launcher='slurm', gpus_per_node=8, partition='partition_name',
      other_args=('--work-dir', 'tmp'))
```
7. test
command
```bash
# Test models on a single server with 1 GPU, report accuracy
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
    tmp/epoch_3.pth --gpus 1 --metrics accuracy

# Test models on a single server with 1 GPU, save predictions
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
    tmp/epoch_3.pth --gpus 1 --out tmp.pkl

# Test models on a single server with 4 GPUs, pytorch distributed,
# report accuracy
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
    tmp/epoch_3.pth --gpus 4 --launcher pytorch --metrics accuracy

# Test models on a slurm HPC with one 8-GPU node, report accuracy
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
    tmp/epoch_3.pth --gpus 8 --metrics accuracy --partition \
    partition_name --gpus-per-node 8 --launcher slurm

# Print help messages of sub-command test
> mim test -h

# Print help messages of sub-command test and the testing script of mmcls
> mim test mmcls -h
```
api
```python
from mim import test

test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
     checkpoint='tmp/epoch_3.pth', gpus=1,
     other_args=('--metrics', 'accuracy'))
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
     checkpoint='tmp/epoch_3.pth', gpus=1,
     other_args=('--out', 'tmp.pkl'))
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
     checkpoint='tmp/epoch_3.pth', gpus=4, launcher='pytorch',
     other_args=('--metrics', 'accuracy'))
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
     checkpoint='tmp/epoch_3.pth', gpus=8, partition='partition_name',
     launcher='slurm', gpus_per_node=8,
     other_args=('--metrics', 'accuracy'))
```
8. run
command
```bash
# Get the Flops of a model
> mim run mmcls get_flops resnet101_b16x8_cifar10.py

# Publish a model
> mim run mmcls publish_model input.pth output.pth

# Train models on a slurm HPC with one GPU
> srun -p partition --gres=gpu:1 mim run mmcls train \
    resnet101_b16x8_cifar10.py --work-dir tmp

# Test models on a slurm HPC with one GPU, report accuracy
> srun -p partition --gres=gpu:1 mim run mmcls test \
    resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy

# Print help messages of sub-command run
> mim run -h

# Print help messages of sub-command run, list all available scripts in
# codebase mmcls
> mim run mmcls -h

# Print help messages of sub-command run, print the help message of
# training script in mmcls
> mim run mmcls train -h
```
api
```python
from mim import run

run(repo='mmcls', command='get_flops',
    other_args=('resnet101_b16x8_cifar10.py',))
run(repo='mmcls', command='publish_model',
    other_args=('input.pth', 'output.pth'))
run(repo='mmcls', command='train',
    other_args=('resnet101_b16x8_cifar10.py', '--work-dir', 'tmp'))
run(repo='mmcls', command='test',
    other_args=('resnet101_b16x8_cifar10.py', 'tmp/epoch_3.pth',
                '--metrics', 'accuracy'))
```
9. gridsearch
command
```bash
# Parameter search on a single server with CPU by setting `gpus` to 0 and
# 'launcher' to 'none' (if applicable). The training script of the
# corresponding codebase will fail if it doesn't support CPU training.
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0 \
    --search-args '--optimizer.lr 1e-2 1e-3'

# Parameter search on a single server with one GPU, search learning rate
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
    --search-args '--optimizer.lr 1e-2 1e-3'

# Parameter search on a single server with one GPU, search weight_decay
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
    --search-args '--optimizer.weight_decay 1e-3 1e-4'

# Parameter search on a single server with one GPU, search learning rate
# and weight_decay
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
    --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'

# Parameter search on a slurm HPC with one 8-GPU node, search learning
# rate and weight_decay
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
    --partition partition_name --gpus-per-node 8 --launcher slurm \
    --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'

# Parameter search on a slurm HPC with one 8-GPU node, search learning
# rate and weight_decay, max parallel jobs is 2
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
    --partition partition_name --gpus-per-node 8 --launcher slurm \
    --max-jobs 2 \
    --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'

# Print the help message of sub-command search
> mim gridsearch -h

# Print the help message of sub-command search and the help message of the
# training script of codebase mmcls
> mim gridsearch mmcls -h
```
api
```python
from mim import gridsearch

gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=0,
           search_args='--optimizer.lr 1e-2 1e-3',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
           search_args='--optimizer.lr 1e-2 1e-3',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
           search_args='--optimizer.weight_decay 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
           search_args='--optimizer.lr 1e-2 1e-3 '
                       '--optimizer.weight_decay 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
           partition='partition_name', gpus_per_node=8, launcher='slurm',
           search_args='--optimizer.lr 1e-2 1e-3 '
                       '--optimizer.weight_decay 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
           partition='partition_name', gpus_per_node=8, launcher='slurm',
           max_workers=2,
           search_args='--optimizer.lr 1e-2 1e-3 '
                       '--optimizer.weight_decay 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
```
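Conceptually, a grid search expands every combination of the candidate values in `--search-args` into one training run, so two learning rates and two weight decays yield four jobs. A minimal self-contained sketch of that expansion (an illustration of the idea, not MIM's actual implementation):

```python
from itertools import product

def expand_grid(search_space):
    """Yield one {arg: value} dict per point in the Cartesian
    product of the candidate value lists."""
    keys = list(search_space)
    for values in product(*(search_space[k] for k in keys)):
        yield dict(zip(keys, values))

# The same search space as the examples above: 2 x 2 = 4 runs.
space = {'--optimizer.lr': [1e-2, 1e-3],
         '--optimizer.weight_decay': [1e-3, 1e-4]}
runs = list(expand_grid(space))
print(len(runs))  # 4
```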
We appreciate all contributions to improve MIM. Please refer to CONTRIBUTING.md for the contributing guideline.
- MMEngine: OpenMMLab foundational library for training deep learning models.
- MMCV: OpenMMLab foundational library for computer vision.
- MMEval: A unified evaluation library for multiple machine learning libraries.
- MMPreTrain: OpenMMLab pre-training toolbox and benchmark.
- MMagic: OpenMMLab Advanced, Generative and Intelligent Creation toolbox.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
- MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMFlow: OpenMMLab optical flow toolbox and benchmark.
- MMDeploy: OpenMMLab model deployment framework.
- MMRazor: OpenMMLab model compression toolbox and benchmark.
- Playground: A central hub for gathering and showcasing amazing projects built upon OpenMMLab.