maskrcnn-benchmark

Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.

This repository was archived by the owner on Oct 31, 2023. It is now read-only.

maskrcnn-benchmark has been deprecated. Please see detectron2, which includes implementations for all models in maskrcnn-benchmark.

This project aims to provide the necessary building blocks for easily creating detection and segmentation models using PyTorch 1.0.


Highlights

  • PyTorch 1.0: RPN, Faster R-CNN and Mask R-CNN implementations that match or exceed Detectron accuracies
  • Very fast: up to 2x faster than Detectron and 30% faster than mmdetection during training. See MODEL_ZOO.md for more details.
  • Memory efficient: uses roughly 500MB less GPU memory than mmdetection during training
  • Multi-GPU training and inference
  • Mixed precision training: trains faster with less GPU memory on NVIDIA Tensor Cores.
  • Batched inference: can perform inference using multiple images per batch per GPU
  • CPU support for inference: runs on the CPU at inference time. See our webcam demo for an example
  • Provides pre-trained models for almost all reference Mask R-CNN and Faster R-CNN configurations with the 1x schedule.

Webcam and Jupyter notebook demo

We provide a simple webcam demo that illustrates how you can use maskrcnn_benchmark for inference:

```bash
cd demo
# by default, it runs on the GPU
# for best results, use min-image-size 800
python webcam.py --min-image-size 800
# can also run it on the CPU
python webcam.py --min-image-size 300 MODEL.DEVICE cpu
# or change the model that you want to use
python webcam.py --config-file ../configs/caffe2/e2e_mask_rcnn_R_101_FPN_1x_caffe2.yaml --min-image-size 300 MODEL.DEVICE cpu
# in order to see the probability heatmaps, pass --show-mask-heatmaps
python webcam.py --min-image-size 300 --show-mask-heatmaps MODEL.DEVICE cpu
# for the keypoint demo
python webcam.py --config-file ../configs/caffe2/e2e_keypoint_rcnn_R_50_FPN_1x_caffe2.yaml --min-image-size 300 MODEL.DEVICE cpu
```

A notebook with the demo can be found in demo/Mask_R-CNN_demo.ipynb.

Installation

Check INSTALL.md for installation instructions.

Model Zoo and Baselines

Pre-trained models, baselines, and comparisons with Detectron and mmdetection can be found in MODEL_ZOO.md.

Inference in a few lines

We provide a helper class to simplify writing inference pipelines using pre-trained models. Here is how we would do it. Run this from the demo folder:

```python
from maskrcnn_benchmark.config import cfg
from predictor import COCODemo

config_file = "../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml"

# update the config options with the config file
cfg.merge_from_file(config_file)
# manually override some options
cfg.merge_from_list(["MODEL.DEVICE", "cpu"])

coco_demo = COCODemo(
    cfg,
    min_image_size=800,
    confidence_threshold=0.7,
)
# load image and then run prediction
image = ...
predictions = coco_demo.run_on_opencv_image(image)
```
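To try it on a real image, you can continue the snippet above along these lines. This is a minimal sketch: input.jpg and output.jpg are placeholder file names, and it assumes (as in the demo's predictor.py) that run_on_opencv_image returns a copy of the image with the detections drawn on it.

```python
import cv2

image = cv2.imread("input.jpg")  # BGR image, as OpenCV loads it
# returns the image with boxes, masks and labels overlaid
result = coco_demo.run_on_opencv_image(image)
cv2.imwrite("output.jpg", result)
```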

Perform training on COCO dataset

For the following examples to work, you need to first install maskrcnn_benchmark.

You will also need to download the COCO dataset. We recommend symlinking the path to the COCO dataset to datasets/ as follows.

We use the minival and valminusminival sets from Detectron.

```bash
# symlink the coco dataset
cd ~/github/maskrcnn-benchmark
mkdir -p datasets/coco
ln -s /path_to_coco_dataset/annotations datasets/coco/annotations
ln -s /path_to_coco_dataset/train2014 datasets/coco/train2014
ln -s /path_to_coco_dataset/test2014 datasets/coco/test2014
ln -s /path_to_coco_dataset/val2014 datasets/coco/val2014
# or use COCO 2017 version
ln -s /path_to_coco_dataset/annotations datasets/coco/annotations
ln -s /path_to_coco_dataset/train2017 datasets/coco/train2017
ln -s /path_to_coco_dataset/test2017 datasets/coco/test2017
ln -s /path_to_coco_dataset/val2017 datasets/coco/val2017
# for pascal voc dataset:
ln -s /path_to_VOCdevkit_dir datasets/voc
```

P.S. COCO_2017_train = COCO_2014_train + valminusminival, COCO_2017_val = minival

You can also configure your own paths to the datasets. For that, all you need to do is modify maskrcnn_benchmark/config/paths_catalog.py to point to the location where your dataset is stored. You can also create a new paths_catalog.py file which implements the same two classes, and pass it as a config argument PATHS_CATALOG during training.
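For orientation, a custom catalog might look roughly like the sketch below. This is an illustrative assumption, not the actual file: the dataset name, directory layout and entries are made up, and you should consult the real maskrcnn_benchmark/config/paths_catalog.py for the exact interface the data loaders expect.

```python
import os

class DatasetCatalog(object):
    DATA_DIR = "datasets"
    DATASETS = {
        # hypothetical entry pointing at your own COCO-style data
        "my_coco_train": {
            "img_dir": "my_dataset/images",
            "ann_file": "my_dataset/annotations/train.json",
        },
    }

    @staticmethod
    def get(name):
        if name in DatasetCatalog.DATASETS:
            attrs = DatasetCatalog.DATASETS[name]
            args = dict(
                root=os.path.join(DatasetCatalog.DATA_DIR, attrs["img_dir"]),
                ann_file=os.path.join(DatasetCatalog.DATA_DIR, attrs["ann_file"]),
            )
            # the factory name tells the builder which dataset class to use
            return dict(factory="COCODataset", args=args)
        raise RuntimeError("Dataset not available: {}".format(name))
```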

Single GPU training

Most of the configuration files that we provide assume that we are running on 8 GPUs. In order to run on fewer GPUs, there are a few possibilities:

1. Run the following without modifications

```bash
python /path_to_maskrcnn_benchmark/tools/train_net.py --config-file "/path/to/config/file.yaml"
```

This should work out of the box and is very similar to what we would do for multi-GPU training. But the drawback is that it will use much more GPU memory. The reason is that the configuration files set a global batch size that is divided over the number of GPUs. So if we only have a single GPU, the batch size for that GPU will be 8x larger, which might lead to out-of-memory errors.

If you have a lot of memory available, this is the easiest solution.

2. Modify the cfg parameters

If you experience out-of-memory errors, you can reduce the global batch size. But this means that you'll also need to change the learning rate, the number of iterations and the learning rate schedule.

Here is an example for Mask R-CNN R-50 FPN with the 1x schedule:

```bash
python tools/train_net.py --config-file "configs/e2e_mask_rcnn_R_50_FPN_1x.yaml" SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 SOLVER.MAX_ITER 720000 SOLVER.STEPS "(480000, 640000)" TEST.IMS_PER_BATCH 1 MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN 2000
```

This follows the scheduling rules from Detectron. Note that we have multiplied the number of iterations by 8x (as well as the learning rate schedules), and we have divided the learning rate by 8x.

We also changed the batch size during testing, but that is generally not necessary because testing requires much less memory than training.

Furthermore, we set MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN 2000 because, in the default training setup, the proposals are selected per batch rather than per image. The value is computed as 1000 x images-per-GPU. Here we have 2 images per GPU, so we set it to 1000 x 2 = 2000. If we had 8 images per GPU, the value would be 8000. Note that this does not apply if MODEL.RPN.FPN_POST_NMS_PER_BATCH is set to False during training. See #672 for more details. A small helper illustrating this arithmetic is sketched below.
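To make the arithmetic concrete, here is a small helper (hypothetical, not part of the codebase) that rescales the 8-GPU reference schedule (16 images, BASE_LR 0.02, 90000 iterations, steps at 60000/80000 — all derived from the 8x-scaled numbers in the command above) and applies the 1000 x images-per-GPU rule:

```python
def scaled_solver_options(images_per_gpu, num_gpus=1,
                          ref_batch=16, ref_lr=0.02,
                          ref_iters=90000, ref_steps=(60000, 80000)):
    """Rescale the reference 1x schedule to a smaller global batch,
    following the linear scaling rule described above."""
    batch = images_per_gpu * num_gpus
    factor = ref_batch / batch  # e.g. 16 / 2 = 8
    return {
        "SOLVER.IMS_PER_BATCH": batch,
        "SOLVER.BASE_LR": ref_lr / factor,           # 0.02 / 8 = 0.0025
        "SOLVER.MAX_ITER": int(ref_iters * factor),  # 90000 * 8 = 720000
        "SOLVER.STEPS": tuple(int(s * factor) for s in ref_steps),
        # proposals are selected per batch, hence 1000 x images-per-GPU
        "MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN": 1000 * images_per_gpu,
    }

print(scaled_solver_options(images_per_gpu=2))
# matches the command-line overrides shown above
```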

Multi-GPU training

We use torch.distributed.launch internally in order to launch multi-GPU training. This utility function from PyTorch spawns as many Python processes as the number of GPUs we want to use, and each Python process will only use a single GPU.

```bash
export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS /path_to_maskrcnn_benchmark/tools/train_net.py --config-file "path/to/config/file.yaml" MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN images_per_gpu x 1000
```

Note that MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN should be set following the rule described in the single-GPU training section.

Mixed precision training

We currently use APEX to add Automatic Mixed Precision support. To enable it, run single-GPU or multi-GPU training as described above and set DTYPE "float16".

```bash
export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS /path_to_maskrcnn_benchmark/tools/train_net.py --config-file "path/to/config/file.yaml" MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN images_per_gpu x 1000 DTYPE "float16"
```

If you want more verbose logging, set AMP_VERBOSE True. See the Mixed Precision Training guide for more details.
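For orientation, below is a minimal stand-alone sketch of what APEX AMP training looks like, using a toy model. maskrcnn_benchmark performs the equivalent wiring for you when DTYPE "float16" is set, so this only illustrates the mechanism (it assumes apex is installed and a CUDA GPU is available).

```python
import torch
from apex import amp

# toy model and optimizer, just to show the AMP calls
model = torch.nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# opt_level "O1" runs eligible ops in float16 and keeps the rest in float32
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for _ in range(10):
    inputs = torch.randn(4, 10, device="cuda")
    loss = model(inputs).pow(2).mean()
    optimizer.zero_grad()
    # scale the loss so float16 gradients do not underflow
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```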

Evaluation

You can test your model directly on a single GPU or on multiple GPUs. Here is an example for Mask R-CNN R-50 FPN with the 1x schedule on 8 GPUs:

```bash
export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS /path_to_maskrcnn_benchmark/tools/test_net.py --config-file "configs/e2e_mask_rcnn_R_50_FPN_1x.yaml" TEST.IMS_PER_BATCH 16
```

To calculate mAP for each class, you can simply modify a few lines in coco_eval.py. See #524 for more details.
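The gist of that modification is to read the per-category slice out of pycocotools' accumulated precision array. A rough sketch of the idea, assuming you have the finished COCOeval object that coco_eval.py builds internally (the helper name is ours, not part of the codebase):

```python
import numpy as np

def per_class_ap(coco_eval, class_names):
    # pycocotools stores precision as [iou_thrs, recall, categories, areas, max_dets]
    precision = coco_eval.eval["precision"]
    results = {}
    for idx, name in enumerate(class_names):
        # all IoU thresholds and recall points, area="all", highest maxDets
        p = precision[:, :, idx, 0, -1]
        # entries of -1 mean "no instances of this category"
        results[name] = float(np.mean(p[p > -1])) if (p > -1).any() else float("nan")
    return results
```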

Abstractions

For more information on some of the main abstractions in our implementation, see ABSTRACTIONS.md.

Adding your own dataset

This implementation adds support for COCO-style datasets. Adding support for training on a new dataset can be done as follows:

```python
import torch

from maskrcnn_benchmark.structures.bounding_box import BoxList

class MyDataset(object):
    def __init__(self, ...):
        # as you would do normally

    def __getitem__(self, idx):
        # load the image as a PIL Image
        image = ...

        # load the bounding boxes as a list of list of boxes
        # in this case, for illustrative purposes, we use
        # x1, y1, x2, y2 order.
        boxes = [[0, 0, 10, 10], [10, 20, 50, 50]]
        # and labels
        labels = torch.tensor([10, 20])

        # create a BoxList from the boxes
        boxlist = BoxList(boxes, image.size, mode="xyxy")
        # add the labels to the boxlist
        boxlist.add_field("labels", labels)

        if self.transforms:
            image, boxlist = self.transforms(image, boxlist)

        # return the image, the boxlist and the idx in your dataset
        return image, boxlist, idx

    def get_img_info(self, idx):
        # get img_height and img_width. This is used if
        # we want to split the batches according to the aspect ratio
        # of the image, as it can be more efficient than loading the
        # image from disk
        return {"height": img_height, "width": img_width}
```

That's it. You can also add extra fields to the boxlist, such as segmentation masks (using structures.segmentation_mask.SegmentationMask), or even your own instance type; a sketch for masks follows below.
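For instance, if your annotations contain polygon masks, attaching them could look roughly like this, continuing the __getitem__ above (the exact SegmentationMask constructor arguments may differ between versions of the codebase, so check structures/segmentation_mask.py):

```python
from maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask

# hypothetical polygons: one list of polygons per box, each polygon a
# flat list of [x0, y0, x1, y1, ...] coordinates
polygons = [[[0, 0, 10, 0, 10, 10]], [[10, 20, 50, 20, 50, 50]]]
masks = SegmentationMask(polygons, image.size, mode="poly")
boxlist.add_field("masks", masks)
```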

For a full example of how the COCODataset is implemented, check maskrcnn_benchmark/data/datasets/coco.py.

Once you have created your dataset, it needs to be added in a couple of places:

  • maskrcnn_benchmark/data/datasets/__init__.py: add it to __all__
  • maskrcnn_benchmark/config/paths_catalog.py: DatasetCatalog.DATASETS and the corresponding if clause in DatasetCatalog.get()

Testing

While the aforementioned example should work for training, we leverage the cocoApi for computing the accuracies during testing. Thus, test datasets should currently follow the cocoApi.

To enable your dataset for testing, add a corresponding if statement in maskrcnn_benchmark/data/datasets/evaluation/__init__.py:

```python
if isinstance(dataset, datasets.MyDataset):
    return coco_evaluation(**args)
```

Finetuning from Detectron weights on custom datasets

Create a script tools/trim_detectron_model.py like here. You can decide which keys should be removed and which should be kept by modifying the script.

Then you can simply point to the converted model path in the config file by changing MODEL.WEIGHT.

For further information, please refer to#15.
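The core of such a script is just loading the checkpoint, dropping the class-dependent layers, and saving the rest. A rough sketch under assumed names (the paths and key prefixes below are hypothetical; inspect state_dict.keys() of your checkpoint to find the right ones):

```python
import torch

SRC = "e2e_mask_rcnn_R_50_FPN_1x.pth"   # placeholder input checkpoint
DST = "trimmed_model.pth"               # placeholder output path
# layers whose shapes depend on the number of classes
DROP_PREFIXES = ("roi_heads.box.predictor", "roi_heads.mask.predictor")

checkpoint = torch.load(SRC, map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)

trimmed = {k: v for k, v in state_dict.items()
           if not k.startswith(DROP_PREFIXES)}
torch.save({"model": trimmed}, DST)
```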

Troubleshooting

If you have issues running or compiling this code, we have compiled a list of common issues in TROUBLESHOOTING.md. If your issue is not present there, please feel free to open a new issue.

Citations

Please consider citing this project in your publications if it helps your research. The following is a BibTeX reference. The BibTeX entry requires the url LaTeX package.

```
@misc{massa2018mrcnn,
  author = {Massa, Francisco and Girshick, Ross},
  title = {{maskrcnn-benchmark: Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch}},
  year = {2018},
  howpublished = {\url{https://github.com/facebookresearch/maskrcnn-benchmark}},
  note = {Accessed: [Insert date here]}
}
```

Projects using maskrcnn-benchmark

License

maskrcnn-benchmark is released under the MIT license. See LICENSE for additional details.

