
Detection of tumors on mammography images


delmalih/MIAS-mammography-obj-detection



Installation instructions

Start by cloning this repo:

git clone https://github.com/delmalih/MIAS-mammography-obj-detection

1. Faster R-CNN instructions

  • First, create an environment:

conda create --name faster-r-cnn
conda activate faster-r-cnn
conda install ipython pip
cd MIAS-mammography-obj-detection
pip install -r requirements.txt
cd ..
  • Then, run these commands (ignore if you have already done the FCOS installation):

# install pytorch
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch
export INSTALL_DIR=$PWD

# install pycocotools
cd $INSTALL_DIR
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install

# install cityscapesScripts
cd $INSTALL_DIR
git clone https://github.com/mcordts/cityscapesScripts.git
cd cityscapesScripts/
python setup.py build_ext install

# install apex
cd $INSTALL_DIR
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext

# install PyTorch Detection
cd $INSTALL_DIR
git clone https://github.com/facebookresearch/maskrcnn-benchmark.git
cd maskrcnn-benchmark
python setup.py build develop

cd $INSTALL_DIR
unset INSTALL_DIR

2. RetinaNet instructions

  • First, create an environment:

conda create --name retinanet python=3.6
conda activate retinanet
conda install ipython pip
cd MIAS-mammography-obj-detection
pip install -r requirements.txt
cd ..
pip install tensorflow-gpu==1.9
pip install keras==2.2.5
  • Then, run these commands:

# clone the keras-retinanet repo
git clone https://github.com/fizyr/keras-retinanet
cd keras-retinanet
pip install .
python setup.py build_ext --inplace
  • Finally, replace the keras_retinanet/preprocessing/coco.py file with this file

3. FCOS instructions

  • First, create an environment:

conda create --name fcos
conda activate fcos
conda install ipython pip
cd MIAS-mammography-obj-detection
pip install -r requirements.txt
cd ..

How it works

1. Download the MIAS Database

Run these commands to download the MIAS database:

mkdir mias-db && cd mias-db
wget http://peipa.essex.ac.uk/pix/mias/all-mias.tar.gz
tar -zxvf all-mias.tar.gz
rm all-mias.tar.gz && cd ..

And replace the mias-db/Info.txt file with this one
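Each annotation line of Info.txt follows the public MIAS convention: reference number, background tissue character, abnormality class, then (for abnormal scans) the severity, the x/y coordinates of the lesion centre, and its approximate radius in pixels. A minimal parsing sketch (the helper name is illustrative, not taken from the repo's scripts):

```python
def parse_mias_line(line):
    """Parse one annotation line from the MIAS Info.txt file."""
    fields = line.split()
    record = {"ref": fields[0], "bg_tissue": fields[1], "class": fields[2]}
    if len(fields) >= 7:  # abnormal scan: severity + circle annotation
        record["severity"] = fields[3]
        record["x"], record["y"], record["radius"] = map(int, fields[4:7])
    return record

# A typical abnormal entry:
print(parse_mias_line("mdb001 G CIRC B 535 425 197"))
# → {'ref': 'mdb001', 'bg_tissue': 'G', 'class': 'CIRC', 'severity': 'B', 'x': 535, 'y': 425, 'radius': 197}
```

Normal scans ("NORM" class) carry only the first three fields, which is why the circle fields are optional.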

2. Generate COCO or VOC augmented data

It is possible to generate COCO or VOC annotations from the raw data (the all-mias folder plus the Info.txt annotations file) through two scripts, generate_{COCO|VOC}_annotations.py:

python generate_{COCO|VOC}_annotations.py --images (or -i) <Path to the images folder> \
                                          --annotations (or -a) <Path to the .txt annotations file> \
                                          --output (or -o) <Path to the output folder> \
                                          --aug_fact <Data augmentation factor> \
                                          --train_val_split <Percentage of the train split (default 0.9)>

For example, to generate 10x augmented COCO annotations, run this command:

python generate_COCO_annotations.py --images ../mias-db/ \
                                    --annotations ../mias-db/Info.txt \
                                    --output ../mias-db/COCO \
                                    --aug_fact 10 \
                                    --train_val_split 0.9
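Under the hood, a generator like this has to turn each MIAS circle annotation (centre + radius) into a box the detectors can train on. A rough sketch of that conversion, assuming COCO's [x_min, y_min, width, height] box convention and the 1024x1024 size of MIAS scans; the function name is illustrative, and a faithful converter may additionally need to flip the y axis, since MIAS coordinates use a bottom-left origin:

```python
def circle_to_coco_bbox(cx, cy, radius, img_size=1024):
    """Convert a MIAS centre/radius annotation to a COCO-style
    [x, y, width, height] box, clamped to the image bounds."""
    x_min = max(cx - radius, 0)
    y_min = max(cy - radius, 0)
    x_max = min(cx + radius, img_size)
    y_max = min(cy + radius, img_size)
    return [x_min, y_min, x_max - x_min, y_max - y_min]

print(circle_to_coco_bbox(535, 425, 197))  # → [338, 228, 394, 394]
```

Clamping matters because some lesions sit close to the image border, where the raw circle would extend outside the scan.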

3. How to run a training

3.1 Faster R-CNN

To run a training with the Faster-RCNN:

  • Go to the faster-r-cnn directory: cd faster-r-cnn
  • Change the conda env: conda deactivate && conda activate faster-r-cnn
  • Download the Resnet_101_FPN model
  • Trim the model: python trim_detectron_model.py --pretrained_path e2e_faster_rcnn_R_101_FPN_1x.pth --save_path base_model.pth
  • Edit the maskrcnn-benchmark/maskrcnn_benchmark/config/paths_catalog.py file and put these lines in the DATASETS dictionary:
  DATASETS = {
      ...,
      "mias_train_cocostyle": {
          "img_dir": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/images/train",
          "ann_file": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/annotations/instances_train.json"
      },
      "mias_val_cocostyle": {
          "img_dir": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/images/val",
          "ann_file": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/annotations/instances_val.json"
      },
  }
  • In maskrcnn-benchmark/maskrcnn_benchmark/data/datasets/coco.py, comment out lines 84 to 92:
    # if anno and "segmentation" in anno[0]:
    #     masks = [obj["segmentation"] for obj in anno]
    #     masks = SegmentationMask(masks, img.size, mode='poly')
    #     target.add_field("masks", masks)
    # if anno and "keypoints" in anno[0]:
    #     keypoints = [obj["keypoints"] for obj in anno]
    #     keypoints = PersonKeypoints(keypoints, img.size)
    #     target.add_field("keypoints", keypoints)
  • Run this command:

python train.py --config-file mias_config.yml

3.2 RetinaNet

To run a training with the RetinaNet:

cd retinanet
conda deactivate && conda activate retinanet
# add --compute-val-loss to also compute the validation loss
python train.py --compute-val-loss \
                --tensorboard-dir <Path to the tensorboard directory> \
                --batch-size <Batch size> \
                --epochs <Nb of epochs> \
                coco <Path to the COCO dataset>

And if you want to see the tensorboard, run in another terminal:

tensorboard --logdir <Path to the tensorboard directory>

3.3 FCOS

To run a training with the FCOS Object Detector:

cd fcos
conda deactivate && conda activate fcos
python train.py --config-file <Path to the config file> \
                OUTPUT_DIR <Path to the output dir for the logs>

4. How to run an inference

4.1 Faster R-CNN

To run an inference, you need a pre-trained model. Run this command:

cd faster-r-cnn
conda deactivate && conda activate faster-r-cnn
python inference.py --config-file <Path to the config file> \
                    MODEL.WEIGHT <Path to the weights of the model to load> \
                    TEST.IMS_PER_BATCH <Nb of images per batch>

4.2 RetinaNet

  • Put the images you want to run an inference on in <Name of COCO dataset>/<Name of folder>
  • Run this command:

cd retinanet
conda deactivate && conda activate retinanet
python inference.py --snapshot <Path to the model snapshot> \
                    --set_name <Name of the inference folder in the COCO dataset> \
                    coco <Path to the COCO dataset>

4.3 FCOS

cd fcos
conda deactivate && conda activate fcos
python inference.py --config-file <Path to the config file> \
                    MODEL.WEIGHT <Path to the weights of the model to load> \
                    TEST.IMS_PER_BATCH <Nb of images per batch>

Results

| Metric    | Faster-RCNN | RetinaNet | FCOS   |
|-----------|-------------|-----------|--------|
| mAP       | 98.70%      | 94.97%    | 98.20% |
| Precision | 94.12%      | 100.00%   | 94.44% |
| Recall    | 98.65%      | 94.72%    | 98.20% |
| F1-score  | 96.22%      | 96.93%    | 96.25% |

Poster: Breast Cancer Detection Contest

