Cartucho/mAP

mean Average Precision - This code evaluates the performance of your neural net for object recognition.


This code will evaluate the performance of your neural net for object recognition.

In practice, a higher mAP value indicates a better performance of your neural net, given your ground-truth and set of classes.

Citation

This project was developed for the following paper; please consider citing it:

@INPROCEEDINGS{8594067,
  author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots},
  year={2018},
  pages={2336-2341},
}

Explanation

The performance of your neural net will be judged using the mAP criterion defined in the PASCAL VOC 2012 competition. We simply adapted the official Matlab code into Python (in our tests both give the same results).

First (1.), we calculate the Average Precision (AP) for each of the classes present in the ground-truth. Then (2.), we calculate the mAP (mean Average Precision) as the mean of those values.

1. Calculate AP

For each class:

First, your neural net's detection-results are sorted by decreasing confidence and are assigned to ground-truth objects. We have "a match" when they share the same label and an IoU >= 0.5 (Intersection over Union greater than 50%). This "match" is considered a true positive if that ground-truth object has not already been used (to avoid multiple detections of the same object).
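The matching step can be sketched in Python as follows. This is a minimal illustration, not the repository's `main.py`; the dict keys (`class`, `confidence`, `bbox`) are assumptions for this sketch, and the `+ 1` terms follow the PASCAL VOC convention of inclusive pixel coordinates.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (left, top, right, bottom).

    The + 1 terms follow the PASCAL VOC convention of inclusive pixel coordinates.
    """
    inter_w = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]) + 1)
    inter_h = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]) + 1)
    inter = inter_w * inter_h
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
    return inter / (area_a + area_b - inter)


def match_detections(detections, ground_truths, iou_threshold=0.5):
    """Return one True (true positive) / False (false positive) flag per detection.

    Detections are processed in decreasing-confidence order, and each
    ground-truth box may be matched at most once.
    """
    used = [False] * len(ground_truths)
    flags = []
    for det in sorted(detections, key=lambda d: -d["confidence"]):
        best_iou, best_idx = 0.0, None
        for i, gt in enumerate(ground_truths):
            if gt["class"] != det["class"]:
                continue  # a match requires the same label
            overlap = iou(det["bbox"], gt["bbox"])
            if overlap > best_iou:
                best_iou, best_idx = overlap, i
        if best_iou >= iou_threshold and not used[best_idx]:
            used[best_idx] = True
            flags.append(True)   # true positive
        else:
            flags.append(False)  # false positive: low IoU, wrong class, or duplicate
    return flags
```

Note how a second, lower-confidence detection of an already-matched object is counted as a false positive even when its IoU is above the threshold.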

Using this criterion, we calculate the precision/recall curve. E.g.:

Then we compute a version of the measured precision/recall curve with precision monotonically decreasing (shown in light red), by setting the precision for recall r to the maximum precision obtained for any recall r' > r.

Finally, we compute the AP as the area under this curve (shown in light blue) by numerical integration. No approximation is involved since the curve is piecewise constant.
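The envelope-then-integrate procedure can be sketched as below, assuming the detections have already been flagged as true/false positives in decreasing-confidence order (function and variable names are illustrative, not the ones in `main.py`):

```python
def average_precision(tp_flags, num_ground_truth):
    """AP = area under the precision/recall curve after the monotonic envelope.

    tp_flags: one True/False per detection, sorted by decreasing confidence.
    num_ground_truth: number of ground-truth objects of this class.
    """
    if num_ground_truth == 0:
        return 0.0
    precisions, recalls = [], []
    tp, fp = 0, 0
    for is_tp in tp_flags:
        tp, fp = (tp + 1, fp) if is_tp else (tp, fp + 1)
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_ground_truth)
    # Monotonic envelope: precision at recall r becomes the maximum
    # precision at any recall r' >= r (sweep right to left).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Exact area of the piecewise-constant curve: sum rectangle areas
    # over each recall step (false positives contribute zero width).
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap
```

For example, with flags `[True, False, True]` and two ground-truth objects, the envelope is `[1.0, 2/3, 2/3]` and the AP comes out to 5/6.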

2. Calculate mAP

We calculate the mean of all the APs, resulting in an mAP value from 0 to 100%. E.g.:
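This final step is an unweighted mean over classes, e.g. (the class names below are illustrative):

```python
def mean_average_precision(ap_per_class):
    """mAP: unweighted mean of the per-class AP values (multiply by 100 for %)."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# E.g. two classes with APs of 0.85 and 0.55 give an mAP of about 0.70, i.e. ~70%:
# mean_average_precision({"book": 0.85, "chair": 0.55})
```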

Prerequisites

You need to install:

  • Python (the code is run with python main.py)

Optional:

  • plot the results by installing Matplotlib - Linux, macOS and Windows:
    1. python -m pip install -U pip
    2. python -m pip install -U matplotlib
  • show animation by installing OpenCV:
    1. python -m pip install -U pip
    2. python -m pip install -U opencv-python

Quick-start

To start using mAP you need to clone the repo:

git clone https://github.com/Cartucho/mAP

Running the code

Step by step:

  1. Create the ground-truth files
  2. Copy the ground-truth files into the folder input/ground-truth/
  3. Create the detection-results files
  4. Copy the detection-results files into the folder input/detection-results/
  5. Run the code: python main.py

Optional (if you want to see the animation):

  1. Insert the images into the folder input/images-optional/

PASCAL VOC, Darkflow and YOLO users

In the scripts/extra folder you can find additional scripts to convert PASCAL VOC, darkflow and YOLO files into the required format.

Create the ground-truth files

  • Create a separate ground-truth text file for each image.
  • Use matching names for the files (e.g. image: "image_1.jpg", ground-truth: "image_1.txt").
  • In these files, each line should be in the following format:
    <class_name> <left> <top> <right> <bottom> [<difficult>]
  • The difficult parameter is optional; use it if you want the calculation to ignore that specific ground-truth object.
  • E.g. "image_1.txt":
    tvmonitor 2 10 173 238
    book 439 157 556 241
    book 437 246 518 351 difficult
    pottedplant 272 190 316 259
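A line in this format can be parsed with a few lines of Python. This is an illustrative sketch (the function name and returned dict keys are assumptions, not part of the repo's code):

```python
def parse_ground_truth_line(line):
    """Parse '<class_name> <left> <top> <right> <bottom> [<difficult>]'."""
    parts = line.split()
    difficult = parts[-1] == "difficult"
    if difficult:
        parts = parts[:-1]  # drop the trailing flag before reading coordinates
    left, top, right, bottom = (int(v) for v in parts[-4:])
    class_name = " ".join(parts[:-4])  # tolerate class names containing spaces
    return {"class": class_name,
            "bbox": (left, top, right, bottom),
            "difficult": difficult}
```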

Create the detection-results files

  • Create a separate detection-results text file for each image.
  • Use matching names for the files (e.g. image: "image_1.jpg", detection-results: "image_1.txt").
  • In these files, each line should be in the following format:
    <class_name> <confidence> <left> <top> <right> <bottom>
  • E.g. "image_1.txt":
    tvmonitor 0.471781 0 13 174 244
    cup 0.414941 274 226 301 265
    book 0.460851 429 219 528 247
    chair 0.292345 0 199 88 436
    book 0.269833 433 260 506 336
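Detection-results files can be parsed the same way, with the detections returned in the decreasing-confidence order the evaluation needs. Again a minimal sketch with assumed names, not the repo's own parser:

```python
def parse_detections(text):
    """Parse '<class_name> <confidence> <left> <top> <right> <bottom>' lines
    and return them sorted by decreasing confidence."""
    detections = []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        left, top, right, bottom = (int(v) for v in parts[-4:])
        detections.append({
            "class": " ".join(parts[:-5]),  # tolerate class names with spaces
            "confidence": float(parts[-5]),
            "bbox": (left, top, right, bottom),
        })
    return sorted(detections, key=lambda d: -d["confidence"])
```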

Authors:

  • João Cartucho

    Feel free to contribute


