mean Average Precision - This code evaluates the performance of your neural net for object recognition.
This code will evaluate the performance of your neural net for object recognition.
In practice, a higher mAP value indicates better performance of your neural net, given your ground-truth and set of classes.
This project was developed for the following paper; please consider citing it:

```bibtex
@INPROCEEDINGS{8594067,
  author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots},
  year={2018},
  pages={2336-2341},
}
```
The performance of your neural net will be judged using the mAP criterion defined in the PASCAL VOC 2012 competition. We simply adapted the official Matlab code into Python (in our tests they both give the same results).
First, we calculate the Average Precision (AP) for each of the classes present in the ground-truth. Then we calculate the mAP (mean Average Precision) value.
For each class:
First, your neural net detection-results are sorted by decreasing confidence and are assigned to ground-truth objects. We have "a match" when they share the same label and an IoU >= 0.5 (Intersection over Union greater than 50%). This "match" is considered a true positive if that ground-truth object has not already been used (to avoid multiple detections of the same object).
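The matching criterion above hinges on the IoU computation. A minimal sketch of it, using the inclusive pixel-coordinate convention of PASCAL VOC (hence the `+ 1` terms); this is an illustration, not the repository's exact code:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (left, top, right, bottom),
    with inclusive pixel coordinates (PASCAL VOC convention)."""
    # Coordinates of the intersection rectangle
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, right - left + 1) * max(0, bottom - top + 1)
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
    union = area_a + area_b - inter
    return inter / union
```

A detection counts as "a match" when this value is at least 0.5 and the labels agree.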
Using this criterion, we calculate the precision/recall curve.
Then we compute a version of the measured precision/recall curve with precision monotonically decreasing (shown in light red), by setting the precision for recall r to the maximum precision obtained for any recall r' > r.
Finally, we compute the AP as the area under this curve (shown in light blue) by numerical integration. No approximation is involved since the curve is piecewise constant.
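The interpolation and integration steps above can be sketched in a few lines. This is an illustrative implementation of the idea, not necessarily identical to the repository's code:

```python
def voc_ap(recall, precision):
    """AP in the PASCAL VOC 2012 style: make the precision curve
    monotonically decreasing, then integrate the piecewise-constant
    curve exactly as a sum of rectangles."""
    # Pad the curve with sentinel points at recall 0 and 1
    rec = [0.0] + list(recall) + [1.0]
    prec = [0.0] + list(precision) + [0.0]
    # Sweep right-to-left: precision at r becomes the max precision
    # obtained at any recall r' >= r
    for i in range(len(prec) - 2, -1, -1):
        prec[i] = max(prec[i], prec[i + 1])
    # Sum the rectangle areas wherever recall changes
    ap = 0.0
    for i in range(1, len(rec)):
        ap += (rec[i] - rec[i - 1]) * prec[i]
    return ap
```

For example, a curve with points (recall, precision) = (0.5, 1.0) and (1.0, 0.5) yields an AP of 0.75.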
We calculate the mean of all the APs, resulting in an mAP value from 0 to 100%.
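Once the per-class AP values are known, the mAP is just their arithmetic mean. A tiny sketch, using hypothetical AP values for illustration:

```python
def compute_map(ap_per_class):
    """mAP = arithmetic mean of the per-class AP values (0.0 to 1.0)."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# Hypothetical per-class AP values, for illustration only:
aps = {"tvmonitor": 0.90, "book": 0.75, "pottedplant": 0.60}
print("mAP = {:.2f}%".format(compute_map(aps) * 100))  # mAP = 75.00%
```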
You need to install:
Optional:
- plot the results by installing Matplotlib - Linux, macOS and Windows:
```
python -m pip install -U pip
python -m pip install -U matplotlib
```
- show animation by installing OpenCV:
```
python -m pip install -U pip
python -m pip install -U opencv-python
```
To start using the mAP you need to clone the repo:
```
git clone https://github.com/Cartucho/mAP
```
Step by step:
- Create the ground-truth files
- Copy the ground-truth files into the folder input/ground-truth/
- Create the detection-results files
- Copy the detection-results files into the folder input/detection-results/
- Run the code:
```
python main.py
```
Optional (if you want to see the animation):
- Insert the images into the folder input/images-optional/
In the scripts/extra folder you can find additional scripts to convert PASCAL VOC, darkflow and YOLO files into the required format.
- Create a separate ground-truth text file for each image.
- Use matching names for the files (e.g. image: "image_1.jpg", ground-truth: "image_1.txt").
- In these files, each line should be in the following format:
```
<class_name> <left> <top> <right> <bottom> [<difficult>]
```
- The `difficult` parameter is optional; use it if you want the calculation to ignore that object.
- E.g. "image_1.txt":
```
tvmonitor 2 10 173 238
book 439 157 556 241
book 437 246 518 351 difficult
pottedplant 272 190 316 259
```
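A ground-truth file in this format can be parsed with a few lines of Python. `parse_ground_truth` below is a hypothetical helper for illustration, not the repository's own parser:

```python
def parse_ground_truth(path):
    """Parse one ground-truth file into a list of dicts.
    Each line: <class_name> <left> <top> <right> <bottom> [<difficult>]"""
    objects = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            difficult = parts[-1] == "difficult"
            if difficult:
                parts = parts[:-1]
            # Everything before the last four numbers is the class name
            class_name = " ".join(parts[:-4])
            left, top, right, bottom = map(int, parts[-4:])
            objects.append({"class": class_name,
                            "bbox": (left, top, right, bottom),
                            "difficult": difficult,
                            "used": False})  # flag for true-positive matching
    return objects
```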
- Create a separate detection-results text file for each image.
- Use matching names for the files (e.g. image: "image_1.jpg", detection-results: "image_1.txt").
- In these files, each line should be in the following format:
```
<class_name> <confidence> <left> <top> <right> <bottom>
```
- E.g. "image_1.txt":
```
tvmonitor 0.471781 0 13 174 244
cup 0.414941 274 226 301 265
book 0.460851 429 219 528 247
chair 0.292345 0 199 88 436
book 0.269833 433 260 506 336
```
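Such a file can be loaded and sorted by decreasing confidence, as the evaluation requires. `parse_detections` is a hypothetical helper for illustration only:

```python
def parse_detections(path):
    """Parse one detection-results file and sort by decreasing confidence.
    Each line: <class_name> <confidence> <left> <top> <right> <bottom>"""
    detections = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            # Everything before the last five fields is the class name
            class_name = " ".join(parts[:-5])
            confidence = float(parts[-5])
            left, top, right, bottom = map(int, parts[-4:])
            detections.append((class_name, confidence,
                               (left, top, right, bottom)))
    # Evaluation walks detections in order of decreasing confidence
    detections.sort(key=lambda d: d[1], reverse=True)
    return detections
```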