YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)



Paper YOLO v4: https://arxiv.org/abs/2004.10934

Paper Scaled YOLO v4: https://arxiv.org/abs/2011.08036 (use the ScaledYOLOv4 repository to reproduce the results)

More details in articles on Medium:

Manual: https://github.com/AlexeyAB/darknet/wiki

Discussion:

About the Darknet framework: http://pjreddie.com/darknet/



Figure: AP50:95 vs FPS (Tesla V100) for Scaled-YOLOv4. Paper: https://arxiv.org/abs/2011.08036


Figure: AP50:95 / AP50 vs FPS (Tesla V100) on modern GPUs. Paper: https://arxiv.org/abs/2004.10934

tkDNN-TensorRT accelerates YOLOv4 by ~2x for batch=1 and by 3x-4x for batch=4.

GeForce RTX 2080 Ti:

| Network Size | Darknet, FPS (avg) | tkDNN TensorRT FP32, FPS | tkDNN TensorRT FP16, FPS | OpenCV FP16, FPS | tkDNN TensorRT FP16 batch=4, FPS | OpenCV FP16 batch=4, FPS | tkDNN Speedup |
|---|---|---|---|---|---|---|---|
| 320 | 100 | 116 | 202 | 183 | 423 | 430 | 4.3x |
| 416 | 82 | 103 | 162 | 159 | 284 | 294 | 3.6x |
| 512 | 69 | 91 | 134 | 138 | 206 | 216 | 3.1x |
| 608 | 53 | 62 | 103 | 115 | 150 | 150 | 2.8x |
| Tiny 416 | 443 | 609 | 790 | 773 | 1774 | 1353 | 3.5x |
| Tiny 416 CPU Core i7 7700HQ | 3.4 | - | - | 42 | - | 39 | 12x |

YouTube videos of results:

Yolo v4 / Scaled Yolo v4

Others: https://www.youtube.com/user/pjreddie/videos

How to evaluate AP of YOLOv4 on the MS COCO evaluation server

  1. Download and unzip the test-dev2017 dataset from the MS COCO server: http://images.cocodataset.org/zips/test2017.zip
  2. Download the list of images for the Detection task and replace the paths with yours: https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt
  3. Download the yolov4.weights file (245 MB): yolov4.weights (Google-drive mirror: yolov4.weights)
  4. The content of the file cfg/coco.data should be:

```
classes = 80
train = <replace with your path>/trainvalno5k.txt
valid = <replace with your path>/testdev2017.txt
names = data/coco.names
backup = backup
eval = coco
```

  5. Create a /results/ folder next to the ./darknet executable file
  6. Run validation: ./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights
  7. Rename the file /results/coco_results.json to detections_test-dev2017_yolov4_results.json and compress it to detections_test-dev2017_yolov4_results.zip
  8. Submit the file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox) task

How to evaluate FPS of YOLOv4 on GPU

  1. Compile Darknet with GPU=1 CUDNN=1 CUDNN_HALF=1 OPENCV=1 in the Makefile
  2. Download the yolov4.weights file (245 MB): yolov4.weights (Google-drive mirror: yolov4.weights)
  3. Get any .avi/.mp4 video file (preferably not more than 1920x1080, to avoid CPU bottlenecks)
  4. Run one of these two commands and look at the AVG FPS:
  • including video capturing + NMS + drawing bboxes: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -dont_show -ext_output
  • excluding video capturing + NMS + drawing bboxes: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -benchmark

Pre-trained models

There are weights files for different cfg-files (trained on the MS COCO dataset):

FPS on RTX 2070 (R) and Tesla V100 (V):

Yolo v3 models
Yolo v2 models

Put the weights file next to the compiled darknet.exe.

You can get the cfg-files from the path darknet/cfg/.

Requirements

Yolo v4 in other frameworks

Datasets

Improvements in this repository

  • developed the state-of-the-art object detector YOLOv4
  • added state-of-the-art models: CSP, PRN, EfficientNet
  • added layers: [conv_lstm], [scale_channels] SE/ASFF/BiFPN, [local_avgpool], [sam], [Gaussian_yolo], [reorg3d] (fixed [reorg]), fixed [batchnorm]
  • added the ability to train recurrent models (with layers conv-lstm [conv_lstm] / conv-rnn [crnn]) for accurate detection on video
  • added data augmentation: [net] mixup=1 cutmix=1 mosaic=1 blur=1. Added activations: SWISH, MISH, NORM_CHAN, NORM_CHAN_SOFTMAX
  • added the ability to train with GPU processing while using CPU RAM, to increase the mini_batch_size and improve accuracy (instead of batch-norm sync)
  • improved binary neural network performance 2x-4x for detection on CPU and GPU, if you trained your own weights using this XNOR-net model (1-bit inference): https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_xnor.cfg
  • improved neural network performance by ~7% by fusing 2 layers into 1: Convolutional + Batch-norm
  • improved performance: 2x faster detection on Volta/Turing GPUs (Tesla V100, GeForce RTX, ...) using Tensor Cores, if CUDNN_HALF is defined in the Makefile or darknet.sln
  • improved performance ~1.2x on FullHD and ~2x on 4K for detection on video (file/stream) using darknet detector demo ...
  • improved data augmentation performance 3.5x for training (using OpenCV SSE/AVX functions instead of hand-written ones) - removes a bottleneck for training on multi-GPU or Volta GPUs
  • improved detection and training performance on Intel CPUs with AVX (Yolo v3 ~85% faster)
  • optimized memory allocation during network resizing when random=1
  • optimized GPU initialization for detection - we use batch=1 initially instead of re-initializing with batch=1
  • added correct calculation of mAP, F1, IoU and Precision-Recall using the command darknet detector map ...
  • added drawing of a chart of average loss and accuracy (mAP, with the -map flag) during training
  • run ./darknet detector demo ... -json_port 8070 -mjpeg_port 8090 as a JSON and MJPEG server to get results over the network, using your own software or a web browser
  • added calculation of anchors for training
  • added an example of detection and tracking of objects: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp
  • run-time tips and warnings if you use an incorrect cfg-file or dataset
  • added support for Windows
  • many other code fixes...

And an added manual: How to train Yolo v4-v2 (to detect your custom objects)

Also, you might be interested in a simplified repository that implements INT8 quantization (+30% speedup, -1% mAP): https://github.com/AlexeyAB/yolo2_light

How to use on the command line

On Linux use ./darknet instead of darknet.exe, like this: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights

On Linux the executable file ./darknet is in the root directory, while on Windows it is in the directory \build\darknet\x64

  • Yolo v4 COCO - image: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25
  • Output coordinates of objects: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output dog.jpg
  • Yolo v4 COCO - video: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
  • Yolo v4 COCO - WebCam 0: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0
  • Yolo v4 COCO for net-videocam - Smart WebCam: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg
  • Yolo v4 - save the result to video file res.avi: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -out_filename res.avi
  • Yolo v3 Tiny COCO - video: darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights test.mp4
  • JSON and MJPEG server that allows multiple connections from your software or a web browser on ip-address:8070 and 8090: ./darknet detector demo ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
  • Yolo v3 Tiny on GPU #1: darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -i 1 test.mp4
  • Alternative method, Yolo v4 COCO - image: darknet.exe detect cfg/yolov4.cfg yolov4.weights -i 0 -thresh 0.25
  • Train on Amazon EC2 and watch the mAP & loss chart via a URL like http://ec2-35-160-228-91.us-west-2.compute.amazonaws.com:8090 in Chrome/Firefox (Darknet should be compiled with OpenCV): ./darknet detector train cfg/coco.data yolov4.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map
  • 186 MB Yolo9000 - image: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights
  • Remember to put data/9k.tree and data/coco9k.map in the same folder as your app if you use the C++ API to build an app
  • To process a list of images data/train.txt and save detection results to the file result.json use: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output -dont_show -out result.json < data/train.txt
  • To process a list of images data/train.txt and save detection results to result.txt use:
    darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -dont_show -ext_output < data/train.txt > result.txt
  • Pseudo-labelling - to process a list of images data/new_train.txt and save detection results in Yolo training-label format for each image as <image_name>.txt (this way you can increase the amount of training data) use: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
  • To calculate anchors: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
  • To check accuracy mAP@IoU=50: darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
  • To check accuracy mAP@IoU=75: darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75
For using a network video-camera mjpeg-stream with any Android smartphone:

  1. Download mjpeg-stream software for the Android phone: IP Webcam or Smart WebCam

  2. Connect your Android phone to the computer by WiFi (through a WiFi router) or USB

  3. Start Smart WebCam on your phone

  4. Replace the address below with the one shown in the phone application (Smart WebCam) and launch:

  • Yolo v4 COCO model: darknet.exe detector demo data/coco.data yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0

How to compile on Linux/macOS (using CMake)

The CMakeLists.txt will attempt to find installed optional dependencies like CUDA, cuDNN and ZED, and build against them. It will also create a shared-object library so you can use darknet for code development.

Install PowerShell if you do not already have it (guide here).

To update CMake on Ubuntu, it's better to follow the guide here: https://apt.kitware.com/

Using vcpkg

Open a shell and type these commands

```
PS Code/>         git clone https://github.com/AlexeyAB/darknet
PS Code/>         cd darknet
PS Code/darknet>  ./build.ps1 -UseVCPKG -EnableOPENCV -EnableCUDA -EnableCUDNN
```

(add the option -EnableOPENCV_CUDA if you want to build OpenCV with CUDA support - very slow to build!). If you open the build.ps1 script, at the beginning you will find all available switches.

Using manually provided libraries

Open a shell and type these commands

```
PS Code/>         git clone https://github.com/AlexeyAB/darknet
PS Code/>         cd darknet
PS Code/darknet>  ./build.ps1 -EnableOPENCV -EnableCUDA -EnableCUDNN
```

(remove options like -EnableCUDA or -EnableCUDNN if you are not interested in them). If you open the build.ps1 script, at the beginning you will find all available switches.

How to compile on Linux (using make)

Just do make in the darknet directory. (You can try to compile and run it on Google Colab in the cloud (link; press the «Open in Playground» button at the top-left corner) and watch the video (link).) Before running make, you can set such options in the Makefile (link):

  • GPU=1 to build with CUDA to accelerate by using the GPU (CUDA should be in /usr/local/cuda)
  • CUDNN=1 to build with cuDNN v5-v7 to accelerate training by using the GPU (cuDNN should be in /usr/local/cudnn)
  • CUDNN_HALF=1 to build for Tensor Cores (on Titan V / Tesla V100 / DGX-2 and later) - speeds up detection 3x, training 2x
  • OPENCV=1 to build with OpenCV 4.x/3.x/2.4.x - allows detecting on video files and video streams from network cameras or web-cams
  • DEBUG=1 to build a debug version of Yolo
  • OPENMP=1 to build with OpenMP support to accelerate Yolo by using a multi-core CPU
  • LIBSO=1 to build the library darknet.so and the binary runnable file uselib that uses this library. You can run it like so: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib test.mp4. To learn how to use this SO library from your own code, look at the C++ example https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp, or run it in such a way: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov4.cfg yolov4.weights test.mp4
  • ZED_CAMERA=1 to build the library with ZED 3D camera support (the ZED SDK should be installed), then run LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov4.cfg yolov4.weights zed_camera
  • You also need to specify which graphics card the code is generated for. This is done by setting ARCH=. If you use a version newer than CUDA 11 you also need to edit line 20 of the Makefile and remove -gencode arch=compute_30,code=sm_30 \ as Kepler GPU support was dropped in CUDA 11. You can also drop the general ARCH= and just uncomment the ARCH= for your graphics card.

To run Darknet on Linux, use the examples from this article, just with ./darknet instead of darknet.exe, i.e. use this command: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights

How to compile on Windows (using CMake)

Requires:

In Windows:

  • Start (button) -> All programs -> CMake -> CMake (gui) ->
  • in CMake: enter the input path to the darknet source and the output path for the binaries -> Configure (button) -> Optional platform for generator: x64 -> Finish -> Generate -> Open Project ->
  • in MS Visual Studio: select x64 and Release -> Build -> Build solution
  • find the executable file darknet.exe in the output path for the binaries that you specified

How to compile on Windows (using vcpkg)

This is the recommended approach to build Darknet on Windows.

  1. Install Visual Studio 2017 or 2019. In case you need to download it, please go here: Visual Studio Community

  2. Install CUDA (at least v10.0), enabling VS Integration during installation.

  3. Open PowerShell (Start -> All programs -> Windows PowerShell) and type these commands:

```
PS Code/>         git clone https://github.com/AlexeyAB/darknet
PS Code/>         cd darknet
PS Code/darknet>  .\build.ps1 -UseVCPKG -EnableOPENCV -EnableCUDA -EnableCUDNN
```

(add the option -EnableOPENCV_CUDA if you want to build OpenCV with CUDA support - very slow to build! - or remove options like -EnableCUDA or -EnableCUDNN if you are not interested in them). If you open the build.ps1 script, at the beginning you will find all available switches.

How to train with multi-GPU

  1. First train it on 1 GPU for about 1000 iterations: darknet.exe detector train cfg/coco.data cfg/yolov4.cfg yolov4.conv.137

  2. Then stop, and using the partially-trained model /backup/yolov4_1000.weights run training with multiple GPUs (up to 4): darknet.exe detector train cfg/coco.data cfg/yolov4.cfg /backup/yolov4_1000.weights -gpus 0,1,2,3

If you get NaN, then for some datasets it is better to decrease the learning rate; for 4 GPUs set learning_rate = 0.00065 (i.e. learning_rate = 0.00261 / GPUs). In this case also increase burn_in 4x in your cfg-file, i.e. use burn_in = 4000 instead of 1000. A quick sanity check of this rule is sketched below.
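A minimal sketch of the scaling rule, assuming the yolov4.cfg defaults learning_rate=0.00261 and burn_in=1000 (just arithmetic, not part of darknet itself):

```cpp
#include <cstdio>

// Scale single-GPU hyper-parameters for multi-GPU training, per the rule
// above: divide learning_rate by the number of GPUs, multiply burn_in by it.
int main() {
    const float base_learning_rate = 0.00261f; // single-GPU default (yolov4.cfg)
    const int   base_burn_in       = 1000;     // single-GPU default (yolov4.cfg)
    const int   num_gpus           = 4;

    std::printf("learning_rate = %.5f\n", base_learning_rate / num_gpus); // ~0.00065
    std::printf("burn_in       = %d\n",   base_burn_in * num_gpus);       // 4000
    return 0;
}
```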

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

How to train (to detect your custom objects)

(to train the old Yolo v2 yolov2-voc.cfg, yolov2-tiny-voc.cfg, yolo-voc.cfg, yolo-voc.2.0.cfg, ... click the link)

Training Yolo v4 (and v3):

  1. For training cfg/yolov4-custom.cfg download the pre-trained weights file (162 MB): yolov4.conv.137 (Google drive mirror: yolov4.conv.137)

  2. Create the file yolo-obj.cfg with the same content as yolov4-custom.cfg (or copy yolov4-custom.cfg to yolo-obj.cfg) and change classes= in each of the 3 [yolo] layers, and filters= in the [convolutional] layer before each [yolo] layer.

So if classes=1 then it should be filters=18; if classes=2 then write filters=21.

(Do not write the formula filters=(classes + 5)x3 itself into the cfg-file - write the computed value.)

(Generally filters depends on classes, coords and the number of masks, i.e. filters=(classes + coords + 1)*<number of mask>, where mask is the indices of anchors. If mask is absent, then filters=(classes + coords + 1)*num. A small calculator is sketched after the cfg excerpt below.)

So, for example, for 2 objects your file yolo-obj.cfg should differ from yolov4-custom.cfg in the following lines in each of the 3 [yolo] layers:

```
[convolutional]
filters=21

[yolo]
classes=2
```
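A minimal sketch of that rule, useful for double-checking the value before editing the cfg (coords=4 and 3 masks per [yolo] layer are the defaults described above):

```cpp
#include <cstdio>

// filters for the [convolutional] layer before each [yolo] layer:
// filters = (classes + coords + 1) * <number of mask>
int yolo_filters(int classes, int coords = 4, int num_masks = 3) {
    return (classes + coords + 1) * num_masks;
}

int main() {
    std::printf("classes=1 -> filters=%d\n", yolo_filters(1)); // 18
    std::printf("classes=2 -> filters=%d\n", yolo_filters(2)); // 21
    return 0;
}
```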
  3. Create the file obj.names in the directory build\darknet\x64\data\, with object names - each on a new line

  4. Create the file obj.data in the directory build\darknet\x64\data\, containing (where classes = number of objects):

```
classes = 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
```
  5. Put the image files (.jpg) of your objects in the directory build\darknet\x64\data\obj\

  6. You should label each object in the images of your dataset. Use this visual GUI software for marking bounding boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark

It will create a .txt file for each .jpg image file in the same directory, with the same name but a .txt extension, and put into that file, for each object on a new line, the object number and the object coordinates on this image:

<object-class> <x_center> <y_center> <width> <height>

Where:

  • <object-class> - integer object number, from 0 to (classes-1)
  • <x_center> <y_center> <width> <height> - float values relative to the width and height of the image; they can range over (0.0, 1.0]
  • for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
  • attention: <x_center> <y_center> are the center of the rectangle (not the top-left corner); a conversion sketch follows the example below

For example, for img1.jpg an img1.txt will be created containing:

```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```
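A minimal conversion sketch, assuming your raw labels are absolute pixel boxes given by top-left corner and size (the function name here is illustrative, not part of darknet):

```cpp
#include <cstdio>

// Convert an absolute pixel box (top-left corner + size) into the YOLO
// label format above: class id, then center and size relative to the image.
void print_yolo_label(int object_class,
                      float left, float top, float box_w, float box_h,
                      float image_w, float image_h) {
    float x_center = (left + box_w / 2.0f) / image_w;
    float y_center = (top  + box_h / 2.0f) / image_h;
    std::printf("%d %f %f %f %f\n",
                object_class, x_center, y_center, box_w / image_w, box_h / image_h);
}

int main() {
    // e.g. a 150x100 px box with top-left corner (300, 200) in a 1280x720 image
    print_yolo_label(0, 300, 200, 150, 100, 1280, 720);
    return 0;
}
```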
  7. Create the file train.txt in the directory build\darknet\x64\data\, with the filenames of your images, each filename on a new line, with paths relative to darknet.exe (a small generator is sketched below), for example containing:

```
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
```
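A minimal generator sketch (C++17), assuming the images were placed in data/obj as in step 5:

```cpp
#include <filesystem>
#include <fstream>

// Write data/train.txt with one image path per line, relative to the
// darknet executable, as required by the train= entry in obj.data.
int main() {
    std::ofstream train_list("data/train.txt");
    for (const auto& entry : std::filesystem::directory_iterator("data/obj"))
        if (entry.path().extension() == ".jpg")
            train_list << "data/obj/" << entry.path().filename().string() << "\n";
    return 0;
}
```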
  8. Download the pre-trained weights for the convolutional layers and put them in the directory build\darknet\x64

  9. Start training by using the command line: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137

     To train on Linux use the command: ./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137 (just use ./darknet instead of darknet.exe)

     • (the file yolo-obj_last.weights will be saved to build\darknet\x64\backup\ every 100 iterations)
     • (the file yolo-obj_xxxx.weights will be saved to build\darknet\x64\backup\ every 1000 iterations)
     • (to disable the Loss window use darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show, if you train on a computer without a monitor, like a cloud Amazon EC2 instance)
     • (to see the mAP & loss chart during training on a remote server without a GUI, use the command darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map, then open the URL http://ip-address:8090 in a Chrome/Firefox browser)

9.1. For training with mAP (mean average precision) calculation every 4 epochs, set valid=valid.txt or train.txt in the obj.data file and run: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

  10. After training is complete, get the result yolo-obj_final.weights from the path build\darknet\x64\backup\

  • After every 100 iterations you can stop and later resume training from that point. For example, after 2000 iterations you can stop training, and later just resume it using: darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights

    (in the original repository https://github.com/pjreddie/darknet the weights file is saved only once every 10,000 iterations if (iterations > 1000))

  • Also, you can get a result earlier than the full 45,000 iterations.

Note: if during training you see nan values in the avg (loss) field, then training is going wrong; but if nan appears in some other lines, then training is going well.

Note: if you changed width= or height= in your cfg-file, the new width and height must be divisible by 32.

Note: after training, use this command for detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

Note: if an Out of memory error occurs, then in the .cfg-file you should increase subdivisions to 16, 32 or 64: link

How to train tiny-yolo (to detect your custom objects):

Do all the same steps as for the full yolo model described above, with these exceptions:

  • Download the file with the first 29 convolutional layers of yolov4-tiny: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.conv.29 (or get this file from yolov4-tiny.weights using the command: darknet.exe partial cfg/yolov4-tiny-custom.cfg yolov4-tiny.weights yolov4-tiny.conv.29 29)
  • Make your custom model yolov4-tiny-obj.cfg based on cfg/yolov4-tiny-custom.cfg instead of yolov4.cfg
  • Start training: darknet.exe detector train data/obj.data yolov4-tiny-obj.cfg yolov4-tiny.conv.29

For training Yolo based on other models (DenseNet201-Yolo or ResNet50-Yolo), you can download and get pre-trained weights as shown in this file: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd. If you made a custom model that isn't based on other models, you can train it without pre-trained weights; random initial weights will be used.

When should I stop training:

Usually 2000 iterations per class (object) are sufficient, but not less than the number of training images and not less than 6000 iterations in total. For a more precise definition of when to stop training, use the following manual:

  1. During training you will see varying error indicators; you should stop when the 0.XXXXXXX avg value no longer decreases:

```
Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8
```

```
9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds
```

  • 9002 - iteration number (number of batches)
  • 0.60730 avg - average loss (error) - the lower, the better

When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can range from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset). A small log-parsing sketch for tracking this value follows below.
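A minimal tracking sketch: it reads the training output from stdin and prints one `iteration avg_loss` pair per line, so the trend is easy to inspect or plot (the log format assumed is the one shown above; other lines are skipped):

```cpp
#include <cstdio>
#include <iostream>
#include <string>

// Extract (iteration, average loss) from darknet training lines such as:
//   9002: 0.211667, 0.60730 avg, 0.001000 rate, ...
int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        int iteration;
        float loss, avg_loss;
        if (std::sscanf(line.c_str(), "%d: %f, %f avg", &iteration, &loss, &avg_loss) == 3)
            std::printf("%d %f\n", iteration, avg_loss);
    }
    return 0;
}
```

For example: ./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137 | tee train.log, then run the parser with train.log on stdin.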

Or, if you train with the -map flag, you will see the mAP indicator Last accuracy mAP@0.5 = 18.50% in the console - this indicator is better than loss, so train while mAP increases.

  2. Once training is stopped, you should take some of the last .weights files from darknet\build\darknet\x64\backup and choose the best of them:

For example, you stopped training after 9000 iterations, but one of the earlier weights (7000, 8000, 9000) may give the best result. This can happen due to overfitting. Overfitting is the case where the model can detect objects in images from the training dataset, but cannot detect objects in any other images. You should get the weights from the Early Stopping Point:


To get weights from Early Stopping Point:

2.1. First, in your file obj.data you must specify the path to the validation dataset: valid = valid.txt (the format of valid.txt is the same as train.txt). If you don't have validation images, just copy data\train.txt to data\valid.txt.

2.2. If training stopped after 9000 iterations, use these commands to validate some of the previous weights:

(If you use another GitHub repository, then use darknet.exe detector recall ... instead of darknet.exe detector map ...)

  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights

Then compare the last output lines for each set of weights (7000, 8000, 9000):

Choose the weights file with the highest mAP (mean average precision) or IoU (intersection over union).

For example, if yolo-obj_8000.weights gives the highest mAP, then use these weights for detection.

Or just train with the -map flag:

darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

Then you will see the mAP chart (red line) in the loss-chart window. mAP will be calculated every 4 epochs using the valid=valid.txt file specified in the obj.data file (1 epoch = images_in_train_txt / batch iterations; e.g. with 8000 training images and batch=64, 1 epoch is 125 iterations).

(to change the max x-axis value, change the max_batches= parameter to 2000*classes, e.g. max_batches=6000 for 3 classes)


Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

  • IoU (intersection over union) - average intersection over union of objects and detections for a certain threshold = 0.24

  • mAP (mean average precision) - mean of the average precisions for each class, where average precision is the average of 11 points on the PR-curve for each possible threshold (each probability of detection) for the same class (Precision-Recall in terms of PascalVOC, where Precision = TP/(TP+FP) and Recall = TP/(TP+FN)), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf ; a small sketch of this computation follows below

mAP is the default precision metric in the PascalVOC competition and is the same as the AP50 metric in the MS COCO competition. In terms of Wikipedia, the indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.
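A minimal sketch of the PascalVOC-style 11-point AP for one class, under the simplifying assumption that each detection has already been matched against ground truth (marked true/false positive at the chosen IoU threshold):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One detection for a single class, already matched against ground truth.
struct Detection { float confidence; bool is_true_positive; };

// 11-point interpolated average precision: sort detections by confidence,
// accumulate precision/recall, then average the maximum precision found
// at recall >= 0.0, 0.1, ..., 1.0.
float average_precision_11pt(std::vector<Detection> dets, int num_ground_truth) {
    std::sort(dets.begin(), dets.end(),
              [](const Detection& a, const Detection& b) { return a.confidence > b.confidence; });

    std::vector<float> precision, recall;
    int tp = 0, fp = 0;
    for (const Detection& d : dets) {
        if (d.is_true_positive) ++tp; else ++fp;
        precision.push_back(float(tp) / float(tp + fp));
        recall.push_back(float(tp) / float(num_ground_truth));
    }

    float ap = 0.0f;
    for (int i = 0; i <= 10; ++i) {
        float level = i / 10.0f;
        float best = 0.0f; // max precision at recall >= level
        for (size_t k = 0; k < recall.size(); ++k)
            if (recall[k] >= level) best = std::max(best, precision[k]);
        ap += best / 11.0f;
    }
    return ap;
}

int main() {
    std::vector<Detection> dets = {{0.9f, true}, {0.8f, true}, {0.7f, false}, {0.6f, true}};
    std::printf("AP = %f\n", average_precision_11pt(dets, 4)); // ~0.68
    return 0;
}
```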



How to improve object detection:

  1. Before training:
  • set the flag random=1 in your .cfg-file - it will increase precision by training Yolo at different resolutions: link

  • increase the network resolution in your .cfg-file (height=608, width=608 or any value that is a multiple of 32) - it will increase precision

  • check that every object you want to detect is labeled in your dataset - no object in your dataset should be left without a label. Most training issues come from wrong labels in the dataset (labels produced by some conversion script, marked with a third-party tool, ...). Always check your dataset using: https://github.com/AlexeyAB/Yolo_mark

  • if your loss is very high and mAP is very low, is training going wrong? Run training with the -show_imgs flag at the end of the training command; do you see correct bounding boxes of objects (in windows or in the files aug_...jpg)? If not, your training dataset is wrong.

  • for each object that you want to detect, there must be at least 1 similar object in the training dataset with about the same shape, side of object, relative size, angle of rotation, tilt and illumination. It is desirable that your training dataset include images of objects at different scales, rotations, lightings, from different sides and on different backgrounds - you should preferably have 2000 different images per class or more, and you should train for 2000*classes iterations or more

  • it is desirable that your training dataset include images with non-labeled objects that you do not want to detect - negative samples without bounding boxes (empty .txt files). Use as many images of negative samples as there are images with objects

  • What is the best way to mark objects: label only the visible part of the object, label the visible and overlapped parts, or label a little more than the whole object (with a small gap)? Mark as you would like it to be detected.

  • for training with a large number of objects in each image, add the parameter max=200 or a higher value in the last [yolo] layer or [region] layer in your cfg-file (the global maximum number of objects that can be detected by YoloV3 is 0.0615234375*(width*height), where width and height are the parameters from the [net] section of the cfg-file; e.g. for a 416x416 network this is 0.0615234375*416*416 = 10647 objects)

  • for training on small objects (smaller than 16x16 after the image is resized to 416x416) - set layers = 23 instead of the value on this line: https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L895

  • for training for both small and large objects use modified models:

  • if you train the model to distinguish left and right objects as separate classes (left/right hand, left/right turn on road signs, ...), then to disable flip data augmentation add flip=0 here: https://github.com/AlexeyAB/darknet/blob/3d2d0a7c98dbc8923d9ff705b81ff4f7940ea6ff/cfg/yolov3.cfg#L17

  • general rule - your training dataset should include the same set of relative object sizes that you want to detect:

    • train_network_width * train_obj_width / train_image_width ~= detection_network_width * detection_obj_width / detection_image_width
    • train_network_height * train_obj_height / train_image_height ~= detection_network_height * detection_obj_height / detection_image_height

    I.e. for each object in the test dataset there must be at least 1 object in the training dataset with the same class_id and about the same relative size:

    object width in percent of the training image ~= object width in percent of the test image

    That is, if only objects that occupied 80-90% of the image were present in the training set, then the trained network will not be able to detect objects that occupy 1-10% of the image.

  • to speed up training (at the cost of detection accuracy) set the param stopbackward=1 for layer 136 in the cfg-file

  • each model of object, side, illumination, scale, and every 30 degrees of turn or inclination angle is a different object from the internal perspective of the neural network. So the more different objects you want to detect, the more complex the network model that should be used.

  • to make the detected bounding boxes more accurate, you can add the 3 parameters ignore_thresh = .9 iou_normalizer=0.5 iou_loss=giou to each [yolo] layer and train; it will increase mAP@0.9 but decrease mAP@0.5.

  • only if you are an expert in neural detection networks - recalculate the anchors for your dataset for the width and height from the cfg-file: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416, then set the same 9 anchors in each of the 3 [yolo] layers in your cfg-file. But you should change the indices of anchors masks= for each [yolo] layer, so that for YOLOv4 the 1st [yolo] layer has anchors smaller than 30x30, the 2nd smaller than 60x60, and the 3rd the remaining ones (vice versa for YOLOv3); a sketch of this assignment follows below. Also, you should change the filters=(classes + 5)*<number of mask> before each [yolo] layer. If many of the calculated anchors do not fit under the appropriate layers, then just try using all the default anchors.
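A minimal sketch of the mask-assignment rule for YOLOv4. The anchor values are just the yolov4.cfg defaults used as example input, and the 30x30 / 60x60 thresholds are the approximate sizes named above, so treat the output as a starting point rather than a definitive assignment:

```cpp
#include <cstdio>
#include <vector>

struct Anchor { int w, h; };

int main() {
    // example input: 9 anchors as produced by darknet detector calc_anchors
    std::vector<Anchor> anchors = {{12, 16}, {19, 36}, {40, 28},
                                   {36, 75}, {76, 55}, {72, 146},
                                   {142, 110}, {192, 243}, {459, 401}};
    // YOLOv4: 1st [yolo] layer gets anchors smaller than 30x30, the 2nd
    // smaller than 60x60, the 3rd the remaining ones (reverse for YOLOv3).
    std::vector<int> mask[3];
    for (int i = 0; i < (int)anchors.size(); ++i) {
        if      (anchors[i].w < 30 && anchors[i].h < 30) mask[0].push_back(i);
        else if (anchors[i].w < 60 && anchors[i].h < 60) mask[1].push_back(i);
        else                                             mask[2].push_back(i);
    }
    for (int layer = 0; layer < 3; ++layer) {
        std::printf("[yolo] layer %d: mask =", layer + 1);
        for (int idx : mask[layer]) std::printf(" %d", idx);
        std::printf("\n");
    }
    return 0;
}
```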

  2. After training - for detection:
  • increase the network resolution by setting in your .cfg-file height=608 and width=608, or height=832 and width=832, or any value that is a multiple of 32 - this increases precision and makes it possible to detect small objects: link

  • it is not necessary to train the network again; just use the .weights file already trained for 416x416 resolution

  • to get even greater accuracy you should train at a higher resolution, 608x608 or 832x832. Note: if an Out of memory error occurs, then in the .cfg-file you should increase subdivisions to 16, 32 or 64: link

How to mark bounding boxes of objects and create annotation files:

Here you can find a repository with GUI software for marking bounding boxes of objects and generating annotation files for Yolo v2 - v4: https://github.com/AlexeyAB/Yolo_mark

With examples of: train.txt, obj.names, obj.data, yolo-obj.cfg, air1-6.txt, bird1-4.txt for 2 classes of objects (air, bird), and train_obj.cmd with an example of how to train this image set with Yolo v2 - v4

Different tools for marking objects in images:

  1. in C++: https://github.com/AlexeyAB/Yolo_mark
  2. in Python: https://github.com/tzutalin/labelImg
  3. in Python: https://github.com/Cartucho/OpenLabeling
  4. in C++: https://www.ccoderun.ca/darkmark/
  5. CVAT: https://github.com/opencv/cvat
  6. in C++: https://github.com/jveitchmichaelis/deeplabel
  7. in C#: https://github.com/BMW-InnovationLab/BMW-Labeltool-Lite
  8. DL-Annotator for Windows ($30): url
  9. v7labs - cloud labeling tool ($1.5 per hour): https://www.v7labs.com/

How to use Yolo as DLL and SO libraries

  • on Linux
    • using build.sh, or
    • build darknet using cmake, or
    • set LIBSO=1 in the Makefile and do make
  • on Windows
    • using build.ps1, or
    • build darknet using cmake, or
    • compile the build\darknet\yolo_cpp_dll.sln solution or the build\darknet\yolo_cpp_dll_no_gpu.sln solution

There are 2 APIs: the C API (include/darknet.h) and the C++ API (include/yolo_v2_class.hpp).


  1. To compile Yolo as a C++ DLL file yolo_cpp_dll.dll - open the solution build\darknet\yolo_cpp_dll.sln, set x64 and Release, and do: Build -> Build yolo_cpp_dll

     • You should have CUDA 10.0 installed
     • To use cuDNN do: (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add at the beginning of the line: CUDNN;

  2. To use Yolo as a DLL file in your C++ console application - open the solution build\darknet\yolo_console_dll.sln, set x64 and Release, and do: Build -> Build yolo_console_dll

     • you can run your console application from Windows Explorer: build\darknet\x64\yolo_console_dll.exe, using this command: yolo_console_dll.exe data/coco.names yolov4.cfg yolov4.weights test.mp4
     • after launching your console application and entering an image file name, you will see info for each object: <obj_id> <left_x> <top_y> <width> <height> <probability>
     • to use a simple OpenCV GUI you should uncomment the line //#define OPENCV in the yolo_console_dll.cpp file: link
     • you can see the source code of a simple example of detection on a video file: link

yolo_cpp_dll.dll API: link

```cpp
struct bbox_t {
    unsigned int x, y, w, h;      // (x,y) - top-left corner, (w, h) - width & height of bounding box
    float prob;                   // confidence - probability that the object was found correctly
    unsigned int obj_id;          // class of object - from range [0, classes-1]
    unsigned int track_id;        // tracking id for video (0 - untracked, 1 - inf - tracked object)
    unsigned int frames_counter;  // counter of frames on which the object was detected
};

class Detector {
public:
    Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
    ~Detector();

    std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
    std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
    static image_t load_image(std::string image_filename);
    static void free_image(image_t m);

#ifdef OPENCV
    std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
    std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const;
#endif
};
```
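A minimal usage sketch of this API, assuming you include yolo_v2_class.hpp (the header that declares Detector and bbox_t) and link against the built yolo_cpp_dll / darknet library:

```cpp
#include <iostream>
#include <vector>

#include "yolo_v2_class.hpp" // declares Detector and bbox_t

int main() {
    // load the network once; gpu_id = 0 selects the first GPU
    Detector detector("cfg/yolov4.cfg", "yolov4.weights", 0);

    // detect objects with confidence above 0.25
    std::vector<bbox_t> boxes = detector.detect("dog.jpg", 0.25f);

    for (const bbox_t& b : boxes)
        std::cout << "obj_id=" << b.obj_id << " prob=" << b.prob
                  << " box=(" << b.x << "," << b.y << "," << b.w << "," << b.h << ")\n";
    return 0;
}
```

On Linux, the same program can be linked against darknet.so built with LIBSO=1 as described above.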
