
TensorRT 8. Supports Yolov5n, s, m, l, x; darknet -> tensorrt. Yolov4 and Yolov3 use raw darknet *.weights and *.cfg files. If the wrapper is useful to you, please star it.

enazoe/yolo-tensorrt


news: 2021.10.31: yolov5-v6.0 support

INTRODUCTION

This project is an encapsulation of NVIDIA's official yolo-tensorrt implementation. You must have the trained yolo model (.weights) and the .cfg file from darknet (yolov3 & yolov4). For yolov5, you should prepare the model file (yolov5s.yaml) and the trained weight file (yolov5s.pt) from PyTorch.

  • yolov5n, yolov5s, yolov5m, yolov5l, yolov5x, yolov5-p6 (tutorial)
  • yolov4
  • yolov3

Features

  • unequal net width and height
  • batch inference
  • support for FP32, FP16, and INT8 precision
  • dynamic input size

PLATFORM & BENCHMARK

  • windows 10
  • ubuntu 18.04
  • L4T (Jetson platform)

BENCHMARK

x86 (inference time)

| model   | size    | gpu    | fp32 | fp16 | INT8 |
| ------- | ------- | ------ | ---- | ---- | ---- |
| yolov5s | 640x640 | 1080ti | 8ms  | 7ms  | -    |
| yolov5m | 640x640 | 1080ti | 13ms | 11ms | -    |
| yolov5l | 640x640 | 1080ti | 20ms | 15ms | -    |
| yolov5x | 640x640 | 1080ti | 30ms | 23ms | -    |

Jetson NX with JetPack 4.4.1 (inference / detect time)

| model       | size    | gpu | fp32        | fp16        | INT8        |
| ----------- | ------- | --- | ----------- | ----------- | ----------- |
| yolov3      | 416x416 | nx  | 105ms/120ms | 30ms/48ms   | 20ms/35ms   |
| yolov3-tiny | 416x416 | nx  | 14ms/23ms   | 8ms/15ms    | 12ms/19ms   |
| yolov4-tiny | 416x416 | nx  | 13ms/23ms   | 7ms/16ms    | 7ms/15ms    |
| yolov4      | 416x416 | nx  | 111ms/125ms | 55ms/65ms   | 47ms/57ms   |
| yolov5s     | 416x416 | nx  | 47ms/88ms   | 33ms/74ms   | 28ms/64ms   |
| yolov5m     | 416x416 | nx  | 110ms/145ms | 63ms/101ms  | 49ms/91ms   |
| yolov5l     | 416x416 | nx  | 205ms/242ms | 95ms/123ms  | 76ms/118ms  |
| yolov5x     | 416x416 | nx  | 351ms/405ms | 151ms/183ms | 114ms/149ms |

ubuntu

| model   | size    | gpu    | fp32      | fp16      | INT8      |
| ------- | ------- | ------ | --------- | --------- | --------- |
| yolov4  | 416x416 | titanv | 11ms/17ms | 8ms/15ms  | 7ms/14ms  |
| yolov5s | 416x416 | titanv | 7ms/22ms  | 5ms/20ms  | 5ms/18ms  |
| yolov5m | 416x416 | titanv | 9ms/23ms  | 8ms/22ms  | 7ms/21ms  |
| yolov5l | 416x416 | titanv | 17ms/28ms | 11ms/23ms | 11ms/24ms |
| yolov5x | 416x416 | titanv | 25ms/40ms | 15ms/27ms | 15ms/27ms |

WRAPPER

Prepare the pretrained .weights and .cfg model files.

```cpp
Detector detector;
Config config;
detector.init(config);
std::vector<BatchResult> res;
detector.detect(vec_image, res);
```

Build and use yolo-trt as DLL or SO libraries

windows10

  • dependencies: TensorRT 7.1.3.4, CUDA 11.0, cuDNN 8.0, OpenCV 4, VS2015

  • build:

    open the MSVC solution file sln/sln.sln

    • dll project: the trt yolo detector dll
    • demo project: a test of the dll

ubuntu & L4T (jetson)

The project generates the libdetector.so lib and the sample code. If you want to use the libdetector.so lib in your own project, this cmake file may help you.

```bash
git clone https://github.com/enazoe/yolo-tensorrt.git
cd yolo-tensorrt/
mkdir build
cd build/
cmake ..
make
./yolo-trt
```

API

```cpp
struct Config
{
    std::string file_model_cfg = "configs/yolov4.cfg";
    std::string file_model_weights = "configs/yolov4.weights";
    float detect_thresh = 0.9;
    ModelType net_type = YOLOV4;
    Precision inference_precison = INT8;
    int gpu_id = 0;
    std::string calibration_image_list_file_txt = "configs/calibration_images.txt";
};

class API Detector
{
public:
    explicit Detector();
    ~Detector();
    void init(const Config &config);
    void detect(const std::vector<cv::Mat> &mat_image,
                std::vector<BatchResult> &vec_batch_result);
private:
    Detector(const Detector &);
    const Detector &operator=(const Detector &);
    class Impl;
    Impl *_impl;
};
```

REFERENCE

Contact

Follow the WeChat official account EigenVison and reply "yolo" to get the discussion group number.
