lite-ai-toolkit

🛠 Lite.Ai.ToolKit: A lite C++ toolkit of 100+ Awesome AI models, such as Object Detection, Face Detection, Face Recognition, Segmentation, Matting, etc. See Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub. Welcome to 🌟👆🏻 star this repo to support me, many thanks ~ 🎉🎉

📖 News 🔥🔥

  • [2025-08-18]: 🤗 cache-dit is released! A PyTorch-native inference engine with hybrid cache acceleration and parallelism for DiTs. Feel free to give it a try!

Citations 🎉🎉

```bibtex
@misc{lite.ai.toolkit@2021,
  title={lite.ai.toolkit: A lite C++ toolkit of 100+ Awesome AI models.},
  url={https://github.com/xlite-dev/lite.ai.toolkit},
  note={Open-source software available at https://github.com/xlite-dev/lite.ai.toolkit},
  author={xlite-dev, wangzijian1010 etc},
  year={2021}
}
```

Features 👏👋

  • Simple and user-friendly. Simple and consistent syntax like lite::cv::Type::Class, see examples.
  • Minimum dependencies. Only OpenCV and ONNXRuntime are required by default, see build.
  • Many models supported. 300+ C++ implementations and 500+ weights 👉 Supported-Matrix.

Build 👇👇

Download the prebuilt lite.ai.toolkit library from tag/v0.2.0, or just build it from source:

```bash
git clone --depth=1 https://github.com/xlite-dev/lite.ai.toolkit.git  # latest
cd lite.ai.toolkit && sh ./build.sh  # >= 0.2.0, support Linux only, tested on Ubuntu 20.04.6 LTS
```

Quick Start 🌟🌟

Example0: Object Detection using YOLOv5. Download model from Model-Zoo2.

#include"lite/lite.h"intmain(int argc,char *argv[]) {  std::string onnx_path ="yolov5s.onnx";  std::string test_img_path ="test_yolov5.jpg";  std::string save_img_path ="test_results.jpg";auto *yolov5 =newlite::cv::detection::YoloV5(onnx_path);   std::vector<lite::types::Boxf> detected_boxes;  cv::Mat img_bgr =cv::imread(test_img_path);  yolov5->detect(img_bgr, detected_boxes);lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);cv::imwrite(save_img_path, img_bgr);delete yolov5;return0;}

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

```bash
export LITE_AI_TAG_URL=https://github.com/xlite-dev/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
```

🎉🎉 TensorRT: Boost inference performance with NVIDIA GPUs via TensorRT.

Run bash ./build.sh tensorrt to build lite.ai.toolkit with TensorRT support, then test YOLOv5 with the code below. NOTE: lite.ai.toolkit needs TensorRT 10.x (or later) and CUDA 12.x (or later). Please check build.sh, tensorrt-linux-x86_64-install.zh.md, test_lite_yolov5.cpp and NVIDIA/TensorRT for more details.

```cpp
// trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine
auto *yolov5 = new lite::trt::cv::detection::YOLOV5(engine_path);
```
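For reference, a fuller end-to-end sketch of the TensorRT path, modeled on the ONNXRuntime example above. It assumes the lite::trt detector mirrors the detect(img, boxes) interface of the ORT class (see test_lite_yolov5.cpp for the official test); the file paths are placeholders.

```cpp
#include "lite/lite.h"

// A minimal sketch, assuming lite::trt::cv::detection::YOLOV5 exposes the
// same detect(img, boxes) interface as the ONNXRuntime YoloV5 class.
// Build the engine first with:
//   trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine
int main()
{
  std::string engine_path = "yolov5s.engine";    // placeholder path
  std::string test_img_path = "test_yolov5.jpg"; // placeholder path

  auto *yolov5 = new lite::trt::cv::detection::YOLOV5(engine_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes); // assumed to mirror the ORT API

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite("test_results_trt.jpg", img_bgr);
  delete yolov5;
  return 0;
}
```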

Quick Setup 👀

To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt listed below. 👇👀

```cmake
set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
```

Mixed with MNN or ONNXRuntime 👇👇

The goal of lite.ai.toolkit is not to abstract on top of MNN and ONNXRuntime. So, you can use lite.ai.toolkit mixed with MNN (-DENABLE_MNN=ON, default OFF) or ONNXRuntime (-DENABLE_ONNXRUNTIME=ON, default ON). The lite.ai.toolkit installation package contains complete MNN and ONNXRuntime distributions. The workflow may look like:

#include"lite/lite.h"// 0. use yolov5 from lite.ai.toolkit to detect objs.auto *yolov5 =new lite::cv::detection::YoloV5(onnx_path);// 1. use OnnxRuntime or MNN to implement your own classfier.interpreter = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_path));// or: session = new Ort::Session(ort_env, onnx_path, session_options);classfier = interpreter->createSession(schedule_config);// 2. then, classify the detected objs use your own classfier ...

The included headers of MNN and ONNXRuntime can be found at mnn_config.h and ort_config.h.

🔑️ Check the detailed Quick Start!

Download resources

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

```bash
export LITE_AI_TAG_URL=https://github.com/xlite-dev/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
```

Write test code

Write the YOLOv5 example code and name it test_lite_yolov5.cpp:

#include"lite/lite.h"intmain(int argc,char *argv[]) {  std::string onnx_path ="yolov5s.onnx";  std::string test_img_path ="test_yolov5.jpg";  std::string save_img_path ="test_results.jpg";auto *yolov5 =newlite::cv::detection::YoloV5(onnx_path);   std::vector<lite::types::Boxf> detected_boxes;  cv::Mat img_bgr =cv::imread(test_img_path);  yolov5->detect(img_bgr, detected_boxes);lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);cv::imwrite(save_img_path, img_bgr);delete yolov5;return0;}

Setup CMakeLists.txt

```cmake
cmake_minimum_required(VERSION 3.10)
project(lite_yolov5)
set(CMAKE_CXX_STANDARD 17)

set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
if (lite.ai.toolkit_Found)
    message(STATUS "lite.ai.toolkit_INCLUDE_DIRS: ${lite.ai.toolkit_INCLUDE_DIRS}")
    message(STATUS "        lite.ai.toolkit_LIBS: ${lite.ai.toolkit_LIBS}")
    message(STATUS "   lite.ai.toolkit_LIBS_DIRS: ${lite.ai.toolkit_LIBS_DIRS}")
endif()
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
```

Build example

```bash
mkdir build && cd build && cmake .. && make -j1
```

Then, export the lib paths listed in lite.ai.toolkit_LIBS_DIRS to LD_LIBRARY_PATH.

```bash
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/opencv/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/onnxruntime/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/MNN/lib:$LD_LIBRARY_PATH  # if -DENABLE_MNN=ON
```

Run the binary:

```bash
cp ../yolov5s.onnx ../test_yolov5.jpg .
./lite_yolov5
```

The output logs:

```text
LITEORT_DEBUG LogId: ../examples/hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
Name: images
Dims: 1
Dims: 3
Dims: 640
Dims: 640
=============== Output-Dims ==============
Output: 0 Name: pred Dim: 0 :1
Output: 0 Name: pred Dim: 1 :25200
Output: 0 Name: pred Dim: 2 :85
Output: 1 Name: output2 Dim: 0 :1
......
Output: 3 Name: output4 Dim: 1 :3
Output: 3 Name: output4 Dim: 2 :20
Output: 3 Name: output4 Dim: 3 :20
Output: 3 Name: output4 Dim: 4 :85
========================================
detected num_anchors: 25200
generate_bboxes num: 48
```

Supported Models Matrix

  • / = not supported now.
  • ✅ = known to work and officially supported now.
  • ✔️ = known to work, but unofficially supported now.
  • ❔ = planned, but not coming soon, maybe a few months later.

NVIDIA GPU Inference: TensorRT

| Class | Class | Class | Class | Class | System | Engine |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv5 | YOLOv6 | YOLOv8 | YOLOv8Face | YOLOv5Face | Linux | TensorRT |
| YOLOX | YOLOv5BlazeFace | StableDiffusion | FaceFusion | / | Linux | TensorRT |

CPU Inference: ONNXRuntime, MNN, NCNN and TNN

| Class | Size | Type | Demo | ONNXRuntime / MNN / NCNN / TNN / Linux / MacOS / Windows / Android |
| --- | --- | --- | --- | --- |
| YoloV5 | 28M | detection | demo | ✔️✔️ |
| YoloV3 | 236M | detection | demo | ///✔️✔️/ |
| TinyYoloV3 | 33M | detection | demo | ///✔️✔️/ |
| YoloV4 | 176M | detection | demo | ///✔️✔️/ |
| SSD | 76M | detection | demo | ///✔️✔️/ |
| SSDMobileNetV1 | 27M | detection | demo | ///✔️✔️/ |
| YoloX | 3.5M | detection | demo | ✔️✔️ |
| TinyYoloV4VOC | 22M | detection | demo | ///✔️✔️/ |
| TinyYoloV4COCO | 22M | detection | demo | ///✔️✔️/ |
| YoloR | 39M | detection | demo | ✔️✔️ |
| ScaledYoloV4 | 270M | detection | demo | ///✔️✔️/ |
| EfficientDet | 15M | detection | demo | ///✔️✔️/ |
| EfficientDetD7 | 220M | detection | demo | ///✔️✔️/ |
| EfficientDetD8 | 322M | detection | demo | ///✔️✔️/ |
| YOLOP | 30M | detection | demo | ✔️✔️ |
| NanoDet | 1.1M | detection | demo | ✔️✔️ |
| NanoDetPlus | 4.5M | detection | demo | ✔️✔️ |
| NanoDetEffi... | 12M | detection | demo | ✔️✔️ |
| YoloX_V_0_1_1 | 3.5M | detection | demo | ✔️✔️ |
| YoloV5_V_6_0 | 7.5M | detection | demo | ✔️✔️ |
| GlintArcFace | 92M | faceid | demo | ✔️✔️ |
| GlintCosFace | 92M | faceid | demo | ✔️✔️/ |
| GlintPartialFC | 170M | faceid | demo | ✔️✔️/ |
| FaceNet | 89M | faceid | demo | ✔️✔️/ |
| FocalArcFace | 166M | faceid | demo | ✔️✔️/ |
| FocalAsiaArcFace | 166M | faceid | demo | ✔️✔️/ |
| TencentCurricularFace | 249M | faceid | demo | ✔️✔️/ |
| TencentCifpFace | 130M | faceid | demo | ✔️✔️/ |
| CenterLossFace | 280M | faceid | demo | ✔️✔️/ |
| SphereFace | 80M | faceid | demo | ✔️✔️/ |
| PoseRobustFace | 92M | faceid | demo | ///✔️✔️/ |
| NaivePoseRobustFace | 43M | faceid | demo | ///✔️✔️/ |
| MobileFaceNet | 3.8M | faceid | demo | ✔️✔️ |
| CavaGhostArcFace | 15M | faceid | demo | ✔️✔️ |
| CavaCombinedFace | 250M | faceid | demo | ✔️✔️/ |
| MobileSEFocalFace | 4.5M | faceid | demo | ✔️✔️ |
| RobustVideoMatting | 14M | matting | demo | /✔️✔️ |
| MGMatting | 113M | matting | demo | /✔️✔️/ |
| MODNet | 24M | matting | demo | ✔️✔️/ |
| MODNetDyn | 24M | matting | demo | ///✔️✔️/ |
| BackgroundMattingV2 | 20M | matting | demo | /✔️✔️/ |
| BackgroundMattingV2Dyn | 20M | matting | demo | ///✔️✔️/ |
| UltraFace | 1.1M | face::detect | demo | ✔️✔️ |
| RetinaFace | 1.6M | face::detect | demo | ✔️✔️ |
| FaceBoxes | 3.8M | face::detect | demo | ✔️✔️ |
| FaceBoxesV2 | 3.8M | face::detect | demo | ✔️✔️ |
| SCRFD | 2.5M | face::detect | demo | ✔️✔️ |
| YOLO5Face | 4.8M | face::detect | demo | ✔️✔️ |
| PFLD | 1.0M | face::align | demo | ✔️✔️ |
| PFLD98 | 4.8M | face::align | demo | ✔️✔️ |
| MobileNetV268 | 9.4M | face::align | demo | ✔️✔️ |
| MobileNetV2SE68 | 11M | face::align | demo | ✔️✔️ |
| PFLD68 | 2.8M | face::align | demo | ✔️✔️ |
| FaceLandmark1000 | 2.0M | face::align | demo | ✔️✔️ |
| PIPNet98 | 44.0M | face::align | demo | ✔️✔️ |
| PIPNet68 | 44.0M | face::align | demo | ✔️✔️ |
| PIPNet29 | 44.0M | face::align | demo | ✔️✔️ |
| PIPNet19 | 44.0M | face::align | demo | ✔️✔️ |
| FSANet | 1.2M | face::pose | demo | /✔️✔️ |
| AgeGoogleNet | 23M | face::attr | demo | ✔️✔️ |
| GenderGoogleNet | 23M | face::attr | demo | ✔️✔️ |
| EmotionFerPlus | 33M | face::attr | demo | ✔️✔️ |
| VGG16Age | 514M | face::attr | demo | ✔️✔️/ |
| VGG16Gender | 512M | face::attr | demo | ✔️✔️/ |
| SSRNet | 190K | face::attr | demo | /✔️✔️ |
| EfficientEmotion7 | 15M | face::attr | demo | ✔️✔️ |
| EfficientEmotion8 | 15M | face::attr | demo | ✔️✔️ |
| MobileEmotion7 | 13M | face::attr | demo | ✔️✔️ |
| ReXNetEmotion7 | 30M | face::attr | demo | /✔️✔️/ |
| EfficientNetLite4 | 49M | classification | demo | /✔️✔️/ |
| ShuffleNetV2 | 8.7M | classification | demo | ✔️✔️ |
| DenseNet121 | 30.7M | classification | demo | ✔️✔️/ |
| GhostNet | 20M | classification | demo | ✔️✔️ |
| HdrDNet | 13M | classification | demo | ✔️✔️ |
| IBNNet | 97M | classification | demo | ✔️✔️/ |
| MobileNetV2 | 13M | classification | demo | ✔️✔️ |
| ResNet | 44M | classification | demo | ✔️✔️/ |
| ResNeXt | 95M | classification | demo | ✔️✔️/ |
| DeepLabV3ResNet101 | 232M | segmentation | demo | ✔️✔️/ |
| FCNResNet101 | 207M | segmentation | demo | ✔️✔️/ |
| FastStyleTransfer | 6.4M | style | demo | ✔️✔️ |
| Colorizer | 123M | colorization | demo | /✔️✔️/ |
| SubPixelCNN | 234K | resolution | demo | /✔️✔️ |
| InsectDet | 27M | detection | demo | /✔️✔️ |
| InsectID | 22M | classification | demo | ✔️✔️✔️ |
| PlantID | 30M | classification | demo | ✔️✔️✔️ |
| YOLOv5BlazeFace | 3.4M | face::detect | demo | //✔️✔️ |
| YoloV5_V_6_1 | 7.5M | detection | demo | //✔️✔️ |
| HeadSeg | 31M | segmentation | demo | /✔️✔️ |
| FemalePhoto2Cartoon | 15M | style | demo | /✔️✔️ |
| FastPortraitSeg | 400k | segmentation | demo | //✔️✔️ |
| PortraitSegSINet | 380k | segmentation | demo | //✔️✔️ |
| PortraitSegExtremeC3Net | 180k | segmentation | demo | //✔️✔️ |
| FaceHairSeg | 18M | segmentation | demo | //✔️✔️ |
| HairSeg | 18M | segmentation | demo | //✔️✔️ |
| MobileHumanMatting | 3M | matting | demo | //✔️✔️ |
| MobileHairSeg | 14M | segmentation | demo | //✔️✔️ |
| YOLOv6 | 17M | detection | demo | ✔️✔️ |
| FaceParsingBiSeNet | 50M | segmentation | demo | ✔️✔️ |
| FaceParsingBiSeNetDyn | 50M | segmentation | demo | ////✔️✔️ |
🔑️ Model Zoo

Lite.Ai.ToolKit now contains 100+ AI models with 500+ frozen pretrained files. Most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.Ai.ToolKit. Note: for Google Drive, I can not upload all the *.onnx files because of the storage limitation (15G).
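To illustrate the naming pattern, a minimal sketch: the classes are taken from the Supported-Matrix, but the model file names below are placeholders, not shipped weights.

```cpp
#include "lite/lite.h"

// lite::cv::Type::Class: the inner namespace names the task (Type),
// the class names the model. File names here are placeholders.
int main()
{
  auto *detector = new lite::cv::detection::YoloV5("yolov5s.onnx");          // Type = detection
  auto *faceid = new lite::cv::faceid::MobileFaceNet("mobilefacenet.onnx");  // Type = faceid
  auto *matting = new lite::cv::matting::MODNet("modnet.onnx");              // Type = matting
  delete detector; delete faceid; delete matting;
  return 0;
}
```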

| File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs) |
| --- | --- | --- | --- | --- |
| ONNX | Baidu Drive (code: 8gin) | Google Drive | ONNX Docker v0.1.22.01.08 (28G), v0.1.22.02.02 (400M) | ONNX Hub |
| MNN | Baidu Drive (code: 9v63) | / | MNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (213M) | MNN Hub |
| NCNN | Baidu Drive (code: sc7f) | / | NCNN Docker v0.1.22.01.08 (9G), v0.1.22.02.02 (197M) | NCNN Hub |
| TNN | Baidu Drive (code: 6o6k) | / | TNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (217M) | TNN Hub |
```bash
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.02.02  # (400M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.02.02   # (213M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.02.02  # (197M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02   # (217M) + YOLO5Face
```

🔑️ How to download Model Zoo from Docker Hub?

  • Firstly, pull the image from Docker Hub.

```bash
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
```

  • Secondly, run the container with a local share dir using docker run -idt xxx. A minimal example follows.
    • Make a share dir on your local device.

```bash
mkdir share  # any name is ok.
```

    • Write a run_mnn_docker_hub.sh script like:

```bash
#!/bin/bash
PORT1=6072
PORT2=6084
SERVICE_DIR=/Users/xxx/Desktop/your-path-to/share
CONTAINER_DIR=/home/hub/share
CONTAINER_NAME=mnn_docker_hub_d

docker run -idt -p ${PORT2}:${PORT1} -v ${SERVICE_DIR}:${CONTAINER_DIR} --shm-size=16gb --name ${CONTAINER_NAME} qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08
```

  • Finally, copy the model weights from /home/hub/mnn/cv to your local share dir.

```bash
# activate mnn docker.
sh ./run_mnn_docker_hub.sh
docker exec -it mnn_docker_hub_d /bin/bash
# copy the models to the share dir.
cd /home/hub
cp -rf mnn/cv share/
```

Model Hubs

The pretrained and converted ONNX files provided by lite.ai.toolkit are listed as follows. Also, see Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub for more details.

🔑️ More Examples

More examples can be found at examples.

Example0: Object Detection using YOLOv5. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/yolov5s.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_yolov5_1.jpg";  std::string save_img_path ="../../../examples/logs/test_lite_yolov5_1.jpg";auto *yolov5 =newlite::cv::detection::YoloV5(onnx_path);   std::vector<lite::types::Boxf> detected_boxes;  cv::Mat img_bgr =cv::imread(test_img_path);  yolov5->detect(img_bgr, detected_boxes);lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);cv::imwrite(save_img_path, img_bgr);delete yolov5;}

The output is:

Or you can use the newest 🔥🔥 YOLO-series detectors YOLOX or YoloR. They achieve similar results.

More classes for general object detection (80 classes, COCO).

```cpp
auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path);
auto *detector = new lite::cv::detection::YoloV3(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path);
auto *detector = new lite::cv::detection::SSD(onnx_path);
auto *detector = new lite::cv::detection::YoloV5(onnx_path);
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path);
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path);
auto *detector = new lite::cv::detection::EfficientDet(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path);
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path);  // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path);  // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path);  // Super fast and tiny!
auto *detector = new lite::cv::detection::YoloV5_V_6_0(onnx_path);
auto *detector = new lite::cv::detection::YoloV5_V_6_1(onnx_path);
auto *detector = new lite::cv::detection::YoloX_V_0_1_1(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YOLOv6(onnx_path);  // Newest 2022 YOLO detector !!!
```

Example1: Video Matting using RobustVideoMatting 2021 🔥🔥🔥. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";  std::string video_path ="../../../examples/lite/resources/test_lite_rvm_0.mp4";  std::string output_path ="../../../examples/logs/test_lite_rvm_0.mp4";  std::string background_path ="../../../examples/lite/resources/test_lite_matting_bgr.jpg";auto *rvm =newlite::cv::matting::RobustVideoMatting(onnx_path,16);// 16 threads  std::vector<lite::types::MattingContent> contents;// 1. video matting.  cv::Mat background =cv::imread(background_path);  rvm->detect_video(video_path, output_path, contents,false,0.4f,20,true,true, background);delete rvm;}

The output is:


More classes for matting (image matting, video matting, trimap/mask-free, trimap/mask-based)

```cpp
auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path);  // WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path);  // CVPR 2021
auto *matting = new lite::cv::matting::MODNet(onnx_path);  // AAAI 2022
auto *matting = new lite::cv::matting::MODNetDyn(onnx_path);  // AAAI 2022 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::BackgroundMattingV2(onnx_path);  // CVPR 2020
auto *matting = new lite::cv::matting::BackgroundMattingV2Dyn(onnx_path);  // CVPR 2020 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::MobileHumanMatting(onnx_path);  // 3Mb only !!!
```

Example2: 1000 Facial Landmarks Detection using FaceLandmarks1000. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/FaceLandmark1000.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_face_landmarks_0.png";  std::string save_img_path ="../../../examples/logs/test_lite_face_landmarks_1000.jpg";auto *face_landmarks_1000 =newlite::cv::face::align::FaceLandmark1000(onnx_path);  lite::types::Landmarks landmarks;  cv::Mat img_bgr =cv::imread(test_img_path);  face_landmarks_1000->detect(img_bgr, landmarks);lite::utils::draw_landmarks_inplace(img_bgr, landmarks);cv::imwrite(save_img_path, img_bgr);delete face_landmarks_1000;}

The output is:

More classes for face alignment (68 points, 98 points, 106 points, 1000 points)

```cpp
auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);  // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);  // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);  // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);  // 19 landmarks, CVPR2021!
```

Example3: Colorization using colorization. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/eccv16-colorizer.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_colorizer_1.jpg";  std::string save_img_path ="../../../examples/logs/test_lite_eccv16_colorizer_1.jpg";auto *colorizer =newlite::cv::colorization::Colorizer(onnx_path);    cv::Mat img_bgr =cv::imread(test_img_path);  lite::types::ColorizeContent colorize_content;  colorizer->detect(img_bgr, colorize_content);if (colorize_content.flag)cv::imwrite(save_img_path, colorize_content.mat);delete colorizer;}

The output is:


More classes for colorization (gray to rgb)

```cpp
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
```

Example4: Face Recognition using ArcFace. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/ms1mv3_arcface_r100.onnx";  std::string test_img_path0 ="../../../examples/lite/resources/test_lite_faceid_0.png";  std::string test_img_path1 ="../../../examples/lite/resources/test_lite_faceid_1.png";  std::string test_img_path2 ="../../../examples/lite/resources/test_lite_faceid_2.png";auto *glint_arcface =newlite::cv::faceid::GlintArcFace(onnx_path);  lite::types::FaceContent face_content0, face_content1, face_content2;  cv::Mat img_bgr0 =cv::imread(test_img_path0);  cv::Mat img_bgr1 =cv::imread(test_img_path1);  cv::Mat img_bgr2 =cv::imread(test_img_path2);  glint_arcface->detect(img_bgr0, face_content0);  glint_arcface->detect(img_bgr1, face_content1);  glint_arcface->detect(img_bgr2, face_content2);if (face_content0.flag && face_content1.flag && face_content2.flag)  {float sim01 = lite::utils::math::cosine_similarity<float>(        face_content0.embedding, face_content1.embedding);float sim02 = lite::utils::math::cosine_similarity<float>(        face_content0.embedding, face_content2.embedding);    std::cout <<"Detected Sim01:" << sim  <<" Sim02:" << sim02 << std::endl;  }delete glint_arcface;}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
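To turn the similarity into a same-person decision, a minimal sketch continuing the example above: the 0.45f cutoff is an illustrative assumption, not a threshold recommended by lite.ai.toolkit, so tune it on your own data.

```cpp
// Threshold the cosine similarity computed above; 0.45f is a hypothetical
// cutoff chosen only for illustration.
float sim = lite::utils::math::cosine_similarity<float>(
    face_content0.embedding, face_content1.embedding);
bool same_person = (sim > 0.45f);
std::cout << (same_person ? "same person" : "different person") << std::endl;
```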

More classes for face recognition (face id vector extract)

```cpp
auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path);  // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path);  // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path);  // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path);  // 4.5Mb only !
```

Example5: Face Detection using SCRFD 2021. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_face_detector.jpg";  std::string save_img_path ="../../../examples/logs/test_lite_scrfd.jpg";auto *scrfd =newlite::cv::face::detect::SCRFD(onnx_path);    std::vector<lite::types::BoxfWithLandmarks> detected_boxes;  cv::Mat img_bgr =cv::imread(test_img_path);  scrfd->detect(img_bgr, detected_boxes);lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);cv::imwrite(save_img_path, img_bgr);delete scrfd;}

The output is:

More classes for face detection (super fast face detection)

```cpp
auto *detector = new lite::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only !
auto *detector = new lite::face::detect::FaceBoxesv2(onnx_path);  // 4.0Mb only !
auto *detector = new lite::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
auto *detector = new lite::face::detect::SCRFD(onnx_path);  // 2.5Mb only ! CVPR2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLO5Face(onnx_path);  // 2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLOv5BlazeFace(onnx_path);  // 2021, Super fast and accurate!!
```

Example6: Object Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/deeplabv3_resnet101_coco.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";  std::string save_img_path ="../../../examples/logs/test_lite_deeplabv3_resnet101.jpg";auto *deeplabv3_resnet101 =newlite::cv::segmentation::DeepLabV3ResNet101(onnx_path,16);// 16 threads  lite::types::SegmentContent content;  cv::Mat img_bgr =cv::imread(test_img_path);  deeplabv3_resnet101->detect(img_bgr, content);if (content.flag)  {    cv::Mat out_img;cv::addWeighted(img_bgr,0.2, content.color_mat,0.8,0., out_img);cv::imwrite(save_img_path, out_img);if (!content.names_map.empty())    {for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)      {        std::cout << it->first <<" Name:" << it->second << std::endl;      }    }  }delete deeplabv3_resnet101;}

The output is:

More classes for object segmentation (general objects segmentation)

```cpp
auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);
```

Example7: Age Estimation using SSRNet. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/ssrnet.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_ssrnet.jpg";  std::string save_img_path ="../../../examples/logs/test_lite_ssrnet.jpg";auto *ssrnet =newlite::cv::face::attr::SSRNet(onnx_path);  lite::types::Age age;  cv::Mat img_bgr =cv::imread(test_img_path);  ssrnet->detect(img_bgr, age);lite::utils::draw_age_inplace(img_bgr, age);cv::imwrite(save_img_path, img_bgr);delete ssrnet;}

The output is:

More classes for face attributes analysis (age, gender, emotion)

```cpp
auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path);  // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path);  // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path);  // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path);  // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path);  // age estimation, 190kb only!!!
```

Example8: 1000 Classes Classification using DenseNet. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/densenet121.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_densenet.jpg";auto *densenet =newlite::cv::classification::DenseNet(onnx_path);  lite::types::ImageNetContent content;  cv::Mat img_bgr =cv::imread(test_img_path);  densenet->detect(img_bgr, content);if (content.flag)  {constunsignedint top_k = content.scores.size();if (top_k >0)    {for (unsignedint i =0; i < top_k; ++i)        std::cout << i +1                  <<":" << content.labels.at(i)                  <<":" << content.texts.at(i)                  <<":" << content.scores.at(i)                  << std::endl;    }  }delete densenet;}

The output is:

More classes for image classification (1000 classes)

```cpp
auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path);  // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path);  // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path);
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
```

Example9: Head Pose Estimation using FSANet. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/fsanet-var.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_fsanet.jpg";  std::string save_img_path ="../../../examples/logs/test_lite_fsanet.jpg";auto *fsanet =newlite::cv::face::pose::FSANet(onnx_path);  cv::Mat img_bgr =cv::imread(test_img_path);  lite::types::EulerAngles euler_angles;  fsanet->detect(img_bgr, euler_angles);if (euler_angles.flag)  {lite::utils::draw_axis_inplace(img_bgr, euler_angles);cv::imwrite(save_img_path, img_bgr);    std::cout <<"yaw:" << euler_angles.yaw <<" pitch:" << euler_angles.pitch <<" row:" << euler_angles.roll << std::endl;  }delete fsanet;}

The output is:

More classes for head pose estimation (euler angle, yaw, pitch, roll)

```cpp
auto *pose = new lite::cv::face::pose::FSANet(onnx_path);  // 1.2Mb only!
```

Example10: Style Transfer using FastStyleTransfer. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/style-candy-8.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";  std::string save_img_path ="../../../examples/logs/test_lite_fast_style_transfer_candy.jpg";auto *fast_style_transfer =newlite::cv::style::FastStyleTransfer(onnx_path);   lite::types::StyleContent style_content;  cv::Mat img_bgr =cv::imread(test_img_path);  fast_style_transfer->detect(img_bgr, style_content);if (style_content.flag)cv::imwrite(save_img_path, style_content.mat);delete fast_style_transfer;}

The output is:


More classes for style transfer (neural style transfer, others)

```cpp
auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path);  // 6.4Mb only
```

Example11: Human Head Segmentation using HeadSeg. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/minivision_head_seg.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_head_seg.png";  std::string save_img_path ="../../../examples/logs/test_lite_head_seg.jpg";auto *head_seg =newlite::cv::segmentation::HeadSeg(onnx_path,4);// 4 threads  lite::types::HeadSegContent content;  cv::Mat img_bgr =cv::imread(test_img_path);  head_seg->detect(img_bgr, content);if (content.flag)cv::imwrite(save_img_path, content.mask *255.f);delete head_seg;}

The output is:

More classes for human segmentation (head, portrait, hair, others)

```cpp
auto *segment = new lite::cv::segmentation::HeadSeg(onnx_path);  // 31Mb
auto *segment = new lite::cv::segmentation::FastPortraitSeg(onnx_path);  // <= 400Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegSINet(onnx_path);  // <= 380Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegExtremeC3Net(onnx_path);  // <= 180Kb !!! Extreme Tiny !!!
auto *segment = new lite::cv::segmentation::FaceHairSeg(onnx_path);  // 18M
auto *segment = new lite::cv::segmentation::HairSeg(onnx_path);  // 18M
auto *segment = new lite::cv::segmentation::MobileHairSeg(onnx_path);  // 14M
```

Example12: Photo to Cartoon transfer using Photo2Cartoon. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string head_seg_onnx_path ="../../../examples/hub/onnx/cv/minivision_head_seg.onnx";  std::string cartoon_onnx_path ="../../../examples/hub/onnx/cv/minivision_female_photo2cartoon.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_female_photo2cartoon.jpg";  std::string save_mask_path ="../../../examples/logs/test_lite_female_photo2cartoon_seg.jpg";  std::string save_cartoon_path ="../../../examples/logs/test_lite_female_photo2cartoon_cartoon.jpg";auto *head_seg =newlite::cv::segmentation::HeadSeg(head_seg_onnx_path,4);// 4 threadsauto *female_photo2cartoon =newlite::cv::style::FemalePhoto2Cartoon(cartoon_onnx_path,4);// 4 threads  lite::types::HeadSegContent head_seg_content;  cv::Mat img_bgr =cv::imread(test_img_path);  head_seg->detect(img_bgr, head_seg_content);if (head_seg_content.flag && !head_seg_content.mask.empty())  {cv::imwrite(save_mask_path, head_seg_content.mask *255.f);// Female Photo2Cartoon Style Transfer    lite::types::FemalePhoto2CartoonContent female_cartoon_content;    female_photo2cartoon->detect(img_bgr, head_seg_content.mask, female_cartoon_content);if (female_cartoon_content.flag && !female_cartoon_content.cartoon.empty())cv::imwrite(save_cartoon_path, female_cartoon_content.cartoon);  }delete head_seg;delete female_photo2cartoon;}

The output is:

More classes for photo style transfer.

```cpp
auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);
```

Example13: Face Parsing using FaceParsing. Download model from Model-Zoo2.

#include"lite/lite.h"staticvoidtest_default(){  std::string onnx_path ="../../../examples/hub/onnx/cv/face_parsing_512x512.onnx";  std::string test_img_path ="../../../examples/lite/resources/test_lite_face_parsing.png";  std::string save_img_path ="../../../examples/logs/test_lite_face_parsing_bisenet.jpg";auto *face_parsing_bisenet =newlite::cv::segmentation::FaceParsingBiSeNet(onnx_path,8);// 8 threads  lite::types::FaceParsingContent content;  cv::Mat img_bgr =cv::imread(test_img_path);  face_parsing_bisenet->detect(img_bgr, content);if (content.flag && !content.merge.empty())cv::imwrite(save_img_path, content.merge);delete face_parsing_bisenet;}

The output is:

More classes for face parsing (hair, eyes, nose, mouth, others)

```cpp
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path);  // 50Mb
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path);  // Dynamic Shape Inference.
```

©️ License

GNU General Public License v3.0

🎉 Contribute

Please consider ⭐ this repo if you like it, as it is the simplest way to support me.

Star History Chart
