cardboardcode/onnx_runtime_cpp

small c++ library to quickly deploy models using onnxruntime

This is a C++ library to quickly use onnxruntime to deploy deep learning models.
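For orientation, the sketch below shows roughly what the underlying onnxruntime C++ API calls look like when a model is loaded and run. It is a minimal, hedged example against the stock Ort C++ API, not this library's own wrapper classes; the model path, tensor shape and the input/output names ("data", "output") are placeholders.

#include <onnxruntime_cxx_api.h>

#include <iostream>
#include <vector>

int main()
{
    // environment + session; "model.onnx" is a placeholder path
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions sessionOptions;
    Ort::Session session(env, "model.onnx", sessionOptions);

    // dummy input tensor of shape 1x3x224x224 (adjust to your model)
    std::vector<int64_t> shape{1, 3, 224, 224};
    std::vector<float> input(1 * 3 * 224 * 224, 0.f);
    Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
        memInfo, input.data(), input.size(), shape.data(), shape.size());

    // names must match the ones stored in the onnx graph
    const char* inputNames[] = {"data"};
    const char* outputNames[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr}, inputNames, &inputTensor, 1, outputNames, 1);

    // the first output is assumed to be a tensor; print its element count
    std::cout << "output elements: "
              << outputs[0].GetTensorTypeAndShapeInfo().GetElementCount() << std::endl;
    return 0;
}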

Build


CPU
make default
# build examples
make apps
GPU with CUDA
make gpu_default
make gpu_apps
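With the GPU build, inference is routed to the GPU by registering the CUDA execution provider on the session options before the session is created. Below is a hedged sketch using the stock onnxruntime C++ API; device id 0 is just an example, and the exact provider-options fields vary between onnxruntime versions.

#include <onnxruntime_cxx_api.h>

// build session options that prefer the CUDA execution provider,
// falling back to CPU for operators CUDA does not support
Ort::SessionOptions makeCudaSessionOptions()
{
    Ort::SessionOptions sessionOptions;
    OrtCUDAProviderOptions cudaOptions{};
    cudaOptions.device_id = 0;  // example device id
    sessionOptions.AppendExecutionProvider_CUDA(cudaOptions);
    return sessionOptions;
}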

Run with Docker

CPU
# build
docker build -f ./dockerfiles/ubuntu2004.dockerfile -t onnx_runtime .
# run
docker run -it --rm -v `pwd`:/workspace onnx_runtime
GPU with CUDA
# build
# change the cuda version to match your local cuda version before building the docker image
docker build -f ./dockerfiles/ubuntu2004_gpu.dockerfile -t onnx_runtime_gpu .
# run
docker run -it --rm --gpus all -v `pwd`:/workspace onnx_runtime_gpu
  • Onnxruntime will be built with TensorRT support if the environment has TensorRT. Check this memo for useful URLs related to building with TensorRT.
  • Be careful to choose a TensorRT version compatible with onnxruntime. A good guess can be inferred from HERE.
  • Also, it is not possible to use models whose input shapes are dynamic with the TensorRT backend, according to this.
GPU with CUDA & TensorRT
# build
# change the cuda version to match your local cuda version before building the docker image
docker build -f ./dockerfiles/ubuntu2004_tensorrt.dockerfile -t onnx_runtime_gpu_tensorrt .
# run
docker run -it --rm --gpus all -v `pwd`:/workspace onnx_runtime_gpu_tensorrt
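When onnxruntime has been built with TensorRT support, the TensorRT execution provider can be registered ahead of CUDA so that TensorRT handles the subgraphs it supports and CUDA picks up the rest. This is a hedged sketch with the stock onnxruntime C++ API; field names and defaults differ between onnxruntime versions.

#include <onnxruntime_cxx_api.h>

Ort::SessionOptions makeTensorRTSessionOptions()
{
    Ort::SessionOptions sessionOptions;
    // TensorRT first ...
    OrtTensorRTProviderOptions trtOptions{};
    trtOptions.device_id = 0;  // example device id
    sessionOptions.AppendExecutionProvider_TensorRT(trtOptions);
    // ... then CUDA as fallback for nodes TensorRT cannot take
    OrtCUDAProviderOptions cudaOptions{};
    cudaOptions.device_id = 0;
    sessionOptions.AppendExecutionProvider_CUDA(cudaOptions);
    return sessionOptions;
}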

Run


Image Classification With Squeezenet


Usage
# after make apps
./build/examples/TestImageClassification ./data/squeezenet1.1.onnx ./data/images/dog.jpg

The following result can be obtained:

264 : Cardigan, Cardigan Welsh corgi : 0.391365
263 : Pembroke, Pembroke Welsh corgi : 0.376214
227 : kelpie : 0.0314975
158 : toy terrier : 0.0223435
230 : Shetland sheepdog, Shetland sheep dog, Shetland : 0.020529
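The listing is simply the five highest scores paired with their ImageNet class ids and names. A hedged sketch of how such a top-k ranking can be produced from a raw score vector (generic code, not this example app's actual implementation):

#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// return the indices of the k highest scores, best first (k must not exceed scores.size())
std::vector<int> topK(const std::vector<float>& scores, int k)
{
    std::vector<int> indices(scores.size());
    std::iota(indices.begin(), indices.end(), 0);
    std::partial_sort(indices.begin(), indices.begin() + k, indices.end(),
                      [&scores](int a, int b) { return scores[a] > scores[b]; });
    indices.resize(k);
    return indices;
}

// usage: for a 1000-class score vector `probs`,
//   for (int idx : topK(probs, 5)) std::printf("%d : %f\n", idx, probs[idx]);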

(back to top)

Object Detection With Tiny-Yolov2 trained on VOC dataset (with 20 classes)


Usage
  • Download the model from the onnx model zoo: HERE

  • The shape of the output would be

    OUTPUT_FEATUREMAP_SIZE x OUTPUT_FEATUREMAP_SIZE x NUM_ANCHORS x (NUM_CLASSES + 4 + 1)
    where OUTPUT_FEATUREMAP_SIZE = 13; NUM_ANCHORS = 5; NUM_CLASSES = 20 for the tiny-yolov2 model from the onnx model zoo
    (a short arithmetic sketch follows this list)
  • Test tiny-yolov2 inference apps
# after make apps
./build/examples/tiny_yolo_v2 [path/to/tiny_yolov2/onnx/model] ./data/images/dog.jpg
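To make the output-shape bullet above concrete, the snippet below just multiplies the numbers out for the model-zoo tiny-yolov2. It counts values only and says nothing about the memory layout, which depends on how the model was exported.

#include <cstdio>

int main()
{
    // tiny-yolov2 (VOC): 13x13 grid, 5 anchors per cell,
    // each anchor predicts 4 box offsets + 1 objectness + 20 class scores
    const int gridSize = 13, numAnchors = 5, numClasses = 20;
    const int valuesPerAnchor = numClasses + 4 + 1;  // 25
    const int totalValues = gridSize * gridSize * numAnchors * valuesPerAnchor;
    std::printf("%d x %d x %d x %d = %d output values per image\n",
                gridSize, gridSize, numAnchors, valuesPerAnchor, totalValues);  // 21125
    return 0;
}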

(back to top)

Object Instance Segmentation With MaskRCNN trained on MS COCO Dataset (80 + 1 (background) classes)


Usage
  • Download the model from the onnx model zoo: HERE

  • As also stated in the url above, there are four outputs: boxes (nboxes x 4), labels (nboxes), scores (nboxes), masks (nboxes x 1 x 28 x 28); see the reading sketch after the commands below

  • Test mask-rcnn inference apps

# after make apps
./build/examples/mask_rcnn [path/to/mask_rcnn/onnx/model] ./data/images/dogs.jpg
./build/examples/mask_rcnn [path/to/mask_rcnn/onnx/model] ./data/images/indoor.jpg
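As a rough illustration of how those four outputs can be read back from the vector returned by session.Run with the stock onnxruntime C++ API. The output ordering, the int64 label type and the score threshold are assumptions here, and the real post-processing (mask resizing, class names) is omitted.

#include <onnxruntime_cxx_api.h>

#include <cstdio>
#include <vector>

// outputs assumed ordered as boxes (nboxes x 4), labels (nboxes),
// scores (nboxes), masks (nboxes x 1 x 28 x 28)
void printDetections(std::vector<Ort::Value>& outputs, float scoreThreshold = 0.5f)
{
    const float* boxes = outputs[0].GetTensorMutableData<float>();
    const int64_t* labels = outputs[1].GetTensorMutableData<int64_t>();  // assumed int64 labels
    const float* scores = outputs[2].GetTensorMutableData<float>();
    const int64_t numBoxes = outputs[1].GetTensorTypeAndShapeInfo().GetElementCount();

    for (int64_t i = 0; i < numBoxes; ++i) {
        if (scores[i] < scoreThreshold) continue;
        std::printf("label %lld score %.3f box [%.1f %.1f %.1f %.1f]\n",
                    static_cast<long long>(labels[i]), scores[i],
                    boxes[4 * i], boxes[4 * i + 1], boxes[4 * i + 2], boxes[4 * i + 3]);
    }
}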

(back to top)

Yolo V3 trained on MS COCO Dataset


Usage
  • Download the model from the onnx model zoo: HERE

  • Test yolo-v3 inference apps

# after make apps
./build/examples/yolov3 [path/to/yolov3/onnx/model] ./data/images/no_way_home.jpg

(back to top)


Ultra-Light-Fast-Generic-Face-Detector-1MB


Usage
# after make apps
./build/examples/ultra_light_face_detector ./data/version-RFB-640.onnx ./data/images/endgame.jpg

(back to top)


YOLOX trained on MS COCO Dataset


Usage
  • Download the onnx model trained on the COCO dataset from HERE
# this app tests the yolox_l model but you can also try other yolox models
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.onnx -O ./data/yolox_l.onnx
  • Test inference apps
# after make apps
./build/examples/yolox ./data/yolox_l.onnx ./data/images/matrix.jpg

(back to top)


Semantic Segmentation With PaddleSeg's BiSeNetV2 trained on Cityscapes Dataset


Usage
  • Download PaddleSeg's bisenetv2 model trained on the cityscapes dataset, already converted to onnx, HERE and copy it to the ./data directory
    You can also convert your own PaddleSeg model with the following procedure
  • Test inference apps (a generic post-processing sketch follows the commands)
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/sample_city_scapes.png
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/odaiba.jpg
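For context, a semantic segmentation model produces per-pixel class scores (or, depending on the export, an already-argmaxed label map). The generic sketch below collapses a class-major score buffer into a label map; the layout, class count and dtype are assumptions for illustration, not this app's exact output format.

#include <algorithm>
#include <cstdint>
#include <vector>

// collapse per-pixel class scores (stored class-major: C planes of H*W floats)
// into a label map; sizes and layout are assumptions for illustration
std::vector<uint8_t> argmaxLabelMap(const std::vector<float>& scores,
                                    int numClasses, int height, int width)
{
    std::vector<uint8_t> labels(height * width, 0);
    for (int p = 0; p < height * width; ++p) {
        float best = scores[p];  // class 0 plane
        for (int c = 1; c < numClasses; ++c) {
            const float v = scores[c * height * width + p];
            if (v > best) { best = v; labels[p] = static_cast<uint8_t>(c); }
        }
    }
    return labels;
}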

(back to top)


SuperPoint


Usage
  • Convert SuperPoint's pretrained weights to onnx format
git submodule update --init --recursive
python3 -m pip install -r scripts/superpoint/requirements.txt
python3 scripts/superpoint/convert_to_onnx.py
  • Download test images
wget https://raw.githubusercontent.com/StaRainJ/Multi-modality-image-matching-database-metrics-methods/master/Multimodal_Image_Matching_Datasets/ComputerVision/CrossSeason/VisionCS_0a.png -P data
wget https://raw.githubusercontent.com/StaRainJ/Multi-modality-image-matching-database-metrics-methods/master/Multimodal_Image_Matching_Datasets/ComputerVision/CrossSeason/VisionCS_0b.png -P data
  • Test inference apps
./build/examples/super_point /path/to/super_point.onnx data/VisionCS_0a.png data/VisionCS_0b.png

(back to top)


SuperGlue


Usage
  • Convert SuperPoint's pretrained weights to onnx format: follow the instructions in the SuperPoint section above

  • Convert SuperGlue's pretrained weights to onnx format

git submodule update --init --recursive
python3 -m pip install -r scripts/superglue/requirements.txt
python3 -m pip install -r scripts/superglue/SuperGluePretrainedNetwork/requirements.txt
python3 scripts/superglue/convert_to_onnx.py
  • Download test images from this dataset, or prepare some pairs of your own images

  • Test inference apps

./build/examples/super_glue /path/to/super_point.onnx /path/to/super_glue.onnx /path/to/1st/image /path/to/2nd/image

(back to top)


LoFTR


Usage
  • Download the LoFTR weights indoor_ds_new.ckpt from HERE. (LoFTR's latest commit seems to be compatible only with the new weights (Ref: zju3dv/LoFTR#48). Hence, this onnx cpp application is only compatible with the indoor_ds_new.ckpt weights.)

  • Convert LoFTR's pretrained weights to onnx format

git submodule update --init --recursive
python3 -m pip install -r scripts/loftr/requirements.txt
python3 scripts/loftr/convert_to_onnx.py --model_path /path/to/indoor_ds_new.ckpt
  • Download test images from this dataset, or prepare some pairs of your own images

  • Test inference apps

./build/examples/loftr /path/to/loftr.onnx /path/to/loftr.onnx /path/to/1st/image /path/to/2nd/image

(back to top)
