# onnx_runtime_cpp

A small C++ library to quickly deploy models using onnxruntime.
This is a C++ library that uses onnxruntime to quickly deploy deep learning models.
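For orientation, below is a minimal sketch of the raw onnxruntime C++ API that this library builds on. It is not this repository's own interface; the model path and all-zero input are placeholders, and `GetInputNameAllocated`/`GetOutputNameAllocated` require a reasonably recent onnxruntime (>= 1.13).

```cpp
#include <onnxruntime_cxx_api.h>

#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions options;
    Ort::Session session(env, "./data/squeezenet1.1.onnx", options);

    // query the first input's name and shape
    Ort::AllocatorWithDefaultOptions allocator;
    auto inputName = session.GetInputNameAllocated(0, allocator);
    auto typeInfo = session.GetInputTypeInfo(0);
    std::vector<int64_t> shape = typeInfo.GetTensorTypeAndShapeInfo().GetShape();
    for (auto& d : shape) {
        if (d < 0) d = 1;  // onnxruntime reports dynamic dims as -1
    }

    // build a dummy all-zero input tensor (a real app fills in image data)
    size_t numel = 1;
    for (auto d : shape) numel *= static_cast<size_t>(d);
    std::vector<float> inputData(numel, 0.f);
    auto memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        memInfo, inputData.data(), inputData.size(), shape.data(), shape.size());

    // run the model and report the first output's element count
    auto outputName = session.GetOutputNameAllocated(0, allocator);
    const char* inNames[] = {inputName.get()};
    const char* outNames[] = {outputName.get()};
    auto outputs = session.Run(Ort::RunOptions{nullptr}, inNames, &input, 1, outNames, 1);
    std::cout << outputs[0].GetTensorTypeAndShapeInfo().GetElementCount() << " output values\n";
    return 0;
}
```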
## How to Build

### CPU
```bash
make default

# build examples
make apps
```
### GPU with CUDA
```bash
make gpu_default
make gpu_apps
```
## How to Run with Docker

### CPU
```bash
# build
docker build -f ./dockerfiles/ubuntu2004.dockerfile -t onnx_runtime .

# run
docker run -it --rm -v `pwd`:/workspace onnx_runtime
```
### GPU with CUDA
```bash
# build
# change the cuda version to match your local cuda version before building the docker image
docker build -f ./dockerfiles/ubuntu2004_gpu.dockerfile -t onnx_runtime_gpu .

# run
docker run -it --rm --gpus all -v `pwd`:/workspace onnx_runtime_gpu
```
- Onnxruntime will be built with TensorRT support if the environment has TensorRT. Check this memo for useful URLs related to building with TensorRT.
- Be careful to choose a TensorRT version compatible with onnxruntime. A good guess can be inferred from HERE.
- Also, it is not possible to use models whose input shapes are dynamic with the TensorRT backend, according to this. (A quick way to check a model for dynamic input dimensions is sketched after these notes.)
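The check mentioned in the last point is straightforward with the plain onnxruntime C++ API; the sketch below assumes a placeholder model path and is not part of this library.

```cpp
#include <onnxruntime_cxx_api.h>

#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "check");
    Ort::Session session(env, "model.onnx", Ort::SessionOptions{});  // placeholder path
    for (size_t i = 0; i < session.GetInputCount(); ++i) {
        auto typeInfo = session.GetInputTypeInfo(i);
        auto shape = typeInfo.GetTensorTypeAndShapeInfo().GetShape();
        for (int64_t d : shape) {
            if (d < 0) {  // onnxruntime reports symbolic/dynamic dims as -1
                std::cout << "input " << i << " has a dynamic dimension; "
                          << "not usable with the TensorRT backend\n";
            }
        }
    }
    return 0;
}
```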
### GPU with CUDA & TensorRT
```bash
# build
# change the cuda version to match your local cuda version before building the docker image
docker build -f ./dockerfiles/ubuntu2004_tensorrt.dockerfile -t onnx_runtime_gpu_tensorrt .

# run
docker run -it --rm --gpus all -v `pwd`:/workspace onnx_runtime_gpu_tensorrt
```
## Examples

### Image Classification With SqueezeNet
```bash
# after make apps
./build/examples/TestImageClassification ./data/squeezenet1.1.onnx ./data/images/dog.jpg
```
The following result can be obtained:
```text
264 : Cardigan, Cardigan Welsh corgi : 0.391365
263 : Pembroke, Pembroke Welsh corgi : 0.376214
227 : kelpie : 0.0314975
158 : toy terrier : 0.0223435
230 : Shetland sheepdog, Shetland sheep dog, Shetland : 0.020529
```
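A top-k listing like this can be produced generically by partially sorting class indices by score; the sketch below uses dummy values and is not necessarily the exact code behind TestImageClassification.

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<float> scores(1000, 0.f);  // e.g. squeezenet1.1's 1000-class softmax output
    scores[264] = 0.391365f;               // dummy values mirroring the result above
    scores[263] = 0.376214f;

    // sort class indices by descending score and keep the best five
    std::vector<int> idx(scores.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + 5, idx.end(),
                      [&](int a, int b) { return scores[a] > scores[b]; });
    for (int i = 0; i < 5; ++i) {
        std::printf("%d : %f\n", idx[i], scores[idx[i]]);
    }
    return 0;
}
```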
### Object Detection With Tiny-YOLOv2
- Download the model from the onnx model zoo: HERE
- The shape of the output is OUTPUT_FEATUREMAP_SIZE x OUTPUT_FEATUREMAP_SIZE x NUM_ANCHORS x (NUM_CLASSES + 4 + 1), where OUTPUT_FEATUREMAP_SIZE = 13, NUM_ANCHORS = 5, and NUM_CLASSES = 20 for the tiny-yolov2 model from the onnx model zoo. The arithmetic is sketched after this list.
- Test tiny-yolov2 inference apps
```bash
# after make apps
./build/examples/tiny_yolo_v2 [path/to/tiny_yolov2/onnx/model] ./data/images/dog.jpg
```
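As a sanity check on those numbers, here is the output-size arithmetic; the memory layout (channel ordering) of the actual tensor is not addressed here and should be confirmed against the model.

```cpp
#include <cstdio>

int main() {
    const int OUTPUT_FEATUREMAP_SIZE = 13;
    const int NUM_ANCHORS = 5;
    const int NUM_CLASSES = 20;  // tiny-yolov2 from the model zoo is trained on PASCAL VOC

    // each anchor predicts 4 box coordinates + 1 objectness score + NUM_CLASSES class scores
    const int perAnchor = NUM_CLASSES + 4 + 1;  // 25
    const int total = OUTPUT_FEATUREMAP_SIZE * OUTPUT_FEATUREMAP_SIZE * NUM_ANCHORS * perAnchor;
    std::printf("%d x %d x %d x %d = %d output values\n", OUTPUT_FEATUREMAP_SIZE,
                OUTPUT_FEATUREMAP_SIZE, NUM_ANCHORS, perAnchor, total);  // 13 x 13 x 5 x 25 = 21125
    return 0;
}
```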
### Instance Segmentation With MaskRCNN
- Download the model from the onnx model zoo: HERE
- As also stated in the url above, there are four outputs: boxes (nboxes x 4), labels (nboxes), scores (nboxes), and masks (nboxes x 1 x 28 x 28); a sketch of reading these outputs follows the commands below.
- Test mask-rcnn inference apps
```bash
# after make apps
./build/examples/mask_rcnn [path/to/mask_rcnn/onnx/model] ./data/images/dogs.jpg
./build/examples/mask_rcnn [path/to/mask_rcnn/onnx/model] ./data/images/indoor.jpg
```
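For orientation, a hedged sketch of how those four outputs could be read from a raw onnxruntime `Run()` result: the output order follows the model-zoo description, session setup and preprocessing are omitted, and the int64 label type is an assumption based on the model-zoo export.

```cpp
#include <onnxruntime_cxx_api.h>

#include <cstdint>
#include <vector>

void readMaskRcnnOutputs(const std::vector<Ort::Value>& outputs) {
    // boxes: (nboxes, 4) box coordinates
    const float* boxes = outputs[0].GetTensorData<float>();
    const int64_t nboxes = outputs[0].GetTensorTypeAndShapeInfo().GetShape()[0];

    // labels: (nboxes,) class ids, int64 in the model-zoo export (assumption)
    const int64_t* labels = outputs[1].GetTensorData<int64_t>();

    // scores: (nboxes,) confidence per detection
    const float* scores = outputs[2].GetTensorData<float>();

    // masks: (nboxes, 1, 28, 28) low-res masks, to be resized into each box
    const float* masks = outputs[3].GetTensorData<float>();

    (void)boxes; (void)nboxes; (void)labels; (void)scores; (void)masks;
}
```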
### Object Detection With YOLOv3
- Download the model from the onnx model zoo: HERE
- Test yolo-v3 inference apps
```bash
# after make apps
./build/examples/yolov3 [path/to/yolov3/onnx/model] ./data/images/no_way_home.jpg
```
### Face Detection With Ultra-Light-Fast-Generic-Face-Detector-1MB
- App to use an onnx model trained with the famous light-weight Ultra-Light-Fast-Generic-Face-Detector-1MB
- A sample weight has been saved in ./data/version-RFB-640.onnx
- Test inference apps
```bash
# after make apps
./build/examples/ultra_light_face_detector ./data/version-RFB-640.onnx ./data/images/endgame.jpg
```
### Object Detection With YOLOX
- Download the onnx model trained on the COCO dataset from HERE
```bash
# this app tests the yolox_l model but you can also try other yolox models
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.onnx -O ./data/yolox_l.onnx
```
- Test inference apps
```bash
# after make apps
./build/examples/yolox ./data/yolox_l.onnx ./data/images/matrix.jpg
```
### Semantic Segmentation With PaddleSeg BiSeNetV2
- Download PaddleSeg's bisenetv2 model trained on the cityscapes dataset that has been converted to onnx HERE, and copy it to the ./data directory
- You can also convert your own PaddleSeg model with the following procedure:
  - export the PaddleSeg model
  - convert the exported model to onnx format with Paddle2ONNX
- Test inference apps
```bash
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/sample_city_scapes.png
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/odaiba.jpg
```
### SuperPoint
- Convert SuperPoint's pretrained weights to onnx format
```bash
git submodule update --init --recursive
python3 -m pip install -r scripts/superpoint/requirements.txt
python3 scripts/superpoint/convert_to_onnx.py
```
- Download test images from this dataset
```bash
wget https://raw.githubusercontent.com/StaRainJ/Multi-modality-image-matching-database-metrics-methods/master/Multimodal_Image_Matching_Datasets/ComputerVision/CrossSeason/VisionCS_0a.png -P data
wget https://raw.githubusercontent.com/StaRainJ/Multi-modality-image-matching-database-metrics-methods/master/Multimodal_Image_Matching_Datasets/ComputerVision/CrossSeason/VisionCS_0b.png -P data
```
- Test inference apps
```bash
./build/examples/super_point /path/to/super_point.onnx data/VisionCS_0a.png data/VisionCS_0b.png
```
### SuperGlue
- Convert SuperPoint's pretrained weights to onnx format: follow the instructions above
- Convert SuperGlue's pretrained weights to onnx format
```bash
git submodule update --init --recursive
python3 -m pip install -r scripts/superglue/requirements.txt
python3 -m pip install -r scripts/superglue/SuperGluePretrainedNetwork/requirements.txt
python3 scripts/superglue/convert_to_onnx.py
```
- Download test images from this dataset, or prepare some pairs of your own images
- Test inference apps
```bash
./build/examples/super_glue /path/to/super_point.onnx /path/to/super_glue.onnx /path/to/1st/image /path/to/2nd/image
```
### LoFTR
- Download the LoFTR weights indoor_ds_new.ckpt from HERE. (LoFTR's latest commit seems to be compatible only with the new weights (Ref: zju3dv/LoFTR#48). Hence, this onnx cpp application is only compatible with the indoor_ds_new.ckpt weights.)
- Convert LoFTR's pretrained weights to onnx format
```bash
git submodule update --init --recursive
python3 -m pip install -r scripts/loftr/requirements.txt
python3 scripts/loftr/convert_to_onnx.py --model_path /path/to/indoor_ds_new.ckpt
```
- Download test images from this dataset, or prepare some pairs of your own images
- Test inference apps
```bash
./build/examples/loftr /path/to/loftr.onnx /path/to/loftr.onnx /path/to/1st/image /path/to/2nd/image
```