NVIDIA-AI-IOT/tf_to_trt_image_classification

Image classification with NVIDIA TensorRT from TensorFlow models.

This repository contains examples, scripts, and code related to image classification using TensorFlow models (from the TensorFlow slim model zoo) converted to TensorRT. Converting TensorFlow models to TensorRT offers significant performance gains on the Jetson TX2, as shown below.

Models

The table below shows various details related to pretrained models ported from the TensorFlow slim model zoo.

| Model | Input Size | TensorRT (TX2 / Half) | TensorRT (TX2 / Float) | TensorFlow (TX2 / Float) | Input Name | Output Name | Preprocessing Fn. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| inception_v1 | 224x224 | 7.98ms | 12.8ms | 27.6ms | input | InceptionV1/Logits/SpatialSqueeze | inception |
| inception_v3 | 299x299 | 26.3ms | 46.1ms | 98.4ms | input | InceptionV3/Logits/SpatialSqueeze | inception |
| inception_v4 | 299x299 | 52.1ms | 88.2ms | 176ms | input | InceptionV4/Logits/Logits/BiasAdd | inception |
| inception_resnet_v2 | 299x299 | 53.0ms | 98.7ms | 168ms | input | InceptionResnetV2/Logits/Logits/BiasAdd | inception |
| resnet_v1_50 | 224x224 | 15.7ms | 27.1ms | 63.9ms | input | resnet_v1_50/SpatialSqueeze | vgg |
| resnet_v1_101 | 224x224 | 29.9ms | 51.8ms | 107ms | input | resnet_v1_101/SpatialSqueeze | vgg |
| resnet_v1_152 | 224x224 | 42.6ms | 78.2ms | 157ms | input | resnet_v1_152/SpatialSqueeze | vgg |
| resnet_v2_50 | 299x299 | 27.5ms | 44.4ms | 92.2ms | input | resnet_v2_50/SpatialSqueeze | inception |
| resnet_v2_101 | 299x299 | 49.2ms | 83.1ms | 160ms | input | resnet_v2_101/SpatialSqueeze | inception |
| resnet_v2_152 | 299x299 | 74.6ms | 124ms | 230ms | input | resnet_v2_152/SpatialSqueeze | inception |
| mobilenet_v1_0p25_128 | 128x128 | 2.67ms | 2.65ms | 15.7ms | input | MobilenetV1/Logits/SpatialSqueeze | inception |
| mobilenet_v1_0p5_160 | 160x160 | 3.95ms | 4.00ms | 16.9ms | input | MobilenetV1/Logits/SpatialSqueeze | inception |
| mobilenet_v1_1p0_224 | 224x224 | 12.9ms | 12.9ms | 24.4ms | input | MobilenetV1/Logits/SpatialSqueeze | inception |
| vgg_16 | 224x224 | 38.2ms | 79.2ms | 171ms | input | vgg_16/fc8/BiasAdd | vgg |

The times recorded include data transfer to the GPU, network execution, and data transfer back from the GPU. They do not include preprocessing. See scripts/test_tf.py, scripts/test_trt.py, and src/test/test_trt.cu for implementation details.
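The measurement pattern described above (warm up, then average wall-clock time over repeated end-to-end runs) can be sketched in plain Python. This is an illustrative stand-in, not the repository's code: the dummy workload here is a CPU matrix multiply rather than an actual GPU inference.

```python
import time

import numpy as np


def run_inference(batch):
    # Stand-in for transfer-to-GPU + network execution + transfer back.
    # A CPU matrix multiply keeps the sketch runnable anywhere.
    weights = np.random.rand(batch.shape[1], 10)
    return batch @ weights


batch = np.random.rand(1, 224 * 224)
run_inference(batch)  # warm up so one-time setup cost is not counted

runs = 50
start = time.time()
for _ in range(runs):
    run_inference(batch)
avg_ms = (time.time() - start) / runs * 1000.0
print("average latency: %.2f ms" % avg_ms)
```

The warm-up call matters in practice: the first execution of a TensorRT engine or TensorFlow graph typically includes initialization costs that would skew a cold-start average.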

Setup

  1. Flash the Jetson TX2 using JetPack 3.2. Be sure to install

    • CUDA 9.0
    • OpenCV4Tegra
    • cuDNN
    • TensorRT 3.0
  2. Install pip on Jetson TX2.

    sudo apt-get install python-pip
  3. Install TensorFlow on Jetson TX2.

    1. Download the TensorFlow 1.5.0 pip wheel from here. This build of TensorFlow is provided as a convenience for the purposes of this project.

    2. Install TensorFlow using pip

        sudo pip install tensorflow-1.5.0rc0-cp27-cp27mu-linux_aarch64.whl
  4. Install uff exporter on Jetson TX2.

    1. Download the TensorRT 3.0.4 for Ubuntu 16.04 and CUDA 9.0 tar package from https://developer.nvidia.com/nvidia-tensorrt-download.

    2. Extract archive

        tar -xzf TensorRT-3.0.4.Ubuntu-16.04.3.x86_64.cuda-9.0.cudnn7.0.tar.gz
    3. Install uff python package using pip

        sudo pip install TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl
  5. Clone and build this project

    git clone --recursive https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification.git
    cd tf_to_trt_image_classification
    mkdir build
    cd build
    cmake ..
    make
    cd ..

Download models and create frozen graphs

Run the following bash script to download all of the pretrained models.

source scripts/download_models.sh

If there are any models you don't want to use, simply remove the URL from the model list in scripts/download_models.sh.
Next, because the TensorFlow models are provided in checkpoint format, we must convert them to frozen graphs for optimization with TensorRT. Run the scripts/models_to_frozen_graphs.py script.

python scripts/models_to_frozen_graphs.py

If you removed any models in the previous step, you must add 'exclude': True to the corresponding item in the NETS dictionary located in scripts/model_meta.py. If you are following the instructions for executing engines below, you will also need some sample images. Run the following script to download a few images from ImageNet.

source scripts/download_images.sh
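For illustration, the 'exclude' flag mentioned above might be used like this. The miniature NETS dictionary below is a made-up stand-in; only scripts/model_meta.py itself is authoritative for the real keys and structure.

```python
# Hypothetical miniature of the NETS dictionary in scripts/model_meta.py.
NETS = {
    'inception_v1': {'input_width': 224, 'input_height': 224},
    'vgg_16': {'input_width': 224, 'input_height': 224, 'exclude': True},
}

# Scripts can then skip any net whose entry is marked excluded.
active = {name: cfg for name, cfg in NETS.items()
          if not cfg.get('exclude', False)}
print(sorted(active))  # ['inception_v1']
```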

Convert frozen graph to TensorRT engine

Run the scripts/convert_plan.py script from the root directory of the project, referencing the models table for relevant parameters. For example, to convert the Inception V1 model, run the following:

python scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float

The inputs to the convert_plan.py script are

  1. frozen graph path
  2. output plan path
  3. input node name
  4. input height
  5. input width
  6. output node name
  7. max batch size
  8. max workspace size
  9. data type (float or half)

This script assumes single-input, single-output image models, and may not work out of the box for models other than those in the table above.
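As a sanity check on the argument order, a small helper can assemble the nine positional arguments from a table row. This is a sketch: the dictionary field names are assumptions for illustration, while the values for resnet_v1_50 come from the models table above.

```python
def convert_plan_args(model):
    # Assemble the nine positional arguments in the order listed above.
    return [
        "scripts/convert_plan.py",
        model["frozen_graph"],    # 1. frozen graph path
        model["plan"],            # 2. output plan path
        model["input_name"],      # 3. input node name
        str(model["height"]),     # 4. input height
        str(model["width"]),      # 5. input width
        model["output_name"],     # 6. output node name
        str(model["max_batch"]),  # 7. max batch size
        str(model["workspace"]),  # 8. max workspace size
        model["dtype"],           # 9. data type (float or half)
    ]


# Values taken from the models table above.
resnet_v1_50 = {
    "frozen_graph": "data/frozen_graphs/resnet_v1_50.pb",
    "plan": "data/plans/resnet_v1_50.plan",
    "input_name": "input",
    "height": 224,
    "width": 224,
    "output_name": "resnet_v1_50/SpatialSqueeze",
    "max_batch": 1,
    "workspace": 0,
    "dtype": "half",
}
print(" ".join(["python"] + convert_plan_args(resnet_v1_50)))
```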

Execute TensorRT engine

Call the examples/classify_image program from the root directory of the project, referencing the models table for relevant parameters. For example, to run the Inception V1 model converted as above:

./build/examples/classify_image/classify_image data/images/gordon_setter.jpg data/plans/inception_v1.plan data/imagenet_labels_1001.txt input InceptionV1/Logits/SpatialSqueeze inception

For reference, the inputs to the example program are

  1. input image path
  2. plan file path
  3. labels file (one label per line, line number corresponds to index in output)
  4. input node name
  5. output node name
  6. preprocessing function (either vgg or inception)
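The two preprocessing functions follow the usual TF-slim conventions: inception scales pixels to [-1, 1], while vgg subtracts a per-channel ImageNet mean. The NumPy sketch below uses the commonly cited slim mean values as an assumption; check the src/ sources for the exact implementation used in this project.

```python
import numpy as np

# Commonly used TF-slim ImageNet channel means (RGB order) - an
# assumption here, not taken from this repository's sources.
VGG_MEAN = np.array([123.68, 116.78, 103.94], dtype=np.float32)


def preprocess_inception(image):
    # uint8 [0, 255] -> float32 [-1, 1]
    return image.astype(np.float32) / 127.5 - 1.0


def preprocess_vgg(image):
    # uint8 [0, 255] -> float32 with per-channel mean subtracted
    return image.astype(np.float32) - VGG_MEAN


image = np.full((224, 224, 3), 255, dtype=np.uint8)
print(preprocess_inception(image).max())  # 1.0
print(preprocess_vgg(image)[0, 0])
```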

We provide two image label files in the data folder. Some of the TensorFlow models were trained with an additional "background" class, causing the model to have 1001 outputs instead of 1000. To determine the number of outputs for each model, reference the NETS variable in scripts/model_meta.py.
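The background-class offset is handled entirely by which labels file you pass: line number corresponds to output index, so a 1001-entry file simply has "background" on its first line. A sketch with synthetic labels (these short lists stand in for the actual files in data/):

```python
def load_labels(lines):
    # One label per line; the 0-based line number corresponds to the
    # index in the network's output vector.
    return [line.strip() for line in lines]


# Synthetic stand-ins for a 1000-class labels file and the 1001-entry
# variant with an extra leading "background" class.
labels_1000 = load_labels(["tench", "goldfish", "great white shark"])
labels_1001 = load_labels(["background", "tench", "goldfish", "great white shark"])

argmax_index = 1  # pretend the network's top-scoring output index is 1
print(labels_1000[argmax_index])  # goldfish
print(labels_1001[argmax_index])  # tench
```

Passing the wrong variant shifts every prediction by one class, which is why the labels file must match the model's output count.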

Benchmark all models

To benchmark all of the models, first convert all of the models that you downloaded above into TensorRT engines. Run the following script to convert all models:

python scripts/frozen_graphs_to_plans.py

If you want to change parameters related to TensorRT optimization, just edit the scripts/frozen_graphs_to_plans.py file. Next, to benchmark all of the models, run the scripts/test_trt.py script:

python scripts/test_trt.py

Once finished, the timing results will be stored at data/test_output_trt.txt. If you also want to benchmark the TensorFlow models, simply run:

python scripts/test_tf.py

The results will be stored at data/test_output_tf.txt. This benchmarking script loads an example image as input, so make sure you have downloaded the sample images as described above.
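Once both output files exist, you can compare them per model. Using latencies already listed in the models table at the top, the TensorRT-over-TensorFlow speedup works out like this (a few representative rows, half precision vs. TensorFlow float):

```python
# Latencies in ms from the models table: (TensorRT TX2/Half, TensorFlow TX2/Float).
latencies_ms = {
    "inception_v1": (7.98, 27.6),
    "resnet_v1_50": (15.7, 63.9),
    "vgg_16": (38.2, 171.0),
}

speedups = {name: tf_float / trt_half
            for name, (trt_half, tf_float) in latencies_ms.items()}
for name, speedup in sorted(speedups.items()):
    print("%s: %.1fx speedup" % (name, speedup))
```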
