Enabling oneVPL source in G-API


Building G-API with oneVPL Toolkit support

  • Install the oneVPL Toolkit and set up its environment, e.g. by running:
<installation path>/share/oneVPL/env/vars.bat
  • Then specify extra options to OpenCV CMake:
$ cmake /path-to-opencv -DWITH_GAPI_ONEVPL=ON
  • Run the tests:
/path-to-opencv-build/bin/opencv_test_gapi --gtest_filter=*OneVPL_Source*
  • For how to run the example, configure the oneVPL file-based source and launch OpenVINO inference, see /path-to-opencv-build/bin/example_gapi_onevpl_infer_single_roi. Also see the Building with OpenVINO Toolkit support section for how to configure G-API with OpenVINO.

VPL Source capabilities & limitations

  • G-API oneVPL Source implements a string-based parameter configuration mechanism through CfgParam objects packed into an array or initializer list. These parameters mirror the oneVPL configuration parameters listed at https://spec.oneapi.io/versions/latest/elements/oneVPL/source/programming_guide/VPL_prg_session.html#dsp-conf-prop-table. Some of these parameters are MAJOR and others are OPTIONAL: MAJOR params are necessary to make the VPL source work, while OPTIONAL params provide extra optimizations or advise the VPL dispatcher to select the preferable oneVPL library implementation (such as a version index) and so on. All params have name and value fields, which the G-API oneVPL Source maps to the corresponding VPL configuration parameter by itself.

Let's consider an example of choosing the hardware acceleration type for the VPL Source:

As described in https://spec.oneapi.io/versions/latest/elements/oneVPL/source/programming_guide/VPL_prg_session.html#dsp-conf-prop-table, it has the name mfxImplDescription.AccelerationMode and the type MFX_VARIANT_TYPE_U32, so we just use

std::vector<CfgParam> cfg_params;
cfg_params.push_back(CfgParam::create_acceleration_mode(MFX_ACCEL_MODE_VIA_D3D11));

or

std::vector<CfgParam> cfg_params;
cfg_params.push_back(CfgParam::create_acceleration_mode("MFX_ACCEL_MODE_VIA_D3D11"));

The G-API oneVPL Source interface must parse either the int or the string representation of a parameter value. To find out which VPL parameters are supported, please refer to https://github.com/opencv/opencv/blob/4.x/modules/gapi/include/opencv2/gapi/streaming/onevpl/cfg_params.hpp#L63 (the list of parameters is updated regularly).
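As a minimal end-to-end sketch (an assumption rather than part of this page: it relies on the cv::gapi::wip::make_onevpl_src helper from opencv2/gapi/streaming/onevpl/source.hpp and the cv::gapi::streaming::BGR accessor; the input file name is a placeholder), the configured parameters are passed to the source constructor and the source is plugged into a G-API streaming pipeline:

#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/gframe.hpp>
#include <opencv2/gapi/gstreaming.hpp>
#include <opencv2/gapi/streaming/format.hpp>
#include <opencv2/gapi/streaming/onevpl/source.hpp>

int main() {
    using namespace cv::gapi::wip;

    // Configure the source: request D3D11 hardware acceleration (string value form)
    onevpl::CfgParams cfg_params;
    cfg_params.push_back(onevpl::CfgParam::create_acceleration_mode("MFX_ACCEL_MODE_VIA_D3D11"));

    // The source emits cv::MediaFrame objects, so the graph input is a GFrame
    cv::GFrame in;
    cv::GMat bgr = cv::gapi::streaming::BGR(in);
    auto pipeline = cv::GComputation(cv::GIn(in), cv::GOut(bgr)).compileStreaming();

    // "video.h265" is just a placeholder input path
    auto src = make_onevpl_src("video.h265", cfg_params);
    pipeline.setSource(cv::gin(src));
    pipeline.start();

    cv::Mat frame;
    while (pipeline.pull(cv::gout(frame))) {
        // process the decoded BGR frame here
    }
    return 0;
}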

  • Only the Windows platform with DX11 hardware acceleration is tested and supported.

According to the oneVPL dispatcher, the user is free to choose the preferred acceleration type. However, the default deployment of the VPL implementation driver supplies hardware acceleration only, which means that ALL decoding operations use hardware acceleration. What can be clarified for the G-API oneVPL Source is only what type of memory the source should produce in its resulting cv::MediaFrame. If no parameter describing mfxImplDescription.AccelerationMode has been passed during G-API oneVPL Source construction, then cv::MediaFrame will carry CPU memory as frame data (which implies a copy from CPU to the accelerator during the decode operation and back again). Otherwise, assuming we passed CfgParam::create_acceleration_mode("MFX_ACCEL_MODE_VIA_D3D11"), the cv::MediaFrame would carry GPU memory (a DX11 Texture2D) inside and would require using cv::MediaFrame::access to get its value.
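For illustration, here is a sketch (an assumption based on the general cv::MediaFrame API, not taken from this page) of reading frame data out of the produced cv::MediaFrame; with the D3D11 acceleration mode the view refers to device memory, otherwise it points to CPU memory:

#include <cstddef>
#include <opencv2/gapi/gframe.hpp>
#include <opencv2/gapi/media.hpp>

void inspect(const cv::MediaFrame& frame) {
    const cv::GFrameDesc desc = frame.desc();                             // pixel format (e.g. NV12) and size
    cv::MediaFrame::View view = frame.access(cv::MediaFrame::Access::R);  // read-only view of the frame data
    void*       plane0        = view.ptr[0];                              // first plane pointer (Y for NV12)
    std::size_t plane0_step   = view.stride[0];                           // its stride in bytes
    (void)desc; (void)plane0; (void)plane0_step;                          // placeholders for real processing
}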

  • G-API oneVPL Source supports video decoding both of raw video stream formats (see the oneVPL supported codecs) and of containerized media via inner demultiplexing using Microsoft Media Foundation primitives.

The implementation doesn't rely on the file extension to choose the format, because it may well be wrong. Instead, the following interface is provided: if no parameter describing mfxImplDescription.mfxDecoderDescription.decoder.CodecID has been passed during G-API oneVPL source construction, then the implementation tries out the demultiplexing schema; if a specific codec ID is set (for example CfgParam::create_decoder_id(MFX_CODEC_HEVC)), then the implementation assumes a raw stream unconditionally.
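A short sketch of the two resulting configurations (an assumption for illustration: it uses the cv::gapi::wip::make_onevpl_src helper, hypothetical helper names, and the string form of the codec ID to avoid pulling in the oneVPL headers directly):

#include <string>
#include <opencv2/gapi/streaming/onevpl/source.hpp>

using namespace cv::gapi::wip;

// Containerized file (*.mp4, *.mkv): no CodecID is passed, so the source
// tries the demultiplexing schema (Windows / Media Foundation).
IStreamSource::Ptr make_demux_source(const std::string& path) {
    return make_onevpl_src(path, onevpl::CfgParams{});
}

// Raw elementary stream: the CodecID is set explicitly, so the source
// treats the input as a raw stream unconditionally.
IStreamSource::Ptr make_raw_hevc_source(const std::string& path) {
    onevpl::CfgParams cfg;
    cfg.push_back(onevpl::CfgParam::create_decoder_id("MFX_CODEC_HEVC"));
    return make_onevpl_src(path, cfg);
}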

Please use the environment variable OPENCV_LOG_LEVEL=Info (at least) to get a full description in case of any source file errors; usually the default level OPENCV_LOG_LEVEL=Warn is enough.

How to launch OpenVINO inference using the VPL Source: a practical guide

First of all, make sure that the prerequisites are met: https://github.com/opencv/opencv/wiki/Graph-API#building-with-openvino-toolkit-support and https://github.com/opencv/opencv/wiki/Graph-API#building-with-onevpl-toolkit-support. The sample with oneVPL & the OpenVINO Inference Engine can be found in the gapi/samples directory of the OpenCV source tree. Only single-ROI inference is supported at the moment.

(The current configuration parameters are obsolete; new ones are introduced in https://github.com/opencv/opencv/pull/21716. The description below is written under the assumption that this PR has been merged.)

  • The source video file is a raw encoded video stream (H.265, for example):
example_gapi_onevpl_infer_single_roi --facem=<model path> --cfg_params="mfxImplDescription.mfxDecoderDescription.decoder.CodecID:MFX_CODEC_HEVC;" --input=<full RAW file path>

Please explore the full list of supported codec constants here: https://github.com/opencv/opencv/blob/4.x/modules/gapi/include/opencv2/gapi/streaming/onevpl/cfg_params.hpp#L95

  • The source file is a containerized media file: *.mkv, *.mp4, etc. (applicable for Windows only):
example_gapi_onevpl_infer_single_roi --facem=<model path> --cfg_params="" --input=<full file path>

or

example_gapi_onevpl_infer_single_roi --facem=<model path> --input=<full file path>

Please pay attention that the examples launch a non-optimized pipeline with the default acceleration types:

  • the VPL Source uses the GPU device for decoding, copying the media frame into CPU RAM
  • VPL preprocessing uses the GPU device, copying the media frame from/into CPU RAM
  • inference uses the CPU device

It is also possible to configure these pipeline stages in a fine-grained way and seize the advantages of heterogeneous computation. For this, three acceleration parameters are exposed: source_device, preproc_device and faced. Various combinations of CPU and GPU values are supported. The full list of supported configurations is enumerated in the sample's supported-device matrix and is constantly growing.

The most interesting cases are:

  • The default GPU-accelerated decoding & copy-based CPU use case (a synonym for empty parameters):
example_gapi_onevpl_infer_single_roi <...> --source_device=CPU --preproc_device=CPU --faced=CPU
  • A GPU decode/preprocessing pipeline with CPU-based inference:
example_gapi_onevpl_infer_single_roi <...> --source_device=GPU --preproc_device=GPU --faced=CPU
  • A full copy-free GPU pipeline can be configured as:
example_gapi_onevpl_infer_single_roi <...> --source_device=GPU --preproc_device=GPU --faced=GPU
