Deep learning based gaze estimation demo with a fun feature :-)


yas-sim/gaze-estimation-with-laser-sparking

This program demonstrates how to use the gaze-estimation-adas-0002 model from the OpenVINO Open Model Zoo with the Intel(r) Distribution of OpenVINO(tm) toolkit.
The program finds the faces in an image, detects the landmark points on each detected face to locate the eyes, estimates the head rotation angle, and then estimates the gaze orientation.
The program draws the gaze lines like laser beams. It also detects collisions between the laser beams and draws sparks at their crossing point (for fun).

The gaze estimation model requires the head rotation angles and the cropped eye images as its inputs. Therefore, the program uses the head-pose-estimation-adas-0001 model to estimate the head rotation angles and the facial-landmarks-35-adas-0002 model to detect key landmark points (such as the positions of the eyes and nose) on the face. The landmark detection model detects 35 points per face.

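The spark effect boils down to a 2D segment-intersection test between pairs of gaze lines. The sketch below is not the author's exact code; it is a minimal, self-contained illustration of the standard segment-intersection formula, with an assumed function name:

```python
def intersect(p1, p2, p3, p4):
    """Return the crossing point of segments p1-p2 and p3-p4, or None.

    Each argument is an (x, y) tuple; a gaze "laser" is the segment from
    the eye center to the projected end of the gaze line.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:                       # parallel or collinear: no spark
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:      # crossing lies on both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

# Two gaze lines crossing at the center:
print(intersect((0, 0), (10, 10), (0, 10), (10, 0)))  # → (5.0, 5.0)
```

When a point is returned, the demo can draw the spark sprite centered on it; `None` means the two beams do not cross on screen.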

Gaze Estimation Result


Required DL Models to Run This Demo

The demo expects the following models in the Intermediate Representation (IR) format:

  • face-detection-adas-0001
  • head-pose-estimation-adas-0001
  • facial-landmarks-35-adas-0002
  • gaze-estimation-adas-0002

You can download these models from the OpenVINO Open Model Zoo. The models.lst file lists the appropriate models for this demo, which can be obtained via the Model Downloader. Please see the Model Downloader documentation for more information.

How to Run

0. Prerequisites

  • OpenVINO 2021.3
  • If you haven't installed it, go to the OpenVINO web page and follow the Get Started guide to do it.

1. Install dependencies

The demo depends on:

  • numpy
  • scipy
  • opencv-python

To install all the required Python modules you can use:

(Linux) pip3 install -r requirements.in
(Win10) pip install -r requirements.in

2. Download DL models from OMZ

Use the Model Downloader to download the required models.

(Linux) python3 $INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/downloader.py --list models.lst
(Win10) python "%INTEL_OPENVINO_DIR%\deployment_tools\tools\model_downloader\downloader.py" --list models.lst

3. Run the demo app

Attach a USB webcam as the input of the demo program, then run the program. If you want to use a movie file as input, you can modify the source code to do so.

The following keys are valid:
'f': Flip image
'l': Laser mode on/off
's': Spark mode on/off
'b': Boundary box on/off

(Linux) python3 gaze-estimation.py
(Win10) python gaze-estimation.py

Demo Output

The application draws the results on the input image.

Tested Environment

  • Windows 10 x64 1909 and Ubuntu 18.04 LTS
  • Intel(r) Distribution of OpenVINO(tm) toolkit 2021.3
  • Python 3.6.5 x64

See Also

