WO2020042542A1 - Method and apparatus for acquiring eye movement control calibration data - Google Patents

Method and apparatus for acquiring eye movement control calibration data

Info

Publication number
WO2020042542A1
WO2020042542A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
eyeball
calibration data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/073766
Other languages
French (fr)
Chinese (zh)
Inventor
蒋壮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Water World Co Ltd
Original Assignee
Shenzhen Water World Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Water World Co Ltd
Publication of WO2020042542A1
Anticipated expiration
Current status: Ceased

Abstract

Disclosed are a method and apparatus for acquiring eye movement control calibration data, wherein the method comprises: successively acquiring user images of human eyes gazing at a plurality of positioning points; successively searching for human eye images and eyeball images from the user images, and acquiring human eye position data and eyeball position data; and calculating calibration data, and successively recording the calibration data and corresponding position information of the plurality of positioning points. According to the present application, no special device needs to be used, and data collection can be carried out according to eye movement habits of a user.

Description

Translated from Chinese
Method and apparatus for acquiring eye movement control calibration data

Technical Field

The present application relates to the field of human-computer interaction technology, and in particular to a method and apparatus for acquiring eye movement control calibration data.

Background

Eye movement control is a non-contact form of human-computer interaction in which the position of the eye's fixation point is computed by tracking the eyeball. It is of great help to users who cannot operate a device with their hands. With the development of smart terminals, gaming computers with eye tracking make players more immersed in the game scene. Eye tracking technology traditionally requires dedicated equipment, such as an eye tracker, and while using such equipment the user must move their eyes in the manner prescribed by its manual. The trend in human-computer interaction is to be human-centered, friendlier, and more convenient, so eye tracking is moving toward controlling devices according to the user's own eye movement habits. Each user first calibrates the device according to their particular eye movement habits, so that subsequent eye movement control can follow those habits. In prior-art calibration, image processing is usually performed on images of the user staring at preset anchor points, and the pupil center position corresponding to each anchor point is computed to collect calibration data. However, with calibration data obtained this way, gaze estimation in subsequent eye tracking is inaccurate and the user experience is poor.

Technical Problem

The purpose of this application is to provide a method and apparatus for acquiring eye movement control calibration data, aiming to solve the prior-art problem that accurate eye movement control calibration data cannot be obtained according to a user's eye movement habits.

Technical Solution

This application proposes a method for acquiring eye movement control calibration data, comprising:

sequentially acquiring user images of the human eye gazing at a plurality of anchor points, wherein the plurality of anchor points are preset within a designated viewing area;

sequentially searching the user images for a human eye image and an eyeball image, and obtaining human eye position data and eyeball position data; and

calculating calibration data according to the human eye position data and the eyeball position data, and sequentially recording the calibration data and the corresponding position information of the plurality of anchor points.

This application also proposes an apparatus for acquiring eye movement control calibration data, comprising:

an image acquisition module, configured to sequentially acquire user images of the human eye gazing at a plurality of anchor points, wherein the plurality of anchor points are preset within a designated viewing area;

an image analysis module, configured to sequentially search the user images for a human eye image and an eyeball image, and obtain human eye position data and eyeball position data; and

a data calculation module, configured to calculate calibration data according to the human eye position data and the eyeball position data, and sequentially record the calibration data and the corresponding position information of the plurality of anchor points.

This application further proposes a computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above method for acquiring eye movement control calibration data.

Beneficial Effects

In the method and apparatus for acquiring eye movement control calibration data of this application, at least one anchor point is preset in a designated viewing area. While the human eye gazes at an anchor point, an image is captured with an ordinary camera; the human eye image and the eyeball image are located in it; calibration data are computed from the human eye position data and the eyeball position data; and the calibration data together with that anchor point's position information are saved in memory, until data have been collected for all anchor points. The calibration data can then be used in subsequent eye tracking control to judge whether the user's distance from the designated viewing area is within a preset range and to track the user's gaze position, improving the accuracy of gaze estimation. The method and apparatus require no dedicated equipment and collect data according to the user's own eye movement habits, giving a good user experience.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a method for acquiring eye movement control calibration data according to an embodiment of the present application;

FIG. 2 is a schematic diagram of the anchor points of a designated viewing area according to an embodiment of the present application (FIG. 2a shows the anchor points, FIG. 2b the division into left and right regions, and FIG. 2c the division into upper and lower regions);

FIG. 3 is a schematic structural block diagram of an apparatus for acquiring eye movement control calibration data according to an embodiment of the present application;

FIG. 4 is a schematic structural block diagram of the image analysis module in FIG. 3;

FIG. 5 is a schematic structural block diagram of the data calculation module in FIG. 3;

FIG. 6 is a schematic structural block diagram of the first data acquisition unit in FIG. 5;

FIG. 7 is a schematic structural block diagram of the second data acquisition unit in FIG. 5;

FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.

Best Mode of the Invention

Referring to FIG. 1, an embodiment of the present application provides a method for acquiring eye movement control calibration data, comprising:

S1. Sequentially acquiring user images of the human eye gazing at a plurality of anchor points, wherein the plurality of anchor points are preset within a designated viewing area;

S2. Sequentially searching the user images for a human eye image and an eyeball image, and obtaining human eye position data and eyeball position data;

S3. Calculating calibration data according to the human eye position data and the eyeball position data, and sequentially recording the calibration data and the corresponding position information of the plurality of anchor points.

In this embodiment, the designated viewing area in step S1 is the interface of a terminal device with which the user interacts, for example a smartphone display, tablet display, smart TV display, personal computer display, or laptop display; this application does not limit it. The user image may be captured by a camera, including the terminal device's built-in front camera or an external camera, such as a mobile phone's front camera; this application does not limit it either. Referring to FIG. 2a, the designated viewing area has nine anchor points: upper-left, upper-center, upper-right, middle-left, center, middle-right, lower-left, lower-center, and lower-right. Referring to FIG. 2b, the part of the designated viewing area enclosed by the upper-left, middle-left, lower-left, lower-center, center, and upper-center points is the left region, and the part enclosed by the upper-right, middle-right, lower-right, lower-center, center, and upper-center points is the right region. Referring to FIG. 2c, the part enclosed by the upper-left, middle-left, center, middle-right, upper-right, and upper-center points is the upper region, and the part enclosed by the lower-left, middle-left, center, middle-right, lower-right, and lower-center points is the lower region.
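The 3x3 layout of anchor points just described can be sketched in code. The margin value and the (x, y)-as-screen-fraction convention are assumptions for illustration; the patent only fixes the nine-point arrangement.

```python
# Sketch: the 3x3 grid of anchor points from FIG. 2a, expressed as
# (x, y) fractions of the screen width/height. The 10% margin is an
# illustrative assumption, not a value from the patent.
def make_anchor_points(margin=0.1):
    """Return the 9 anchor points (upper-left .. lower-right) as (x, y) fractions."""
    coords = [margin, 0.5, 1.0 - margin]
    cols = ["left", "center", "right"]
    rows = ["upper", "middle", "lower"]
    points = {}
    for row_name, y in zip(rows, coords):
        for col_name, x in zip(cols, coords):
            points[f"{row_name}-{col_name}"] = (x, y)
    return points

anchors = make_anchor_points()
```

Any concrete screen resolution can then scale these fractions into pixel coordinates before drawing the points for the user to fixate.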

Taking eye movement control of a mobile phone display as an example: at whatever distance from the display suits their habits, the user gazes at one anchor point on the display, and the phone's front camera captures an image of the eye gazing at that anchor point. For example, a fixation duration may be preset, and a reminder sent for each anchor point asking the user to keep gazing at it; when the time elapsed since the reminder exceeds the preset fixation duration, an instruction to capture the user image is generated and the camera captures the image. Alternatively, after each reminder the camera may capture images continuously in real time while a pre-trained classifier distinguishes the state of the eye; if the eye is judged to be in a fixation state, any frame captured during that state is taken as the user image. The human eye image and the eyeball image are then located in the acquired image to obtain human eye position data and eyeball position data; a series of calibration data is calculated from these, and the correspondence between the calibration data and the anchor point is recorded in turn. The calibration data can be used in subsequent eye tracking control to judge whether the user's distance from the designated viewing area is within a preset range and to track the user's gaze position, improving the accuracy of gaze estimation.

Specifically, the user in this embodiment first looks at the upper-left anchor point; the camera captures an image of the eye gazing at it; the human eye image and the eyeball image are located in that image; human eye position data and eyeball position data are obtained; the calibration data are computed; and the correspondence between the calibration data and the upper-left anchor point is recorded. The user then looks at the upper-center anchor point, and the remaining steps are the same, continuing until the calibration data and correspondences for all nine anchor points have been collected.
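The per-anchor-point loop above can be sketched as follows. The callables `capture_image`, `find_eyes`, `find_eyeballs`, and `compute_calibration` are hypothetical stand-ins for the camera and the detectors described later; they are not names from the patent.

```python
# Minimal sketch of the S1-S3 calibration loop: for each anchor point,
# capture a frame, locate the eyes and eyeballs, compute calibration
# data, and record the (anchor point -> calibration data) correspondence.
def calibrate(anchor_points, capture_image, find_eyes, find_eyeballs,
              compute_calibration):
    """Return a dict mapping each anchor point to its calibration data."""
    records = {}
    for point in anchor_points:          # user fixates each point in turn
        image = capture_image(point)     # S1: frame captured during fixation
        eyes = find_eyes(image)          # S2: locate the eye regions
        eyeballs = find_eyeballs(eyes)   # S2: locate the eyeballs inside them
        records[point] = compute_calibration(eyes, eyeballs)  # S3
    return records
```

In a real deployment the stubs would wrap the camera API and the detectors; the loop structure itself matches the recording order described above.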

In this embodiment, step S2 of sequentially searching the user images for a human eye image and an eyeball image and obtaining human eye position data and eyeball position data comprises:

S21. Searching the user image for a face image;

S22. Searching the face image for a human eye image, and obtaining human eye position data from the face image, the human eye image comprising a left eye image and a right eye image;

S23. Searching the human eye image for an eyeball image, and obtaining eyeball position data from the face image.

In this embodiment, step S21 first searches the image for a face image; if none is found, the method returns to step S1 and the relative position of the user and the designated viewing area is adjusted until a face image can be found in the captured image. There are many face detection methods: applying face rules (such as the distribution of eyes, nose, and mouth) to the input image; looking for features invariant across faces (such as skin color, edges, and texture); template matching, in which the facial features are described by a standard face template, the correlation between the input image and the template is computed and compared against a preset threshold to decide whether a face is present; or treating the face region as a class of patterns and training a classifier on a large number of face samples, detecting faces by evaluating the pattern attributes of all candidate regions. In this embodiment, the detected face image is marked with a rectangular bounding box.
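The template-correlation variant mentioned above can be illustrated with a minimal normalized cross-correlation scorer. The 2x2 patches and the 0.8 threshold are illustrative assumptions, not values from the patent, which leaves the detection method open.

```python
import math

# Sketch of template-matching face detection: score a candidate region
# against a standard face template with normalized cross-correlation,
# and accept it as a face when the score exceeds a preset threshold.
def ncc(patch, template):
    """Normalized cross-correlation of two equal-sized grayscale patches
    (given as lists of rows of pixel values)."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return num / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_face(patch, template, threshold=0.8):
    """Decide 'face present' by comparing the correlation to a threshold."""
    return ncc(patch, template) >= threshold
```

A production detector would slide this score over all candidate windows of the input image; here only the per-window decision is shown.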

Step S22 searches for the human eye image within the face bounding box, which narrows the search range and improves the efficiency and accuracy of eye detection; if no eye image is found, the method returns to step S1 and reacquires an image until step S22 succeeds. Eye detection methods include template matching, statistical methods, and knowledge-based methods. Template matching includes grayscale projection templates and geometric feature templates: grayscale projection projects the grayscale face image horizontally and vertically, accumulates the gray values and/or gray-level function values in each direction, locates characteristic change points, and then combines the change-point positions in the two directions using prior knowledge to obtain the eye positions; geometric feature templates use the individual and distributional features of the eyes as the basis for detection. Statistical methods generally train on a large number of target and non-target samples to obtain a set of model parameters, then build a classifier or filter on the model to detect the target. Knowledge-based methods determine the application environment of the image, summarize the knowledge usable for eye detection under the given conditions (such as contour, color, and position information), and distill it into rules guiding detection. This embodiment marks the left eye image and the right eye image each with a rectangular bounding box and obtains the following human eye position data:

r1: distance from the top-left vertex of the left eye bounding box to the leftmost edge of the face image;

t1: distance from the top-left vertex of the left eye bounding box to the topmost edge of the face image;

w1: width of the left eye bounding box; h1: height of the left eye bounding box;

r2: distance from the top-left vertex of the right eye bounding box to the leftmost edge of the face image;

t2: distance from the top-left vertex of the right eye bounding box to the topmost edge of the face image;

w2: width of the right eye bounding box; h2: height of the right eye bounding box.

Step S23 searches the left eye image for the left eyeball image and the right eye image for the right eyeball image; if no eyeball image is found, the method returns to step S1 and reacquires an image until step S23 succeeds. Eyeball detection methods include neural networks, extremum localization on edge-point integral projection curves, template matching, multi-resolution mosaic maps, geometric and symmetry detection, and Hough-transform-based methods. This embodiment marks the left eyeball image and the right eyeball image each with a rectangular bounding box and obtains the following eyeball position data:

r3: distance from the top-left vertex of the left eyeball bounding box to the leftmost edge of the face image;

t3: distance from the top-left vertex of the left eyeball bounding box to the topmost edge of the face image;

w3: width of the left eyeball bounding box; h3: height of the left eyeball bounding box;

r4: distance from the top-left vertex of the right eyeball bounding box to the leftmost edge of the face image;

t4: distance from the top-left vertex of the right eyeball bounding box to the topmost edge of the face image;

w4: width of the right eyeball bounding box; h4: height of the right eyeball bounding box.
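All of the rectangles above share one coordinate frame: offsets from the left and top edges of the detected face image. A simple consistency check, added here as an illustration rather than as a step of the method, is that each eyeball box must lie inside its enclosing eye box; the sample values are invented.

```python
# Each box is an (r, t, w, h) rectangle in face-image coordinates:
# r = offset from the leftmost edge, t = offset from the topmost edge.
def box_contains(outer, inner):
    """True if the inner rectangle lies entirely within the outer one."""
    r_o, t_o, w_o, h_o = outer
    r_i, t_i, w_i, h_i = inner
    return (r_o <= r_i and t_o <= t_i
            and r_i + w_i <= r_o + w_o
            and t_i + h_i <= t_o + h_o)

left_eye = (10, 20, 30, 20)      # (r1, t1, w1, h1), illustrative values
left_eyeball = (20, 25, 10, 10)  # (r3, t3, w3, h3), illustrative values
```

A detector whose eyeball box escapes its eye box has mislocalized, and the frame can be rejected before calibration data are computed.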

This embodiment gives the specific parameters for obtaining the eyeball position data from the face image. Under the same inventive concept, the eyeball position data could instead be measured relative to the human eye image; that variant is not elaborated here.

In this embodiment, the calibration data comprise distance calibration data, horizontal calibration data, and vertical calibration data, and step S3 of calculating the calibration data according to the human eye position data and the eyeball position data and sequentially recording the calibration data and the corresponding position information of the plurality of anchor points comprises:

S31. Calculating, from the human eye position data, the distance calibration data for the eye gazing at one anchor point; and calculating, from the human eye position data and the eyeball position data, the horizontal and vertical eyeball position calibration data for the eye gazing at that anchor point;

S32. Saving the distance calibration data, the horizontal calibration data, the vertical calibration data, and the corresponding anchor point position information in a memory.

Steps S31 and S32 compute the calibration data for the eye gazing at one anchor point and save the calibration data together with the corresponding anchor point information in memory. In this embodiment, the nine anchor points are computed and stored one by one. The distance calibration data locate the eye's distance from the designated viewing area; the horizontal and vertical calibration data indicate the eyeball position when the eye looks at the given anchor point.

In this embodiment, the step of calculating, from the human eye position data, the distance calibration data for the eye gazing at one anchor point comprises:

S321. Calculating the left eye center coordinates from the left eye position data included in the human eye position data, and the right eye center coordinates from the right eye position data included in the human eye position data;

S322. Calculating the distance between the left eye center and the right eye center from the two sets of center coordinates to obtain the distance calibration data.

In this embodiment, step S321 can compute the left eye center coordinates (x1, y1) by formula (1):

Pot(x1, y1) = Pot(r1 + w1/2, t1 + h1/2)   (1)

and the right eye center coordinates (x2, y2) by formula (2):

Pot(x2, y2) = Pot(r2 + w2/2, t2 + h2/2)   (2)

Step S322 can compute the distance d between the left eye center and the right eye center by formula (3); d is the distance calibration datum:

d = sqrt((x1 - x2)^2 + (y1 - y2)^2)   (3)

The value of d locates the eye's distance from the designated viewing area.
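Formulas (1) through (3) can be written out directly as code; the rectangle values used in the test are illustrative only.

```python
import math

# Formulas (1)-(3): centers of the two eye bounding boxes and the
# inter-eye distance d, which is the distance calibration datum.
def rect_center(r, t, w, h):
    """Formulas (1)/(2): center of an (r, t, w, h) rectangle."""
    return (r + w / 2, t + h / 2)

def distance_calibration(left_eye, right_eye):
    """Formula (3): Euclidean distance between the two eye centers."""
    x1, y1 = rect_center(*left_eye)
    x2, y2 = rect_center(*right_eye)
    return math.hypot(x1 - x2, y1 - y2)
```

Because the bounding boxes share the face-image frame, d shrinks as the user moves away from the camera, which is what lets it calibrate viewing distance.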

In this embodiment, the step of calculating, from the human eye position data and the eyeball position data, the horizontal and vertical eyeball position calibration data for the eye gazing at one anchor point comprises:

S331. Calculating the left eyeball center coordinates from the left eyeball position data included in the eyeball position data, and the right eyeball center coordinates from the right eyeball position data included in the eyeball position data;

S332. Calculating, from the left eyeball center coordinates and the left eye position data, the first horizontal distance between the left eyeball center and the leftmost edge of the left eye image, and the first vertical distance between the left eyeball center and the topmost edge of the left eye image; and calculating, from the right eyeball center coordinates and the right eye position data, the second horizontal distance between the right eyeball center and the rightmost edge of the right eye image, and the second vertical distance between the right eyeball center and the bottommost edge of the right eye image;

S333. Calculating the ratio of the first horizontal distance to the second horizontal distance to obtain the horizontal calibration data, and the ratio of the first vertical distance to the second vertical distance to obtain the vertical calibration data.

In this embodiment, step S331 can compute the left eyeball center coordinates (x3, y3) by formula (4):

Pot(x3, y3) = Pot(r3 + w3/2, t3 + h3/2)   (4)

and the right eyeball center coordinates (x4, y4) by formula (5):

Pot(x4, y4) = Pot(r4 + w4/2, t4 + h4/2)   (5)

Step S332 can compute the first horizontal distance d1 between the left eyeball center and the leftmost edge of the left eye image by formula (6):

d1 = x3 - r1   (6)

the first vertical distance d3 between the left eyeball center and the topmost edge of the left eye image by formula (7):

d3 = y3 - t1   (7)

the second horizontal distance d2 between the right eyeball center and the rightmost edge of the right eye image by formula (8):

d2 = r2 + w2 - x4   (8)

and the second vertical distance d4 between the right eyeball center and the bottommost edge of the right eye image by formula (9):

d4 = t2 + h2 - y4   (9)

Step S333 can compute the horizontal calibration datum m by formula (10):

m = d1 / d2   (10)

and the vertical calibration datum n by formula (11):

n = d3 / d4   (11)
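Formulas (4) through (11) reduce to a few arithmetic steps. The sketch below takes the four bounding boxes as (r, t, w, h) tuples; the sample values in the test are illustrative and chosen so that a symmetric fixation yields m = n = 1.

```python
# Formulas (4)-(11): horizontal calibration m = d1/d2 and vertical
# calibration n = d3/d4, from the eye boxes (r1..h2) and eyeball
# boxes (r3..h4), all in face-image coordinates.
def gaze_calibration(left_eye, right_eye, left_ball, right_ball):
    r1, t1, w1, h1 = left_eye
    r2, t2, w2, h2 = right_eye
    # (4)/(5): eyeball centers
    x3, y3 = left_ball[0] + left_ball[2] / 2, left_ball[1] + left_ball[3] / 2
    x4, y4 = right_ball[0] + right_ball[2] / 2, right_ball[1] + right_ball[3] / 2
    d1 = x3 - r1         # (6): left ball center to left edge of left eye box
    d3 = y3 - t1         # (7): left ball center to top edge of left eye box
    d2 = r2 + w2 - x4    # (8): right ball center to right edge of right eye box
    d4 = t2 + h2 - y4    # (9): right ball center to bottom edge of right eye box
    return d1 / d2, d3 / d4   # (10) m, (11) n
```

Using opposite edges for the two eyes makes the ratios move monotonically as the eyeballs shift left/right or up/down, which is what lets m and n index gaze direction.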

In the eye movement calibration method of this embodiment, nine anchor points are set in the designated viewing area; the eye gazes at them in turn, and the calibration data for each fixation are recorded together with the corresponding anchor point. While the eye gazes at an anchor point, the camera captures an image, the face image is located in it, the eye image is located within the face image, and finally the eyeball image is located within the eye image; this coarse-to-fine search is both fast and accurate. The distance calibration datum d, the horizontal calibration datum m, and the vertical calibration datum n are computed from the human eye position data and the eyeball position data, and d, m, n, and the anchor point's position information are saved in memory. Once data have been collected for all anchor points, the distance calibration data of the nine anchor points can calibrate the eye's distance from the designated viewing area, constraining the user's distance to the specified range, while the horizontal and vertical calibration data of the nine anchor points can be used to infer which position in the designated viewing area the user's gaze falls on, giving high gaze tracking accuracy. The method requires no dedicated equipment and collects data according to the user's own eye movement habits, giving a good user experience.

参照图3，本申请实施例还提供了一种眼动控制校准数据获取装置，包括：图像获取模块10，用于依次获取人眼注视多个定位点的用户图像；其中，多个所述定位点预设于指定观看区域内；Referring to FIG. 3, an embodiment of the present application further provides a device for acquiring eye movement control calibration data, including: an image acquisition module 10, configured to sequentially acquire user images of the human eye gazing at a plurality of positioning points, wherein the plurality of positioning points are preset in a designated viewing area;

图像分析模块20，用于依次从所述用户图像中查找人眼图像和眼球图像，获取人眼位置数据和眼球位置数据；An image analysis module 20, configured to sequentially search for a human eye image and an eyeball image in the user image, and obtain human eye position data and eyeball position data;

数据计算模块30，用于根据所述人眼位置数据和所述眼球位置数据计算校准数据，依次记录所述校准数据和对应的多个所述定位点位置信息。A data calculation module 30, configured to calculate calibration data according to the human eye position data and the eyeball position data, and sequentially record the calibration data and the position information of the corresponding plurality of positioning points.

本实施例中，图像获取模块10中的指定观看区域包括与用户进行人机交互的终端设备界面，例如可以是智能手机显示屏、平板显示屏、智能电视显示屏、个人电脑显示屏、笔记本电脑显示屏等。用户图像可以通过摄像头获取，摄像头包括终端设备自带的前置摄像头、外接摄像头，如手机前置摄像头等。参照图2，为指定观看区域的定位点的示意图，包括左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点，其中左上、左中、左下、中下、中中和中上包围的指定观看区域为左边区域，右上、右中、右下、中下、中中和中上包围的指定观看区域为右边区域，左上、左中、中中、右中、右上和中上包围的指定观看区域为上边区域，左下、左中、中中、右中、右下和中下包围的指定观看区域为下边区域。In this embodiment, the designated viewing area in the image acquisition module 10 includes the interface of a terminal device with which the user interacts, for example a smartphone display, a tablet display, a smart TV display, a personal computer display, or a laptop display. The user image can be acquired through a camera, including the terminal device's built-in front camera or an external camera, such as the front camera of a mobile phone. FIG. 2 is a schematic diagram of the positioning points of the designated viewing area, comprising nine points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right. The part of the designated viewing area enclosed by the upper-left, middle-left, lower-left, lower-middle, center, and upper-middle points is the left region; the part enclosed by the upper-right, middle-right, lower-right, lower-middle, center, and upper-middle points is the right region; the part enclosed by the upper-left, middle-left, center, middle-right, upper-right, and upper-middle points is the upper region; and the part enclosed by the lower-left, middle-left, center, middle-right, lower-right, and lower-middle points is the lower region.

以眼动控制手机显示屏为例，用户根据自己的习惯在距离手机显示屏合适的距离处，眼睛注视手机显示屏的一个定位点，通过手机前置摄像头采集人眼注视该定位点的图像。比如，可以预先设置注视时间，通过第一提醒单元分别发送提醒用户持续注视每个定位点的提醒信息，以提醒用户持续注视该定位点；通过第一判断单元判断当前时刻距发送提醒信息的时刻之间的时长是否大于预设的注视时长，若当前时刻距发送所述提醒信息的时刻之间的时长大于预设的注视时长时，通过第一图像获取单元生成拍摄用户图像的指令，则摄像头获得拍摄指令，采集图像；也可以在通过第二提醒单元分别发送提醒用户持续注视每个定位点的信息后，通过实时图像获取单元用摄像头持续实时采集图像，通过第二判断单元，根据训练好的分类器区分人眼的状态，如果判断人眼处于注视状态，则通过第二图像获取单元获取注视状态中上述图像的任一帧图像作为用户图像。进一步从获取的图像中查找人眼图像和眼球图像，获取到人眼位置数据和眼球位置数据；根据所述人眼位置数据和所述眼球位置数据计算一系列校准数据，依次记录所述校准数据与所述定位点的对应关系。校准数据可用于后续眼动追踪控制中，判断用户与指定观看区域的距离是否在预设范围内，并进行用户视线位置追踪，提高视线判断的准确度。Taking eye movement control of a mobile phone display as an example, the user, at a comfortable distance from the display chosen according to his or her own habits, gazes at one positioning point on the display, and the phone's front camera captures an image of the eye gazing at that point. For example, a gaze duration can be preset: the first reminder unit sends reminder information prompting the user to keep gazing at each positioning point; the first judgment unit then determines whether the time elapsed since the reminder information was sent exceeds the preset gaze duration, and if so, the first image acquisition unit generates an instruction to photograph the user image, whereupon the camera receives the shooting instruction and captures the image. Alternatively, after the second reminder unit sends information prompting the user to keep gazing at each positioning point, the real-time image acquisition unit uses the camera to capture images continuously in real time, and the second judgment unit uses a trained classifier to distinguish the state of the human eye; if the eye is judged to be in the gaze state, the second image acquisition unit takes any frame captured during the gaze state as the user image. The human eye image and the eyeball image are then located in the acquired image, yielding human eye position data and eyeball position data; a series of calibration data is calculated from these, and the correspondence between the calibration data and the positioning point is recorded in turn. The calibration data can be used in subsequent eye tracking control to judge whether the user's distance from the designated viewing area is within a preset range and to track the position of the user's line of sight, improving the accuracy of gaze judgment.
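The timer-based capture variant described above can be sketched as follows. This is a minimal illustration: `show_prompt` and `capture_frame` are hypothetical callbacks standing in for the reminder unit and the camera, and the default gaze duration is an assumed value, not one specified in the original.

```python
import time

def capture_after_gaze(show_prompt, capture_frame, anchor, gaze_duration=2.0):
    """Remind the user to keep gazing at `anchor`, wait out the preset gaze
    duration, then trigger a single image capture (first reminder unit /
    first judgment unit / first image acquisition unit)."""
    show_prompt(anchor)        # send the reminder: "keep looking at this point"
    sent_at = time.time()      # the moment the reminder information was sent
    while time.time() - sent_at <= gaze_duration:
        time.sleep(0.01)       # wait until the preset gaze duration has elapsed
    return capture_frame()     # generate the shooting instruction, capture the image
```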

具体地，本实施例的用户首先看向左上定位点，摄像头采集人眼注视左上定位点的图像，从该图像中查找人眼图像和眼球图像，获取人眼位置数据和眼球位置数据，计算校准数据，记录该校准数据与左上定位点的对应关系。用户再开始看向中上定位点，其余步骤同左上定位点。直至左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点的校准数据和定位点的对应关系均采集完毕。Specifically, the user in this embodiment first looks at the upper-left positioning point; the camera captures an image of the eye gazing at that point, the human eye image and eyeball image are located in the image, the human eye position data and eyeball position data are obtained, the calibration data is calculated, and the correspondence between this calibration data and the upper-left positioning point is recorded. The user then looks at the upper-middle positioning point, and the remaining steps are the same as for the upper-left point, and so on until the calibration data and point correspondences have been collected for all nine positioning points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right.
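The sequential collection over the nine positioning points can be sketched as a simple loop. The `capture`, `analyze`, and `compute` callbacks are hypothetical placeholders for the image acquisition, image analysis, and data calculation steps; only the loop structure follows the text.

```python
# Nine positioning points of the designated viewing area, in collection order.
ANCHORS = ["upper-left", "upper-middle", "upper-right",
           "middle-left", "center", "middle-right",
           "lower-left", "lower-middle", "lower-right"]

def collect_calibration(capture, analyze, compute):
    """One pass over all nine anchor points: capture a user image, locate the
    eyes and eyeballs, compute calibration data (d, m, n), and record the
    data-to-anchor correspondence."""
    records = {}
    for anchor in ANCHORS:
        image = capture(anchor)                          # user image while gazing at `anchor`
        eye_data, ball_data = analyze(image)             # eye / eyeball position data
        records[anchor] = compute(eye_data, ball_data)   # calibration data for this anchor
    return records
```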

参照图4，本实施例中，所述图像分析模块20包括：Referring to FIG. 4, in this embodiment, the image analysis module 20 includes:

人脸查找单元201，用于从所述用户图像中查找人脸图像；A face search unit 201, configured to search for a face image in the user image;

人眼查找单元202，用于从所述人脸图像中查找人眼图像，以及从所述人脸图像中获取人眼位置数据，所述人眼图像包括左眼图像和右眼图像；A human eye search unit 202, configured to search for a human eye image in the face image and obtain human eye position data from the face image, the human eye image including a left eye image and a right eye image;

眼球查找单元203，用于从所述人眼图像中查找眼球图像，以及从所述人脸图像中获取眼球位置数据。An eyeball search unit 203, configured to search for an eyeball image in the human eye image and obtain eyeball position data from the face image.

本实施例中，人脸查找单元201先从图像中查找人脸图像，如果在图像中没有查找到人脸图像，则返回步骤S1，调整用户和指定观看区域的相对位置，直至摄像头获取的图像中能查找到人脸图像。人脸图像的查找方法较多，比如：利用人脸规则（如眼睛、鼻子、嘴巴等的分布）对输入图像进行人脸检测；通过寻找人脸面部不变的特征（如肤色、边缘、纹理）来对输入图像进行人脸检测；将人脸的面部特征用一个标准的人脸模板来描述，进行人脸检测时，先计算输入图像与标准人脸模板之间的相关值，然后再将求得的相关值与事先设定的阈值进行比较，以判别输入图像中是否存在人脸；将人脸区域看作一类模式，使用大量的人脸数据作样本训练，来学习潜在的规则并构造分类器，通过判别图像中所有可能区域模式属性来实现人脸的检测。本实施例将查找到的人脸图像用矩形框标出。In this embodiment, the face search unit 201 first searches for a face image in the image; if no face image is found, the process returns to step S1, and the relative position of the user and the designated viewing area is adjusted until a face image can be found in the image acquired by the camera. There are many face detection methods, for example: detecting faces in the input image using facial rules (such as the distribution of eyes, nose, and mouth); detecting faces by looking for features that are invariant across faces (such as skin color, edges, and texture); describing the facial features with a standard face template, computing the correlation between the input image and the template when performing detection, and comparing the resulting correlation value with a preset threshold to decide whether a face is present; or treating the face region as a class of patterns, training on a large number of face samples to learn the underlying rules and construct a classifier, and detecting faces by classifying all candidate region patterns in the image. In this embodiment, the face image found is marked with a rectangular frame.

人眼查找单元202从人脸图像的矩形框中查找人眼图像，有利于缩小查找范围，提高人眼查找的查找效率和准确度，如果没有查找到人眼图像，则返回步骤S1，重新获取图像，直至步骤S22中能查找到人眼图像。人眼查找的方法包括基于模板匹配的方法、基于统计的方法和基于知识的方法。其中基于模板匹配的方法包括灰度投影模板和几何特征模板：灰度投影法是指对人脸灰度图像进行水平和垂直方向的投影，分别统计出两个方向上的灰度值和/或灰度函数值，找出特定变化点，然后根据先验知识将不同方向上的变化点位置相结合，即得到人眼的位置；几何特征模板是利用眼睛的个体特征以及分布特征作为依据来实施人眼检测。基于统计的方法一般是通过对大量目标样本和非目标样本进行训练学习得到一组模型参数，然后基于模型构建分类器或者滤波器来检测目标。基于知识的方法是确定图像的应用环境，总结特定条件下可用于人眼检测的知识（如轮廓信息、色彩信息、位置信息）等，把它们归纳成指导人眼检测的规则。本实施例用矩形框分别框出左眼图像和右眼图像，获得下述人眼位置数据，包括：The human eye search unit 202 searches for the human eye image within the rectangular frame of the face image, which narrows the search range and improves the efficiency and accuracy of the search; if no human eye image is found, the process returns to step S1 to reacquire an image until a human eye image can be found in step S22. Human eye detection methods include template-matching methods, statistical methods, and knowledge-based methods. Template-matching methods include gray-level projection templates and geometric feature templates: the gray-level projection method projects the gray-level face image horizontally and vertically, computes the gray values and/or gray-level function values in the two directions, finds characteristic change points, and then combines the change-point positions in the different directions according to prior knowledge to obtain the eye positions; geometric feature templates use the individual and distributional features of the eyes as the basis for eye detection. Statistical methods generally train on a large number of target and non-target samples to obtain a set of model parameters, and then build a classifier or filter on the model to detect the target. Knowledge-based methods determine the application environment of the image, summarize the knowledge usable for eye detection under the given conditions (such as contour, color, and position information), and distill it into rules that guide eye detection. In this embodiment, the left eye image and the right eye image are each framed with a rectangle, yielding the following human eye position data:

r1:左眼图像的矩形框的左上顶点距离人脸图像的最左边的距离;r1: the distance from the top-left vertex of the rectangular frame of the left eye image to the leftmost edge of the face image;

t1:左眼图像的矩形框的左上顶点距离人脸图像的最上边的距离;t1 : the distance from the upper left vertex of the rectangular frame of the left eye image to the uppermost edge of the face image;

w1:左眼图像的矩形框的宽度;h1:左眼图像的矩形框的高度;w1 : the width of the rectangular frame of the left-eye image; h1 : the height of the rectangular frame of the left-eye image;

r2:右眼图像的矩形框的左上顶点距离人脸图像的最左边的距离;r2: the distance from the top-left vertex of the rectangular frame of the right eye image to the leftmost edge of the face image;

t2:右眼图像的矩形框的左上顶点距离人脸图像的最上边的距离;t2 : the distance from the top left vertex of the rectangular frame of the right eye image to the uppermost edge of the face image;

w2:右眼图像的矩形框的宽度;h2:右眼图像的矩形框的高度。w2 : the width of the rectangular frame of the right-eye image; h2 : the height of the rectangular frame of the right-eye image.
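The eight values above describe two rectangles given relative to the face image. One possible representation is sketched below; the class and field names are illustrative, not part of the original, and the `center` helper anticipates the center-of-rectangle computation used later by the calibration formulas.

```python
from dataclasses import dataclass

@dataclass
class EyeBox:
    """Rectangle of an eye (or eyeball) inside the face image:
    r = distance of the top-left vertex from the leftmost edge of the face image,
    t = distance of the top-left vertex from the topmost edge of the face image,
    w, h = width and height of the rectangular frame."""
    r: float
    t: float
    w: float
    h: float

    def center(self):
        # Center of the rectangle: (r + w/2, t + h/2)
        return (self.r + self.w / 2, self.t + self.h / 2)
```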

眼球查找单元203从左眼图像中查找到左眼球图像，从右眼图像中查找右眼球图像，如果没有查找到眼球图像，则返回步骤S1，重新获取图像，直至步骤S23中能查找到眼球图像。眼球查找的方法包括神经网络法、边缘点积分投影曲线的极值位置判别法、模板匹配法、多分辨率的马赛克图法、几何及对称性检测法、基于霍夫变换法等。本实施例用矩形框分别框出左眼球图像和右眼球图像，获得下述眼球位置数据，包括：The eyeball search unit 203 searches for the left eyeball image in the left eye image and the right eyeball image in the right eye image; if no eyeball image is found, the process returns to step S1 to reacquire an image until an eyeball image can be found in step S23. Eyeball detection methods include neural network methods, extremum-position discrimination on edge-point integral projection curves, template matching, multi-resolution mosaic maps, geometry and symmetry detection, and Hough-transform-based methods. In this embodiment, the left eyeball image and the right eyeball image are each framed with a rectangle, yielding the following eyeball position data:

r3:左眼球图像的矩形框的左上顶点距离人脸图像的最左边的距离;r3: the distance from the top-left vertex of the rectangular frame of the left eyeball image to the leftmost edge of the face image;

t3:左眼球图像的矩形框的左上顶点距离人脸图像的最上边的距离;t3 : the distance from the top left vertex of the rectangular frame of the left eyeball image to the uppermost edge of the face image;

w3:左眼球图像的矩形框的宽度;h3:左眼球图像的矩形框的高度;w3 : the width of the rectangular frame of the left eyeball image; h3 : the height of the rectangular frame of the left eyeball image;

r4:右眼球图像的矩形框的左上顶点距离人脸图像的最左边的距离;r4: the distance from the top-left vertex of the rectangular frame of the right eyeball image to the leftmost edge of the face image;

t4:右眼球图像的矩形框的左上顶点距离人脸图像的最上边的距离;t4 : the distance from the top left vertex of the rectangular frame of the right eyeball image to the uppermost edge of the face image;

w4:右眼球图像的矩形框的宽度;h4:右眼球图像的矩形框的高度。w4 : the width of the rectangular frame of the right eyeball image; h4 : the height of the rectangular frame of the right eyeball image.

本实施例中给出了从人脸图像中获取眼球位置数据的具体参数。基于本申请的发明理念,也可以从人眼图像中获取眼球位置数据,本申请不对从人眼图像中获取眼球位置数据进行赘述。Specific parameters for obtaining eyeball position data from a face image are given in this embodiment. Based on the inventive concept of the present application, eyeball position data can also be obtained from a human eye image, and this application does not go into details about obtaining eyeball position data from a human eye image.

参照图5，本实施例中，所述校准数据包括距离校准数据、横向校准数据和纵向校准数据，所述数据计算模块30包括：Referring to FIG. 5, in this embodiment, the calibration data includes distance calibration data, horizontal calibration data, and vertical calibration data, and the data calculation module 30 includes:

第一数据获取单元301，用于根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据；A first data acquisition unit 301, configured to calculate, according to the human eye position data, the distance calibration data when the human eye gazes at one of the positioning points;

第二数据获取单元302，用于根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据；A second data acquisition unit 302, configured to calculate, according to the human eye position data and the eyeball position data, the eyeball-position horizontal calibration data and the eyeball-position vertical calibration data when the human eye gazes at one of the positioning points;

数据存储单元303，用于将所述距离校准数据、横向校准数据、纵向校准数据和对应的所述定位点位置信息保存在存储器中。A data storage unit 303, configured to store the distance calibration data, the horizontal calibration data, the vertical calibration data, and the corresponding position information of the positioning point in a memory.

本实施例中，通过第一数据获取单元301、第二数据获取单元302和数据存储单元303计算人眼注视一个定位点时的校准数据，并将校准数据和对应的定位点信息保存在存储器中。本实施例中对左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点进行一一计算和数据储存。距离校准数据用来定位人眼离指定观看区域的距离，横向校准数据和纵向校准数据用来指示人眼看向指定定位点时的眼球位置。In this embodiment, the first data acquisition unit 301, the second data acquisition unit 302, and the data storage unit 303 calculate the calibration data when the human eye gazes at a positioning point, and store the calibration data together with the corresponding positioning point information in a memory. Calculation and storage are performed one by one for the nine positioning points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right. The distance calibration data locates the distance of the human eye from the designated viewing area, while the horizontal and vertical calibration data indicate the eyeball position when the eye looks at the given positioning point.

参照图6，本实施例中，所述第一数据获取单元301包括：Referring to FIG. 6, in this embodiment, the first data acquisition unit 301 includes:

第一计算子单元3011，用于根据所述人眼位置数据包括的左眼位置数据，计算左眼中心位置坐标；以及根据所述人眼位置数据包括的右眼位置数据，计算右眼中心位置坐标；A first calculation subunit 3011, configured to calculate the coordinates of the left eye center position according to the left eye position data included in the human eye position data, and to calculate the coordinates of the right eye center position according to the right eye position data included in the human eye position data;

第二计算子单元3012，用于根据所述左眼中心位置坐标以及所述右眼中心位置坐标，计算左眼中心与右眼中心的距离，获得所述距离校准数据；A second calculation subunit 3012, configured to calculate the distance between the left eye center and the right eye center according to the left eye center position coordinates and the right eye center position coordinates, to obtain the distance calibration data;

本实施例中，第一计算子单元3011可以通过公式(12)计算左眼中心位置坐标(x1,y1)：Pot(x1,y1)=Pot(r1+w1/2,t1+h1/2)    (12)In this embodiment, the first calculation subunit 3011 can calculate the coordinates (x1, y1) of the left eye center position by formula (12): Pot(x1, y1) = Pot(r1 + w1/2, t1 + h1/2)    (12)

通过公式(13)计算右眼中心位置坐标(x2,y2),Calculate the coordinates (x2 , y2 ) of the center position of the right eye by formula (13),

Pot(x2,y2)=Pot(r2+w2/2,t2+h2/2)    (13)

第二计算子单元3012可以通过公式(14)计算左眼中心与右眼中心的距离d，d即为距离校准数据。The second calculation subunit 3012 can calculate the distance d between the left eye center and the right eye center by formula (14); d is the distance calibration data.

d=√((x1–x2)²+(y1–y2)²)    (14)

通过d的值可以定位人眼距离指定观看区域的距离。The value of d can be used to locate the distance of the human eye from the specified viewing area.
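Formulas (12)–(14) can be sketched in a few lines of Python. The function name is illustrative; the Euclidean form of formula (14) follows from the eye-center coordinates defined in formulas (12) and (13).

```python
import math

def eye_distance(r1, t1, w1, h1, r2, t2, w2, h2):
    """Distance calibration data d per formulas (12)-(14):
    the Euclidean distance between the left and right eye centers."""
    x1, y1 = r1 + w1 / 2, t1 + h1 / 2    # left-eye center, formula (12)
    x2, y2 = r2 + w2 / 2, t2 + h2 / 2    # right-eye center, formula (13)
    return math.hypot(x1 - x2, y1 - y2)  # formula (14)
```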

参照图7，本实施例中，所述第二数据获取单元302包括：Referring to FIG. 7, in this embodiment, the second data acquisition unit 302 includes:

第三计算子单元3021，用于根据所述眼球位置数据包括的左眼球位置数据，计算左眼球中心位置坐标；以及根据所述眼球位置数据包括的右眼球位置数据，计算右眼球中心位置坐标；A third calculation subunit 3021, configured to calculate the coordinates of the left eyeball center position according to the left eyeball position data included in the eyeball position data, and to calculate the coordinates of the right eyeball center position according to the right eyeball position data included in the eyeball position data;

第四计算子单元3022，用于根据所述左眼球中心位置坐标和所述左眼位置数据，计算左眼球中心与所述左眼图像的最左边之间的第一横向距离，和左眼球中心与所述左眼图像的最上边之间的第一纵向距离；以及根据所述右眼球中心位置坐标和所述右眼位置数据，计算右眼球中心与所述右眼图像的最右边之间的第二横向距离，和右眼球中心与右眼图像的最下边之间的第二纵向距离；A fourth calculation subunit 3022, configured to calculate, according to the left eyeball center position coordinates and the left eye position data, a first lateral distance between the left eyeball center and the leftmost edge of the left eye image, and a first longitudinal distance between the left eyeball center and the uppermost edge of the left eye image; and to calculate, according to the right eyeball center position coordinates and the right eye position data, a second lateral distance between the right eyeball center and the rightmost edge of the right eye image, and a second longitudinal distance between the right eyeball center and the lowermost edge of the right eye image;

第五计算子单元3023，用于计算所述第一横向距离与所述第二横向距离的比值，获得所述横向校准数据；以及计算所述第一纵向距离与所述第二纵向距离的比值，获得所述纵向校准数据。A fifth calculation subunit 3023, configured to calculate the ratio of the first lateral distance to the second lateral distance to obtain the horizontal calibration data, and to calculate the ratio of the first longitudinal distance to the second longitudinal distance to obtain the vertical calibration data.

本实施例中，第三计算子单元3021中可以通过公式(15)计算左眼球中心位置坐标(x3,y3)：Pot(x3,y3)=Pot(r3+w3/2,t3+h3/2)    (15)In this embodiment, the third calculation subunit 3021 can calculate the coordinates (x3, y3) of the left eyeball center position by formula (15): Pot(x3, y3) = Pot(r3 + w3/2, t3 + h3/2)    (15)

通过公式(16)计算右眼球中心位置坐标(x4,y4),Calculate the coordinates (x4 , y4 ) of the center position of the right eyeball by formula (16),

Pot(x4,y4)=Pot(r4+w4/2,t4+h4/2)    (16)

第四计算子单元3022可以通过公式(17)计算左眼球中心与左眼图像的最左边之间的第一横向距离d1:d1=x3–r1   (17)The fourth calculation subunit 3022 can calculate the first lateral distance d1 between the left eyeball center and the leftmost edge of the left eye image by formula (17): d1 = x3 – r1 (17)

通过公式(18)计算左眼球中心与左眼图像的最上边之间的第一纵向距离d3:d3=y3–t1    (18)The first longitudinal distance d3 between the center of the left eyeball and the uppermost edge of the left eye image is calculated by formula (18): d3 = y3 -t1 (18)

通过公式(19)计算右眼球中心与右眼图像的最右边之间的第二横向距离d2:d2=r2+w2–x4      (19)The second lateral distance d2 between the center of the right eyeball and the rightmost side of the right eye image is calculated by formula (19): d2 = r2 + w2 -x4 (19)

通过公式(20)计算右眼球中心与右眼图像的最下边之间的第二纵向距离d4:d4=t2+h2–y4     (20)Calculate the second longitudinal distance d4 between the center of the right eyeball and the lowermost edge of the right eye image by formula (20): d4 = t2 + h2 -y4 (20)
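Formulas (17)–(20) can be sketched together. This is an illustrative function (names are not from the original): the two eye rectangles are passed as `(r, t, w, h)` tuples and the eyeball centers as `(x, y)` pairs from formulas (15)–(16).

```python
def eyeball_offsets(left_eye, right_eye, left_ball_center, right_ball_center):
    """Formulas (17)-(20): offsets of the eyeball centers inside the eye boxes.
    `left_eye`/`right_eye` are (r, t, w, h) rectangles; the *_center
    arguments are (x, y) eyeball-center coordinates."""
    r1, t1, w1, h1 = left_eye
    r2, t2, w2, h2 = right_eye
    x3, y3 = left_ball_center
    x4, y4 = right_ball_center
    d1 = x3 - r1        # (17) left eyeball center to left edge of left-eye box
    d3 = y3 - t1        # (18) left eyeball center to top edge of left-eye box
    d2 = r2 + w2 - x4   # (19) right eyeball center to right edge of right-eye box
    d4 = t2 + h2 - y4   # (20) right eyeball center to bottom edge of right-eye box
    return d1, d2, d3, d4
```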

第五计算子单元3023可以通过公式(21)计算横向校准数据m:The fifth calculation subunit 3023 can calculate the horizontal calibration data m by formula (21):

m=d1/d2     (21)m = d1 / d2 (21)

通过公式(22)计算纵向校准数据n:n=d3/d4    (22)Calculate the longitudinal calibration data n by formula (22): n = d3 / d4 (22)

本实施例的眼动校准控制装置，在指定观看区域设置9个定位点，人眼依次注视这9个定位点，依次记录人眼注视一个定位点时的校准数据和该定位点的对应关系。在人眼注视一个定位点时，通过摄像头获取图像，从图像中查找人脸图像，再从人脸图像中查找人眼图像，最后从人眼图像中查找眼球图像，该方法查找效率快且准确度高；根据人眼位置数据和眼球位置数据，计算出距离校准数据d、横向校准数据m和纵向校准数据n，将d、m、n和该定位点的位置信息保存在存储器中。所有定位点均采集完数据后，通过9个定位点的距离校准数据可以校准人眼距离指定观看区域的距离，从而将用户与指定观看区域的距离限定在指定范围内；通过9个定位点的横向校准数据和纵向校准数据可以推算用户视线所看向的指定观看区域的位置，视线追踪的准确度高。本实施例的眼动控制校准数据获取装置无需采用专用设备，且可以根据用户的眼动习惯来采集数据，用户体验好。In the eye movement calibration control apparatus of this embodiment, nine positioning points are set in the designated viewing area; the human eye gazes at these nine points in turn, and the correspondence between each positioning point and the calibration data recorded while the eye gazes at it is stored in turn. While the human eye gazes at a positioning point, an image is acquired by the camera, a face image is located in that image, a human eye image is then located in the face image, and finally an eyeball image is located in the human eye image; this coarse-to-fine search is both fast and highly accurate. The distance calibration data d, horizontal calibration data m, and vertical calibration data n are calculated from the human eye position data and the eyeball position data, and d, m, n, and the position information of the positioning point are stored in a memory. After data has been collected for all positioning points, the distance calibration data of the nine points can be used to calibrate the distance between the human eye and the designated viewing area, thereby confining the user's distance from the designated viewing area to a specified range; the horizontal and vertical calibration data of the nine points can be used to infer which position in the designated viewing area the user's line of sight falls on, giving highly accurate gaze tracking. The apparatus for acquiring eye movement control calibration data in this embodiment requires no special equipment and collects data according to the user's own eye movement habits, so the user experience is good.

本申请还提出一种计算机设备03，其包括处理器04、存储器01及存储于所述存储器01上并可在所述处理器04上运行的计算机程序02，所述处理器04执行所述计算机程序02时实现上述的眼动控制校准数据获取方法。The present application further provides a computer device 03, which includes a processor 04, a memory 01, and a computer program 02 stored on the memory 01 and executable on the processor 04; when the processor 04 executes the computer program 02, the above-described method for acquiring eye movement control calibration data is implemented.

Claims (17)

Translated from Chinese
一种眼动控制校准数据获取方法,其特征在于,包括:A method for acquiring eye movement control calibration data, comprising:依次获取人眼注视多个定位点的用户图像;其中,多个所述定位点预设于指定观看区域内;Sequentially acquiring user images where the human eye fixes on a plurality of anchor points, wherein the anchor points are preset in a designated viewing area;依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;Searching the human eye image and the eyeball image from the user image in order to obtain human eye position data and eyeball position data;根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。Calculate calibration data according to the human eye position data and the eyeball position data, and sequentially record the calibration data and the corresponding plurality of position information of the anchor points.如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述依次从所述用户图像中查找人眼图像和眼球图像,获取人眼图像位置数据和眼球图像位置数据的步骤,包括:The method for acquiring eye movement control calibration data according to claim 1, wherein the steps of sequentially searching for a human eye image and an eyeball image from the user image, and obtaining human eye image position data and eyeball image position data, include:从所述用户图像中查找人脸图像;Searching for a face image from the user image;从所述人脸图像中查找人眼图像,以及从所述人脸图像中获取人眼位置数据,所述人眼图像包括左眼图像和右眼图像;Searching for a human eye image from the human face image, and acquiring human eye position data from the human face image, the human eye image including a left eye image and a right eye image;从所述人眼图像中查找眼球图像,以及从所述人脸图像中获取眼球位置数据。Find an eyeball image from the human eye image, and obtain eyeball position data from the human face image.如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述校准数据包括距离校准数据、横向校准数据和纵向校准数据,所述根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息的步骤,包括:The method for acquiring eye movement control calibration data according to claim 1, wherein the calibration data comprises distance calibration data, horizontal calibration data and vertical calibration data, and the method is based on the position data of the human eye and the position of the eyeball. 
The step of calculating the calibration data by data, and sequentially recording the calibration data and the corresponding position information of the plurality of positioning points includes:根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据;以及根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据;Calculating distance calibration data when a human eye fixes on one of the positioning points according to the human eye position data; and calculating a lateral position of an eyeball when the human eye fixes on one of the positioning points according to the human eye position data and the eyeball position data Calibration data and longitudinal calibration data of eyeball position;将所述距离校准数据、横向校准数据、纵向校准数据和对应的所述定位点位置信息保存在存储器中。The distance calibration data, the horizontal calibration data, the vertical calibration data, and the corresponding position information of the anchor point are stored in a memory.如权利要求3所述的眼动控制校准数据获取方法,其特征在于,所述根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据的步骤,包括:The method for acquiring eye movement control calibration data according to claim 3, wherein the step of calculating distance calibration data when a human eye fixates on one of the positioning points according to the human eye position data, comprises:根据所述人眼位置数据包括的左眼位置数据,计算左眼中心位置坐标;以及根据所述人眼位置数据包括的右眼位置数据,计算右眼中心位置坐标;Calculating left eye center position coordinates according to the left eye position data included in the human eye position data; and calculating right eye center position coordinates according to the right eye position data included in the human eye position data;根据所述左眼中心位置坐标以及所述右眼中心位置坐标,计算左眼中心与右眼中心的距离,获得所述距离校准数据。Calculate the distance between the center of the left eye and the center of the right eye according to the coordinates of the left eye center position and the right eye center position, and obtain the distance calibration data.如权利要求3所述的眼动控制校准数据获取方法,其特征在于,所述根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据的步骤,包括:The method for acquiring calibration data of eye movement control according to claim 
3, wherein the horizontal calibration data of the position of the eyeball when the human eye fixates on one of the positioning points is calculated according to the position data of the human eye and the position data of the eyeball. Steps to calibrate the data longitudinally with the eyeball position include:根据所述眼球位置数据包括的左眼球位置数据,计算左眼球中心位置坐标;以及根据所述眼球位置数据包括的右眼球位置数据,计算右眼球中心位置坐标;Calculate the coordinates of the left eyeball center position according to the left eyeball position data included in the eyeball position data; and calculate the right eyeball center position coordinates according to the right eyeball position data included in the eyeball position data;根据所述左眼球中心位置坐标和所述左眼位置数据,计算左眼球中心与所述左眼图像的最左边之间的第一横向距离,和左眼球中心与所述左眼图像的最上边之间的第一纵向距离;以及根据所述右眼球中心位置坐标和所述右眼位置数据,计算右眼球中心与所述右眼图像的最右边之间的第二横向距离,和右眼球中心与右眼图像的最下边之间的第二纵向距离;Calculating a first lateral distance between the center of the left eyeball and the leftmost of the left-eye image, and the center of the left eyeball and the uppermost edge of the left-eye image according to the left-eye center position coordinates and the left-eye position data A first longitudinal distance therebetween; and a second lateral distance between the center of the right eyeball and the rightmost side of the right eye image, and the center of the right eyeball, based on the right eyeball center position coordinates and the right eye position data A second longitudinal distance from the lowermost edge of the right-eye image;计算所述第一横向距离与所述第二横向距离的比值,获得所述横向校准数据;以及计算所述第一纵向距离与所述第二纵向距离的比值,获得所述纵向校准数据。Calculating a ratio of the first lateral distance to the second lateral distance to obtain the lateral calibration data; and calculating a ratio of the first longitudinal distance to the second longitudinal distance to obtain the longitudinal calibration data.如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述依次获取人眼注视多个定位点的用户图像的步骤,包括:The method for acquiring eye movement control calibration data according to claim 1, wherein the step of sequentially acquiring user images in 
which a human eye looks at a plurality of anchor points comprises:分别发送提醒用户持续注视每个所述定位点的提醒信息;Sending reminder information reminding the user to continuously watch each of the anchor points;分别判断当前时刻距发送所述提醒信息的时刻之间的时长是否大于预设的注视时长;Separately determining whether the time between the current time and the time when the reminder information is sent is greater than a preset gaze duration;若是,则分别生成拍摄所述用户图像的指令,以获取所述用户图像。If yes, an instruction for capturing the user image is generated to obtain the user image.如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述依次获取人眼注视多个定位点的用户图像的步骤,包括:The method for acquiring eye movement control calibration data according to claim 1, wherein the step of sequentially acquiring user images in which a human eye looks at a plurality of anchor points comprises:分别发送提醒用户持续注视每个所述定位点的提醒信息;Sending reminder information reminding the user to continuously watch each of the anchor points;分别获取摄像头实时采集的图像;Acquire the images collected by the camera in real time;分别通过预训练的分类器判断所述图像内所包含的人眼的状态;Judge the state of the human eye contained in the image through a pre-trained classifier, respectively;若所述人眼处于注视状态,则分别从实时采集的所述图像中获取所述用户图像。If the human eye is in a gaze state, the user images are acquired from the images acquired in real time, respectively.如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述指定观看区域内包括左上、中上、右上、左中、中中、右中、左下、中下和右下的9个所述定位点。The method for acquiring eye movement control calibration data according to claim 1, wherein the designated viewing area includes upper left, middle upper, upper right, left middle, middle, right middle, lower left, lower middle, and lower right 9 said anchor points.一种眼动控制校准数据获取装置,其特征在于,包括:An eye movement control calibration data acquisition device, comprising:图像获取模块,用于依次获取人眼注视多个定位点的用户图像;其中,多 个所述定位点预设于指定观看区域内;An image acquisition module, configured to sequentially obtain user images where a human eye fixes on a plurality of anchor points; wherein the anchor points are preset in a designated viewing area;图像分析模块,用于依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;An image analysis module, 
configured to sequentially search for a human eye image and an eyeball image in the user image, and to obtain human eye position data and eyeball position data;

a data calculation module, configured to calculate calibration data according to the human eye position data and the eyeball position data, and to sequentially record the calibration data and the corresponding position information of the plurality of positioning points.

The apparatus for acquiring eye movement control calibration data according to claim 9, wherein the image analysis module comprises:

a face search unit, configured to search for a face image in the user image;

a human eye search unit, configured to search for a human eye image in the face image and to obtain human eye position data from the face image, the human eye image comprising a left-eye image and a right-eye image;

an eyeball search unit, configured to search for an eyeball image in the human eye image and to obtain eyeball position data from the face image.

The apparatus for acquiring eye movement control calibration data according to claim 9, wherein the calibration data comprises distance calibration data, lateral calibration data and longitudinal calibration data, and the data calculation module comprises:

a first data acquisition unit, configured to calculate, according to the human eye position data, the distance calibration data for when the human eye gazes at one of the positioning points;

a second data acquisition unit, configured to calculate, according to the human eye position data and the eyeball position data, the eyeball position
lateral calibration data and the eyeball position longitudinal calibration data for when the human eye gazes at one of the positioning points;

a data storage unit, configured to store the distance calibration data, the lateral calibration data, the longitudinal calibration data and the corresponding position information of the positioning points in a memory.

The apparatus for acquiring eye movement control calibration data according to claim 11, wherein the first data acquisition unit comprises:

a first calculation subunit, configured to calculate the coordinates of the left-eye center position according to the left-eye position data included in the human eye position data, and to calculate the coordinates of the right-eye center position according to the right-eye position data included in the human eye position data;

a second calculation subunit, configured to calculate the distance between the left-eye center and the right-eye center according to the left-eye center position coordinates and the right-eye center position coordinates, so as to obtain the distance calibration data.

The apparatus for acquiring eye movement control calibration data according to claim 11, wherein the second data acquisition unit comprises:

a third calculation subunit, configured to calculate the coordinates of the left eyeball center position according to the left eyeball position data included in the eyeball position data, and to calculate the coordinates of the right eyeball center position according to the right eyeball position data included in the eyeball position
data;

a fourth calculation subunit, configured to calculate, according to the left eyeball center position coordinates and the left-eye position data, a first lateral distance between the left eyeball center and the leftmost edge of the left-eye image, and a first longitudinal distance between the left eyeball center and the uppermost edge of the left-eye image; and to calculate, according to the right eyeball center position coordinates and the right-eye position data, a second lateral distance between the right eyeball center and the rightmost edge of the right-eye image, and a second longitudinal distance between the right eyeball center and the lowermost edge of the right-eye image;

a fifth calculation subunit, configured to calculate the ratio of the first lateral distance to the second lateral distance to obtain the lateral calibration data, and to calculate the ratio of the first longitudinal distance to the second longitudinal distance to obtain the longitudinal calibration data.

The apparatus for acquiring eye movement control calibration data according to claim 9, wherein the image acquisition module comprises:

a first reminder unit, configured to separately send reminder information reminding the user to keep gazing at each of the positioning points;

a first judgment unit, configured to separately determine whether the duration between the current moment and the moment the reminder information was sent is greater than a preset gaze duration;

a first image acquisition unit, configured to generate
an instruction to capture the user image if the duration between the current moment and the moment the reminder information was sent is greater than the preset gaze duration, so as to obtain the user image.

The apparatus for acquiring eye movement control calibration data according to claim 9, wherein the image acquisition module comprises:

a second reminder unit, configured to separately send information reminding the user to keep gazing at each of the positioning points;

a real-time image acquisition unit, configured to separately acquire images captured by a camera in real time;

a second judgment unit, configured to separately judge, through a pre-trained classifier, the state of the human eye contained in each image;

a second image acquisition unit, configured to separately obtain the user image from the images captured in real time if the human eye is in a gaze state.

The apparatus for acquiring eye movement control calibration data according to claim 9, wherein the designated viewing area includes nine of the positioning points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle and lower-right.

A computer device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for acquiring eye movement control calibration data according to any one of claims 1 to 8.
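The calibration quantities defined in the claims above — the distance between the two eye centers, and the lateral/longitudinal ratios of the eyeball-center offsets inside each eye image — can be sketched in Python as follows. The (x, y, w, h) bounding-box representation and all function names are illustrative assumptions, not structures taken from the patent.

```python
# Illustrative sketch of the calibration computations described in the claims.
# Boxes are (x, y, w, h) bounding boxes in image coordinates; these names and
# the data layout are assumptions, not the patent's actual structures.
import math

def center(box):
    """Center point of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def distance_calibration(left_eye_box, right_eye_box):
    """Distance calibration data: distance between the two eye centers."""
    lx, ly = center(left_eye_box)
    rx, ry = center(right_eye_box)
    return math.hypot(rx - lx, ry - ly)

def lateral_longitudinal_calibration(left_eye_box, right_eye_box,
                                     left_ball_box, right_ball_box):
    """Ratios of the eyeball-center offsets inside each eye image.

    first lateral distance:      left eyeball center to the LEFT edge of the left-eye image
    second lateral distance:     right eyeball center to the RIGHT edge of the right-eye image
    first longitudinal distance: left eyeball center to the TOP edge of the left-eye image
    second longitudinal distance: right eyeball center to the BOTTOM edge of the right-eye image
    """
    lbx, lby = center(left_ball_box)
    rbx, rby = center(right_ball_box)
    lex, ley, lew, leh = left_eye_box
    rex, rey, rew, reh = right_eye_box
    d1_lat = lbx - lex                # to leftmost edge of left-eye image
    d2_lat = (rex + rew) - rbx        # to rightmost edge of right-eye image
    d1_lon = lby - ley                # to uppermost edge of left-eye image
    d2_lon = (rey + reh) - rby        # to lowermost edge of right-eye image
    return d1_lat / d2_lat, d1_lon / d2_lon
```

With symmetric gaze (eyeball centered in each eye box) both ratios come out as 1.0; shifting the eyeballs sideways or vertically skews the ratios, which is what the recorded per-point calibration captures.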
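The first acquisition variant (remind, wait out a preset gaze duration, then issue a capture instruction, once per positioning point) can be sketched as below. The injected callables `remind`, `capture` and `clock` are illustrative stand-ins, not APIs from the patent; a real implementation would use a timer or event loop rather than this polling-style wait.

```python
# Sketch of the timed acquisition variant: for each positioning point, send a
# reminder, wait until the preset gaze duration has elapsed, then capture.
# `remind`, `capture` and `clock` are assumed callables, not the patent's API.
def acquire_user_images(points, gaze_duration, remind, capture, clock):
    """Return one captured user image per positioning point."""
    images = []
    for point in points:
        remind(point)                          # remind: keep gazing at this point
        start = clock()
        while clock() - start <= gaze_duration:
            pass                               # poll until the gaze duration passes
        images.append(capture())               # generate the capture instruction
    return images
```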
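The second acquisition variant (poll camera frames and keep a frame only once a pre-trained classifier reports the eye is in a gaze state) can be sketched as follows. Here `frames` and `is_gazing` are illustrative stand-ins for the camera stream and the classifier, not the patent's implementation.

```python
# Sketch of the classifier-driven acquisition variant: for each positioning
# point, scan real-time frames and keep the first one the classifier judges
# to be in a gaze state. `frames` and `is_gazing` are assumed callables.
def acquire_by_classifier(points, frames, is_gazing, remind):
    """Return the first gaze-state frame captured for each positioning point."""
    images = []
    for point in points:
        remind(point)                    # remind: keep gazing at this point
        for frame in frames(point):      # frames captured in real time
            if is_gazing(frame):         # classifier: eye is in gaze state
                images.append(frame)
                break
    return images
```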
PCT/CN2019/073766 | 2018-08-31 | 2019-01-29 | Method and apparatus for acquiring eye movement control calibration data | Ceased | WO2020042542A1 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN201811014201.1 | 2018-08-31
CN201811014201.1A | CN109343700B (en) | 2018-08-31 | 2018-08-31 | Eye movement control calibration data acquisition method and device

Publications (1)

Publication Number | Publication Date
WO2020042542A1 (en) | 2020-03-05

Family

ID=65292236

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/CN2019/073766 | Ceased | WO2020042542A1 (en) | Method and apparatus for acquiring eye movement control calibration data

Country Status (2)

Country | Link
CN (1) | CN109343700B (en)
WO (1) | WO2020042542A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111444789A (en)* | 2020-03-12 | 2020-07-24 | 深圳市时代智汇科技有限公司 | Myopia prevention method and system based on video induction technology
CN113255476A (en)* | 2021-05-08 | 2021-08-13 | 西北大学 | Target tracking method and system based on eye movement tracking and storage medium
CN114995412A (en)* | 2022-05-27 | 2022-09-02 | 东南大学 | A remote control car control system and method based on eye tracking technology
CN115100575A (en)* | 2022-07-22 | 2022-09-23 | 北方民族大学 | An eye movement video data processing method and system based on image processing technology

Families Citing this family (8)

Publication number | Priority date | Publication date | Assignee | Title
CN109976528B (en)* | 2019-03-22 | 2023-01-24 | 北京七鑫易维信息技术有限公司 | Method for adjusting watching area based on head movement and terminal equipment
CN110275608B (en)* | 2019-05-07 | 2020-08-04 | 清华大学 | Human eye sight tracking method
CN110399930B (en)* | 2019-07-29 | 2021-09-03 | 北京七鑫易维信息技术有限公司 | Data processing method and system
CN110780742B (en)* | 2019-10-31 | 2021-11-02 | Oppo广东移动通信有限公司 | Eye tracking processing method and related device
CN111290580B (en)* | 2020-02-13 | 2022-05-31 | Oppo广东移动通信有限公司 | Calibration method and related device based on gaze tracking
JP7640291B2 | 2021-03-08 | 2025-03-05 | 本田技研工業株式会社 | Calibration device and calibration method
CN113918007B (en)* | 2021-04-27 | 2022-07-05 | 广州市保伦电子有限公司 | Video interactive operation method based on eyeball tracking
CN116824683B (en)* | 2023-02-20 | 2023-12-12 | 广州视景医疗软件有限公司 | Eye movement data acquisition method and system based on mobile equipment

Citations (5)

Publication number | Priority date | Publication date | Assignee | Title
US20060110008A1 (en)* | 2003-11-14 | 2006-05-25 | Roel Vertegaal | Method and apparatus for calibration-free eye tracking
CN101807110A (en)* | 2009-02-17 | 2010-08-18 | 由田新技股份有限公司 | Pupil positioning method and system
CN102802502A (en)* | 2010-03-22 | 2012-11-28 | 皇家飞利浦电子股份有限公司 | System and method for tracking the point of gaze of an observer
CN105094337A (en)* | 2015-08-19 | 2015-11-25 | 华南理工大学 | Three-dimensional gaze estimation method based on irises and pupils
CN109375765A (en)* | 2018-08-31 | 2019-02-22 | 深圳市沃特沃德股份有限公司 | Eyeball tracking exchange method and device

Family Cites Families (6)

Publication number | Priority date | Publication date | Assignee | Title
CN102830793B (en)* | 2011-06-16 | 2017-04-05 | 北京三星通信技术研究有限公司 | Sight tracing and equipment
CN102662476B (en)* | 2012-04-20 | 2015-01-21 | 天津大学 | Gaze estimation method
CN107436675A (en)* | 2016-05-25 | 2017-12-05 | 深圳纬目信息技术有限公司 | A kind of visual interactive method, system and equipment
US9996744B2 (en)* | 2016-06-29 | 2018-06-12 | International Business Machines Corporation | System, method, and recording medium for tracking gaze using only a monocular camera from a moving screen
CN107633240B (en)* | 2017-10-19 | 2021-08-03 | 京东方科技集团股份有限公司 | Eye tracking method and device, smart glasses
CN108427503B (en)* | 2018-03-26 | 2021-03-16 | 京东方科技集团股份有限公司 | Human eye tracking method and human eye tracking device

Cited By (6)

Publication number | Priority date | Publication date | Assignee | Title
CN111444789A (en)* | 2020-03-12 | 2020-07-24 | 深圳市时代智汇科技有限公司 | Myopia prevention method and system based on video induction technology
CN111444789B (en)* | 2020-03-12 | 2023-06-20 | 深圳市时代智汇科技有限公司 | Myopia prevention method and system based on video induction technology
CN113255476A (en)* | 2021-05-08 | 2021-08-13 | 西北大学 | Target tracking method and system based on eye movement tracking and storage medium
CN113255476B (en)* | 2021-05-08 | 2023-05-19 | 西北大学 | A target tracking method, system and storage medium based on eye tracking
CN114995412A (en)* | 2022-05-27 | 2022-09-02 | 东南大学 | A remote control car control system and method based on eye tracking technology
CN115100575A (en)* | 2022-07-22 | 2022-09-23 | 北方民族大学 | An eye movement video data processing method and system based on image processing technology

Also Published As

Publication number | Publication date
CN109343700A (en) | 2019-02-15
CN109343700B (en) | 2020-10-27

Similar Documents

Publication | Publication Date | Title
WO2020042542A1 (en) | Method and apparatus for acquiring eye movement control calibration data
CN109375765B (en) | Eyeball tracking interaction method and device
CN105913487B (en) | One kind is based on the matched direction of visual lines computational methods of iris edge analysis in eye image
Li et al. | Learning to predict gaze in egocentric video
CN104951084B (en) | Eye-controlling focus method and device
CN105184246B (en) | Living body detection method and living body detection system
US9075453B2 (en) | Human eye controlled computer mouse interface
CN104978548B (en) | A kind of gaze estimation method and device based on three-dimensional active shape model
EP2992405B1 (en) | System and method for probabilistic object tracking over time
CN108985210A (en) | A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
KR101288447B1 (en) | Gaze tracking apparatus, display apparatus and method therof
CN114391117A (en) | Eye tracking delay enhancement
CN107729871A (en) | Infrared light-based human eye movement track tracking method and device
CN105912126B (en) | A kind of gesture motion is mapped to the adaptive adjusting gain method at interface
CN110051319A (en) | Adjusting method, device, equipment and the storage medium of eyeball tracking sensor
WO2024113275A1 (en) | Gaze point acquisition method and apparatus, electronic device, and storage medium
WO2023071882A1 (en) | Human eye gaze detection method, control method and related device
CN110321820A (en) | A kind of sight drop point detection method based on contactless device
CN114078278A (en) | Method and device for positioning fixation point, electronic equipment and storage medium
KR20230085901A (en) | Method and device for providing alopecia information
US20170115513A1 (en) | Method of determining at least one behavioural parameter
Yang et al. | Continuous gaze tracking with implicit saliency-aware calibration on mobile devices
CN117576771A (en) | Visual attention assessment method, device, medium and equipment
Yang et al. | vGaze: Implicit saliency-aware calibration for continuous gaze tracking on mobile devices
CN114092985A (en) | A terminal control method, device, terminal and storage medium

Legal Events

Date | Code | Title | Description

121 | Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19854978

Country of ref document: EP

Kind code of ref document: A1

NENP | Non-entry into the national phase

Ref country code: DE

122 | Ep: PCT application non-entry in European phase

Ref document number: 19854978

Country of ref document: EP

Kind code of ref document: A1

