







The present application relates to the field of human-computer interaction, and in particular to a method and device for acquiring eye movement control calibration data.
Eye movement control is a non-contact form of human-computer interaction in which the position of the eye's fixation point is calculated by tracking the eyeball. It is of great help to users who cannot operate a device with both hands. With the development of smart terminals, gaming computers with eye-tracking capability make gameplay more immersive. Conventional eye-tracking technology requires dedicated equipment, such as an eye tracker, and while using such equipment the user must move their eyes in the manner prescribed by the instructions in order to control the device. The trend in human-computer interaction is toward human-centered, friendlier, and more convenient designs, so eye tracking is likewise moving toward controlling devices according to each user's own eye movement habits. Each user can first calibrate the device according to their particular eye movement habits, so that subsequent eye movement control operates according to those habits. In the calibration step of the prior art, image processing is usually performed on an image of the user staring at a preset anchor point, and the pupil center position corresponding to that anchor point is calculated to collect calibration data. However, with calibration data obtained in this way, the accuracy of gaze estimation in subsequent eye-tracking operation is low, and the user experience is poor.
The purpose of this application is to provide a method and device for acquiring eye movement control calibration data, aiming to solve the prior-art problem that accurate eye movement control calibration data cannot be obtained according to a user's eye movement habits.
This application proposes a method for acquiring eye movement control calibration data, including:
sequentially acquiring user images in which the human eye fixates on each of a plurality of anchor points, wherein the anchor points are preset within a designated viewing area;
sequentially locating a human eye image and an eyeball image in each user image, and obtaining human eye position data and eyeball position data;
calculating calibration data from the human eye position data and the eyeball position data, and sequentially recording the calibration data together with the position information of the corresponding anchor points.
The present application also proposes an eye movement control calibration data acquisition device, including:
an image acquisition module, configured to sequentially acquire user images in which the human eye fixates on each of a plurality of anchor points, wherein the anchor points are preset within a designated viewing area;
an image analysis module, configured to sequentially locate a human eye image and an eyeball image in each user image and obtain human eye position data and eyeball position data;
a data calculation module, configured to calculate calibration data from the human eye position data and the eyeball position data, and to sequentially record the calibration data together with the position information of the corresponding anchor points.
The present application also proposes a computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above eye movement control calibration data acquisition method when executing the computer program.
In the eye movement control calibration data acquisition method and device of the present application, at least one anchor point is preset in a designated viewing area. While the human eye fixates on an anchor point, an image is captured with an ordinary camera; the human eye image and the eyeball image are located in that image, calibration data is calculated from the human eye position data and the eyeball position data, and the calibration data and the position information of that anchor point are stored in memory, until data has been collected for all anchor points. The calibration data can be used in subsequent eye-tracking control to determine whether the distance between the user and the designated viewing area is within a preset range and to track the user's gaze position, improving the accuracy of gaze estimation. The method and device require no dedicated equipment and collect data according to the user's own eye movement habits, giving a good user experience.
FIG. 1 is a schematic flowchart of an eye movement control calibration data acquisition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the anchor points of a designated viewing area according to an embodiment of the present application (FIG. 2a shows the individual anchor points, FIG. 2b the division into left and right regions, and FIG. 2c the division into upper and lower regions);
FIG. 3 is a schematic structural block diagram of an eye movement control calibration data acquisition device according to an embodiment of the present application;
FIG. 4 is a schematic structural block diagram of the image analysis module in FIG. 3;
FIG. 5 is a schematic structural block diagram of the data calculation module in FIG. 3;
FIG. 6 is a schematic structural block diagram of the first data acquisition unit in FIG. 5;
FIG. 7 is a schematic structural block diagram of the second data acquisition unit in FIG. 5;
FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Referring to FIG. 1, an embodiment of the present application provides a method for acquiring eye movement control calibration data, including:
S1. Sequentially acquire user images in which the human eye fixates on each of a plurality of anchor points, wherein the anchor points are preset within a designated viewing area;
S2. Sequentially locate a human eye image and an eyeball image in each user image, and obtain human eye position data and eyeball position data;
S3. Calculate calibration data from the human eye position data and the eyeball position data, and sequentially record the calibration data together with the position information of the corresponding anchor points.
In this embodiment, the designated viewing area in step S1 includes the interface of a terminal device with which the user interacts, for example a smartphone display, tablet display, smart TV display, personal computer display, or laptop display; this application does not limit it. The user image can be acquired by a camera, which may be the terminal's built-in front camera or an external camera, such as a mobile phone's front camera; this application does not limit it either. FIG. 2a is a schematic diagram of the anchor points of the designated viewing area, comprising nine anchor points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right. Referring to FIG. 2b, the part of the designated viewing area bounded by the upper-left, middle-left, lower-left, lower-middle, center, and upper-middle points is the left region, and the part bounded by the upper-right, middle-right, lower-right, lower-middle, center, and upper-middle points is the right region. Referring to FIG. 2c, the part bounded by the upper-left, middle-left, center, middle-right, upper-right, and upper-middle points is the upper region, and the part bounded by the lower-left, middle-left, center, middle-right, lower-right, and lower-middle points is the lower region.
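For illustration only, the nine-point layout can be generated as a 3×3 grid in normalized screen coordinates; a minimal sketch, assuming a top-left origin and a 0.1 edge inset (the application does not specify exact coordinates, and the point names are hypothetical):

```python
# Hypothetical 3x3 anchor-point grid in normalized screen coordinates,
# origin at the top-left corner of the designated viewing area.
MARGIN = 0.1  # assumed inset of the outer anchor points from the edges

def make_anchor_points(margin: float = MARGIN) -> list[tuple[str, float, float]]:
    """Return the nine anchor points as (name, x, y) tuples."""
    cols = ["left", "middle", "right"]
    rows = ["upper", "middle", "lower"]
    coords = [margin, 0.5, 1.0 - margin]
    return [(f"{row}-{col}", x, y)
            for row, y in zip(rows, coords)
            for col, x in zip(cols, coords)]
```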
Taking eye movement control of a mobile phone display as an example, the user, at a comfortable distance from the display according to their own habits, fixates on one anchor point on the display, and the phone's front camera captures an image of the human eye fixating on that point. For example, a fixation duration may be preset, and for each anchor point a reminder message may be sent prompting the user to keep looking at that point; the system then determines whether the time elapsed since the reminder was sent exceeds the preset fixation duration, and if so, generates an instruction to capture the user image, upon which the camera captures the image. Alternatively, after the reminder for each anchor point is sent, the camera may continuously capture images in real time while a pre-trained classifier distinguishes the state of the human eye; if the eye is judged to be in the fixation state, any frame captured during the fixation state is taken as the user image. The human eye image and the eyeball image are then located in the acquired image to obtain human eye position data and eyeball position data; a series of calibration data is calculated from these, and the correspondence between the calibration data and the anchor points is recorded in turn. The calibration data can be used in subsequent eye-tracking control to determine whether the distance between the user and the designated viewing area is within a preset range and to track the user's gaze position, improving the accuracy of gaze estimation.
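A minimal sketch of the timed-capture variant described above, assuming OpenCV for camera access and a 2-second fixation duration (the application specifies neither):

```python
import time

import cv2

FIXATION_SECONDS = 2.0  # assumed preset fixation duration

def capture_after_fixation(cap: cv2.VideoCapture, anchor_name: str):
    """Prompt the user to stare at one anchor point, wait out the preset
    fixation duration, then capture a single user image."""
    print(f"Please keep looking at the {anchor_name} point")  # stand-in reminder
    sent_at = time.monotonic()
    # Capture only once the time elapsed since the reminder was sent
    # exceeds the preset fixation duration.
    while time.monotonic() - sent_at <= FIXATION_SECONDS:
        cap.grab()  # keep the camera buffer fresh while waiting
        time.sleep(0.05)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("camera capture failed")
    return frame
```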
Specifically, the user in this embodiment first looks at the upper-left anchor point; the camera captures an image of the eye fixating on that point, the human eye image and eyeball image are located in the image, human eye position data and eyeball position data are obtained, calibration data is calculated, and the correspondence between the calibration data and the upper-left anchor point is recorded. The user then looks at the upper-middle anchor point, the remaining steps being the same as for the upper-left point, and so on until calibration data and the corresponding anchor points have been collected for all nine points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right.
In this embodiment, step S2 of sequentially locating the human eye image and the eyeball image in the user image and obtaining the human eye position data and eyeball position data includes:
S21. Locate a face image in the user image;
S22. Locate a human eye image in the face image, and obtain human eye position data from the face image, the human eye image including a left-eye image and a right-eye image;
S23. Locate an eyeball image in the human eye image, and obtain eyeball position data from the face image.
In this embodiment, step S21 first locates a face image in the captured image; if no face image is found, the method returns to step S1 and the relative position of the user and the designated viewing area is adjusted until a face image can be found in the camera image. There are many face detection methods, for example: applying face rules (such as the distribution of the eyes, nose, and mouth) to the input image; detecting faces by searching for invariant facial features (such as skin color, edges, and texture); describing the facial features with a standard face template, computing the correlation between the input image and the template, and comparing the correlation value with a preset threshold to decide whether a face is present; or treating the face region as a class of patterns and training on a large number of face samples to learn the underlying rules and construct a classifier, detecting faces by classifying all candidate region patterns in the image. In this embodiment, the located face image is marked with a rectangular bounding box.
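As one hedged illustration of the classifier-based variant, using OpenCV's bundled Haar cascade as a stand-in detector (the application does not name a specific method):

```python
import cv2

# Pre-trained frontal-face Haar cascade shipped with OpenCV, standing in
# for the trained classifier described above.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_face(user_image):
    """Return the face bounding box (x, y, w, h), or None so the caller
    can return to step S1 and re-acquire the image."""
    gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
```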
Step S22 searches for the human eye image within the rectangular box of the face image, which narrows the search range and improves the efficiency and accuracy of eye detection; if no human eye image is found, the method returns to step S1 and re-acquires the image until a human eye image can be found in step S22. Eye detection methods include template-matching methods, statistics-based methods, and knowledge-based methods. Template-matching methods include gray-level projection templates and geometric feature templates: gray-level projection projects the face gray image horizontally and vertically, computes the gray values and/or gray-level function values along each direction, finds the characteristic change points, and then combines the change-point positions in the two directions according to prior knowledge to obtain the eye positions; geometric feature templates use the individual and distribution features of the eyes as the basis for eye detection. Statistics-based methods generally train on a large number of target and non-target samples to obtain a set of model parameters, then build a classifier or filter on the model to detect the target. Knowledge-based methods determine the application environment of the image, summarize the knowledge usable for eye detection under the specific conditions (such as contour, color, and position information), and condense it into rules that guide eye detection. In this embodiment, the left-eye image and the right-eye image are each marked with a rectangular box, and the following human eye position data is obtained:
r1: the distance from the upper-left vertex of the left-eye rectangle to the left edge of the face image;
t1: the distance from the upper-left vertex of the left-eye rectangle to the top edge of the face image;
w1: the width of the left-eye rectangle; h1: the height of the left-eye rectangle;
r2: the distance from the upper-left vertex of the right-eye rectangle to the left edge of the face image;
t2: the distance from the upper-left vertex of the right-eye rectangle to the top edge of the face image;
w2: the width of the right-eye rectangle; h2: the height of the right-eye rectangle.
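Continuing the sketch, the eye rectangles and the face-relative quantities r1, t1, w1, h1 and r2, t2, w2, h2 defined above might be obtained as follows (the Haar eye cascade and the leftmost-box-is-left-eye convention are assumptions):

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")  # assumed eye detector

def find_eyes(user_image, face_box):
    """Detect both eyes inside the face rectangle and return their boxes
    (r, t, w, h) measured from the left/top edges of the face image."""
    fx, fy, fw, fh = face_box
    face_roi = cv2.cvtColor(user_image[fy:fy+fh, fx:fx+fw], cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # caller returns to step S1 and re-acquires
    # Keep the two largest detections and order them left to right, so
    # the first box plays the role of the left-eye image.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    (r1, t1, w1, h1), (r2, t2, w2, h2) = sorted(eyes, key=lambda e: e[0])
    return (r1, t1, w1, h1), (r2, t2, w2, h2)
```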
Step S23 locates the left eyeball image within the left-eye image and the right eyeball image within the right-eye image; if no eyeball image is found, the method returns to step S1 and re-acquires the image until an eyeball image can be found in step S23. Eyeball detection methods include neural network methods, extremum-position detection on integral projection curves of edge points, template matching, multi-resolution mosaic maps, geometry and symmetry detection, and Hough-transform-based methods. In this embodiment, the left eyeball image and the right eyeball image are each marked with a rectangular box, and the following eyeball position data is obtained:
r3: the distance from the upper-left vertex of the left-eyeball rectangle to the left edge of the face image;
t3: the distance from the upper-left vertex of the left-eyeball rectangle to the top edge of the face image;
w3: the width of the left-eyeball rectangle; h3: the height of the left-eyeball rectangle;
r4: the distance from the upper-left vertex of the right-eyeball rectangle to the left edge of the face image;
t4: the distance from the upper-left vertex of the right-eyeball rectangle to the top edge of the face image;
w4: the width of the right-eyeball rectangle; h4: the height of the right-eyeball rectangle.
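A hedged sketch of the Hough-transform variant listed above, returning the face-relative eyeball rectangle (r, t, w, h); the blur size and detection thresholds are assumptions:

```python
import cv2
import numpy as np

def find_eyeball(face_gray, eye_box):
    """Locate the eyeball inside one eye rectangle with a Hough circle
    search and return its box (r, t, w, h) relative to the face image."""
    r, t, w, h = eye_box
    eye_roi = cv2.medianBlur(face_gray[t:t+h, r:r+w], 5)
    circles = cv2.HoughCircles(
        eye_roi, cv2.HOUGH_GRADIENT, dp=1, minDist=int(w),
        param1=100, param2=15,               # assumed thresholds
        minRadius=int(h) // 8, maxRadius=int(h) // 2)
    if circles is None:
        return None  # caller returns to step S1 and re-acquires
    cx, cy, rad = np.around(circles[0][0]).astype(int)
    # Convert the circle to a face-relative bounding rectangle.
    return (r + cx - rad, t + cy - rad, 2 * rad, 2 * rad)
```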
This embodiment gives specific parameters for obtaining the eyeball position data from the face image. Under the inventive concept of this application, the eyeball position data can also be obtained from the human eye image; this application does not elaborate on that variant.
In this embodiment, the calibration data includes distance calibration data, lateral calibration data, and longitudinal calibration data, and step S3 of calculating the calibration data from the human eye position data and the eyeball position data and sequentially recording the calibration data together with the position information of the corresponding anchor points includes:
S31. Calculate, from the human eye position data, the distance calibration data for the eye fixating on one anchor point; and calculate, from the human eye position data and the eyeball position data, the lateral and longitudinal eyeball-position calibration data for the eye fixating on that anchor point;
S32. Store the distance calibration data, the lateral calibration data, the longitudinal calibration data, and the position information of the corresponding anchor point in a memory.
Steps S31 to S32 calculate the calibration data for the eye fixating on one anchor point and store the calibration data and the corresponding anchor point information in memory. In this embodiment, the calculation and storage are performed one by one for the nine anchor points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right. The distance calibration data locates the distance of the human eye from the designated viewing area; the lateral and longitudinal calibration data indicate the eyeball position when the eye looks at the given anchor point.
In this embodiment, the step of calculating, from the human eye position data, the distance calibration data for the eye fixating on one anchor point includes:
S321. Calculate the left-eye center position coordinates from the left-eye position data included in the human eye position data, and calculate the right-eye center position coordinates from the right-eye position data included in the human eye position data;
S322. Calculate the distance between the left-eye center and the right-eye center from the left-eye center position coordinates and the right-eye center position coordinates, obtaining the distance calibration data.
In this embodiment, step S321 can calculate the left-eye center position coordinates (x1, y1) by formula (1):
Pot(x1, y1) = Pot(r1 + w1/2, t1 + h1/2)    (1)
and the right-eye center position coordinates (x2, y2) by formula (2):
Pot(x2, y2) = Pot(r2 + w2/2, t2 + h2/2)    (2)
Step S322 can calculate the distance d between the left-eye center and the right-eye center by formula (3), where d is the distance calibration data:
d = √((x1 − x2)² + (y1 − y2)²)    (3)
The value of d can be used to determine the distance of the human eye from the designated viewing area.
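In code, formulas (1) to (3) reduce to a few lines; the Euclidean form of formula (3) follows the reconstruction above:

```python
def eye_center(r: float, t: float, w: float, h: float) -> tuple[float, float]:
    """Center of a face-relative rectangle, per formulas (1) and (2)."""
    return (r + w / 2, t + h / 2)

def distance_calibration(left_eye, right_eye) -> float:
    """Distance calibration data d: the distance between the two eye
    centers, per formula (3)."""
    x1, y1 = eye_center(*left_eye)
    x2, y2 = eye_center(*right_eye)
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
```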
In this embodiment, the step of calculating, from the human eye position data and the eyeball position data, the lateral and longitudinal eyeball-position calibration data for the eye fixating on one anchor point includes:
S331. Calculate the left-eyeball center position coordinates from the left-eyeball position data included in the eyeball position data, and calculate the right-eyeball center position coordinates from the right-eyeball position data included in the eyeball position data;
S332. Calculate, from the left-eyeball center position coordinates and the left-eye position data, the first lateral distance between the left-eyeball center and the left edge of the left-eye image, and the first longitudinal distance between the left-eyeball center and the top edge of the left-eye image; and calculate, from the right-eyeball center position coordinates and the right-eye position data, the second lateral distance between the right-eyeball center and the right edge of the right-eye image, and the second longitudinal distance between the right-eyeball center and the bottom edge of the right-eye image;
S333. Calculate the ratio of the first lateral distance to the second lateral distance to obtain the lateral calibration data, and calculate the ratio of the first longitudinal distance to the second longitudinal distance to obtain the longitudinal calibration data.
In this embodiment, step S331 can calculate the left-eyeball center position coordinates (x3, y3) by formula (4):
Pot(x3, y3) = Pot(r3 + w3/2, t3 + h3/2)    (4)
and the right-eyeball center position coordinates (x4, y4) by formula (5):
Pot(x4, y4) = Pot(r4 + w4/2, t4 + h4/2)    (5)
Step S332 can calculate the first lateral distance d1 between the left-eyeball center and the left edge of the left-eye image by formula (6):
d1 = x3 − r1    (6)
the first longitudinal distance d3 between the left-eyeball center and the top edge of the left-eye image by formula (7):
d3 = y3 − t1    (7)
the second lateral distance d2 between the right-eyeball center and the right edge of the right-eye image by formula (8):
d2 = r2 + w2 − x4    (8)
and the second longitudinal distance d4 between the right-eyeball center and the bottom edge of the right-eye image by formula (9):
d4 = t2 + h2 − y4    (9)
Step S333 can calculate the lateral calibration data m by formula (10):
m = d1/d2    (10)
and the longitudinal calibration data n by formula (11):
n = d3/d4    (11)
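Formulas (4) to (11) translate directly, reusing the eye_center helper from the previous sketch; each argument is a face-relative box (r, t, w, h):

```python
def gaze_calibration(left_eye, right_eye, left_ball, right_ball):
    """Lateral and longitudinal calibration data (m, n), per (4)-(11)."""
    r1, t1, w1, h1 = left_eye
    r2, t2, w2, h2 = right_eye
    x3, y3 = eye_center(*left_ball)   # formula (4)
    x4, y4 = eye_center(*right_ball)  # formula (5)
    d1 = x3 - r1            # (6) left-eyeball center to left edge of left eye
    d3 = y3 - t1            # (7) ... to top edge of left eye
    d2 = r2 + w2 - x4       # (8) right-eyeball center to right edge of right eye
    d4 = t2 + h2 - y4       # (9) ... to bottom edge of right eye
    return d1 / d2, d3 / d4  # (10) m and (11) n
```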
In the eye movement calibration control method of this embodiment, nine anchor points are set in the designated viewing area; the human eye fixates on the nine points in turn, and the correspondence between the calibration data and each anchor point is recorded in turn. While the eye fixates on an anchor point, an image is captured by the camera; the face image is located in the image, the human eye image is located within the face image, and finally the eyeball image is located within the human eye image, a coarse-to-fine search order that is both fast and accurate. From the human eye position data and the eyeball position data, the distance calibration data d, the lateral calibration data m, and the longitudinal calibration data n are calculated, and d, m, n and the position information of the anchor point are stored in memory. Once data has been collected for all anchor points, the distance calibration data of the nine points can calibrate the distance of the human eye from the designated viewing area, thereby constraining the distance between the user and the viewing area to a specified range, while the lateral and longitudinal calibration data of the nine points can be used to infer which position in the designated viewing area the user's gaze falls on, giving high gaze-tracking accuracy. The eye movement control calibration data acquisition method of this embodiment requires no dedicated equipment and collects data according to the user's own eye movement habits, giving a good user experience.
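Putting the pieces together, a hedged end-to-end sketch of steps S1 to S3 over the nine anchor points, composed from the hypothetical helpers above (the record format is an assumption):

```python
import cv2

ANCHOR_NAMES = [
    "upper-left", "upper-middle", "upper-right",
    "middle-left", "center", "middle-right",
    "lower-left", "lower-middle", "lower-right",
]

def collect_calibration_data(cap: cv2.VideoCapture) -> list[dict]:
    """Steps S1-S3: one (d, m, n) record per anchor point, returning to
    the capture step whenever face, eyes, or eyeballs cannot be found."""
    records = []
    for name in ANCHOR_NAMES:
        while True:
            image = capture_after_fixation(cap, name)              # S1
            face_box = find_face(image)                            # S21
            eyes = find_eyes(image, face_box) if face_box is not None else None
            if eyes is None:
                continue                                           # back to S1
            fx, fy, fw, fh = face_box
            face_gray = cv2.cvtColor(image[fy:fy+fh, fx:fx+fw],
                                     cv2.COLOR_BGR2GRAY)
            balls = [find_eyeball(face_gray, box) for box in eyes]  # S23
            if None in balls:
                continue
            d = distance_calibration(*eyes)                        # S31
            m, n = gaze_calibration(*eyes, *balls)
            records.append({"point": name, "d": d, "m": m, "n": n})  # S32
            break
    return records
```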
Referring to FIG. 3, an embodiment of the present application further provides an eye movement control calibration data acquisition device, including: an image acquisition module 10, configured to sequentially acquire user images in which the human eye fixates on each of a plurality of anchor points, wherein the anchor points are preset within a designated viewing area;
an image analysis module 20, configured to sequentially locate a human eye image and an eyeball image in each user image, and to obtain human eye position data and eyeball position data;
a data calculation module 30, configured to calculate calibration data from the human eye position data and the eyeball position data, and to sequentially record the calibration data together with the position information of the corresponding anchor points.
In this embodiment, the designated viewing area used by the image acquisition module 10 includes the interface of a terminal device with which the user interacts, for example a smartphone display, tablet display, smart TV display, personal computer display, or laptop display. The user image can be acquired by a camera, which may be the terminal's built-in front camera or an external camera, such as a mobile phone's front camera. FIG. 2 is a schematic diagram of the anchor points of the designated viewing area, comprising nine anchor points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right. The part of the designated viewing area bounded by the upper-left, middle-left, lower-left, lower-middle, center, and upper-middle points is the left region; the part bounded by the upper-right, middle-right, lower-right, lower-middle, center, and upper-middle points is the right region; the part bounded by the upper-left, middle-left, center, middle-right, upper-right, and upper-middle points is the upper region; and the part bounded by the lower-left, middle-left, center, middle-right, lower-right, and lower-middle points is the lower region.
Taking eye movement control of a mobile phone display as an example, the user, at a comfortable distance from the display according to their own habits, fixates on one anchor point on the display, and the phone's front camera captures an image of the human eye fixating on that point. For example, a fixation duration may be preset, and a first reminder unit may send, for each anchor point, a reminder message prompting the user to keep looking at that point; a first judgment unit then determines whether the time elapsed since the reminder was sent exceeds the preset fixation duration, and if so, a first image acquisition unit generates an instruction to capture the user image, upon which the camera captures the image. Alternatively, after a second reminder unit sends the reminder for each anchor point, a real-time image acquisition unit may continuously capture images with the camera while a second judgment unit distinguishes the state of the human eye with a trained classifier; if the eye is judged to be in the fixation state, a second image acquisition unit takes any frame captured during the fixation state as the user image. The human eye image and the eyeball image are then located in the acquired image to obtain human eye position data and eyeball position data; a series of calibration data is calculated from these, and the correspondence between the calibration data and the anchor points is recorded in turn. The calibration data can be used in subsequent eye-tracking control to determine whether the distance between the user and the designated viewing area is within a preset range and to track the user's gaze position, improving the accuracy of gaze estimation.
Specifically, the user in this embodiment first looks at the upper-left anchor point; the camera captures an image of the eye fixating on that point, the human eye image and eyeball image are located in the image, human eye position data and eyeball position data are obtained, calibration data is calculated, and the correspondence between the calibration data and the upper-left anchor point is recorded. The user then looks at the upper-middle anchor point, the remaining steps being the same as for the upper-left point, and so on until calibration data and the corresponding anchor points have been collected for all nine points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right.
Referring to FIG. 4, in this embodiment, the image analysis module 20 includes:
a face locating unit 201, configured to locate a face image in the user image;
a human eye locating unit 202, configured to locate a human eye image in the face image and obtain human eye position data from the face image, the human eye image including a left-eye image and a right-eye image;
an eyeball locating unit 203, configured to locate an eyeball image in the human eye image and obtain eyeball position data from the face image.
In this embodiment, the face locating unit 201 first locates a face image in the captured image; if no face image is found, the process returns to image acquisition, and the relative position of the user and the designated viewing area is adjusted until a face image can be found in the camera image. There are many face detection methods, for example: applying face rules (such as the distribution of the eyes, nose, and mouth) to the input image; detecting faces by searching for invariant facial features (such as skin color, edges, and texture); describing the facial features with a standard face template, computing the correlation between the input image and the template, and comparing the correlation value with a preset threshold to decide whether a face is present; or treating the face region as a class of patterns and training on a large number of face samples to learn the underlying rules and construct a classifier, detecting faces by classifying all candidate region patterns in the image. In this embodiment, the located face image is marked with a rectangular bounding box.
The human eye locating unit 202 searches for the human eye image within the rectangular box of the face image, which narrows the search range and improves the efficiency and accuracy of eye detection; if no human eye image is found, the image is re-acquired until a human eye image can be found. Eye detection methods include template-matching methods, statistics-based methods, and knowledge-based methods. Template-matching methods include gray-level projection templates and geometric feature templates: gray-level projection projects the face gray image horizontally and vertically, computes the gray values and/or gray-level function values along each direction, finds the characteristic change points, and then combines the change-point positions in the two directions according to prior knowledge to obtain the eye positions; geometric feature templates use the individual and distribution features of the eyes as the basis for eye detection. Statistics-based methods generally train on a large number of target and non-target samples to obtain a set of model parameters, then build a classifier or filter on the model to detect the target. Knowledge-based methods determine the application environment of the image, summarize the knowledge usable for eye detection under the specific conditions (such as contour, color, and position information), and condense it into rules that guide eye detection. In this embodiment, the left-eye image and the right-eye image are each marked with a rectangular box, and the following human eye position data is obtained:
r1: the distance from the upper-left vertex of the left-eye rectangle to the left edge of the face image;
t1: the distance from the upper-left vertex of the left-eye rectangle to the top edge of the face image;
w1: the width of the left-eye rectangle; h1: the height of the left-eye rectangle;
r2: the distance from the upper-left vertex of the right-eye rectangle to the left edge of the face image;
t2: the distance from the upper-left vertex of the right-eye rectangle to the top edge of the face image;
w2: the width of the right-eye rectangle; h2: the height of the right-eye rectangle.
The eyeball locating unit 203 locates the left eyeball image within the left-eye image and the right eyeball image within the right-eye image; if no eyeball image is found, the image is re-acquired until an eyeball image can be found. Eyeball detection methods include neural network methods, extremum-position detection on integral projection curves of edge points, template matching, multi-resolution mosaic maps, geometry and symmetry detection, and Hough-transform-based methods. In this embodiment, the left eyeball image and the right eyeball image are each marked with a rectangular box, and the following eyeball position data is obtained:
r3: the distance from the upper-left vertex of the left-eyeball rectangle to the left edge of the face image;
t3: the distance from the upper-left vertex of the left-eyeball rectangle to the top edge of the face image;
w3: the width of the left-eyeball rectangle; h3: the height of the left-eyeball rectangle;
r4: the distance from the upper-left vertex of the right-eyeball rectangle to the left edge of the face image;
t4: the distance from the upper-left vertex of the right-eyeball rectangle to the top edge of the face image;
w4: the width of the right-eyeball rectangle; h4: the height of the right-eyeball rectangle.
This embodiment gives specific parameters for obtaining the eyeball position data from the face image. Under the inventive concept of this application, the eyeball position data can also be obtained from the human eye image; this application does not elaborate on that variant.
Referring to FIG. 5, in this embodiment, the calibration data includes distance calibration data, lateral calibration data, and longitudinal calibration data, and the data calculation module 30 includes:
a first data acquisition unit 301, configured to calculate, from the human eye position data, the distance calibration data for the eye fixating on one anchor point;
a second data acquisition unit 302, configured to calculate, from the human eye position data and the eyeball position data, the lateral and longitudinal eyeball-position calibration data for the eye fixating on one anchor point;
a data storage unit 303, configured to store the distance calibration data, the lateral calibration data, the longitudinal calibration data, and the position information of the corresponding anchor point in a memory.
In this embodiment, the first data acquisition unit 301, the second data acquisition unit 302, and the data storage unit 303 calculate the calibration data for the eye fixating on one anchor point and store the calibration data and the corresponding anchor point information in memory. The calculation and storage are performed one by one for the nine anchor points: upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right. The distance calibration data locates the distance of the human eye from the designated viewing area; the lateral and longitudinal calibration data indicate the eyeball position when the eye looks at the given anchor point.
Referring to FIG. 6, in this embodiment, the first data acquisition unit 301 includes:
a first calculation subunit 3011, configured to calculate the left-eye center position coordinates from the left-eye position data included in the human eye position data, and to calculate the right-eye center position coordinates from the right-eye position data included in the human eye position data;
a second calculation subunit 3012, configured to calculate the distance between the left-eye center and the right-eye center from the left-eye center position coordinates and the right-eye center position coordinates, obtaining the distance calibration data.
In this embodiment, the first calculation subunit 3011 can calculate the left-eye center position coordinates (x1, y1) by formula (12):
Pot(x1, y1) = Pot(r1 + w1/2, t1 + h1/2)    (12)
and the right-eye center position coordinates (x2, y2) by formula (13):
Pot(x2, y2) = Pot(r2 + w2/2, t2 + h2/2)    (13)
The second calculation subunit 3012 can calculate the distance d between the left-eye center and the right-eye center by formula (14), where d is the distance calibration data:
d = √((x1 − x2)² + (y1 − y2)²)    (14)
The value of d can be used to determine the distance of the human eye from the designated viewing area.
Referring to FIG. 7, in this embodiment, the second data acquisition unit 302 includes:
a third calculation subunit 3021, configured to calculate the left-eyeball center position coordinates from the left-eyeball position data included in the eyeball position data, and to calculate the right-eyeball center position coordinates from the right-eyeball position data included in the eyeball position data;
a fourth calculation subunit 3022, configured to calculate, from the left-eyeball center position coordinates and the left-eye position data, the first lateral distance between the left-eyeball center and the left edge of the left-eye image, and the first longitudinal distance between the left-eyeball center and the top edge of the left-eye image; and to calculate, from the right-eyeball center position coordinates and the right-eye position data, the second lateral distance between the right-eyeball center and the right edge of the right-eye image, and the second longitudinal distance between the right-eyeball center and the bottom edge of the right-eye image;
a fifth calculation subunit 3023, configured to calculate the ratio of the first lateral distance to the second lateral distance to obtain the lateral calibration data, and to calculate the ratio of the first longitudinal distance to the second longitudinal distance to obtain the longitudinal calibration data.
In this embodiment, the third calculation subunit 3021 can calculate the left-eyeball center position coordinates (x3, y3) by formula (15):
Pot(x3, y3) = Pot(r3 + w3/2, t3 + h3/2)    (15)
and the right-eyeball center position coordinates (x4, y4) by formula (16):
Pot(x4, y4) = Pot(r4 + w4/2, t4 + h4/2)    (16)
The fourth calculation subunit 3022 can calculate the first lateral distance d1 between the left-eyeball center and the left edge of the left-eye image by formula (17):
d1 = x3 − r1    (17)
the first longitudinal distance d3 between the left-eyeball center and the top edge of the left-eye image by formula (18):
d3 = y3 − t1    (18)
the second lateral distance d2 between the right-eyeball center and the right edge of the right-eye image by formula (19):
d2 = r2 + w2 − x4    (19)
and the second longitudinal distance d4 between the right-eyeball center and the bottom edge of the right-eye image by formula (20):
d4 = t2 + h2 − y4    (20)
The fifth calculation subunit 3023 can calculate the lateral calibration data m by formula (21):
m = d1/d2    (21)
and the longitudinal calibration data n by formula (22):
n = d3/d4    (22)
In the eye movement calibration control device of this embodiment, nine anchor points are set in the designated viewing area; the human eye fixates on the nine points in turn, and the correspondence between the calibration data and each anchor point is recorded in turn. While the eye fixates on an anchor point, an image is captured by the camera; the face image is located in the image, the human eye image is located within the face image, and finally the eyeball image is located within the human eye image, a coarse-to-fine search order that is both fast and accurate. From the human eye position data and the eyeball position data, the distance calibration data d, the lateral calibration data m, and the longitudinal calibration data n are calculated, and d, m, n and the position information of the anchor point are stored in memory. Once data has been collected for all anchor points, the distance calibration data of the nine points can calibrate the distance of the human eye from the designated viewing area, thereby constraining the distance between the user and the viewing area to a specified range, while the lateral and longitudinal calibration data of the nine points can be used to infer which position in the designated viewing area the user's gaze falls on, giving high gaze-tracking accuracy. The eye movement control calibration data acquisition device of this embodiment requires no dedicated equipment and collects data according to the user's own eye movement habits, giving a good user experience.
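One way to picture the module structure of FIGS. 3 to 7 in code; the class and method names are hypothetical, and the calculations delegate to the helpers sketched in the method embodiment above:

```python
from dataclasses import dataclass, field

import cv2

@dataclass
class EyeCalibrationDevice:
    """Hypothetical wiring of modules 10/20/30 from FIG. 3."""
    cap: cv2.VideoCapture
    records: list = field(default_factory=list)  # backing store for unit 303

    def acquire(self, point):                    # image acquisition module 10
        return capture_after_fixation(self.cap, point)

    def analyse(self, image):                    # image analysis module 20
        face_box = find_face(image)              # unit 201
        eyes = find_eyes(image, face_box) if face_box is not None else None
        if eyes is None:
            return None
        fx, fy, fw, fh = face_box
        gray = cv2.cvtColor(image[fy:fy+fh, fx:fx+fw], cv2.COLOR_BGR2GRAY)
        balls = [find_eyeball(gray, box) for box in eyes]  # unit 203
        return None if None in balls else (eyes, balls)

    def calculate(self, point, eyes, balls):     # data calculation module 30
        d = distance_calibration(*eyes)          # first data acquisition unit 301
        m, n = gaze_calibration(*eyes, *balls)   # second data acquisition unit 302
        self.records.append({"point": point, "d": d, "m": m, "n": n})  # unit 303
```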
The present application also proposes a computer device 03, which includes a processor 04, a memory 01, and a computer program 02 stored in the memory 01 and executable on the processor 04, wherein the processor 04 implements the above eye movement control calibration data acquisition method when executing the computer program 02.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811014201.1A (granted as CN109343700B) | 2018-08-31 | 2018-08-31 | Eye movement control calibration data acquisition method and device |
| PCT/CN2019/073766 (published as WO2020042542A1, 2020-03-05) | 2018-08-31 | 2019-01-29 | Method and apparatus for acquiring eye movement control calibration data |