CN108268858A - A kind of real-time method for detecting sight line of high robust - Google Patents


Info

Publication number
CN108268858A
Authority
CN
China
Prior art keywords
eye, real, sight line, time method, sight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810118195.8A
Other languages
Chinese (zh)
Other versions
CN108268858B (en)
Inventor
韦东旭
沈海斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201810118195.8A
Publication of CN108268858A
Application granted
Publication of CN108268858B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses a highly robust real-time gaze detection method that can track and dynamically detect the human gaze in real time with a camera. First, a double-circle active model, initialized from prior knowledge of human eye structure, detects the precise iris center position. Then a stable facial landmark detection method locates the landmarks from which a precise reference point position is computed; combined with the iris center position, this yields the "eye vector". Next, the eye vector and the head pose are passed through a second-order mapping to obtain the coordinates of the gaze point. Finally, the position coordinates computed from the left eye and the right eye are combined by a weighted sum to improve detection accuracy. The invention locates the iris center and the reference point precisely at modest computational cost, achieving real-time gaze detection, so the system is inexpensive, accurate, and of high practical value.

Description

A Highly Robust Real-Time Gaze Detection Method

Technical Field

The invention relates to the fields of pattern recognition, image processing, and human-computer interaction, and in particular to a highly robust real-time gaze detection method.

Background

Gaze detection is a human-computer interaction (HCI) technology that helps computers interact with human users conveniently and efficiently. HCI has long been a focus of the computing field: good interaction technology greatly improves the user experience of electronic products and strengthens product advantages. Moreover, as HCI technology develops, daily life becomes more intelligent and automated, and work more efficient and convenient. Gaze detection is already widely applied in virtual reality, augmented reality, online shopping, and advertising. An accurate, real-time gaze detection technique would greatly lower the barrier to these applications and underpin a better interaction experience.

Current gaze detection techniques fall into two categories: sensor-based methods and computer-vision-based methods. Sensor-based methods require physical contact with the body, for example attaching electrodes near the eyes to measure the electrical signals produced as the eyeball rotates. Computer-vision-based methods need no direct contact with the body and are therefore friendlier and more convenient for the user.

Among computer-vision-based methods, infrared devices are commonly used to assist image capture, and the resulting images can be processed into high-precision detection results. However, such methods require expensive infrared equipment, and their accuracy degrades under ambient illumination. They also perform poorly for users wearing glasses, because lens reflections disturb the detection. Besides infrared-assisted methods, other approaches use complex imaging hardware to capture images, such as multi-view cameras, high-definition cameras, or depth cameras. The need for special imaging equipment, however, also prevents wide deployment in everyday life.

In summary, for gaze detection to be widely applicable in daily life, the following conditions must be met: (1) no direct contact with the body during detection; (2) compatibility with arbitrary image acquisition devices, including low-cost front-facing phone cameras; (3) a high-precision, real-time gaze detection algorithm.

The invention therefore proposes a regression-based, accurate, real-time gaze detection system that uses only a single low-cost camera and can be put to practical use in daily life.

Summary of the Invention

The object of the invention is to overcome the deficiencies of the prior art by proposing a high-precision method suitable for gaze detection in low-resolution images.

To achieve this object, the invention adopts the following technical solution:

A highly robust real-time gaze detection method comprises the following steps:

1) Detect the 68 facial landmarks in the image with a facial landmark detection algorithm.

2) With the landmarks detected, locate the iris center position using the double-circle active model.

3) With the landmarks detected, obtain the reference point position as an average of landmark positions, and combine it with the iris center position to compute the "eye vector".

4) Estimate the head pose.

5) Obtain, by regression, the mapping function relating the eye vector and the head pose, and use that mapping function to compute the gaze positions of the left and right eyes.

6) Combine the left-eye and right-eye gaze positions by a weighted sum to obtain the final gaze position.
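The six steps above can be sketched end to end as follows. This is an illustrative composition only: every helper passed in (`detect_landmarks`, `find_iris`, `estimate_pose`, `mapping`) is an assumed callable standing in for the corresponding step, not an interface specified by the patent.

```python
import numpy as np

def estimate_gaze(frame, detect_landmarks, find_iris, estimate_pose, mapping, w=0.5):
    """Sketch of steps 1)-6); all helper callables are assumptions."""
    lm = np.asarray(detect_landmarks(frame), dtype=float)   # 1) 68 landmarks, shape (68, 2)
    iris_l, iris_r = find_iris(frame, lm)                   # 2) iris centres of both eyes
    ref = lm[:36].mean(axis=0)                              # 3) reference point: mean of contour points
    ev_l = np.asarray(iris_l, float) - ref                  #    left "eye vector"
    ev_r = np.asarray(iris_r, float) - ref                  #    right "eye vector"
    pose = estimate_pose(lm)                                # 4) head pose (pitch, yaw, roll)
    g_l = np.asarray(mapping(ev_l, pose), float)            # 5) per-eye gaze via the mapping function
    g_r = np.asarray(mapping(ev_r, pose), float)
    return w * g_l + (1.0 - w) * g_r                        # 6) weighted fusion, w in [0, 1]
```

With stub callables, the function simply threads the intermediate quantities through in the order the steps prescribe.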

Preferably, the step of locating the iris center with the double-circle active model is preceded by a preprocessing step, specifically:

1) Extract the eye region using the 12 of the 68 landmarks that lie around the eyes.

2) Apply erosion to the eye-region image to filter out noise.

Preferably, the radii of the outer and inner circles of the double-circle active model are obtained by multiplying the width of the eye region by set proportionality coefficients, so that the edge of the iris falls between the inner and outer circles; the inner-circle diameter is 0.25 times the eye-region width, and the outer-circle diameter is 1.44 times the inner-circle diameter.

Preferably, while locating the iris center with the double-circle active model, the model first sweeps from left to right to obtain a rough iris position, then moves within a range of -5 to +5 pixels around that position horizontally and vertically, and the position with the largest intensity difference is selected as the final iris center.
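A minimal sketch of the coarse-to-fine double-circle search. The scoring function below (mean intensity in the ring between the two circles minus mean intensity inside the inner disc, exploiting the dark iris against the brighter sclera) is my reading of the "difference" being maximized, and sweeping along the vertical midline of the eye crop in the coarse stage is likewise an assumption.

```python
import numpy as np

def iris_center(eye_gray):
    """Coarse-to-fine iris localization with a double-circle model.
    Inner diameter = 0.25 * eye width; outer diameter = 1.44 * inner."""
    eye_gray = np.asarray(eye_gray, dtype=float)
    h, w = eye_gray.shape
    r_in = max(2, int(round(0.25 * w / 2)))            # inner radius
    r_out = max(r_in + 1, int(round(1.44 * r_in)))     # outer radius (diameter ratio 1.44)
    ys, xs = np.mgrid[0:h, 0:w]

    def score(cx, cy):
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        disc = d2 <= r_in ** 2                              # should cover the dark iris
        ring = (d2 > r_in ** 2) & (d2 <= r_out ** 2)        # should reach the brighter sclera
        if not disc.any() or not ring.any():
            return -np.inf
        return eye_gray[ring].mean() - eye_gray[disc].mean()

    # Coarse stage: sweep left to right along the vertical midline.
    cy0 = h // 2
    cx0 = max(range(r_in, max(r_in + 1, w - r_in)), key=lambda cx: score(cx, cy0))
    # Fine stage: search -5..+5 pixels around the coarse estimate.
    cands = [(cx0 + dx, cy0 + dy) for dx in range(-5, 6) for dy in range(-5, 6)]
    return max(cands, key=lambda c: score(*c))
```

Because the model is initialized to the expected iris scale, only a one-dimensional sweep plus an 11x11 refinement is needed, which matches the patent's point about a reduced search range.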

Preferably, the reference point position is obtained by averaging positions as follows: select the 36 facial-contour landmarks from the 68 landmarks and average their horizontal and vertical coordinate values to obtain a stable reference point.
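This averaging, and the eye vector built from it, are straightforward; in the sketch below the zero-based slice `[:36]` corresponds to points 1-36 in the patent's 1-based numbering, and the subtraction order (iris minus reference) is my assumption.

```python
import numpy as np

def reference_point(landmarks68):
    """Stable reference point: mean of the 36 contour landmarks
    (points 1-36, i.e. indices 0-35 of the 68-point array)."""
    pts = np.asarray(landmarks68, dtype=float)
    return pts[:36].mean(axis=0)

def eye_vector(iris_xy, ref_xy):
    """'Eye vector': iris centre relative to the reference point."""
    return np.asarray(iris_xy, float) - np.asarray(ref_xy, float)
```

Averaging many landmarks damps per-landmark jitter, which is why the reference point is more stable than any single facial point.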

Preferably, the head pose is estimated as follows: from the 68 landmarks, select the left and right eye corners, the left and right mouth corners, the nose tip, and the chin, giving the positions of 6 landmarks, from which an iterative algorithm provided by the OpenCV library yields the head pose.

Preferably, the mapping function in step 5) is expressed as an n-th-order polynomial function of the following form:

where g_h and g_v denote the horizontal and vertical coordinates of the gaze position; e_h and e_v the horizontal and vertical components of the "eye vector"; h_p, h_y, and h_r the pitch, yaw, and roll angles of the head pose; and a_k and b_k the coefficients of the k-th-order terms, i.e. the fixed parameters of the mapping function.
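The polynomial itself appears only as an image in the source and is not reproduced in this text. Purely as an illustration, a form consistent with the surrounding definitions (and with the second-order mapping mentioned in the abstract) could be written as follows; the exact set of basis terms $\phi_k$ is my assumption, not the patent's:

```latex
g_h = \sum_{k} a_k\, \phi_k(e_h, e_v, h_p, h_y, h_r), \qquad
g_v = \sum_{k} b_k\, \phi_k(e_h, e_v, h_p, h_y, h_r)
```

where the $\phi_k$ range over the monomials of total degree at most $n$ in $(e_h, e_v, h_p, h_y, h_r)$; for the second-order case $n = 2$ these are $1,\ e_h,\ e_v,\ h_p,\ h_y,\ h_r,\ e_h^2,\ e_h e_v,\ e_h h_p$, and so on. The coefficients are fitted once by regression during calibration and then held fixed.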

Preferably, step 6) is specifically:

The computed gaze coordinates of the left and right eyes are combined by a weighted sum into the final coordinates, of the following form:

g_fh = w * g_lh + (1 - w) * g_rh,  w ∈ [0, 1]

g_fv = w * g_lv + (1 - w) * g_rv,  w ∈ [0, 1]

where w is the weight coefficient; g_lh and g_rh are the horizontal gaze coordinates of the left and right eyes; g_lv and g_rv are the vertical gaze coordinates of the left and right eyes; and g_fh and g_fv are the horizontal and vertical coordinates of the final gaze position.
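The fusion step in code form; the default w = 0.5 is a neutral choice of mine, since the patent leaves the weight as a tunable parameter.

```python
def fuse_gaze(left_gaze, right_gaze, w=0.5):
    """g_f = w * g_left + (1 - w) * g_right, componentwise, w in [0, 1]."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must lie in [0, 1]")
    glh, glv = left_gaze
    grh, grv = right_gaze
    return (w * glh + (1.0 - w) * grh,
            w * glv + (1.0 - w) * grv)
```

Setting w closer to 1 trusts the left eye more; in practice one might weight by per-eye detection confidence.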

The advantages of the highly robust real-time gaze detection method of the invention are:

1. The computational cost is low: in practical use, the method reaches a detection speed of about 50 frames per second on a 3 GHz CPU without any special equipment, so its hardware requirements are modest and it runs in real time.

2. The robustness is high: in practical use, the gaze position is detected accurately even in low-resolution images, and the method is relatively insensitive to illumination, so it generalizes to many different scenarios.

3. During iris-center detection with the double-circle active model, the model is first initialized according to the actual size ratio between the human eye and the iris, which greatly reduces the range over which the model must move and hence the number of iterative computations.

4. Iris-center detection with the double-circle active model proceeds in a coarse detection stage and a precise localization stage: after the iris center is roughly detected, the model searches a -5 to +5 pixel neighborhood of the rough position to compute the final position precisely, which greatly improves the accuracy of iris-center localization and thereby the overall gaze detection result.

5. When computing the final gaze position, the method uses both the "eye vector", obtained from the iris center and the reference point position, and the head pose, obtained by an iterative algorithm; mapping the combined detection results to the gaze position greatly improves the accuracy and robustness of the method.

6. When computing the final gaze position, the method combines the left-eye and right-eye detection results by a weighted sum, which greatly improves the accuracy and robustness of the method.

Brief Description of the Drawings

Figure 1 is the overall flow chart of the gaze detection algorithm.

Figure 2 is a schematic diagram of the positions of the 68 facial landmarks;

Figure 3 is a schematic diagram of the numbering of the 68 facial landmarks;

Detailed Description

The invention is further described below in conjunction with the technical solution and the accompanying drawings.

As shown in Figure 1, the highly robust real-time gaze detection method of the invention comprises the following steps:

1) Detect the 68 facial landmarks in the image with a facial landmark detection algorithm, as shown in Figure 2.

2) With the landmarks detected, locate the iris center position using the double-circle active model.

3) With the landmarks detected, obtain the reference point position as an average of landmark positions, and combine it with the iris center position to compute the "eye vector".

4) Estimate the head pose.

5) Obtain, by regression, the mapping function relating the eye vector and the head pose, and use that mapping function to compute the gaze positions of the left and right eyes. The mapping function in step 5) is expressed as an n-th-order polynomial function of the following form:

where g_h and g_v denote the horizontal and vertical coordinates of the gaze position; e_h and e_v the horizontal and vertical components of the "eye vector"; h_p, h_y, and h_r the pitch, yaw, and roll angles of the head pose; and a_k and b_k the coefficients of the k-th-order terms, i.e. the fixed parameters of the mapping function.

6) Combine the left-eye and right-eye gaze positions by a weighted sum to obtain the final gaze position, specifically:

The computed gaze coordinates of the left and right eyes are combined by a weighted sum into the final coordinates, of the following form:

gfh=w*glh+(1-w)*grh,w∈[0,1]gfh =w*glh +(1-w)*grh ,w∈[0,1]

gfv=w*glv+(1-w)*grv,w∈[0,1]gfv =w*glv +(1-w)*grv ,w∈[0,1]

where w is the weight coefficient; g_lh and g_rh are the horizontal gaze coordinates of the left and right eyes; g_lv and g_rv are the vertical gaze coordinates of the left and right eyes; and g_fh and g_fv are the horizontal and vertical coordinates of the final gaze position.

In this embodiment, the step of locating the iris center with the double-circle active model is preceded by a preprocessing step, specifically:

1) Extract the eye region using the 12 of the 68 landmarks that lie around the eyes (points 37-48 in Figure 3).

2) Apply erosion to the eye-region image to filter out noise.

In a preferred embodiment of the invention, the radii of the outer and inner circles of the double-circle active model are obtained by multiplying the width of the eye region by set proportionality coefficients, so that the edge of the iris falls between the inner and outer circles; the inner-circle diameter is 0.25 times the eye-region width, and the outer-circle diameter is 1.44 times the inner-circle diameter.

In a preferred embodiment of the invention, while locating the iris center with the double-circle active model, the model first sweeps from left to right to obtain a rough iris position, then moves within a range of -5 to +5 pixels around that position horizontally and vertically, and the position with the largest intensity difference is selected as the final iris center.

In a preferred embodiment of the invention, the reference point position is obtained by averaging positions as follows: select the 36 facial-contour landmarks (points 1-36 in Figure 3) from the 68 landmarks and average their horizontal and vertical coordinate values to obtain a stable reference point.

In a preferred embodiment of the invention, the head pose is estimated as follows: from the 68 landmarks, select the left and right eye corners, the left and right mouth corners, the nose tip, and the chin, giving the positions of 6 landmarks, from which an iterative algorithm provided by the OpenCV library yields the head pose.

Claims (8)

CN201810118195.8A, filed 2018-02-06 (priority 2018-02-06): High-robustness real-time sight line detection method; granted as CN108268858B (en); status: Expired - Fee Related

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810118195.8A | 2018-02-06 | 2018-02-06 | High-robustness real-time sight line detection method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810118195.8A | 2018-02-06 | 2018-02-06 | High-robustness real-time sight line detection method

Publications (2)

Publication Number | Publication Date
CN108268858A (en) | 2018-07-10
CN108268858B (en) | 2020-10-16

Family

ID=62773617

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810118195.8A (granted as CN108268858B, Expired - Fee Related) | High-robustness real-time sight line detection method | 2018-02-06 | 2018-02-06

Country Status (1)

Country | Link
CN | CN108268858B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102749991A * | 2012-04-12 | 2012-10-24 | 广东百泰科技有限公司 | Non-contact free space eye-gaze tracking method suitable for man-machine interaction
CN102930278A * | 2012-10-16 | 2013-02-13 | 天津大学 | Human eye sight estimation method and device
CN105303170A * | 2015-10-16 | 2016-02-03 | 浙江工业大学 | Human eye feature based sight line estimation method
WO2018000020A1 * | 2016-06-29 | 2018-01-04 | Seeing Machines Limited | Systems and methods for performing eye gaze tracking


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mahboubeh Shamsi et al.: "Fast Algorithm for Iris Localization Using", 2009 International Conference of Soft Computing and Pattern Recognition *
龚秀锋 et al.: "Gaze point estimation for gaze tracking based on marker point detection", Computer Engineering (《计算机工程》) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110046546A * | 2019-03-05 | 2019-07-23 | 成都旷视金智科技有限公司 | A kind of adaptive line of sight method for tracing, device, system and storage medium
CN110046546B * | 2019-03-05 | 2021-06-15 | 成都旷视金智科技有限公司 | An adaptive gaze tracking method, device, system and storage medium
CN110275608A * | 2019-05-07 | 2019-09-24 | 清华大学 | Human eye gaze tracking method
CN110321820A * | 2019-06-24 | 2019-10-11 | 东南大学 | A kind of sight drop point detection method based on contactless device
CN110321820B * | 2019-06-24 | 2022-03-04 | 东南大学 | Sight line drop point detection method based on non-contact equipment
CN110909611A * | 2019-10-29 | 2020-03-24 | 深圳云天励飞技术有限公司 | A method, device, readable storage medium and terminal device for detecting an area of interest
CN113378777A * | 2021-06-30 | 2021-09-10 | 沈阳康慧类脑智能协同创新中心有限公司 | Sight line detection method and device based on monocular camera

Also Published As

Publication number | Publication date
CN108268858B (en) | 2020-10-16


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2020-10-16; termination date: 2021-02-06)
