CN110084191B - Eye shielding detection method and system - Google Patents

Eye shielding detection method and system

Info

Publication number
CN110084191B
CN110084191B (application CN201910343779.XA; published as application CN110084191A)
Authority
CN
China
Prior art keywords
eye
image
neural network
convolutional neural
bounding box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910343779.XA
Other languages
Chinese (zh)
Other versions
CN110084191A (en)
Inventor
黄国恒
胡可
谢靓茹
黄斯彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN201910343779.XA
Publication of CN110084191A
Application granted
Publication of CN110084191B
Status: Active
Anticipated expiration


Abstract

Translated from Chinese

The invention discloses an eye occlusion detection method and system. The method includes: obtaining an eye region image from an acquired facial image; extracting features from the eye region image with a first convolutional neural network to compute the eye position; extracting features from the eye region image with a second convolutional neural network to obtain a feature map of the eye region image, and retrieving eye features from the feature map according to the computed eye position; and performing deconvolution on the obtained eye features and computing, from the deconvolved image, a result indicating whether the eye is occluded. Because the method and system detect eye occlusion directly from the acquired image of the user's face, the tester no longer needs to prompt the user to cover an eye, which reduces the tester's workload compared with the prior art.

Description

An eye occlusion detection method and system

Technical field

The present invention relates to the field of computer vision, and in particular to an eye occlusion detection method and system.

Background

When measuring a user's vision with an eye chart, the two eyes are tested in turn: the user must keep the eye under test open while covering the other eye. In the prior art, a tester has to prompt the user which eye to open or which eye to cover, which creates a large workload for the tester.

Summary of the invention

In view of this, the present invention provides an eye occlusion detection method and system that can detect whether the user's eyes are occluded from an acquired image of the user's face, reducing the tester's workload compared with the prior art.

To solve the above technical problem, the present invention provides the following technical solutions:

An eye occlusion detection method, comprising:

obtaining an eye region image from an acquired facial image;

extracting features from the eye region image with a first convolutional neural network to compute the eye position; extracting features from the eye region image with a second convolutional neural network to obtain a feature map of the eye region image, and retrieving eye features from the feature map according to the computed eye position;

performing deconvolution on the obtained eye features, and computing, from the deconvolved image, a result indicating whether the eye is occluded.

Preferably, the facial image is processed by a third convolutional neural network and a fourth convolutional neural network cascaded in sequence, and the eye region image is obtained from the facial image;

the third convolutional neural network performs computation on the facial image and derives from it a series of bounding boxes framing the face and a series of bounding boxes framing the eyes;

the fourth convolutional neural network performs computation on the facial image and, from the series of face bounding boxes output by the third convolutional neural network, selects more accurate face bounding boxes, and from the series of eye bounding boxes output by the third convolutional neural network, selects more accurate eye bounding boxes.

Preferably, the third convolutional neural network comprises one convolution-pooling layer, a convolutional layer and a pooling layer cascaded in sequence, and generates 2 feature maps for classification, 4 feature maps for bounding-box regression and 10 feature maps for facial landmark localization;

the fourth convolutional neural network comprises two convolution-pooling layers, a pooling layer and a fully connected layer cascaded in sequence, and generates 2 feature maps for classification, 4 feature maps for bounding-box regression and 10 feature maps for facial landmark localization.

Preferably, the third convolutional neural network calibrates the obtained bounding boxes according to the boxes' regression values, and the fourth convolutional neural network likewise calibrates the obtained bounding boxes according to the boxes' regression values.

Preferably, the third convolutional neural network merges overlapping bounding boxes using non-maximum suppression, and the fourth convolutional neural network likewise merges overlapping bounding boxes using non-maximum suppression.

Preferably, performing deconvolution on the obtained eye features comprises: deconvolving the obtained eye features to obtain an image of the same size as the original image, and then applying a convolution with a preset stride to the result to obtain an image larger than the original image.

Preferably, computing the result indicating eye occlusion from the deconvolved image comprises:

extracting features from the deconvolved image to obtain a feature vector describing the image;

inputting the feature vector into a pre-trained classifier, which outputs whether the eye is occluded.

An eye occlusion detection system, configured to perform the eye occlusion detection method described above.

It can be seen from the above technical solutions that the eye occlusion detection method and system provided by the present invention first obtain an eye region image from the acquired facial image. A first convolutional neural network then extracts features from the eye region image to compute the eye position, while a second convolutional neural network extracts features from the eye region image to obtain its feature map, from which eye features are retrieved according to the computed eye position. The obtained eye features are then deconvolved, and a result indicating whether the eye is occluded is computed from the deconvolved image. Because the method and system detect eye occlusion directly from the user's face image, the tester no longer needs to prompt the user to cover an eye, which reduces the tester's workload compared with the prior art.

Brief description of the drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Figure 1 is a flow chart of an eye occlusion detection method provided by an embodiment of the present invention;

Figure 2 is a flow chart of obtaining an eye region image from a facial image in an embodiment of the present invention;

Figure 3 is a flow chart of obtaining eye features from the eye region image in an embodiment of the present invention.

Detailed description

To enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.

Please refer to Figure 1, a flow chart of an eye occlusion detection method provided by an embodiment of the present invention. As the figure shows, the method of this embodiment comprises the following steps.

S10: Obtain an eye region image from the acquired facial image.

An eye region image is extracted from a captured image of the user's face. Preferably, refer to Figure 2, a flow chart of obtaining the eye region image from the facial image in this embodiment: a third convolutional neural network 30 and a fourth convolutional neural network 31, cascaded in sequence, process the facial image and extract the eye region image from it.

Specifically, the third convolutional neural network 30 performs computation on the facial image and derives from it a series of bounding boxes framing the face and a series of bounding boxes framing the eyes. The fourth convolutional neural network 31 then processes the facial image and, from the series of face bounding boxes output by the third network, selects more accurate face bounding boxes, and from the series of eye bounding boxes output by the third network, selects more accurate eye bounding boxes.

Specifically, the third convolutional neural network 30 first rescales the input image to a range of sizes, forming an image pyramid, and then processes each scaled image to compute the series of face bounding boxes and the series of eye bounding boxes. In practice, depending on the application, the third convolutional neural network can also compute bounding boxes framing other facial landmarks from the input facial image, such as the nose and the left and right mouth corners.
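The pyramid step above can be sketched as follows. This is a minimal nearest-neighbour version; the scale factor 0.709 and minimum side length 12 are the usual MTCNN defaults, not values stated in the patent.

```python
import numpy as np

def rescale(img, scale):
    """Nearest-neighbour rescale of an H x W (x C) image by `scale`."""
    h, w = img.shape[:2]
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return img[np.ix_(rows, cols)]

def image_pyramid(img, factor=0.709, min_size=12):
    """Collect progressively smaller copies of `img` until the shorter
    side would fall below `min_size`."""
    scale, levels = 1.0, []
    while min(img.shape[0] * scale, img.shape[1] * scale) >= min_size:
        levels.append(rescale(img, scale))
        scale *= factor
    return levels
```

Each pyramid level is fed to the detector separately, so faces and eyes of different sizes all appear near the network's preferred input scale at some level.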

The third convolutional neural network 30 is specifically used to calibrate the obtained bounding boxes according to their regression values. A box's regression value represents the probability that the image inside the box contains the feature to be framed; based on these values, incorrect boxes, i.e. boxes unlikely to contain the feature, are excluded from the result.
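A minimal sketch of this score-based calibration, assuming the regression value is a per-box scalar in [0, 1]; the 0.6 threshold is a hypothetical preset value, not one given in the patent.

```python
import numpy as np

def calibrate_boxes(boxes, scores, threshold=0.6):
    """Keep only bounding boxes whose regression value (the probability
    that the box contains the target feature) reaches `threshold`.
    `boxes` is (N, 4), `scores` is (N,)."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    keep = scores >= threshold
    return boxes[keep], scores[keep]
```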

The third convolutional neural network 30 also merges overlapping bounding boxes using non-maximum suppression, excluding redundant overlapping boxes to obtain a more accurate box framing the feature region. Merging bounding boxes by non-maximum suppression comprises the following steps.

S20: Sort all obtained bounding boxes by score from high to low and select the highest-scoring box; a box's score is its regression value.

S21: Traverse the remaining boxes other than the highest-scoring one; if a box's overlap with the current highest-scoring box exceeds a preset threshold, delete that overlapping box.

S22: Select the highest-scoring box from the boxes that remain; if only one box remains, output it; if more than one remains, return to step S21.
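Steps S20-S22 can be sketched as the familiar greedy non-maximum suppression. Here the overlap is measured as intersection-over-union, a common choice the patent does not pin down, and boxes are (x1, y1, x2, y2) corner coordinates.

```python
import numpy as np

def nms(boxes, scores, overlap_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and delete remaining boxes that overlap it too heavily. Returns
    the indices of the kept boxes."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]          # S20: sort by score, high to low
    keep = []
    while order.size > 0:
        best = order[0]                       # current highest-scoring box
        keep.append(int(best))
        if order.size == 1:                   # S22: only one box left
            break
        rest = order[1:]
        # S21: intersection-over-union of `best` with every remaining box
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        order = rest[iou <= overlap_threshold]  # delete heavily overlapping boxes
    return keep
```

With a cluster of near-duplicate detections, only the strongest survives, while detections elsewhere in the image are untouched.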

The fourth convolutional neural network 31 processes the facial image, refining within the network the face bounding boxes and the eye bounding boxes output by the third network. Like the third network, it calibrates the obtained bounding boxes according to their regression values, excluding incorrect boxes, i.e. those unlikely to contain the feature, and it merges overlapping bounding boxes using non-maximum suppression to obtain a more accurate box framing the feature region. The non-maximum suppression procedure is the one described above.

In one specific example, the third convolutional neural network comprises one convolution-pooling layer, a convolutional layer and a pooling layer cascaded in sequence, generating 2 feature maps for classification, 4 feature maps for bounding-box regression and 10 feature maps for facial landmark localization; the fourth convolutional neural network comprises two convolution-pooling layers, a pooling layer and a fully connected layer cascaded in sequence, generating the same 2 + 4 + 10 feature maps. A Multi-task Cascaded Convolutional Network (MTCNN) can be used to detect the face and the facial landmarks, with the P-Net of the MTCNN model serving as the third convolutional neural network and the R-Net serving as the fourth.
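Assuming the 2 + 4 + 10 output maps are stacked along the channel axis (the ordering is an assumption, since the patent does not specify a layout), splitting the head output into the three groups is straightforward:

```python
import numpy as np

def split_head_outputs(feature_maps):
    """Split a (16, H, W) stack of network outputs into the three groups
    the patent describes: 2 classification maps, 4 bounding-box maps
    and 10 facial-landmark maps."""
    assert feature_maps.shape[0] == 2 + 4 + 10
    cls, bbox, landmarks = np.split(feature_maps, [2, 6], axis=0)
    return cls, bbox, landmarks
```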

After the input facial image has been processed by the third and fourth convolutional neural networks, the final output image is marked with a bounding box framing the face and a bounding box framing the eyes.

S11: Extract features from the eye region image with the first convolutional neural network to compute the eye position; extract features from the eye region image with the second convolutional neural network to obtain a feature map of the eye region image, and retrieve eye features from the feature map according to the computed eye position.

Refer to Figure 3, a flow chart of obtaining eye features from the eye region image in this embodiment. The eye region image is input into the first convolutional neural network 40, which detects the eye position in order to obtain the precise location of the eyes, and into the second convolutional neural network 41, which produces a feature map. The method uses the second convolutional neural network, a deep network, to obtain a coarse segmentation result, i.e. an attention map, giving the approximate position of the eyes; the first convolutional neural network, a shallow network, then only needs to attend to this approximate position to predict the fine position, ignoring the rest of the image, which reduces the difficulty of learning.
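A sketch of retrieving eye features from the second network's feature map given the predicted position. The stride, the cumulative downsampling factor between the input image and the feature map, is an assumed parameter; the patent does not give one.

```python
import numpy as np

def crop_eye_features(feature_map, eye_box, stride=4):
    """Cut the eye region out of a (C, H, W) feature map. `eye_box` is
    (x1, y1, x2, y2) in input-image pixels; dividing by `stride` maps
    those coordinates onto the feature map's grid."""
    x1, y1, x2, y2 = [int(round(v / stride)) for v in eye_box]
    return feature_map[:, y1:y2, x1:x2]
```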

S12: Perform deconvolution on the obtained eye features, and compute, from the deconvolved image, a result indicating whether the eye is occluded.

Preferably, the deconvolution comprises: deconvolving the obtained eye features to obtain an image of the same size as the original image, and then applying a convolution with a preset stride to the result to obtain an image larger than the original image. Processing an image with a convolutional neural network loses some of the features the image contains; the upsampling deconvolution used in this embodiment enlarges the image and, by filling in image content, enriches it.
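A minimal transposed-convolution ("deconvolution") sketch showing how each input pixel stamps a kernel at stride-spaced positions, enlarging the map. The all-ones kernel and stride of 2 are illustrative; a trained network would learn the kernel weights.

```python
import numpy as np

def deconv2d_upsample(x, kernel, stride=2):
    """Minimal 2-D transposed convolution: each input pixel adds a scaled
    copy of `kernel` to the output at a `stride`-spaced location, so an
    (H, W) map grows to ((H-1)*stride + kh, (W-1)*stride + kw)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out
```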

Further, computing the result indicating eye occlusion from the deconvolved image comprises: first extracting features from the deconvolved image to obtain a feature vector describing it, then inputting the feature vector into a pre-trained classifier, which outputs whether the eye is occluded. The pre-trained classifier computes the probability that the eye is occluded from the input feature vector and outputs the occlusion judgment according to that probability.
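A sketch of this final step, standing in for the patent's unspecified pre-trained classifier with a logistic decision; the weights, bias and 0.5 decision threshold are assumptions for illustration.

```python
import numpy as np

def occlusion_probability(features, weights, bias):
    """Logistic-regression-style score: sigmoid(w . x + b), read as the
    probability that the eye is occluded."""
    z = float(np.dot(weights, features) + bias)
    return 1.0 / (1.0 + np.exp(-z))

def is_occluded(features, weights, bias, threshold=0.5):
    """Binary judgment the classifier outputs: occluded or not."""
    return occlusion_probability(features, weights, bias) >= threshold
```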

The eye occlusion detection method of this embodiment detects whether the user's eyes are occluded from the acquired image of the user's face; compared with the prior art, the tester no longer needs to prompt the user to cover an eye, which reduces the tester's workload.

Correspondingly, an embodiment of the present invention also provides an eye occlusion detection system configured to perform the eye occlusion detection method described above.

The eye occlusion detection system of this embodiment first obtains an eye region image from the acquired facial image. A first convolutional neural network then extracts features from the eye region image to compute the eye position, while a second convolutional neural network extracts features from the eye region image to obtain its feature map, from which eye features are retrieved according to the computed eye position. The obtained eye features are then deconvolved, and a result indicating whether the eye is occluded is computed from the deconvolved image. The system detects eye occlusion directly from the user's face image, so the tester no longer needs to prompt the user to cover an eye, which reduces the tester's workload compared with the prior art.

The eye occlusion detection method and system provided by the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the scope of protection of the claims of the present invention.

Claims (6)

1. An eye occlusion detection method, characterized by comprising:
obtaining an eye region image from an acquired facial image;
extracting features from the eye region image with a first convolutional neural network to compute the eye position, extracting features from the eye region image with a second convolutional neural network to obtain a feature map of the eye region image, and retrieving eye features from the feature map according to the computed eye position, wherein the second convolutional neural network, a deep network, produces a coarse segmentation result, i.e. an attention map, giving the approximate position of the eye, and the first convolutional neural network, a shallow network, then only needs to attend to that approximate position to predict the fine position, without attending to other parts of the image;
performing deconvolution on the obtained eye features and computing, from the deconvolved image, a result indicating whether the eye is occluded, wherein the deconvolution comprises deconvolving the obtained eye features to obtain an image of the same size as the original image, and then applying a convolution with a preset stride to obtain an image larger than the original image;
wherein computing the result indicating eye occlusion from the deconvolved image comprises: extracting features from the deconvolved image to obtain a feature vector describing the image; and inputting the feature vector into a pre-trained classifier, which outputs whether the eye is occluded.

2. The eye occlusion detection method according to claim 1, characterized in that the facial image is processed by a third convolutional neural network and a fourth convolutional neural network cascaded in sequence, and the eye region image is obtained from the facial image;
the third convolutional neural network performs computation on the facial image and derives from it a series of bounding boxes framing the face and a series of bounding boxes framing the eyes;
the fourth convolutional neural network performs computation on the facial image and, from the series of face bounding boxes output by the third convolutional neural network, selects more accurate face bounding boxes, and from the series of eye bounding boxes output by the third convolutional neural network, selects more accurate eye bounding boxes.

3. The eye occlusion detection method according to claim 2, characterized in that the third convolutional neural network comprises one convolution-pooling layer, a convolutional layer and a pooling layer cascaded in sequence, generating 2 feature maps for classification, 4 feature maps for bounding-box regression and 10 feature maps for facial landmark localization;
the fourth convolutional neural network comprises two convolution-pooling layers, a pooling layer and a fully connected layer cascaded in sequence, generating 2 feature maps for classification, 4 feature maps for bounding-box regression and 10 feature maps for facial landmark localization.

4. The eye occlusion detection method according to claim 2, characterized in that the third convolutional neural network calibrates the obtained bounding boxes according to the boxes' regression values, and the fourth convolutional neural network calibrates the obtained bounding boxes according to the boxes' regression values.

5. The eye occlusion detection method according to claim 2, characterized in that the third convolutional neural network merges overlapping bounding boxes using non-maximum suppression, and the fourth convolutional neural network merges overlapping bounding boxes using non-maximum suppression.

6. An eye occlusion detection system, characterized in that it is configured to perform the eye occlusion detection method according to any one of claims 1-5.
CN201910343779.XA | Priority 2019-04-26 | Filed 2019-04-26 | Eye shielding detection method and system | Active | CN110084191B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910343779.XA (CN110084191B, en) | 2019-04-26 | 2019-04-26 | Eye shielding detection method and system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910343779.XA (CN110084191B, en) | 2019-04-26 | 2019-04-26 | Eye shielding detection method and system

Publications (2)

Publication Number | Publication Date
CN110084191A (en) | 2019-08-02
CN110084191B (en) | 2024-02-23

Family

Family ID: 67416957

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910343779.XA (CN110084191B, en; Active) | Eye shielding detection method and system | 2019-04-26 | 2019-04-26

Country Status (1)

Country | Link
CN (1) | CN110084191B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20210150751A1 (en)* | 2019-11-14 | 2021-05-20 | NEC Laboratories America, Inc. | Occlusion-aware indoor scene analysis
CN112929638B (en)* | 2019-12-05 | 2023-12-15 | 北京芯海视界三维科技有限公司 | Eye positioning method and device and multi-view naked eye 3D display method and device
US12354407B2 (en)* | 2020-03-27 | 2025-07-08 | NEC Corporation | Image processing system, imaging system, image processing method, and non-transitory computer-readable medium
CN111598018A (en)* | 2020-05-19 | 2020-08-28 | 北京嘀嘀无限科技发展有限公司 | Wearing detection method, device, equipment and storage medium for face shield

Citations (6)

Publication number | Priority date | Publication date | Assignee | Title
US20170262695A1 (en) * | 2016-03-09 | 2017-09-14 | International Business Machines Corporation | Face detection, representation, and recognition
CN107633204A (en) * | 2017-08-17 | 2018-01-26 | Ping An Technology (Shenzhen) Co., Ltd. | Face occlusion detection method, apparatus and storage medium
CN107784300A (en) * | 2017-11-30 | 2018-03-09 | 西安科锐盛创新科技有限公司 | Anti-eye-closing photographing method and system
CN107871134A (en) * | 2016-09-23 | 2018-04-03 | 北京眼神科技有限公司 | A face detection method and device
CN109344763A (en) * | 2018-09-26 | 2019-02-15 | Shantou University | A strabismus detection method based on convolutional neural networks
CN109657591A (en) * | 2018-12-12 | 2019-04-19 | Dongguan University of Technology | Face recognition method and device based on cascaded convolutional neural networks

Family Cites Families (1)

Publication number | Priority date | Publication date | Assignee | Title
CN107240102A (en) * | 2017-04-20 | 2017-10-10 | Hefei University of Technology | Computer-aided early diagnosis method for malignant tumors based on a deep learning algorithm

Patent Citations (6)

Publication number | Priority date | Publication date | Assignee | Title
US20170262695A1 (en) * | 2016-03-09 | 2017-09-14 | International Business Machines Corporation | Face detection, representation, and recognition
CN107871134A (en) * | 2016-09-23 | 2018-04-03 | 北京眼神科技有限公司 | A face detection method and device
CN107633204A (en) * | 2017-08-17 | 2018-01-26 | Ping An Technology (Shenzhen) Co., Ltd. | Face occlusion detection method, apparatus and storage medium
CN107784300A (en) * | 2017-11-30 | 2018-03-09 | 西安科锐盛创新科技有限公司 | Anti-eye-closing photographing method and system
CN109344763A (en) * | 2018-09-26 | 2019-02-15 | Shantou University | A strabismus detection method based on convolutional neural networks
CN109657591A (en) * | 2018-12-12 | 2019-04-19 | Dongguan University of Technology | Face recognition method and device based on cascaded convolutional neural networks

Non-Patent Citations (3)

Title
Xinxing Tang et al., "Real-time image-based driver fatigue detection and monitoring system for monitoring driver vigilance," 2016 35th Chinese Control Conference (CCC), pp. 4188-4193. *
Xue Yuli et al., "Research progress of facial expression recognition in human-computer interaction," Journal of Image and Graphics, vol. 14, no. 5, pp. 764-772. *
Liu Huanxi, "Image face detection and super-resolution processing," China Master's Theses Full-text Database (Information Science and Technology), I138-432. *

Also Published As

Publication number | Publication date
CN110084191A (en) | 2019-08-02

Similar Documents

Publication | Title
CN110084191B (en) | Eye shielding detection method and system
CN109598287B (en) | Appearance defect detection method based on deep convolutional generative adversarial network sample generation
CN106803067B (en) | Method and device for evaluating the quality of a face image
CN108305260B (en) | Method, apparatus and device for detecting corner points in an image
CN104268591B (en) | A facial key point detection method and device
CN111833306A (en) | Defect detection method and model training method for defect detection
CN106530271B (en) | An infrared image saliency detection method
CN110705558A (en) | Image instance segmentation method and device
CN108229268A (en) | Expression recognition and convolutional neural network model training method and device, and electronic equipment
CN107633237B (en) | Image background segmentation method, device, equipment and medium
CN108647625A (en) | An expression recognition method and device
CN103679168A (en) | Detection method and detection device for character regions
CN113989196B (en) | Vision-based method for detecting appearance defects of earphone silica gel gaskets
CN110580466A (en) | Infant quilt-kicking behavior recognition method and device, computer equipment and storage medium
CN114169419A (en) | Detection method and device for a target object, computer equipment and storage medium
CN115138059A (en) | Pull-up standard counting method and system, and storage medium
CN109993021A (en) | Face detection method, device and electronic device
CN103955949A (en) | Moving target detection method based on the mean-shift algorithm
CN114219936A (en) | Object detection method, electronic device, storage medium, and computer program product
CN110458790A (en) | An image detection method, device and computer storage medium
CN111415339A (en) | Image defect detection method for industrial products with complex textures
CN118706336A (en) | Soft package sealing detection equipment and method based on vibration and infrared image fusion
CN112101185A (en) | Method for training a wrinkle detection model, electronic device and storage medium
CN115223123A (en) | Road target detection method based on computer vision recognition
CN112560584A (en) | Face detection method and device, storage medium and terminal

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
