Technical Field

The present invention relates to a face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data, and to a corresponding face anti-spoofing detection system, belonging to the technical field of face recognition.
Background Art

With the rapid development of information technology, traditional identity authentication based on passwords or credentials can no longer meet the security and convenience requirements of practical applications, while biometric authentication has been adopted ever more widely thanks to its adaptability and security. Compared with other biometric technologies such as fingerprint or iris recognition, face recognition is convenient, user-friendly, and low-cost, and is now widely applied in fields such as security, finance, and transportation.

In the prior art, the accuracy of face anti-spoofing detection and adaptability to the detection environment are two key performance indicators of non-cooperative face recognition systems. Such a system typically operates on face video streams captured in open environments, where the target face often exhibits large pose changes, occlusion, and uneven illumination, all of which severely degrade detection accuracy. At the same time, the system must reliably reject fake-face attacks such as video replay and photo attacks. How to improve detection accuracy and environmental adaptability to meet practical needs therefore remains an important research topic in this field.

Chinese invention patent ZL 202010748989.X discloses a face detection and recognition method based on binocular vision, comprising the following steps: (1) capturing left and right face images with a binocular camera; (2) detecting faces in the two images using HOG features and locating the corresponding face regions; (3) extracting facial features from the two face images; (4) obtaining depth information of the facial feature points by binocular ranging, thereby solving a three-dimensional face model; (5) classifying the solved three-dimensional face model with a support vector machine applied to its statistics, thereby realizing face recognition.
Summary of the Invention

The primary technical problem to be solved by the present invention is to provide a face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data.

Another technical problem to be solved by the present invention is to provide a corresponding face anti-spoofing detection system.

To achieve the above objects, the present invention adopts the following technical solutions.

According to a first aspect of the embodiments of the present invention, a face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data is provided, comprising the following steps:

(1) acquiring the raw video streams of the left and right cameras in the application scene;

(2) performing stereo calibration on the left and right cameras to obtain the intrinsic parameter matrix of each camera and the extrinsic parameter matrix between the two cameras;

(3) performing distortion correction and stereo rectification on the raw video streams according to the intrinsic and extrinsic parameter matrices, yielding undistorted, row-aligned image streams;

(4) performing face detection on the image streams of both cameras with the RetinaFace network model, inferring the facial feature point set of each camera;

(5) reconstructing a facial feature point cloud by binocular triangulation, based on the feature point sets of the two cameras, the correspondence between same-name feature points, and the intrinsic and extrinsic parameter matrices;

(6) building a facial spatial-topology representation vector from the feature point cloud, computing the spatial topological similarity, and constructing the evaluation function of the spatial association constraint;

(7) constructing a temporally correlated facial-feature tracking model from the grayscale-consistency assumption on key features across consecutive face image frames, computing the homography between adjacent frames, and constructing the monocular projection-error association function over the facial-feature time series;

(8) constructing the fundamental matrix between the left and right cameras via optical-flow initialization of the binocular face recognition system, based on the intrinsic and extrinsic parameter matrices, to obtain the corresponding binocular projection association function;

(9) constructing the constraint evaluation function in the time dimension from the monocular projection-error association function and the binocular projection association function;

(10) constructing a face anti-spoofing detection model under the collaborative constraint of binocular spatiotemporal data, based on the temporal constraint evaluation function, the spatial association evaluation function, and their matching relationship, thereby realizing real-time judgment and recognition in face detection.
Preferably, in step (4), the feature points used for face detection include the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner.

Preferably, in step (5), the facial feature point cloud P3d satisfies:

P3d = f(A1, A2, R, T, M1, M2)

where A1 and A2 are the intrinsic parameter matrices of the left and right cameras, respectively; R is the extrinsic rotation matrix and T the extrinsic translation matrix; M1 and M2 are the feature point sets of the left and right cameras, respectively.

Preferably, in step (6), the spatial topological similarity S satisfies:

S = cos<N1, Ns>

where N1 is the spatial-topology representation vector of the detected face, Ns is the spatial-topology representation vector of a standard face, and <N1, Ns> denotes the angle between the two vectors.

Preferably, in step (7), the temporally correlated tracking model of facial features satisfies:

I(x, y, t) = I(x+Δx, y+Δy, t+Δt)

where I(x, y, t) is the grayscale value at facial feature point (x, y) at time t; I(x+Δx, y+Δy, t+Δt) is the grayscale value at the corresponding point (x+Δx, y+Δy) at time t+Δt; Δt is the inter-frame interval; and Δx and Δy are the coordinate displacements.

The homography Ht between adjacent face image frames satisfies:

Xt+1 = Ht Xt

where Xt and Xt+1 are the facial feature point sets at times t and t+1, the times of two adjacent face image frames separated by Δt.

Preferably, in step (8), the fundamental matrix Ft satisfies:

IL(x, y) = Ft · IR(x, y)

where IL(x, y) and IR(x, y) are the optical-flow feature points of the face in the left and right cameras, respectively.

Preferably, in step (9), the constraint evaluation function T(t) in the time dimension satisfies:

T(t) = h(Ht) + μ(Ft)

where h(Ht) is the monocular projection-error association function and μ(Ft) is the binocular projection association function.
Preferably, in step (10), the output value F of the face anti-spoofing detection model under the collaborative constraint of binocular spatiotemporal data satisfies:

F = min‖α·T(t) + β·g(P3d, S)‖

where F is the output value of the detection model; T(t) is the constraint evaluation function in the time dimension; g(P3d, S) is the evaluation function of the facial-feature spatial association constraint; α and β are the association parameters of the time series and the spatial series, respectively; min denotes the objective; and ‖·‖ is the norm.
According to a second aspect of the embodiments of the present invention, a face anti-spoofing detection system based on the collaborative constraint of binocular spatiotemporal data is provided, comprising a processor and a memory coupled to each other, wherein:

the memory is configured to store a computer program; and

the processor is configured to run the computer program stored in the memory so as to execute the above face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data.

Compared with the prior art, the method provided by the present invention extracts key facial features from the image streams of a binocular vision system and aligns their spatiotemporal representations. Spatial constraints across binocular multi-frame facial feature images defend against planar-prosthesis attacks, while optical-flow constraints between consecutive frames in the time series test whether a candidate face exhibits the subtle dynamics of a live face. On this basis, a face anti-spoofing detection model under the collaborative constraint of binocular spatiotemporal data is constructed, realizing accurate real-time judgment and anti-spoofing recognition and improving the detection accuracy and environmental adaptability of the binocular liveness detection system.
Brief Description of the Drawings

FIG. 1 is a flowchart of a face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a face anti-spoofing detection system based on the collaborative constraint of binocular spatiotemporal data according to an embodiment of the present invention.
Detailed Description

The technical content of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

As shown in FIG. 1, a face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data according to an embodiment of the present invention comprises at least the following steps.

S1: Acquire the raw video streams of the left and right cameras in the application scene.

The binocular cameras of the stereo-vision scene are the left camera and the right camera; in practice, the two capture facial data simultaneously to form the raw video streams.

S2: Perform stereo calibration on the left and right cameras to obtain the intrinsic parameter matrix A1 of the left camera, the intrinsic parameter matrix A2 of the right camera, and the extrinsic parameter matrix [R, T] between the two cameras, where R is the rotation matrix and T the translation matrix.

S3: According to the intrinsic and extrinsic parameter matrices, perform distortion correction and stereo rectification on the raw video streams to obtain undistorted, row-aligned image streams.

Steps S1-S3 above acquire and preprocess the raw binocular video streams. The spatial feature information for face recognition is extracted and characterized next.
S4: Use the RetinaFace network model to perform face detection on the image streams of the two cameras, inferring the feature point set M1 of the left camera and the feature point set M2 of the right camera.

In one embodiment of the present invention, the key feature points used for face detection may include the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner.

S5: Reconstruct the facial feature point cloud P3d by binocular triangulation, based on the feature point sets of the two cameras, the correspondence between same-name feature points, and the intrinsic and extrinsic parameter matrices.

The facial feature point cloud P3d is expressed as:

P3d = f(A1, A2, R, T, M1, M2)   (1)

where A1 and A2 are the intrinsic parameter matrices of the left and right cameras, respectively; R is the extrinsic rotation matrix and T the extrinsic translation matrix; M1 and M2 are the feature point sets of the left and right cameras, respectively.
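The text does not spell out the reconstruction function f(·) in equation (1); a common instantiation is linear (DLT) triangulation from the two projection matrices P1 = A1·[I | 0] and P2 = A2·[R | T]. The following NumPy sketch illustrates this under that assumption (all function names are illustrative):

```python
import numpy as np

def triangulate_point(A1, A2, R, T, m1, m2):
    """Linear (DLT) triangulation of one left/right pixel pair.

    A1, A2 : 3x3 intrinsic matrices; R, T : pose of the right camera
    relative to the left; m1, m2 : (u, v) pixel coordinates of the same
    feature in the left and right views.  Returns the 3-D point in the
    left-camera frame.
    """
    P1 = A1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left:  A1 [I | 0]
    P2 = A2 @ np.hstack([R, np.asarray(T, float).reshape(3, 1)])  # right: A2 [R | T]
    u1, v1 = m1
    u2, v2 = m2
    # Each view contributes two rows of the homogeneous system M X = 0.
    M = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(M)
    X = Vt[-1]                      # null-space vector = homogeneous point
    return X[:3] / X[3]

def reconstruct_point_cloud(A1, A2, R, T, M1, M2):
    """P3d = f(A1, A2, R, T, M1, M2): triangulate each pair of same-name
    feature points, per equation (1)."""
    return np.array([triangulate_point(A1, A2, R, T, m1, m2)
                     for m1, m2 in zip(M1, M2)])
```

With the five RetinaFace landmarks per camera, `reconstruct_point_cloud` would yield a 5x3 array of 3-D facial feature points.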
S6: Based on the feature point cloud P3d, build the facial spatial-topology representation vector N1, compute the spatial topological similarity S, and construct the evaluation function g(P3d, S) of the spatial association constraint.

The spatial topological similarity S is the cosine of the angle between the representation vector N1 of the detected face and the representation vector Ns of a standard face:

S = cos<N1, Ns>   (2)

Comparing the computed similarity S against a preset similarity threshold identifies fake-face attacks such as videos or photographs.

Further, the evaluation function g(P3d, S) of the spatial association constraint is constructed from the feature point cloud P3d and the similarity S, and serves to defend against planar face-prosthesis attacks.
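Equation (2) is the ordinary cosine similarity between the measured and template topology vectors. A minimal sketch of the similarity test against a preset threshold follows; the function names and the threshold value 0.95 are illustrative, not taken from the text:

```python
import numpy as np

def spatial_topology_similarity(n1, ns):
    """S = cos<N1, Ns> (equation (2)): cosine of the angle between the
    measured and the standard topology representation vectors."""
    return float(np.dot(n1, ns) / (np.linalg.norm(n1) * np.linalg.norm(ns)))

def passes_spatial_check(n1, ns, threshold=0.95):
    """Flag a planar-spoof candidate when the reconstructed topology is
    too dissimilar to a genuine-face template (threshold illustrative)."""
    return spatial_topology_similarity(n1, ns) >= threshold
```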
Steps S4-S6 above extract and evaluate the spatial feature information for face recognition. Building on this spatial discrimination, and to better recognize attacks with objects such as fake head models, the present invention next extracts and rapidly tracks key facial features over the time series of the two-dimensional video streams, realizing face recognition and judgment in the time dimension.

S7: Using the grayscale-consistency assumption on key features across consecutive face image frames, construct a temporally correlated facial-feature tracking model, compute the homography Ht between adjacent frames, and construct the monocular projection-error association function h(Ht) over the facial-feature time series.

The temporally correlated tracking model of facial features is:

I(x, y, t) = I(x+Δx, y+Δy, t+Δt)   (3)

where I(x, y, t) is the grayscale value at facial feature point (x, y) at time t; I(x+Δx, y+Δy, t+Δt) is the grayscale value at the corresponding point (x+Δx, y+Δy) at time t+Δt; Δt is the inter-frame interval; and Δx and Δy are the coordinate displacements.
Expanding the term I(x+Δx, y+Δy, t+Δt) in equation (3) as a first-order Taylor series yields:

Ix·Δx + Iy·Δy + It·Δt = 0   (4)

where Ix, Iy, and It are the partial derivatives of the image grayscale with respect to x, y, and t, and (Δx/Δt, Δy/Δt) is the motion direction vector of the facial feature point at time t. By selecting a (2w+1)×(2w+1) neighborhood window around the feature point I(x, y), where w is a positive integer setting the neighborhood width, the resulting over-determined system is solved for the motion direction vector of the feature point at time t.

From the motion direction vector of each facial feature point and the inter-frame interval Δt, the corresponding facial feature coordinates (x+Δx, y+Δy) in the adjacent frame are obtained.
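The neighborhood-window solve described above is the classic Lucas-Kanade least-squares step: stack Ix·u + Iy·v = -It over the (2w+1)×(2w+1) window and solve for the per-frame motion vector (u, v). A self-contained NumPy sketch of that step (function name illustrative):

```python
import numpy as np

def lk_flow_window(Ix, Iy, It):
    """One Lucas-Kanade step: solve the over-determined system
        [Ix Iy] (u, v)^T = -It
    stacked over a neighborhood window, by least squares.

    Ix, Iy, It : arrays of spatial and temporal derivatives over the
    (2w+1)x(2w+1) window.  Returns the motion vector (u, v).
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # (2w+1)^2 x 2
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

In practice the derivatives would come from finite differences on consecutive rectified frames; the least-squares solve is the same.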
The facial feature point sets Xt and Xt+1 at times t and t+1 are then constructed, and the homography Ht between adjacent face image frames is solved from:

Xt+1 = Ht Xt   (5)

where t and t+1 are the times of two adjacent face image frames separated by Δt.

From the above computation, the monocular projection-error association function h(Ht) over the facial-feature time series is obtained.
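Ht in equation (5) can be estimated from the tracked point pairs with the standard direct linear transform (DLT), given at least four correspondences; the sketch below shows one common way to do it, not necessarily the solver used in the patent:

```python
import numpy as np

def estimate_homography(Xt, Xt1):
    """Solve X_{t+1} ~ Ht @ X_t (equation (5)) from >= 4 tracked 2-D
    point pairs with the direct linear transform."""
    rows = []
    for (x, y), (xp, yp) in zip(Xt, Xt1):
        # Two rows of the homogeneous system A h = 0 per correspondence.
        rows.append([-x, -y, -1,  0,  0,  0, xp * x, xp * y, xp])
        rows.append([ 0,  0,  0, -x, -y, -1, yp * x, yp * y, yp])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = flattened H
    return H / H[2, 2]              # fix the projective scale

def apply_homography(H, pts):
    """Map 2-D points through H in homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

The reprojection residual ‖Xt+1 − Ht·Xt‖ from these helpers is the kind of quantity the monocular projection-error association function h(Ht) can be built on.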
S8: Based on the intrinsic parameter matrices A1 and A2 and the extrinsic parameter matrix [R, T], construct the fundamental matrix Ft between the left and right cameras through optical-flow initialization of the binocular face recognition system, and obtain the corresponding binocular projection association function μ(Ft).

The fundamental matrix Ft satisfies:

IL(x, y) = Ft · IR(x, y)   (6)

where IL(x, y) and IR(x, y) are the optical-flow feature points of the face in the left and right cameras, respectively.

From the matching of key features across the binocular face images, the fundamental matrix Ft can be computed in real time over the time series. Based on this time series of Ft, and on the assumption that a real face exhibits small fluctuations over a certain time span, Ft is analyzed statistically to obtain the binocular projection association function μ(Ft). Specifically, the eigenvectors of the fundamental matrix are obtained by singular value decomposition, and the variance of the change in eigenvector norm over a time window is used to judge liveness: if the norm change is below a preset variance threshold, the face model is considered static and the face is judged fake; otherwise it is judged real.
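For a calibrated rig, the fundamental matrix has the closed form F = A2^(-T)·[T]x·R·A1^(-1), and homogeneous left/right pixels of the same 3-D point satisfy the epipolar constraint xR^T·F·xL = 0 (the standard formulation; the patent writes the left-right coupling compactly as equation (6)). A NumPy sketch under that standard formulation:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

def fundamental_from_calibration(A1, A2, R, T):
    """Closed-form fundamental matrix of a calibrated stereo rig:
        F = A2^{-T} [T]_x R A1^{-1}
    Homogeneous right/left pixels of one 3-D point satisfy xR^T F xL = 0."""
    return np.linalg.inv(A2).T @ skew(T) @ R @ np.linalg.inv(A1)
```

With calibration fixed, drift of this matrix (or of its SVD factors) over the time series is what the statistical liveness test above monitors.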
S9: Construct the constraint evaluation function T(t) in the time dimension from the monocular projection-error association function h(Ht) and the binocular projection association function μ(Ft) over the facial-feature time series:

T(t) = h(Ht) + μ(Ft)   (7)

where h(Ht) is the monocular projection-error association function and μ(Ft) is the binocular projection association function.
S10: Based on the temporal constraint evaluation function T(t), the spatial association evaluation function g(P3d, S), and their matching relationship, construct the face anti-spoofing detection model under the collaborative constraint of binocular spatiotemporal data, realizing real-time judgment and recognition in face detection.

The output of the detection model is:

F = min‖α·T(t) + β·g(P3d, S)‖   (8)

where F is the output value of the detection model; T(t) is the constraint evaluation function in the time dimension; g(P3d, S) is the evaluation function of the facial-feature spatial association constraint; α and β are the association parameters of the time series and the spatial series, respectively; min denotes the objective; and ‖·‖ is the norm.

Solving equation (8) for the association parameters α and β that attain the global optimum over a given spatiotemporal sequence yields the output value F of the anti-spoofing detection model; comparing F with a preset threshold gives the final judgment of face recognition and the corresponding liveness detection.
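The patent leaves h, μ, g, the parameter search, and the decision threshold unspecified; the toy sketch below only illustrates the final fusion-and-threshold structure of equation (8) with scalar scores, and every weight, threshold, and name in it is an assumption:

```python
def anti_spoof_decision(T_t, g_val, alpha, beta, threshold):
    """Fuse the temporal score T(t) and the spatial score g(P3d, S) into
    the model output F = ||alpha*T(t) + beta*g(P3d, S)|| and threshold it.

    The weights and threshold are illustrative; the patent only states
    that alpha and beta are tuned for a global optimum over the
    spatiotemporal sequence.  Returns (F, is_live).
    """
    F = abs(alpha * T_t + beta * g_val)   # scalar norm of the fused score
    return F, F < threshold               # True -> judged a live face
```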
The face anti-spoofing detection method based on the collaborative constraint of binocular spatiotemporal data provided by the present invention has been described in detail above. By performing feature extraction and tracking detection over the facial spatial and temporal sequences of the left and right camera streams in real time, the method realizes face anti-spoofing detection under the collaborative constraint of binocular spatiotemporal data.

On the basis of this method, an embodiment of the present invention further provides a face anti-spoofing detection system based on the collaborative constraint of binocular spatiotemporal data. As shown in FIG. 2, the system comprises one or more processors and a memory. The memory is coupled to the processors and stores one or more computer programs which, when executed by the one or more processors, cause them to implement the face anti-spoofing detection method of the above embodiments.

The processor controls the overall operation of the system to complete all or part of the steps of the above method, and may be a central processing unit (CPU), a graphics processing unit (GPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a digital signal processing (DSP) chip, or the like. The memory stores the various types of data supporting operation of the system, for example the instructions of any application or method running on the system and the associated application data, and may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, or flash memory.

By performing feature extraction and tracking detection over the facial spatial and temporal sequences of the left and right camera streams in real time, the system realizes face anti-spoofing detection under the collaborative constraint of binocular spatiotemporal data, and addresses the security risk that face recognition is deceived by face prostheses such as photographs and videos, with the resulting misidentification.

In summary, compared with the prior art, the method provided by the present invention extracts key facial features from the image streams of a binocular vision system and aligns their spatiotemporal representations, uses spatial constraints across binocular multi-frame facial feature images to defend against planar-prosthesis attacks, uses optical-flow constraints between consecutive frames in the time series to test whether a candidate face exhibits subtle dynamics, and constructs a face anti-spoofing detection model under the collaborative constraint of binocular spatiotemporal data, realizing accurate real-time judgment and anti-spoofing recognition and improving the detection accuracy and environmental adaptability of the binocular liveness detection system.

The face anti-spoofing detection method and system based on the collaborative constraint of binocular spatiotemporal data provided by the present invention have been described in detail above. Any obvious modification made by a person of ordinary skill in the art without departing from the essence of the present invention will constitute an infringement of the patent right of the present invention and shall bear the corresponding legal liability.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410369690.1ACN118366229B (en) | 2024-03-28 | 2024-03-28 | Face anti-fake detection method and system based on binocular space-time data cooperative constraint |
| Publication Number | Publication Date |
|---|---|
| CN118366229Atrue CN118366229A (en) | 2024-07-19 |
| CN118366229B CN118366229B (en) | 2025-04-29 |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102622588A (en)* | 2012-03-08 | 2012-08-01 | 无锡数字奥森科技有限公司 | Dual-certification face anti-counterfeit method and device |
| CN110309782A (en)* | 2019-07-02 | 2019-10-08 | 四川大学 | A live face detection method based on infrared and visible light binocular system |
| CN111291730A (en)* | 2020-03-27 | 2020-06-16 | 深圳阜时科技有限公司 | Face anti-counterfeiting detection method, server and storage medium |
| CN111881841A (en)* | 2020-07-30 | 2020-11-03 | 河海大学常州校区 | A face detection and recognition method based on binocular vision |
| CN113536843A (en)* | 2020-04-16 | 2021-10-22 | 上海大学 | Anti-counterfeiting face recognition system based on multi-mode fusion convolutional neural network |
| CN115798002A (en)* | 2022-11-24 | 2023-03-14 | 北京的卢铭视科技有限公司 | Face detection method, system, electronic device and storage medium |
| US20230206700A1 (en)* | 2021-12-29 | 2023-06-29 | Elm | Biometric facial recognition and liveness detector using ai computer vision |
| CN116798130A (en)* | 2023-06-15 | 2023-09-22 | 广州朗国电子科技股份有限公司 | Face anti-counterfeiting method, device and storage medium |
| Title |
|---|
| 沈超 (Shen Chao) et al.: "Face anti-spoofing algorithm based on texture feature enhancement and lightweight network" (基于纹理特征增强和轻量级网络的人脸防伪算法), Computer Science (计算机科学), 30 June 2022 (2022-06-30), pages 1-7* |
| Publication | Publication Date | Title |
|---|---|---|
| Souvenir et al. | Learning the viewpoint manifold for action recognition | |
| US10650260B2 (en) | Perspective distortion characteristic based facial image authentication method and storage and processing device thereof | |
| CN105005755B (en) | Three-dimensional face identification method and system | |
| CN110363116B (en) | Irregular face correction method, system and medium based on GLD-GAN | |
| CN111160232B (en) | Front face reconstruction method, device and system | |
| Campo et al. | Multimodal stereo vision system: 3D data extraction and algorithm evaluation | |
| KR101647803B1 (en) | Face recognition method through 3-dimension face model projection and Face recognition system thereof | |
| WO2022041627A1 (en) | Living body facial detection method and system | |
| CN105335722A (en) | A detection system and method based on depth image information | |
| CN111144366A (en) | A stranger face clustering method based on joint face quality assessment | |
| CN111563924B (en) | Image depth determination method, living body identification method, circuit, device, and medium | |
| CN111325828B (en) | Three-dimensional face acquisition method and device based on three-dimensional camera | |
| CN113298158B (en) | Data detection method, device, equipment and storage medium | |
| CN110647782A (en) | Three-dimensional face reconstruction and multi-pose face recognition method and device | |
| CN115330992B (en) | Indoor positioning method, device, equipment and storage medium based on multi-visual feature fusion | |
| CN114882537A (en) | Finger new visual angle image generation method based on nerve radiation field | |
| WO2021026281A1 (en) | Adaptive hand tracking and gesture recognition using face-shoulder feature coordinate transforms | |
| CN112818874B (en) | Image processing method, device, equipment and storage medium | |
| Chen et al. | 3d face mask anti-spoofing via deep fusion of dynamic texture and shape clues | |
| CN111881841B (en) | A face detection and recognition method based on binocular vision | |
| Tian et al. | Automatic visible and infrared face registration based on silhouette matching and robust transformation estimation | |
| CN116524606A (en) | Face living body recognition method, device, electronic equipment and storage medium | |
| CN111383255A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
| CN111814567A (en) | Method, device, device and storage medium for face liveness detection | |
| Ali | A 3D-based pose invariant face recognition at a distance framework |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |