Disclosure of Invention
In order to overcome the defects of the prior art, a first object of the invention is to provide a gyroscope-based face authentication method, which solves the problem that the prior art cannot distinguish live face authentication performed by the user in person from spoofed authentication carried out with a face mask, an on-screen face image, a pre-recorded video, or the like.
A second object of the present invention is to provide an electronic device, which solves the same problem that the prior art cannot distinguish live face authentication of the user from spoofed authentication carried out with a face mask, an on-screen face image or video, and the like.
A third object of the present invention is to provide a computer-readable storage medium, which likewise solves the problem that the prior art cannot distinguish live face authentication of the user from spoofed authentication carried out with a face mask, an on-screen face image or video, and the like.
The first object of the invention is achieved by adopting the following technical solution:
the gyroscope-based face authentication method comprises the following steps:
a recognition model establishing step: establishing a recognition model of each action completed during face authentication for each user;
a face authentication step: obtaining a video image of each action completed during the current user's face authentication, and comparing the video image with the recognition model to obtain a first comparison result;
a device authentication step: acquiring gyroscope data of the handheld device while the current user completes each action during face authentication, and comparing the gyroscope data with data pre-stored in the system to obtain a second comparison result;
an authentication judging step: judging whether the face authentication of the current user passes according to the first comparison result and the second comparison result;
wherein the face authentication step and the device authentication step may be executed in any order.
Further, the face authentication step further includes:
a gray-scale processing step: acquiring a plurality of corresponding images from the video image of each action completed by the current user, and performing gray-scale processing on each image to obtain the corresponding gray-scale image;
a feature extraction step: extracting a feature vector from the gray-scale image of each image to obtain the feature vector of each image;
a comparison step: comparing the feature vector of each image in which the current user completes each action with the recognition model to obtain the first comparison result.
Further, obtaining the video image of each action completed by the current user further comprises: first obtaining a video image covering all actions completed during the current user's face authentication, and then dividing that video image into the video image of each action according to the time at which the current user completes each action.
Further, the recognition model establishing step specifically includes the following steps:
Step A1: shooting a plurality of images while each user completes each action, and classifying and sorting the images by action;
Step A2: performing gray-scale processing on each image using a weighted-average algorithm to obtain the gray-scale image of each image;
Step A3: dividing the gray-scale image of each image into M × N square regions and, based on the different contours and colors of the parts of the face and the different point-density distributions those parts exhibit in images taken in different action states and at different angles, calculating the ratio of the number of points in each region to the total number of points in the corresponding image, thereby extracting the feature vector of each image; wherein M and N are natural numbers greater than 0;
Step A4: taking the feature vector of each image as input and the action corresponding to each image as output, and repeatedly training a convolutional neural network to establish a recognition model of each action completed by each user.
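As an illustration of steps A2 and A3, the following is a minimal sketch, assuming the image is an RGB array, that the common luminance weights (0.299, 0.587, 0.114) stand in for the weighted-average algorithm, and that a "point" is a pixel darker than a hypothetical threshold; none of these parameters are fixed by the invention.

```python
import numpy as np

def grayscale_weighted(rgb):
    """Step A2 (sketch): weighted-average gray-scale conversion.
    The weights 0.299/0.587/0.114 are an assumption; the invention only
    specifies 'a weighted average algorithm'."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def region_density_features(gray, m=8, n=8, dark_threshold=100):
    """Step A3 (sketch): split the gray-scale image into M x N square regions
    and return, for each region, the ratio of counted points to the total
    number of counted points in the image. Counting pixels darker than
    `dark_threshold` is an assumed stand-in for the 'points' in the text."""
    h, w = gray.shape
    points = gray < dark_threshold            # boolean mask of counted points
    total = max(int(points.sum()), 1)         # avoid division by zero
    features = np.zeros(m * n)
    for i in range(m):
        for j in range(n):
            block = points[i * h // m:(i + 1) * h // m,
                           j * w // n:(j + 1) * w // n]
            features[i * n + j] = block.sum() / total
    return features                           # feature vector of length M*N

# Usage: features = region_density_features(grayscale_weighted(frame), m=8, n=8)
```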
Further, the authentication judging step further includes: the face authentication of the current user passes when both the first comparison result and the second comparison result indicate that verification has passed.
Further, the device authentication step further includes: comparing the gyroscope data with a threshold preset in the system to obtain the second comparison result;
or comparing the gyroscope data with the gyroscope data, pre-stored in the system, of the handheld device while each user completes the corresponding action, to obtain the second comparison result.
Further, the handheld device is a mobile phone, a tablet, or a dedicated face authentication mobile terminal.
Further, the gyroscope is a three-axis gyroscope.
The second object of the invention is achieved by adopting the following technical solution:
an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the gyroscope-based face authentication method described under the first object of the invention.
The third object of the invention is achieved by adopting the following technical solution:
a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the gyroscope-based face authentication method described under the first object of the invention.
Compared with the prior art, the invention has the beneficial effects that:
A recognition model is first established from images of the facial feature changes that occur while the user completes a preset action during face verification, and the data of the gyroscope built into the handheld device while that preset action is completed is recorded; during verification, the facial feature changes of the user's face are then combined with the data of the gyroscope built into the handheld device, which effectively prevents fraudulent face verification by means of a face mask, an on-screen face, a pre-recorded video image, or the like.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings and specific embodiments; it should be understood that, provided there is no conflict, the following embodiments or technical features may be combined arbitrarily to form new embodiments.
In the invention, the gyroscope data of a smart device such as a smartphone or tablet computer is used while the user performs the face recognition authentication operation. For example, the three-axis gyroscope of the smart device acquires the movement acceleration and angle change of the device during the operation, and this information is combined with how well the user completes the instructions given during authentication. In other words, face authentication is combined with device authentication, which effectively prevents spoofing attacks that attempt to deceive the authentication with copied or stolen video images.
A first embodiment of the invention, as shown in fig. 1, discloses a gyroscope-based face authentication method, which comprises face authentication and device authentication, wherein the face authentication comprises the following steps:
Step S11: start the face authentication and acquire a video image of all actions performed by the current user according to the prompts given by the system's preset verification instruction, for example a video image of the whole series of actions the current user performs in accordance with those prompts.
When video-based live face authentication is started, the current user first holds the device, such as a smartphone, with the camera facing the face to shoot his or her facial image, and the system gives a series of prompt actions according to the preset verification instruction for the current user to complete. For example: open the mouth and, while keeping the shooting state, translate the phone to the left or right; extend the tongue and, while keeping the shooting state, translate the phone back to the front; roll the tongue and, while keeping the shooting state, translate the phone to the right or left; then retract the tongue. After each prompt action the user pauses for a few seconds while translating the handheld device through the required angle and keeping the shooting state continuous. When the user has completed all the actions, a video image of the preset verification instruction is obtained, containing the video of every prompt action in the instruction, that is, a video image of the user completing the preset operation process, including motion images of the mouth, tongue, and so on.
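For illustration only, a preset verification instruction of this kind could be represented as an ordered list of prompt actions, each paired with the required device movement and the pause of a few seconds; the field names and values below are hypothetical and not prescribed by the invention.

```python
from dataclasses import dataclass

@dataclass
class PromptAction:
    action: str          # facial action the user must perform
    device_motion: str   # required translation of the handheld device
    pause_seconds: int   # pause after the action while shooting continues

# A hypothetical preset verification instruction matching the example above.
PRESET_VERIFICATION_INSTRUCTION = [
    PromptAction("open_mouth",     "translate_left_or_right", 3),
    PromptAction("extend_tongue",  "translate_to_front",      3),
    PromptAction("roll_tongue",    "translate_right_or_left", 3),
    PromptAction("retract_tongue", "hold",                    3),
]
```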
Step S12: acquire each action performed by the current user while completing the system's preset verification instruction. Each preset verification instruction may include multiple actions. While the current user performs the actions, the system calls a device such as a camera to record them, so that the recording contains images of every action completed by the current user.
Step S13: divide the video image according to the completion time of each action to obtain the video image of each action completed by the current user. Because the current user pauses for a few seconds to confirm the completion of each action before executing the next one, the video image can be divided according to the completion time of each action, yielding the video image of the current user completing each action.
In addition, in actual use, the video image of each action completed by the current user may instead be stored directly as soon as that action is completed, so that the whole video image does not need to be segmented afterwards.
For example, if the preset verification instruction requires the user to complete three actions, action A, action B, and action C, the video image can be generated in either of two ways:
Mode one: the video image is stored after the user completes the three actions consecutively, and is then divided according to the time period in which each action was completed, thereby obtaining the video image of each action;
Mode two: a video image is stored each time the user finishes one action, so that the video images are stored separately per action (a sketch of mode one is given below).
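A minimal sketch of mode one follows, assuming the per-action completion times are known (for example recorded by the prompting logic) and that OpenCV is available for reading frames; the function name, data layout, and frame-index slicing are illustrative, not part of the invention.

```python
import cv2

def split_video_by_actions(video_path, action_intervals):
    """Mode one (sketch): split one continuous recording into per-action clips.
    `action_intervals` is a list of (action_name, start_sec, end_sec) tuples,
    assumed to come from the times at which each prompt was completed."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unknown
    clips = {name: [] for name, _, _ in action_intervals}
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t = frame_idx / fps
        for name, start, end in action_intervals:
            if start <= t < end:
                clips[name].append(frame)     # collect frames for this action
                break
        frame_idx += 1
    cap.release()
    return clips                              # action name -> list of frames
```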
Step S14: obtain, from the video image of the current user completing each action, the action change images of the current user completing each action, and compare them with the recognition model to obtain the first comparison result.
Different users show different facial changes when completing each action, so the invention classifies and labels, in advance, the video images of each action performed by each user when completing the preset verification instruction, and trains a convolutional neural network on them to obtain a recognition model of each action. During authentication it is then only necessary to match the video image of each action completed during the current user's face authentication against the recognition model to obtain a matching result, from which it can finally be judged whether the authentication passes.
In addition, in order to verify whether the action change images of each action meet the requirements when the user completes the preset verification instruction, the invention pre-establishes a recognition model. The recognition model is established from stereoscopic images captured at multiple angles, obtained by changing the angle at which the video-shooting handheld device photographs the user's facial features. For example, if a flat image is photographed, translating the handheld device only makes that plane appear skewed and foreshortened as the shooting angle changes, so stereoscopic images of the mouth, tongue, and other parts from the other shooting angles cannot be obtained; if the user himself or herself is photographed, the subject is three-dimensional, so stereoscopic images of the mouth, tongue, and other parts of the face can be captured from various angles. The established recognition model can therefore exclude fraudulent authentication attempts that use a flat image. For example, video images of the actions of each part of the user's face while completing the corresponding preset verification instruction are collected and sorted in advance, the key-frame images corresponding to the video of each action are extracted and fed into a convolutional neural network for training, and the recognition model of each action is established through repeated training. During verification, the key-frame images corresponding to each action of the verification instruction in the acquired video are input into the convolutional neural network and compared with the template samples of each action in the established recognition model, and the correctness of each action completed by the user is judged, from which it can be determined whether the user has correctly completed the preset verification instruction.
The invention also provides, by way of example, a process for establishing the recognition model, which comprises the following steps:
Step A1: for example, images of all angles of front translation to left side, front translation to right side and the like are shot for each action or action completion process of the mouth opening state, tongue extending state, tongue rolling state and the like (for example, the images are translated to left side or right side by using a mobile phone when verification actions of mouth opening, tongue extending, tongue rolling state and the like are correspondingly performed), and the images are shot under different light rays in a large quantity, collected and corresponding actions are classified and sorted and stored.
Step A2: and carrying out gray scale processing on each image by using a weighted average algorithm to obtain a gray scale image of each image.
Step A3: dividing the image into M x N square areas, calculating the ratio of the number of points in each song playing area to the total number of points in the image according to the outline and the color of the mouth and the tongue part and the distribution condition of different color densities of the points in the image under different action states and different angles, and extracting the feature vector of each image. Wherein M and N are natural numbers.
Step A4: and taking the obtained characteristic vector of each image as input, taking the action corresponding to each image as output, and repeatedly training by using a convolutional neural network to establish an identification model of each action.
In this way, a recognition model of each action completed by the user can be established, so that the face of the user currently being verified can be checked against the recognition model using the video image of that user completing each action.
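As a sketch of step A4, the M × N region features of each image can be arranged as a small 2-D grid and fed to a compact convolutional network whose output classes are the actions; the layer sizes, the use of PyTorch, and the training hyper-parameters below are assumptions for illustration and are not specified by the invention.

```python
import torch
import torch.nn as nn

M, N, NUM_ACTIONS = 8, 8, 4   # assumed grid size and number of prompt actions

class ActionRecognizer(nn.Module):
    """Step A4 (sketch): CNN mapping an M x N region-feature grid to an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * M * N, NUM_ACTIONS),
        )

    def forward(self, x):          # x: (batch, 1, M, N) feature grids
        return self.net(x)

def train(model, feature_grids, action_labels, epochs=50, lr=1e-3):
    """Repeated training with feature vectors as input and action labels
    (a LongTensor of class indices) as output."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(feature_grids), action_labels)
        loss.backward()
        opt.step()
    return model
```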
In addition, in order to prevent face authentication from being spoofed with face masks, pre-recorded videos, images, and the like, the invention adds the data of the gyroscope built into the handheld device while the user completes each action as a further verification.
Further, the device authentication includes: Step S21: acquire the data of the three-axis gyroscope built into the handheld device while the current user completes each action. A genuine user performs the verification with a handheld device, such as a smartphone, a smart tablet, or an authentication terminal, so when the user completes each action the gyroscope built into the handheld device generates corresponding data, such as movement acceleration and angle change data. If it is not a genuine user, for example when the face authentication is attempted with a pre-recorded video image, the device does not move and the built-in gyroscope generates no corresponding data. Therefore, adding the data of the gyroscope built into the handheld device to the authentication judgment ensures that a real user is present during the face authentication.
In general, the gyroscope built into a handheld device such as a smartphone is a three-axis gyroscope, so the three-axis gyroscope data of the handheld device is recorded while the user completes each action and is then checked for consistency with the three-axis gyroscope data pre-stored in the system for each user completing each action; if it is consistent, the device authentication is considered to pass. That is, the method comprises: Step S22: compare the three-axis gyroscope data built into the handheld device while the current user completes each action with data preset in the system to obtain the second comparison result.
The three-axis gyroscope data includes movement acceleration and angle change data. The movement acceleration and angle are used to judge whether the position of the handheld device has changed and whether, within the corresponding time period, the acceleration and angle changes match the corresponding preset verification instruction.
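A minimal sketch of the check in step S22 under assumed thresholds: the gyroscope samples for one action window are reduced to a total angle change and a peak acceleration, which are then compared with preset values; the sample format and the threshold numbers are illustrative assumptions, not values fixed by the invention.

```python
def gyro_window_passes(samples, min_angle_deg=15.0, min_accel=0.5):
    """Step S22 (sketch): decide whether the gyroscope data recorded while one
    action was completed shows real device movement.
    `samples` is assumed to be a list of dicts with keys
    'angle_deg' (cumulative rotation about the relevant axis) and
    'accel' (magnitude of movement acceleration)."""
    if not samples:
        return False
    angle_change = max(s["angle_deg"] for s in samples) - \
                   min(s["angle_deg"] for s in samples)
    peak_accel = max(abs(s["accel"]) for s in samples)
    # A stationary device (e.g. a replayed video) yields ~0 for both values.
    return angle_change >= min_angle_deg and peak_accel >= min_accel
```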
In actual use, just as there are two acquisition modes for the video image of each action completed by the current user, there are two corresponding modes for the gyroscope data: when the user performs all actions of the preset verification instruction consecutively, the gyroscope data is recorded continuously; when the video image of each completed action is stored separately, the gyroscope data of each completed action is also stored separately.
In practice, if the authentication uses a pre-recorded video image, the three-axis gyroscope data built into the smart device remains unchanged while the preset verification instruction is completed, and therefore cannot match the data pre-stored in the system. That is, in a spoofing attack with a pre-recorded video showing the mouth, tongue, and so on from various angles, it is the subject inside the video that changes angle and position, while the phone or tablet computer playing back the video remains stationary, so its built-in three-axis gyroscope generates no angle or acceleration change data. Only when a user holds the phone or tablet in real time and records the video image while completing the preset verification instruction are the changes of the mouth and tongue obtained through the changes in position and angle of the handheld device, with the built-in three-axis gyroscope correspondingly generating angle or acceleration change data.
Step S3: judging whether the authentication is the face authentication of the current user or not according to the first comparison result and the second comparison result, and if so, indicating that the authentication of the current user passes.
In the judging process, the face authentication of the user is considered to pass only when the first comparison result and the second comparison result both meet the preset conditions. That is, only when the current user correctly completes the preset verification instruction and the data of the three-axis gyroscope built into the handheld device passes verification is the authentication regarded as face authentication performed by the current user in person rather than a fraudulent attack using a pre-recorded video or the like; otherwise, the authentication does not pass.
For example, the face authentication is considered to pass when the first comparison result shows that the facial motion matches the recognition model with a matching score above 85%, the angle data shows a left-right translation of more than 15 degrees, and the acceleration range exceeds the set threshold. The threshold of the acceleration range can be set freely in practice: in a fraudulent authentication with a recorded video the photographing device produces no displacement data, so in a genuine authentication it is sufficient that the acceleration is not essentially zero.
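Putting the example thresholds together, the final judgment of step S3 might look like the following sketch, where the 85% match score, the 15 degree translation, and the acceleration threshold are taken from the example above and everything else (function name, data layout) is assumed.

```python
def authentication_passes(match_score, angle_change_deg, peak_accel,
                          match_threshold=0.85, angle_threshold_deg=15.0,
                          accel_threshold=0.0):
    """Step S3 (sketch): the face authentication passes only when the first
    comparison result (recognition-model match) and the second comparison
    result (gyroscope movement) both meet their preset conditions."""
    first_result_ok = match_score > match_threshold
    second_result_ok = (angle_change_deg > angle_threshold_deg
                        and peak_accel > accel_threshold)
    return first_result_ok and second_result_ok

# Example: authentication_passes(0.91, 22.0, 0.8) -> True
```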
Further, the device authentication step further includes Step S23: compare the three-axis gyroscope data of the handheld device while the current user completes each action with the three-axis gyroscope data, stored in the system, of the handheld device when the corresponding action was completed, to obtain a third comparison result. Further, the invention further comprises Step S4: judge, according to the first comparison result and the third comparison result, whether this is a face authentication performed by the current user in person; if so, the authentication of the current user passes.
That is, verifying the gyroscope data of the handheld device for each action completed by the current user requires collecting in advance the gyroscope data built into the handheld device while different users complete each action, and binding that data to the recognition model of the corresponding action for the corresponding user. In other words, before face authentication the recognition model of each action must be stored in the system, and at the same time the three-axis gyroscope data of the handheld device while the user completes each action is also stored in the system and associated with each action completed by that user.
During authentication, the gyroscope data of the handheld device while the current user completes each action is then matched against the gyroscope data stored in the system for the current user completing each action, so as to obtain the third comparison result.
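A minimal sketch of the comparison in step S23, assuming each per-action gyroscope trace is resampled to a fixed length and compared with the trace enrolled for the same user and action by a mean absolute difference against a tolerance; the resampling, the distance measure, and the tolerance are assumptions, since the invention only speaks of matching and comparing the two sets of data.

```python
import numpy as np

def resample(trace, length=50):
    """Resample a 1-D gyroscope trace (e.g. angle over time) to a fixed length."""
    trace = np.asarray(trace, dtype=float)
    old_x = np.linspace(0.0, 1.0, len(trace))
    new_x = np.linspace(0.0, 1.0, length)
    return np.interp(new_x, old_x, trace)

def traces_match(live_trace, enrolled_trace, tolerance=5.0):
    """Step S23 (sketch): compare the live per-action gyroscope trace of the
    current user with the trace pre-stored in the system for the same user
    and action; True when the mean absolute difference is within the
    assumed tolerance."""
    live = resample(live_trace)
    enrolled = resample(enrolled_trace)
    return float(np.mean(np.abs(live - enrolled))) <= tolerance
```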
In this way, the authentication is considered to pass only when the face authentication passes and the gyroscope data of the handheld device while the current user completes each action is consistent with the corresponding gyroscope data pre-stored in the system for that user.
The invention combines face authentication with device authentication to achieve liveness authentication of the user's face, and can effectively prevent spoofing attacks that attempt to deceive the authentication with copied or stolen video images.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the gyroscope-based face authentication method described herein are implemented.
The invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the gyroscope-based face authentication method described above are implemented.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.