CN103914676A - Method and apparatus for use in face recognition - Google Patents

Method and apparatus for use in face recognition

Info

Publication number
CN103914676A
CN103914676A
Authority
CN
China
Prior art keywords
image
face
user image
standard
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210592215.8A
Other languages
Chinese (zh)
Other versions
CN103914676B (en)
Inventor
李晓燕
李鹏
胡光龙
陈刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixian Advanced Technology Co Ltd
Original Assignee
Hangzhou Langhe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Langhe Technology Co Ltd
Priority to CN201210592215.8A
Publication of CN103914676A
Application granted
Publication of CN103914676B
Status: Active
Anticipated expiration

Abstract

An embodiment of the invention provides a method for use in face recognition. The method includes: performing facial feature point detection on a user image to obtain facial feature point coordinates; calculating the angular difference between the face and the horizontal direction from those coordinates, and rotating the user image until that angular difference matches a preset standard angle; calculating the size ratio between the face and a preset standard face from the coordinates, and scaling the user image by that ratio; and, according to the positions of the facial feature points in the user image, cropping the rotated and scaled image to a standard region size so that the feature points lie at standard positions within the region. This preprocessing yields aligned face images of consistent size, makes the features extracted for the same user more consistent, and eliminates the effect of low-quality images on face recognition. The invention also provides a corresponding apparatus for use in face recognition.

Description

Method and device used in face recognition
Technical Field
The embodiment of the invention relates to the field of identity recognition, in particular to a method and a device used in face recognition.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Thus, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
In daily life, the need to verify a person's identity arises in many industries, such as financial services, customs entry and exit, and national security, where identification must be performed frequently. Common means of identification include signatures, passwords, and manual photo comparison, but each has drawbacks: signatures are easily forged, passwords can be stolen, and manual photo comparison is time-consuming and labor-intensive. With the development of science and technology, the advantages of biometrics for identity recognition have become increasingly clear, and face recognition in particular has been a rapidly developing research direction in recent years.
Some face recognition technologies already exist in the prior art. A face-based attendance application, for example, collects a user image through a camera; when a face is detected, it extracts the facial features of the user image and compares them with feature information stored in a database, thereby recognizing the user's face.
Disclosure of Invention
However, because the environments in which face recognition is deployed are complex and the quality of the acquired user images is unstable, the features extracted from user images by the prior art cannot be guaranteed to be consistent and cannot meet the requirements of face recognition applications.
Face recognition based on user images of unstable quality is therefore a problematic process in the prior art.
An improved face recognition technique is thus needed to eliminate the influence of low-quality user images and ensure that only high-quality user images enter the face recognition module, so as to reduce recognition difficulty and improve recognition accuracy.
In this context, embodiments of the present invention are intended to provide a method and apparatus for use in face recognition.
In a first aspect of embodiments of the present invention, there is provided a method for use in face recognition, which may include:
performing facial feature point detection on the user image to obtain facial feature point coordinates;
calculating the angle difference between the face and the horizontal direction according to the coordinates of the feature points of the face, and rotating the user image until the angle difference between the face and the horizontal direction meets a preset standard angle;
calculating the size ratio of the human face to a preset standard human face according to the coordinates of the characteristic points of the human face, and scaling the user image according to the size ratio;
cropping the rotated and scaled user image to a standard region size according to the positions of the facial feature points in the user image, such that the feature points lie at the standard positions of the standard region.
Optionally, the face feature point coordinates may be left-eye feature point coordinates and right-eye feature point coordinates.
Optionally, calculating the angle difference between the face and the horizontal direction from the facial feature point coordinates may specifically be calculating the angle between the line connecting the two eye feature points and the horizontal direction, using the left-eye and right-eye feature point coordinates;
rotating the user image until the angle difference between the face and the horizontal direction meets the preset standard angle may specifically be rotating the user image until the angle between the connecting line of the two feature points and the horizontal direction is zero degrees.
Optionally, calculating the size ratio of the face to the preset standard face from the facial feature point coordinates may specifically be calculating the ratio of the distance between the two feature points to a preset first standard distance, using the left-eye and right-eye feature point coordinates.
Optionally, cropping the rotated and scaled user image to the standard region size, with the feature points at the standard positions of the standard region, may specifically be performed according to the positions of the two eye feature points in the user image;
the outer frame of the user image is of a preset standard size; specifically, the height and width of the cropped user image may be preset standard values.
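The cropping step above can be sketched arithmetically. The helper below (the function name and coordinates are illustrative, not from the patent) computes the crop rectangle that places a feature point at its standard position inside a standard-size region:

```python
# Hypothetical sketch: given a feature point location in the rotated and
# scaled image, compute the crop rectangle that places it at a chosen
# standard position inside a standard-size region.

def crop_box(feature_xy, std_xy, std_size):
    """Return (left, top, right, bottom) so that feature_xy maps to
    std_xy inside a std_size = (width, height) region."""
    fx, fy = feature_xy
    sx, sy = std_xy
    w, h = std_size
    left, top = fx - sx, fy - sy
    return (left, top, left + w, top + h)

box = crop_box(feature_xy=(120, 95), std_xy=(40, 35), std_size=(100, 100))
# box == (80, 60, 180, 160)
```

Cropping with this box leaves the feature point at (40, 35) of the 100x100 output, which is what "standard position of the standard region" amounts to.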
Optionally, the user image may also be obtained by:
extracting image features of the user image;
judging whether the image features are within a standard threshold range;
if so, obtaining the user image.
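The three steps above amount to a simple acceptance gate. A minimal sketch, with hypothetical feature names and threshold values:

```python
# Illustrative gate (names and thresholds are assumptions): accept a user
# image only if an extracted quality feature falls inside a standard range.

def accept_image(feature_value, lo, hi):
    """True if the image feature is within the standard threshold range."""
    return lo <= feature_value <= hi

# e.g. a brightness statistic must fall inside an illumination range
assert accept_image(128.0, lo=60.0, hi=200.0)
assert not accept_image(30.0, lo=60.0, hi=200.0)   # too dark: rejected
```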
Optionally, the image feature may specifically be the grayscale histograms of the left and right halves of the face in the user image;
the standard threshold range may be an illumination threshold range.
Optionally, extracting the image feature of the user image may specifically be computing an image quality evaluation index of the user image using a gradient operator;
the standard threshold range may be specifically an image quality evaluation index threshold range.
Optionally, the image feature may specifically be a brightness distribution ratio of a gray level histogram of a human face image;
the standard threshold range may specifically be a standard ratio threshold range.
Optionally, before determining whether the image feature is within the standard threshold range, the method may further include: determining whether the user image is used for registration or for authentication;
if used for registration, determining whether the image feature is within the standard threshold range may mean determining whether it is within a first standard threshold range for registration;
if used for authentication, it may mean determining whether it is within a second standard threshold range for authentication.
Optionally, the method may further include:
performing a gamma transformation on the cropped image;
and filtering the high-frequency part and the low-frequency part by using a filter to obtain an updated user image.
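As a rough illustration of the gamma step (the gamma value and the 8-bit pixel representation are assumptions, and the subsequent band-pass filtering is omitted):

```python
# Sketch of a gamma transformation: each intensity in [0, 255] is
# normalized, raised to a power, and rescaled. With gamma < 1 dark values
# are lifted, compressing the dynamic range before filtering.

def gamma_transform(pixels, gamma=0.5):
    return [round(255 * (p / 255) ** gamma) for p in pixels]

out = gamma_transform([0, 64, 255], gamma=0.5)
# out == [0, 128, 255]: the dark value 64 is lifted, the extremes are fixed
```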
Optionally, the method may further include:
overlaying the cropped user image with a standard face template,
and extracting the portion of the covered user image that falls within the template's preset effective area as the updated cropped user image.
Optionally, the method may further include: extracting Gabor, LBP, and HOG features of the cropped user image as facial feature information.
Optionally, the method may further include: using the AdaBoost algorithm to select among the Gabor, LBP, and HOG features to form the facial feature information.
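Of the listed features, LBP admits a compact illustration. The sketch below is a minimal 3x3 LBP encoder, not the patent's implementation: each pixel is encoded by comparing its eight neighbours to the centre, and block histograms of these codes serve as the texture descriptor.

```python
# Minimal 3x3 LBP sketch (illustrative only): the 8 neighbours are read
# clockwise from the top-left and compared to the centre, giving an
# 8-bit code per pixel.

def lbp_code(patch):
    """patch: 3x3 list of lists of intensities; returns the 8-bit LBP code."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:          # neighbour at least as bright as the centre
            code |= 1 << bit
    return code

code = lbp_code([[9, 9, 1],
                 [1, 5, 1],
                 [9, 9, 9]])
# code == 115 (bits 0, 1, 4, 5, 6 set)
```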
Optionally, the method may further include: obtaining user registration information, determining whether the state of the user registration information meets an update condition, and if so, replacing the original facial feature information contained in the registration information with the facial feature information of the cropped user image.
In a second aspect of embodiments of the present invention, there is provided an apparatus for use in face recognition, which may include:
a face detection unit, configured to perform facial feature point detection on the user image to obtain facial feature point coordinates;
a preprocessing unit, configured to calculate the angle difference between the face and the horizontal direction according to the facial feature point coordinates and rotate the user image until that angle difference meets a preset standard angle; to calculate the size ratio of the face to a preset standard face according to the facial feature point coordinates and scale the user image by that ratio; and to crop the rotated and scaled user image to the standard region size, according to the positions of the facial feature points in the user image, so that the feature points lie at the standard positions of the standard region.
Optionally, the face detection unit may specifically be configured to obtain the left-eye and right-eye feature point coordinates of the user image through facial feature point detection.
Optionally, the preprocessing unit may specifically be configured to calculate the angle difference between the line connecting the two eye feature points and the horizontal direction according to the left-eye and right-eye feature point coordinates, and to rotate the user image until that angle difference is zero.
Optionally, the preprocessing unit may specifically be configured to calculate the ratio of the distance between the two feature points to a preset first standard distance according to the left-eye and right-eye feature point coordinates.
Optionally, the preprocessing unit may specifically be configured to crop the rotated and scaled user image to the standard region size according to the positions of the two feature points in the user image, with the two feature points located at the standard positions of the standard region.
Optionally, the apparatus may further include:
an image quality evaluation unit, configured to extract an image feature of the user image, determine whether the image feature is within a standard threshold range, and if so, accept the user image.
Optionally, the image quality evaluation unit: the method may specifically be configured to extract grayscale histograms of a left face and a right face of the user image, determine whether the grayscale histograms are within a standard illumination threshold range, and if so, obtain the user image.
Optionally, the image quality evaluation unit: specifically, the method can be configured to calculate an image quality evaluation index scale of the user image by using a gradient operator, determine whether the image quality evaluation index scale is within an image quality evaluation index scale threshold range, and if so, obtain the user image.
Optionally, the image quality evaluation unit: the method can be specifically configured to extract the brightness distribution ratio of the gray histogram of the face image of the user image, judge whether the ratio is within the range of a standard ratio threshold value, and if so, obtain the user image.
Optionally, the image quality evaluation unit: the determining whether the image feature is within the standard threshold range may be determining whether the image feature is within a first standard threshold range for registration, if used, and determining whether the image feature is within a second standard threshold range for authentication, if used, may be determining whether the image feature is within the standard threshold range.
Optionally, the apparatus may further include an illumination processing unit, configured to perform a gamma transformation on the cropped image and to filter out the high- and low-frequency portions with a filter, obtaining an updated user image.
Optionally, the apparatus may further include an interference removal unit, configured to overlay the cropped user image with a standard face template and to extract the portion of the covered image that falls within the template's preset effective area as the updated cropped user image.
Optionally, the apparatus may further include a feature extraction unit, configured to extract Gabor, LBP, and HOG features of the cropped user image as facial feature information.
Optionally, the feature extraction unit may further be configured to select among the Gabor, LBP, and HOG features using the AdaBoost algorithm to form the facial feature information.
Optionally, the apparatus may further include an update unit, configured to obtain user registration information, determine whether the state of the user registration information meets an update condition, and if so, replace the original facial feature information contained in the registration information with the facial feature information of the cropped user image.
Through the description of the technical scheme, the invention has the following beneficial effects:
According to the method and apparatus provided by the embodiments of the invention, after facial feature point detection yields the facial feature point coordinates, the user image undergoes normalized face processing, including angle adjustment, size adjustment, face position adjustment, and image cropping, based on the positions of those coordinates in the image. User images of unstable quality thus yield aligned face images of consistent size after processing, the facial features extracted for the same user become more consistent, the influence of low-quality images on face recognition is eliminated, subsequent recognition difficulty is reduced, and recognition accuracy is improved.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the present invention;
FIG. 2 schematically illustrates a user image under an application scenario of the present invention;
FIG. 3 schematically illustrates a flow chart of a method for use in face recognition according to an embodiment of the present invention;
fig. 4 schematically shows a schematic diagram of a left-eye feature point according to an embodiment of the present invention;
FIG. 5 schematically illustrates a diagram of a standard face template according to an embodiment of the invention;
fig. 6 schematically shows a composition diagram of an apparatus used in face recognition according to an embodiment of the present invention;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 illustrates a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the present invention. As shown in fig. 1, computing system 100 may include: a Central Processing Unit (CPU) 101, a Random Access Memory (RAM) 102, a Read Only Memory (ROM) 103, a system bus 104, a hard disk controller 105, a keyboard controller 106, a serial interface controller 107, a parallel interface controller 108, a display controller 109, a hard disk 110, a keyboard 111, a serial external device 112, a parallel external device 113, and a display 114. Among these devices, coupled to the system bus 104 are a CPU101, a RAM102, a ROM103, a hard disk controller 105, a keyboard controller 106, a serial controller 107, a parallel controller 108, and a display controller 109. The hard disk 110 is coupled to the hard disk controller 105, the keyboard 111 is coupled to the keyboard controller 106, the serial external device 112 is coupled to the serial interface controller 107, the parallel external device 113 is coupled to the parallel interface controller 108, and the display 114 is coupled to the display controller 109. It should be understood that the block diagram of the architecture depicted in FIG. 1 is for purposes of illustration only and is not intended to limit the scope of the present invention. In some cases, certain devices may be added or subtracted as the case may be.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software, referred to herein generally as a "circuit," "module," or "system." Furthermore, in some embodiments, the invention may also be embodied in the form of a computer program product in one or more computer-readable media having computer-readable program code embodied in the medium.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Embodiments of the present invention will be described below with reference to flowchart illustrations of methods and block diagrams of apparatuses (or systems) of embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
According to the embodiment of the invention, a method and a device used in face recognition are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventor found that in face recognition, unstable user image quality is a main cause of recognition failure. This instability is mainly reflected in inconsistencies in the angle, size, and position of the face within the user image, and in the image size itself. If the consistency of the face within the user image can be ensured, the consistency of the features extracted from user images of varying quality can be greatly improved.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Referring to fig. 2 first, fig. 2 is a user image for face recognition, where the image quality is poor, and the embodiment of the present invention may improve the consistency of the user image features in the application scenario.
Exemplary method
In connection with the application scenario of fig. 2, a method for this application scenario according to an exemplary embodiment of the present invention is described below with reference to fig. 3. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Referring to fig. 3, an exemplary flowchart of a method used in face recognition according to the present invention is shown, and the exemplary method may include, for example:
S301: performing facial feature point detection on the user image to obtain facial feature point coordinates;
S302: calculating the angle difference between the face and the horizontal direction according to the facial feature point coordinates, and rotating the user image until that angle difference meets a preset standard angle;
S303: calculating the size ratio of the face to a preset standard face according to the facial feature point coordinates, and scaling the user image by that ratio;
S304: according to the positions of the facial feature points in the user image, cropping the rotated and scaled user image to the standard region size, with the feature points located at the standard positions of the standard region.
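The geometry behind steps S302 and S303 can be sketched as follows, assuming the feature points are the left-eye and right-eye coordinates (the standard inter-eye distance is an illustrative value, not from the patent):

```python
# Sketch of the alignment geometry: the roll angle is the angle of the
# eye-to-eye line relative to the horizontal, and the scale factor is the
# standard inter-eye distance divided by the measured one.
import math

def alignment_params(left_eye, right_eye, std_eye_dist=60.0):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle_deg = math.degrees(math.atan2(dy, dx))  # rotate by -angle_deg to level the eyes
    scale = std_eye_dist / math.hypot(dx, dy)     # scale to the standard face size
    return angle_deg, scale

angle, scale = alignment_params((100, 100), (160, 160))
# the eyes are 45 degrees off horizontal; inter-eye distance is about 84.85,
# so the image must be shrunk by a factor of about 0.707
```

In an OpenCV-based implementation, these two values would feed something like `cv2.getRotationMatrix2D` followed by `cv2.warpAffine`, but the patent does not mandate a particular library for this stage.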
By applying the method of this embodiment, after facial feature point detection yields the facial feature point coordinates, the face in the image undergoes normalized processing, including angle adjustment, size adjustment, face position adjustment, and image cropping. User images of unstable quality thus yield aligned face images of consistent size, the facial features extracted for the same user become more consistent, the influence of low-quality images on face recognition is eliminated, subsequent recognition difficulty is reduced, and recognition accuracy is improved.
It should be noted that the method of the present invention may be applied to two-dimensional face recognition, and may also be applied to three-dimensional face recognition, and specifically, according to implementation needs, the two-dimensional image or the three-dimensional image may be correspondingly processed as an input of the method of the present invention, and the present invention is not limited in this respect.
The facial feature point detection of step S301 is now described in detail. In an embodiment of the invention, the facial feature point coordinates may be obtained, for example, through the following steps:
converting the user image into a gray scale image;
calling a face detection module of an OpenCV image processing library to perform face detection on the gray level image, wherein the face detection algorithm based on Haar features and cascade AdaBoost classification is used in the OpenCV library;
when a face is detected, determining coordinates of four vertexes of a face rectangular area in the whole gray level image;
and processing the four vertex coordinates and the grayscale image as inputs to a facial feature point detection model to obtain the facial feature point coordinates, where the model may be trained in advance on user images annotated with facial feature points using the ASM (Active Shape Model) algorithm.
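As a hedged illustration of the grayscale conversion in the first step (the Haar-cascade detection itself is not reproduced here; OpenCV's `cvtColor` applies the same ITU-R BT.601 luma weights shown below):

```python
# Converting an RGB pixel to grayscale with the common BT.601 luma
# weights, as used by typical image libraries for BGR/RGB-to-gray.

def rgb_to_gray(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

g = rgb_to_gray(255, 0, 0)   # a pure-red pixel maps to gray level 76
```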
The training of the user image marked with the face feature points by using the ASM algorithm specifically includes the following steps:
establishing a data set of N training images, each of which must contain a human face, where the acquisition conditions of the training images are as similar as possible to those likely to be encountered in practice; for example, the training dataset should cover various lighting conditions, user expressions, head angles, and cases with and without glasses;
for each image in the training dataset the following is performed:
manually annotating M facial feature points in each image, including the four vertices of the face rectangle, to obtain the coordinates of the feature points within the whole image; for example, M may be 68, and the annotated feature points may include left-eye, right-eye, nasion, nose-tip, mouth-corner, and cheek-contour feature points;
detecting the face in each image by using OpenCV (open source computer vision library) to obtain coordinates of four vertexes of a face rectangular area in the image;
and inputting the training data set and all coordinate information into an ASM algorithm for training to obtain a feature point detection model.
It should be noted that in step S301 the user image undergoes facial feature point detection to obtain facial feature point coordinates, where the facial feature points may specifically be the left-eye and right-eye feature points, or other facial feature points such as the nasion and nose-tip feature points, selected according to implementation requirements. Steps S302 to S304 of the present invention are described in detail below, first taking the left-eye and right-eye feature points as examples:
in step S302, the angle difference between the face and the horizontal direction is calculated from the facial feature point coordinates; specifically, the angle between the line connecting the two feature points and the horizontal direction is calculated from the left-eye and right-eye feature point coordinates;
in step S302, the user image is rotated until the angle difference between the face and the horizontal direction meets a preset standard angle; specifically, until the angle between the line connecting the two feature points and the horizontal direction is zero;
in step S303, the size ratio of the face to a preset standard face is calculated from the facial feature point coordinates; specifically, the ratio of the distance between the two feature points to a preset standard distance is calculated from the left-eye and right-eye feature point coordinates;
in step S303, the user image is scaled according to the size ratio; specifically, it may be scaled until the distance between the two feature points equals the preset standard distance, or until the ratio of that distance to the preset standard distance equals a preset ratio, which may be set in advance according to implementation requirements;
in step S304, the rotated and scaled user image is cropped to the standard region size according to the positions of the facial feature points in the user image, such that the feature points lie at the standard positions of the standard region; specifically, the image is cropped according to the positions of the two feature points so that, for example, the distances from one of the two feature points to at least two non-parallel edges of the cropped image are preset standard distances; as shown in fig. 3, the distance a from the left-eye feature point to the top edge and the distance b to the left edge of the user image are the preset standard distances;
cropping to the standard region size in step S304 may specifically mean cropping the outer frame of the user image to a preset standard size; as shown in fig. 4, the height H and width W of the user image are the preset standard sizes.
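The geometry behind steps S302 to S304 can be sketched as below, assuming the eyes should end up on a horizontal line at a standard inter-eye distance. Here `std_eye_dist`, `a`, and `b` are illustrative parameters rather than values from the patent, and in practice the rotation and scaling would be applied in a single affine warp (e.g. OpenCV's `getRotationMatrix2D` plus `warpAffine`):

```python
import numpy as np

def alignment_params(left_eye, right_eye, std_eye_dist=60.0):
    """Rotation angle (degrees, 0 = horizontal eye line) and the scale
    factor mapping the current inter-eye distance to the standard one."""
    (xl, yl), (xr, yr) = left_eye, right_eye
    angle = np.degrees(np.arctan2(yr - yl, xr - xl))
    scale = std_eye_dist / np.hypot(xr - xl, yr - yl)
    return angle, scale

def crop_origin(left_eye_aligned, a=30, b=30):
    """Top-left corner of the standard crop so the aligned left-eye point
    lands at distance b from the left edge and a from the top edge."""
    x, y = left_eye_aligned
    return (x - b, y - a)
```

Rotating by `-angle` and scaling by `scale` brings the eye line horizontal at the standard distance, after which the crop origin fixes the standard position within the standard region.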
The following describes steps S302 to S304 in detail, this time taking the nasion and nose-tip feature points as examples:
in step S302, the angle difference between the face and the horizontal direction is calculated from the facial feature point coordinates; specifically, the angle between the line connecting the two feature points and the horizontal direction is calculated from the nasion and nose-tip feature point coordinates;
in step S302, the user image is rotated until the angle difference between the face and the horizontal direction meets a preset standard angle; specifically, until the angle between the line connecting the two feature points and the horizontal direction is 90 degrees;
in step S303, the size ratio of the face to a preset standard face is calculated from the facial feature point coordinates; specifically, the ratio of the distance between the two feature points to a preset standard distance is calculated from the nasion and nose-tip feature point coordinates;
in step S303, the user image is scaled according to the size ratio; specifically, it may be scaled until the distance between the two feature points equals the preset standard distance, or until the ratio of that distance to the preset standard distance equals a preset ratio, which may be set in advance according to implementation requirements;
in step S304, the rotated and scaled user image is cropped to the standard region size according to the positions of the facial feature points in the user image, such that the feature points lie at the standard positions of the standard region; specifically, the image is cropped according to the positions of the two feature points so that, for example, the distance a from the nose-tip feature point to the top edge and the distance b to the left edge of the cropped image are preset standard distances;
it should be noted that the user image described in the present invention may be used directly after acquisition by a camera, or may be used after other preprocessing. To improve the success rate of face recognition, the present invention further provides the following quality evaluation preprocessing steps, which retain user images of qualified quality and screen out images of unqualified quality; for example, the quality evaluation preprocessing steps may include:
extracting image features of the user image;
judging whether the image features are within a standard threshold range;
if so, obtaining the user image.
The image features extracted from the user image may be set according to implementation requirements, and may differ from the face feature information extracted during face recognition in the embodiments below. For example, to assess the illumination, the grayscale histogram distributions of the left and right face regions may be compared, and a reasonable illumination threshold range selected to screen out images that are too bright or too dark; specifically:
the image features may specifically be the grayscale histograms of the left and right halves of the face in the user image;
the standard threshold range may specifically be an illumination threshold range.
As another example, to handle defocus blur, motion blur, and overly low resolution, a gradient operator may be used to compute an image quality evaluation index of the image, such as a sharpness measure, and a reasonable index threshold selected to screen out blurred images; specifically:
extracting the image features of the user image may specifically be computing an image quality evaluation index of the user image with a gradient operator;
the standard threshold range may specifically be an image quality evaluation index threshold range.
As another example, images whose left-to-right brightness ratio falls below a standard ratio threshold may be screened out, removing so-called 'yin-yang face' images that are lit on one side only; specifically:
the image features may specifically be the brightness distribution ratio of the grayscale histograms of the face image;
the standard threshold range may specifically be a standard ratio threshold range.
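A compact sketch of these three screens, using only NumPy; the threshold values are illustrative placeholders, not the patent's tuned standard threshold ranges:

```python
import numpy as np

def quality_checks(gray, dark=60, bright=190, ratio_lo=0.6, sharp_min=50.0):
    """Return (ok, reason) for a grayscale face image; the thresholds are
    illustrative stand-ins for the standard threshold ranges."""
    mean = gray.mean()
    if mean < dark:
        return False, "face region too dark"
    if mean > bright:
        return False, "face region too bright"
    # left/right illumination ratio screens out 'yin-yang face' images
    h, w = gray.shape
    left, right = gray[:, :w // 2].mean(), gray[:, w // 2:].mean()
    if min(left, right) / max(left, right) < ratio_lo:
        return False, "uneven left/right illumination"
    # gradient-based sharpness index: variance of a 4-neighbour Laplacian
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    if lap.var() < sharp_min:
        return False, "blurred or low-resolution image"
    return True, "ok"
```

The returned reason string maps directly onto the user-facing messages listed below.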
It can be understood that if the image features are not within the threshold range, a message such as the following may be returned to the user, according to which image feature was extracted and which standard threshold range was checked, for example:
The face region is too bright! The ambient light is too strong; please adjust it and capture the image again;
or,
The face region is too dark! The ambient light is too weak; please adjust it and capture the image again;
or,
Uneven left/right illumination ('yin-yang face')! The face is unevenly lit; please adjust the lighting and capture the face again;
or,
Resolution too low, defocus blur, or motion blur! Too few camera pixels or camera shake has blurred the image.
Through the above quality evaluation preprocessing steps, user images of reliable quality can therefore be obtained, effectively improving the success rate of face recognition. In addition, the registration and authentication stages of face recognition place different requirements on user image quality: to extract more complete and effective face feature information, the registration stage generally requires higher image quality, while to reduce the difficulty of user authentication, the authentication stage can tolerate slightly lower image quality. The present invention therefore proposes that, before judging whether the image features are within the standard threshold range, the method may further include:
determining whether the user image is used for registration or authentication;
if used for registration, judging whether the image features are within the standard threshold range may be judging whether they are within a first standard threshold range for registration;
if used for authentication, judging whether the image features are within the standard threshold range may be judging whether they are within a second standard threshold range for authentication.
In addition, to reduce the influence of ambient illumination on the pixel values of the acquired user image and keep the image quality stable across various lighting environments, the present invention further provides an illumination preprocessing method, so that user images acquired under different illumination have approximately the same illumination distribution after processing, minimizing the influence of illumination on the user image; specifically, the method may further include:
performing gamma transformation on the cropped user image;
and filtering out the high-frequency and low-frequency parts with a filter to obtain an updated user image.
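This resembles the well-known gamma-plus-band-pass normalization; a NumPy-only sketch follows, where the gamma value and the two Gaussian widths (`s1`, `s2`) are illustrative choices, and the difference of two Gaussian blurs acts as the filter that suppresses both the low-frequency shading and the highest-frequency noise:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)

def illumination_normalize(gray, gamma=0.2, s1=1.0, s2=2.0):
    """Gamma transform followed by a difference-of-Gaussians band-pass:
    the wide blur removes low-frequency shading, the narrow one noise."""
    g = (gray.astype(np.float64) / 255.0) ** gamma
    return gaussian_blur(g, s1) - gaussian_blur(g, s2)
```

A uniformly lit patch normalizes to values near zero, which is exactly the invariance the preprocessing aims at.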
However, since the face area is not strictly rectangular, the user image processed by the above embodiments generally still contains some background in the lower-left and lower-right corners, and the forehead area is sometimes disturbed by hair. Therefore, in an embodiment of the present invention, to further increase the proportion of effective face area in the user image, it is further proposed to remove the background interference outside the face by the following method, which may specifically include:
covering the cropped user image with a standard face template, for example the template shown in fig. 5;
and intercepting the part of the covered user image within the template's preset effective area as the updated cropped user image.
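The template of fig. 5 is not reproduced here; as a stand-in, a simple elliptical valid region already removes the corner background, which the following sketch illustrates:

```python
import numpy as np

def apply_face_mask(img, background=0):
    """Keep pixels inside an inscribed ellipse (a stand-in for the
    standard face template's valid area); blank everything else."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    mask = ((xx - cx) / (w / 2.0)) ** 2 + ((yy - cy) / (h / 2.0)) ** 2 <= 1.0
    out = np.full_like(img, background)
    out[mask] = img[mask]
    return out
```

A real template would additionally trim the forehead margin where hair interferes.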
Therefore, by extracting face feature information from the user image obtained after the processing of the above embodiments, the consistency of the face feature information can be ensured and the success rate of face recognition improved.
In the present invention, in order for the extracted face feature information to describe the facial features effectively, extensive comparative experiments were carried out, and the Gabor, LBP, and HOG features of the cropped user image were finally chosen as the face feature information to be extracted. The three features are explained in detail below:
Among them, the Gabor feature is the feature closest to human visual characteristics in the field of face recognition. The present invention mainly uses the multi-scale, multi-orientation characteristics of Gabor features to extract effective information from the face region, described as follows:
the two-dimensional Gabor wavelet is defined as:
$$\psi(\kappa,z)=\frac{\|\kappa\|^{2}}{\sigma^{2}}\,\exp\!\left[-\frac{\|\kappa\|^{2}\|z\|^{2}}{2\sigma^{2}}\right]\cdot\left[\exp(i\kappa z)-\exp\!\left(-\frac{\sigma^{2}}{2}\right)\right]$$
in the formula: $\sigma$ is a constant related to the wavelet frequency bandwidth; $z=(x,y)$ are the spatial position coordinates; $\kappa$ determines the orientation and scale of the Gabor kernel. When sampling 8 orientations and 5 scales, $\kappa$ can be written as $\kappa_{\mu,v}=\kappa_{v}e^{i\phi_{\mu}}$, where $\kappa_{v}=\kappa_{max}/f^{v}$ is the sampling scale with scale index $v\in\{0,1,\ldots,4\}$, and $\phi_{\mu}=\pi\mu/8$ is the sampling orientation with orientation index $\mu\in\{0,1,\ldots,7\}$; $\kappa_{max}$ is the maximum frequency and $f$ is the kernel spacing factor in the frequency domain. Setting the parameters $\kappa_{max}=\pi/2$, $f=\sqrt{2}$, and $\sigma=2\pi$ gives good wavelet characterization and discrimination. The Gabor transform of an image $I$ is its convolution with the Gabor kernel:
$$J_{\kappa}(z)=I(z)*\psi(\kappa,z);$$
Let the amplitude and phase of $J_{\kappa}(z)$ be $A_{\kappa}$ and $\phi_{\kappa}$, so that $J_{\kappa}(z)=A_{\kappa}e^{i\phi_{\kappa}}$; combining the $J_{\kappa}(z)$ of different scales and orientations constructs the Gabor feature vector of the image at position $z$.
the similarity of Gabor features J and J' without considering the phase difference is defined as:
$$S_{A}(J,J')=\frac{\sum_{\kappa}A_{\kappa}A'_{\kappa}}{\sqrt{\sum_{\kappa}A_{\kappa}^{2}\,\sum_{\kappa}\left(A'_{\kappa}\right)^{2}}}$$
When extracting Gabor features of face images, a bank of Gabor filters at multiple scales and orientations is generally adopted, with parameters selected according to the characteristics of the images and neurophysiological conclusions. Here the filter bank comprises 8 orientations ($\mu=0,1,\ldots,7$) and 5 scales ($\kappa_{max}=\pi/2$; $f=\sqrt{2}$; $v=0,1,2,3,4$), and with $\sigma=2\pi$ the bandwidth of each filter is about one octave.
Gabor features have good spatial locality and orientation selectivity, and a certain robustness to illumination and pose, so they have been applied successfully in face recognition.
In the invention, the face image can be normalized to 80 × 80 and Gabor features extracted at 5 scales and 8 orientations, giving an 80 × 80 × 40 = 256,000-dimensional feature vector.
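A NumPy construction of one kernel of this bank, following the wavelet definition above with the same parameter values; the kernel size of 31 is an illustrative choice, and the response for a whole image would be its 2-D convolution with each of the 40 kernels:

```python
import numpy as np

def gabor_kernel(v, mu, size=31, kmax=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi):
    """Complex Gabor kernel at scale v (0..4) and orientation mu (0..7),
    following the two-dimensional wavelet definition above."""
    k = kmax / f ** v                      # kappa_v = kappa_max / f^v
    phi = np.pi * mu / 8.0                 # phi_mu = pi * mu / 8
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2, z2 = k * k, x * x + y * y
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2.0 * sigma ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2.0)
    return envelope * carrier
```

The amplitudes of the 40 complex responses at each pixel are what get stacked into the 256,000-dimensional vector.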
The LBP (Local Binary Pattern) feature represents the texture information of an image: the basic LBP operator takes the local binary pattern value over the eight surrounding neighbours as the new value of the central pixel. Given that an LBP histogram extracted at a single scale over uniformly blocked images cannot effectively adapt to a face with offset, the present invention normalizes the face image to 130 × 150 and then divides it at 5 block scales (10 × 11, 11 × 13, 13 × 15, 15 × 18, and 18 × 21) with a 3-pixel overlap, obtaining 8409 sub-regions in total; finally, the uniform LBP histogram vector (59 dimensions, with P = 8 neighbours at radius R = 2) of each sub-region is extracted, giving an 8409 × 59 = 496,131-dimensional histogram vector. For two images to be compared, the chi-square distance between corresponding sub-region LBP histogram vectors can be computed as the similarity measure.
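The basic operator and the chi-square comparison can be sketched as follows, using the radius-1, 8-neighbour LBP for brevity rather than the P(8, 2) variant, and full 256-bin histograms rather than the 59-bin uniform-pattern ones:

```python
import numpy as np

def lbp_basic(gray):
    """Basic 8-neighbour LBP: each interior pixel becomes the 8-bit code
    of >= comparisons with its neighbours (clockwise from top-left)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    neigh = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
             g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    code = np.zeros_like(c)
    for bit, n in enumerate(neigh):
        code |= (n >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes for one sub-region."""
    h = np.bincount(lbp_basic(gray).ravel(), minlength=bins).astype(np.float64)
    return h / h.sum()

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram vectors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

Concatenating the per-sub-region histograms over all 8409 blocks yields the full 496,131-dimensional vector described above.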
The HOG (Histogram of Oriented Gradients) feature represents the appearance and orientation information of the image with histograms of gradient orientations. In the present invention, the face image is normalized to 80 × 80 and then divided at 5 block scales (4 × 4, 6 × 6, 8 × 8, 10 × 10, and 12 × 12) with 50% overlap, obtaining 2876 sub-regions in total; finally, the HOG feature of each sub-region is extracted, giving a 2876 × 16 = 46,016-dimensional feature vector. For two images to be compared, the chi-square distance between the HOG histogram vectors of corresponding sub-regions can be computed as the similarity measure.
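A per-sub-region orientation histogram consistent with the 16-dimensional figure above can be sketched as below; the binning and normalization choices are illustrative:

```python
import numpy as np

def hog_cell_histogram(gray, n_bins=16):
    """Magnitude-weighted orientation histogram for one sub-region;
    16 bins matches the 2876 x 16 = 46016 figure above."""
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)        # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Running this over every one of the 2876 overlapping sub-regions and concatenating gives the 46,016-dimensional HOG vector.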
Each of the three extracted features is a high-dimensional vector: in the embodiments described above, the Gabor feature has 256,000 dimensions, the LBP feature 496,131, and the HOG feature 46,016. Using these three high-dimensional vectors directly for recognition incurs large overhead in both time and space and is inconvenient in practice. The present invention therefore further proposes using the AdaBoost algorithm to select among the Gabor, LBP, and HOG features as the face feature information, achieving feature dimension reduction, obtaining the features most beneficial to classification and recognition, and improving the speed of face feature extraction in the verification stage. Specifically, the three features can each be trained and evaluated with the AdaBoost method, finally selecting the 100-dimensional subset of Gabor features most beneficial to recognition, the LBP features of 100 sub-regions, and the HOG features of 100 sub-regions. The AdaBoost algorithm is a supervised machine learning algorithm mainly used for binary classification. From a training set (chiefly feature values and their corresponding classes) it automatically generates a number of weak classifiers and then combines them into a final strong classifier. At run time, AdaBoost evaluates the feature values of an input sample and outputs a classification result. In addition, AdaBoost can perform feature dimension reduction, automatically selecting the few most discriminative features from many.
It should be noted that the face feature information obtained after the processing in the foregoing embodiments may be used to register a user, and may also be used to authenticate a user:
if the face feature information is used for registering the user, the face feature information can be stored in the database.
If used to authenticate a user, the authentication process may include:
obtaining the identity claimed by the user;
acquiring the user's image through a camera;
processing the image as in the above embodiments to obtain its face feature information;
retrieving from the database the face feature information corresponding to the claimed identity;
comparing the face feature information obtained through the above processing with the face feature information corresponding to the claimed identity;
it should be noted that there may be multiple pieces of registered face feature information, and therefore, comparison and determination may be performed in sequence.
If the comparison result is consistent, the authentication is successful;
and if the comparison result is inconsistent, the authentication fails.
The specific implementation of the face feature information comparison may be as follows:
after the processing of the above embodiments, the face feature information is obtained and represented as a vector F;
the face feature information corresponding to the claimed identity is retrieved from the database and represented as a vector F';
the difference ΔF = |F − F'| is computed element-wise; ΔF reflects the difference between the two sets of feature values, so whether the two are consistent can be judged from ΔF against a given criterion. In an embodiment of the present invention, ΔF is fed into the binary classifier generated while performing feature dimension reduction with the AdaBoost algorithm, yielding the judgment result.
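That comparison step can be sketched as below; `score_fn` stands in for the trained AdaBoost strong classifier, and the threshold is illustrative:

```python
import numpy as np

def feature_delta(f_probe, f_enrolled):
    """Element-wise absolute difference between probe and enrolled
    feature vectors; this is the vector the classifier judges."""
    return np.abs(np.asarray(f_probe, float) - np.asarray(f_enrolled, float))

def verify(f_probe, f_enrolled, score_fn, threshold=0.5):
    # score_fn plays the role of the AdaBoost strong classifier's score
    return bool(score_fn(feature_delta(f_probe, f_enrolled)) >= threshold)
```

With a toy score such as `lambda d: 1.0 - d.mean()`, identical vectors verify successfully and distant ones do not; with multiple enrolled templates, each can be compared in sequence as described above.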
In addition, to cope with the influence of changes in a user's face on recognition, the present invention also proposes automatic updating of the face feature information. For example, the method may further include: obtaining the user registration information, judging whether its state meets an update condition, and if so, replacing the original face feature information contained in the registration information with the face feature information of the cropped user image; here the state of the user registration information may specifically be the time elapsed from the user's registration to the present, and the update condition may specifically be that this interval equals a preset periodic update interval.
Exemplary devices
Having described the method of an exemplary embodiment of the present invention, the apparatus used in face recognition according to an exemplary embodiment of the present invention is described next with reference to fig. 6. As shown, the apparatus may include:
the face detection unit 601: configured to perform facial feature point detection on the user image to obtain facial feature point coordinates;
the preprocessing unit 602: configured to calculate the angle difference between the face and the horizontal direction from the facial feature point coordinates and rotate the user image until that angle difference meets a preset standard angle; to calculate the size ratio of the face to a preset standard face from the facial feature point coordinates and scale the user image according to that ratio; and to crop the rotated and scaled user image to the standard region size according to the positions of the facial feature points in the user image, such that the facial feature points lie at the standard positions of the standard region.
In an embodiment of the present invention, the face detection unit 601 may specifically be configured to obtain the left-eye and right-eye feature point coordinates of the user image through facial feature point detection.
Accordingly, the preprocessing unit 602 may specifically be configured to calculate the angle between the line connecting the two feature points and the horizontal direction from the left-eye and right-eye feature point coordinates, and rotate the user image until that angle is zero.
The preprocessing unit 602 may specifically be configured to calculate the ratio of the distance between the two feature points to a preset standard distance from the left-eye and right-eye feature point coordinates.
The preprocessing unit 602 may specifically be configured to crop the rotated and scaled user image to the standard region size according to the positions of the two feature points in the user image, such that the two feature points lie at the standard positions of the standard region.
In another embodiment of the present invention, the apparatus may further include the image quality evaluation unit 603: configured to extract the image features of the user image, judge whether the image features are within a standard threshold range, and if so, obtain the user image.
The image quality evaluation unit 603 may specifically be configured to extract the grayscale histograms of the left and right halves of the face in the user image, judge whether they are within a standard illumination threshold range, and if so, obtain the user image.
The image quality evaluation unit 603 may specifically be configured to compute an image quality evaluation index of the user image with a gradient operator, judge whether the index is within an image quality evaluation index threshold range, and if so, obtain the user image.
The image quality evaluation unit 603 may specifically be configured to extract the brightness distribution ratio of the grayscale histogram of the face image, judge whether the ratio is within a standard ratio threshold range, and if so, obtain the user image.
The image quality evaluation unit 603 may further be configured to determine whether the user image is used for registration or authentication; if for registration, judging whether the image features are within the standard threshold range may be judging whether they are within a first standard threshold range for registration, and if for authentication, judging whether they are within a second standard threshold range for authentication.
In another embodiment of the present invention, the apparatus may further include the illumination processing unit 604: configured to perform a gamma transformation on the cropped user image and filter out the high-frequency and low-frequency parts with a filter to obtain an updated user image.
In still another embodiment of the present invention, the apparatus may further include the interference removal unit 605: configured to cover the cropped user image with a standard face template and intercept the part of the covered image within the template's preset effective area as the updated cropped user image.
The apparatus according to the foregoing embodiments may further include the feature extraction unit 606: configured to extract the Gabor, LBP, and HOG features of the cropped user image as the face feature information.
The feature extraction unit 606 may further be configured to select among the Gabor, LBP, and HOG features with the AdaBoost algorithm as the face feature information.
As a preferred embodiment, the apparatus of the present invention may further include: and the updating unit 607 is configured to obtain the user registration information, determine whether the state of the user registration information meets the updating condition, and if so, replace the original face feature information contained in the user registration information with the face feature information of the cropped user image.
It should be noted that although the above detailed description mentions sub-units of the apparatus used in face recognition, this division is not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the units described above may be embodied in a single unit; conversely, the features and functions of one unit described above may be further divided among, and embodied by, a plurality of units.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order; additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be broken down into multiple steps.
Use of the verbs "comprise" and "include" and their conjugations in this application does not exclude the presence of elements or steps other than those stated. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, and the division into aspects is for convenience of description only; features in those aspects may be combined to advantage. The invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims, whose scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (30)

A preprocessing unit: configured to calculate the angle difference between the face and the horizontal direction from the facial feature point coordinates and rotate the user image until the angle difference meets a preset standard angle; to calculate the size ratio of the face to a preset standard face from the facial feature point coordinates and scale the user image according to the size ratio; and to crop the rotated and scaled user image to the standard region size according to the positions of the facial feature points in the user image, such that the facial feature points lie at the standard positions of the standard region.
Application CN201210592215.8A, filed 2012-12-30: "A kind of method and apparatus used in recognition of face"; status Active; granted as CN103914676B (en).

Publications (2)

CN103914676A, published 2014-07-09
CN103914676B, granted 2017-08-25

Family ID: 51040346

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201210592215.8A (Active; granted as CN103914676B (en)) | A kind of method and apparatus used in recognition of face | 2012-12-30 | 2012-12-30

Country Status (1)

Country | Link
CN (1) | CN103914676B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20070098229A1 (en)* | 2005-10-27 | 2007-05-03 | Quen-Zong Wu | Method and device for human face detection and recognition used in a preset environment
CN1885310A (en)* | 2006-06-01 | 2006-12-27 | 北京中星微电子有限公司 | Human face model training module and method, human face real-time certification system and method
CN102332086A (en)* | 2011-06-15 | 2012-01-25 | 夏东 | Facial identification method based on dual threshold local binary pattern
CN102663361A (en)* | 2012-04-01 | 2012-09-12 | 北京工业大学 | Face image reversible geometric normalization method facing overall characteristics analysis

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104268539B (en)* | 2014-10-17 | 2017-10-31 | 中国科学技术大学 | A kind of high performance face identification method and system
CN104268539A (en)* | 2014-10-17 | 2015-01-07 | 中国科学技术大学 | High-performance human face recognition method and system
CN105117692A (en)* | 2015-08-05 | 2015-12-02 | 福州瑞芯微电子股份有限公司 | Real-time face identification method and system based on deep learning
US10664580B2 | 2016-03-04 | 2020-05-26 | Tencent Technology (Shenzhen) Company Limited | Sign-in system, method, apparatus and server based on facial recognition
CN105809415A (en)* | 2016-03-04 | 2016-07-27 | 腾讯科技(深圳)有限公司 | Human face recognition based check-in system, method and device
CN105809415B (en)* | 2016-03-04 | 2020-04-21 | 腾讯科技(深圳)有限公司 | Check-in system, method and device based on face recognition
CN106022313A (en)* | 2016-06-16 | 2016-10-12 | 湖南文理学院 | Scene-automatically adaptable face recognition method
CN106126572A (en)* | 2016-06-17 | 2016-11-16 | 中国科学院自动化研究所 | Image search method based on area validation
CN106126572B (en)* | 2016-06-17 | 2019-06-14 | 中国科学院自动化研究所 | Image retrieval method based on region verification
CN106778925A (en)* | 2016-11-03 | 2017-05-31 | 五邑大学 | A kind of super complete face automatic registration method of the attitude of recognition of face and its device
CN107423684A (en)* | 2017-06-09 | 2017-12-01 | 湖北天业云商网络科技有限公司 | A kind of fast face localization method and system applied to driver fatigue detection
CN110832498A (en)* | 2017-06-29 | 2020-02-21 | 皇家飞利浦有限公司 | Blur the facial features of objects in an image
CN110832498B (en)* | 2017-06-29 | 2024-04-12 | 皇家飞利浦有限公司 | Blur the facial features of objects in an image
CN109429519A (en)* | 2017-06-30 | 2019-03-05 | 北京嘀嘀无限科技发展有限公司 | System and method for verifying the authenticity of certificate photograph
CN107633209A (en)* | 2017-08-17 | 2018-01-26 | 平安科技(深圳)有限公司 | Electronic installation, the method and storage medium of dynamic video recognition of face
CN107610370A (en)* | 2017-08-29 | 2018-01-19 | 深圳怡化电脑股份有限公司 | Not plug-in card ATM and not plug-in card financial trade method
CN107729879A (en)* | 2017-11-14 | 2018-02-23 | 北京进化者机器人科技有限公司 | Face identification method and system
CN108010009A (en)* | 2017-12-15 | 2018-05-08 | 北京小米移动软件有限公司 | A kind of method and device for removing interference figure picture
CN108010009B (en)* | 2017-12-15 | 2021-12-21 | 北京小米移动软件有限公司 | Method and device for removing interference image
CN108492344A (en)* | 2018-03-30 | 2018-09-04 | 中国科学院半导体研究所 | A kind of portrait-cartoon generation method
CN108549487A (en)* | 2018-04-23 | 2018-09-18 | 网易(杭州)网络有限公司 | Virtual reality exchange method and device
CN108898628A (en)* | 2018-06-21 | 2018-11-27 | 北京纵目安驰智能科技有限公司 | Three-dimensional vehicle object's pose estimation method, system, terminal and storage medium based on monocular
CN109040842A (en)* | 2018-08-16 | 2018-12-18 | 上海哔哩哔哩科技有限公司 | Video spectators' emotional information capturing analysis method, device, system and storage medium
CN109359508A (en)* | 2018-08-27 | 2019-02-19 | 贵阳朗玛信息技术股份有限公司 | A kind of head portrait processing method and processing device
CN108921148A (en)* | 2018-09-07 | 2018-11-30 | 北京相貌空间科技有限公司 | Determine the method and device of positive face tilt angle
CN109492540A (en)* | 2018-10-18 | 2019-03-19 | 北京达佳互联信息技术有限公司 | Face exchange method, apparatus and electronic equipment in a kind of image
CN109492540B (en)* | 2018-10-18 | 2020-12-25 | 北京达佳互联信息技术有限公司 | Face exchange method and device in image and electronic equipment
CN110046652A (en)* | 2019-03-18 | 2019-07-23 | 深圳神目信息技术有限公司 | Face method for evaluating quality, device, terminal and readable medium
CN110197137A (en)* | 2019-05-14 | 2019-09-03 | 苏州沃柯雷克智能系统有限公司 | A kind of method, apparatus, equipment and the storage medium of determining palm posture
CN110781473A (en)* | 2019-10-10 | 2020-02-11 | 浙江大华技术股份有限公司 | Method for recognizing and preprocessing face picture
CN110781473B (en)* | 2019-10-10 | 2021-11-16 | 浙江大华技术股份有限公司 | Method for recognizing and preprocessing face picture
CN110909618A (en)* | 2019-10-29 | 2020-03-24 | 泰康保险集团股份有限公司 | Pet identity recognition method and device
CN110807403A (en)* | 2019-10-29 | 2020-02-18 | 中新智擎科技有限公司 | User identity identification method and device and electronic equipment
CN110909618B (en)* | 2019-10-29 | 2023-04-21 | 泰康保险集团股份有限公司 | Method and device for identifying identity of pet
CN110807403B (en)* | 2019-10-29 | 2022-12-02 | 中新智擎科技有限公司 | User identity identification method and device and electronic equipment
CN110969189A (en)* | 2019-11-06 | 2020-04-07 | 杭州宇泛智能科技有限公司 | Face detection method and device and electronic equipment
CN110969189B (en)* | 2019-11-06 | 2023-07-25 | 杭州宇泛智能科技有限公司 | Face detection method and device and electronic equipment
CN110852293B (en)* | 2019-11-18 | 2022-10-18 | 业成科技(成都)有限公司 | Face depth map alignment method and device, computer equipment and storage medium
CN110852293A (en)* | 2019-11-18 | 2020-02-28 | 业成科技(成都)有限公司 | Face depth map alignment method and device, computer equipment and storage medium
CN111028251A (en)* | 2019-12-27 | 2020-04-17 | 四川大学 | Dental picture cutting method, system, equipment and storage medium
CN111179174B (en)* | 2019-12-27 | 2023-11-03 | 成都品果科技有限公司 | Image stretching method and device based on face recognition points
CN111179174A (en)* | 2019-12-27 | 2020-05-19 | 成都品果科技有限公司 | Image stretching method and device based on face recognition points
CN113127658A (en)* | 2019-12-31 | 2021-07-16 | 浙江宇视科技有限公司 | Method, device, medium and electronic equipment for initializing identity recognition database
WO2021184966A1 (en)* | 2020-03-16 | 2021-09-23 | Oppo广东移动通信有限公司 | Identification photograph checking method and apparatus, electronic device, and storage medium
CN111401242A (en)* | 2020-03-16 | 2020-07-10 | Oppo广东移动通信有限公司 | ID photo detection method, device, electronic device and storage medium
CN111768511A (en)* | 2020-07-07 | 2020-10-13 | 湖北省电力装备有限公司 | Staff information recording method and device based on cloud temperature measurement equipment
CN112053381A (en)* | 2020-07-13 | 2020-12-08 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium
CN112070013A (en)* | 2020-09-08 | 2020-12-11 | 安徽兰臣信息科技有限公司 | Method and device for detecting facial feature points of children and storage medium
CN112183421A (en)* | 2020-10-09 | 2021-01-05 | 江苏提米智能科技有限公司 | A face image evaluation method, device, electronic device and storage medium
CN113762060A (en)* | 2021-05-26 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Face image detection method, device, readable medium and electronic device
CN113626786A (en)* | 2021-08-11 | 2021-11-09 | 深圳市宝泽科技有限公司 | Device management method, device, storage medium and device in local area wireless network
CN113763348A (en)* | 2021-09-02 | 2021-12-07 | 北京格灵深瞳信息技术股份有限公司 | Image quality determination method and device, electronic equipment and storage medium
CN114022763A (en)* | 2021-10-28 | 2022-02-08 | 国家能源集团广西电力有限公司 | A foreign object detection method, device and readable storage medium for high-voltage overhead line
CN114638018A (en)* | 2022-03-29 | 2022-06-17 | 润芯微科技(江苏)有限公司 | Method for protecting privacy of vehicle owner driving recorder based on facial recognition

Also Published As

Publication number | Publication date
CN103914676B (en) | 2017-08-25

Similar Documents

Publication | Title
CN103914676B (en) | A kind of method and apparatus used in recognition of face
US10789465B2 | Feature extraction and matching for biometric authentication
US11527055B2 | Feature density object classification, systems and methods
US10956719B2 | Depth image based face anti-spoofing
EP3588364B1 | Face verification within documents
Tapia et al. | Gender classification from iris images using fusion of uniform local binary patterns
KR102290392B1 | Method and apparatus for registering face, method and apparatus for recognizing face
WO2020000908A1 | Method and device for face liveness detection
WO2016149944A1 | Face recognition method and system, and computer program product
WO2019061658A1 | Method and device for positioning eyeglass, and storage medium
JP6351243B2 | Image processing apparatus and image processing method
US20160379038A1 | Valid finger area and quality estimation for fingerprint imaging
CN106650623A | Face detection-based method for verifying personnel and identity document for exit and entry
WO2019014814A1 | Method for quantitatively detecting face headline and intelligent terminal
Whitelam et al. | Accurate eye localization in the short waved infrared spectrum through summation range filters
Lin et al. | A gender classification scheme based on multi-region feature extraction and information fusion for unconstrained images
Poursaberi et al. | Modified multiscale vesselness filter for facial feature detection
Lestriandoko et al. | Multi-resolution face recognition: The behaviors of local binary pattern at different frequency bands
Tandon et al. | An efficient age-invariant face recognition
CN110119691A | A kind of portrait localization method that based on local 2D pattern and not bending moment is searched
CN119646786B | Online education platform user authentication method and system
Gowthamam et al. | Recognition of Occluded Facial Images Using Texture Features at SURF Keypoints
Haider et al. | Facial Based Gender Classification for Real Time Applications
CN120126167A | Document text recognition method, device, electronic device and storage medium
Nayak et al. | Enhancement of the Face Recognition Using Gabor Filter

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
TR01 | Transfer of patent right
TR01 | Transfer of patent right

Effective date of registration:20190626

Address after:311215 Room 102, 6 Blocks, C District, Qianjiang Century Park, Xiaoshan District, Hangzhou City, Zhejiang Province

Patentee after:Hangzhou Yixian Advanced Technology Co., Ltd.

Address before:310013 Room 604-605, 6th floor, 18 Jiaogong Road, Xihu District, Hangzhou City, Zhejiang Province

Patentee before:Hangzhou Langhe Technology Limited

