CN104408402A - Face identification method and apparatus - Google Patents

Face identification method and apparatus

Info

Publication number
CN104408402A
CN104408402A (application CN201410592275.9A)
Authority
CN
China
Prior art keywords
faces
distance
picture
feature
distance weighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410592275.9A
Other languages
Chinese (zh)
Other versions
CN104408402B (en)
Inventor
陈志军
张波
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201410592275.9A (patent CN104408402B)
Publication of CN104408402A
Application granted
Publication of CN104408402B
Legal status: Active
Anticipated expiration

Abstract

The embodiments of the invention disclose a face identification method and apparatus. A first type of feature is obtained from pictures containing human faces; a distance weighting coefficient between two faces is then obtained according to the first type of feature; a weighted similarity between the two faces is obtained using the distance between the two faces and the corresponding distance weighting coefficient; and finally the pictures are clustered according to the weighted similarity, so that pictures belonging to the same person form one cluster and pictures of different persons fall into different clusters. The method considers not only the similarity of the facial features and face shapes of the two faces but also measures the similarity of the faces according to the first type of feature contained in the pictures. This increases the similarity between faces of the same person under different conditions and reduces the similarity between faces of different persons, thereby improving face identification accuracy.

Description

Face recognition method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a face recognition method and apparatus.
Background
A face recognition method can aggregate the face pictures in an album that belong to the same person into one cluster. In the aggregation process, however, the face pictures often differ in pose, expression, illumination and the like, which reduces the similarity between the feature vectors extracted from faces of the same person. As a result, face pictures that belong to the same person cannot be aggregated into one cluster, greatly reducing the accuracy of face recognition.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a face recognition method and device to improve the accuracy of face recognition. The embodiment of the disclosure discloses the following technical scheme:
according to a first aspect of the embodiments of the present disclosure, there is provided a face recognition method, including:
acquiring a first type of feature from a picture containing a human face;
acquiring distance weighting coefficients between any two faces respectively belonging to the two pictures according to the first type of characteristics;
acquiring the distance between the two faces;
obtaining the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
judging whether the two faces belong to the same person or not by using the weighted similarity between the two faces;
the first type of features includes any one or any combination of: gender, age, race, clothes, glasses information, the position of the face in the picture, the shooting time of the picture, and continuous-shooting (burst) information for the person.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the obtaining, according to the first class of features, distance weighting coefficients between any two faces respectively belonging to two pictures includes:
acquiring a characteristic value corresponding to the first type of characteristic;
calculating an absolute difference value between the feature values of the first class features of the two faces;
normalizing the absolute difference value to a preset interval to obtain a normalized feature difference value;
and calculating the distance weighting coefficient between the two human faces according to the normalized feature difference value of the first class of features and the corresponding feature coefficient obtained in advance.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the calculating the distance weighting coefficient between the two faces according to the normalized feature difference and a corresponding feature coefficient obtained in advance includes:
calculating the product between the normalized feature difference value corresponding to the same first type feature and the corresponding feature coefficient;
and when the first class of features comprises a plurality of different features, obtaining a distance weighting coefficient between the two faces according to the accumulated sum of the products corresponding to the plurality of features.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the distance weighting coefficients between the two faces include a first distance weighting coefficient and a second distance weighting coefficient; the first distance weighting coefficient is greater than or equal to 0 and less than or equal to a first preset value, the first preset value being greater than 1; the second distance weighting coefficient is greater than or equal to −1 and less than or equal to 1;
the obtaining of the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient includes:
obtaining the weighted similarity between the two faces according to the product of the distance between the two faces and the corresponding first distance weighting coefficient;
or,
and obtaining the weighted similarity between the two faces according to the sum of the distance between the two faces and the corresponding second distance weighting coefficient.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, when the first type of feature includes a position of a face in a picture, the obtaining the first type of feature from the picture including the face includes:
when the number of faces contained in two pictures containing the faces is judged to be the same, and the difference value between the shooting time of the two pictures is within a first preset range, the positions of the faces contained in the two pictures containing the faces are obtained;
and obtaining a characteristic value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures.
According to a second aspect of the embodiments of the present disclosure, there is provided a face recognition apparatus including:
the first acquisition unit is used for acquiring a first type of characteristics from a picture containing a human face;
the second acquisition unit is used for acquiring distance weighting coefficients between any two faces respectively belonging to the two pictures according to the first class of characteristics;
a third obtaining unit, configured to obtain a distance between the two faces;
the fourth acquisition unit is used for acquiring the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
the judging unit is used for judging whether the two faces belong to the same person or not by utilizing the weighted similarity between the two faces;
the first type of features includes any one or any combination of: gender, age, race, clothes, glasses information, the position of the face in the picture, the shooting time of the picture, and continuous-shooting (burst) information for the person.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the second obtaining unit includes:
the first obtaining subunit is used for obtaining a characteristic value corresponding to the first class of characteristics;
the first calculating subunit is used for calculating the absolute difference value between the characteristic values of the first class of characteristics of the two human faces;
the normalization subunit is used for normalizing the absolute difference value to a preset interval to obtain a normalized feature difference value;
and the second calculating subunit is used for calculating the distance weighting coefficient between the two human faces according to the normalized feature difference of the first class of features and the corresponding feature coefficient obtained in advance.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the second computing subunit includes:
the third calculation subunit is used for calculating the product between the normalized feature difference value corresponding to the same first-class feature and the corresponding feature coefficient;
and the first accumulation subunit is used for obtaining a distance weighting coefficient between the two human faces according to the accumulated sum of the products corresponding to the plurality of characteristics when the first type of characteristics contain a plurality of different characteristics.
With reference to the second aspect, in a third possible implementation manner of the second aspect, the distance weighting coefficients between the two faces include a first distance weighting coefficient and a second distance weighting coefficient; the first distance weighting coefficient is greater than or equal to 0 and less than or equal to a first preset value, the first preset value being greater than 1; the second distance weighting coefficient is greater than or equal to −1 and less than or equal to 1;
the fourth obtaining unit includes:
the multiplication operation subunit is used for obtaining the weighted similarity between the two faces according to the product of the distance between the two faces and the corresponding first distance weighting coefficient;
or,
and the second accumulation subunit is used for obtaining the weighted similarity between the two faces according to the sum of the distance between the two faces and the corresponding second distance weighting coefficient.
With reference to the second aspect, in a fourth possible implementation manner of the second aspect, when the first type of feature includes a position of a face in a picture, the first obtaining unit includes:
the face position acquiring subunit is configured to acquire the positions of the faces included in the two pictures when it is determined that the numbers of the faces included in the two pictures including the faces are the same and the difference between the shooting times of the two pictures is within a first preset range;
and the distance acquisition subunit is used for acquiring a characteristic value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
acquiring a first type of feature from a picture containing a human face;
acquiring distance weighting coefficients between any two faces respectively belonging to the two pictures according to the first type of characteristics;
acquiring the distance between the two faces;
obtaining the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
judging whether the two faces belong to the same person or not by using the weighted similarity between the two faces;
the first type of features includes any one or any combination of: gender, age, race, clothes, glasses information, the position of the face in the picture, the shooting time of the picture, and continuous-shooting (burst) information for the person.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects. In the face recognition method of this embodiment, a first type of feature is obtained from a picture containing a face, a distance weighting coefficient between two faces is then obtained according to that feature, and the weighted similarity between the two faces is obtained from their distance and the corresponding distance weighting coefficient. Finally, the pictures are clustered according to the weighted similarity between faces: pictures belonging to the same person are aggregated into one cluster, and pictures of different persons are classified into different clusters. When recognizing the faces in a picture, the method not only considers the similarity of the facial features and face shapes of the two faces but also measures face similarity according to the first type of feature contained in the picture, increasing the similarity between faces of the same person under different conditions and reducing the similarity between faces of different persons. The accuracy of face recognition is thereby improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method of face recognition according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a step S120 according to an exemplary embodiment;
FIG. 3 is a graphical illustration of a normalization function of a capture time shown in accordance with an exemplary embodiment;
FIG. 4 is a graphical illustration of a normalized function of age shown in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating a face recognition apparatus according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus for face recognition according to an example embodiment;
fig. 7 is a block diagram illustrating another apparatus for face recognition according to an example embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a face recognition method according to an exemplary embodiment, which is applied in a mobile terminal (e.g., a smart phone, a tablet computer, etc.) or a server, and as shown in fig. 1, the method may include the following steps:
in step S110, a first type of feature is obtained from a picture containing a human face. The first type of characteristics comprise any one or any combination of a plurality of characteristics of gender, age, race, clothes, glasses information, the position of a human face in a picture, the shooting time of the picture and continuous shooting information.
The gender and age identification technology can be adopted to extract the characteristic information of the gender, age, race and the like of the face in the face picture.
Glasses detection can be adopted to judge whether glasses are worn on the face of the person in the face picture, and if the glasses are worn on the face of the person, glasses identification technology is continuously adopted to identify the types of the glasses, such as sunglasses and other types of glasses.
When the mobile terminal is used for shooting the picture, the system can record the shooting time of the picture, and then whether the picture is shot in a continuous shooting mode can be judged according to the shooting time. If the continuous shooting is carried out, the faces contained in the multiple pictures obtained by continuous shooting are the same.
When obtaining the position of the face in the picture, it is first judged whether the shooting times of the two pictures are within a preset time length; if so, it is further judged whether the numbers of persons contained in the two pictures are the same; and if so, the position information of the persons in the pictures is obtained.
In step S120, according to the first class of features, distance weighting coefficients between any two faces respectively belonging to the two pictures are obtained.
One picture may contain a plurality of faces, and distance weighting coefficients between two faces respectively belonging to the two pictures are calculated.
Each item of information, such as gender, age, race, clothes, glasses information, the position of the face in the picture, and the shooting time of the picture, is used as a feature X_i, where X_i denotes the i-th feature. Each feature X_i is a factor influencing the distance weighting coefficient W_ij, and W_ij is determined according to the feature value of each feature X_i.
In step S130, the distance between the two faces is acquired.
Texture features of the two faces, such as Gabor features or LBP (Local Binary Pattern) features, are extracted, and the distance between the two faces, such as the Euclidean distance, is calculated: the larger the distance, the smaller the similarity between the two faces; the smaller the distance, the larger the similarity. Alternatively, the distance between the faces can be represented by cosine similarity: the larger the cosine similarity, the greater the similarity between the two faces, and the smaller the cosine similarity, the smaller the similarity.
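The two distance measures described above can be sketched as follows. This is a minimal illustration with hypothetical function names: the inputs are plain lists of numbers standing in for the extracted Gabor or LBP descriptors.

```python
import math

def euclidean_distance(f1, f2):
    """Distance between two face feature vectors (e.g. Gabor or LBP
    descriptors); a larger distance means less similar faces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def cosine_similarity(f1, f2):
    """Alternative measure; a larger value means more similar faces."""
    dot = sum(a * b for a, b in zip(f1, f2))
    norm1 = math.sqrt(sum(a * a for a in f1))
    norm2 = math.sqrt(sum(b * b for b in f2))
    return dot / (norm1 * norm2)
```

Identical vectors give cosine similarity 1 and Euclidean distance 0, matching the text's reading of the two measures.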
In step S140, a weighted similarity between two faces is obtained according to the distance between the two faces and the corresponding distance weighting coefficient.
In one embodiment of the present disclosure, the weighted similarity D_ij between two faces can be obtained as the product of the distance d_ij between the two faces and the corresponding distance weighting coefficient W_ij, as shown in formula 1:

D_ij = W_ij · d_ij (formula 1)

In formula 1, the distance weighting coefficient W_ij ranges over [0, L], where L is a first preset value greater than 1. Optionally, W_ij takes a value around 1, for example in the range [1 − m, 1 + m] with 1 + m not greater than L, e.g. 0.8, 1.5, or 2. If W_ij is greater than 1, the similarity of the two faces is increased; if W_ij is less than 1, the similarity of the two faces is reduced; if W_ij equals 1, the similarity of the faces is unaffected.
In another disclosed embodiment, the weighted similarity D_ij between two faces can also be obtained by summing the distance d_ij between the two faces and the corresponding distance weighting coefficient W_ij, as shown in formula 2:

D_ij = W_ij + d_ij (formula 2)

In formula 2, the distance weighting coefficient W_ij ranges over [−1, 1]. If W_ij is positive, the similarity between the two faces is increased; if W_ij is negative, the similarity between the two faces is reduced; if W_ij equals 0, the similarity of the faces is unaffected.
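Formulas 1 and 2 translate directly into code. The sketch below enforces the value ranges stated in the text as assertions; the function names and the example bound of 2.0 for the first preset value L are illustrative, not from the patent.

```python
def weighted_similarity_product(d_ij, w_ij, first_preset=2.0):
    """Formula 1: D_ij = W_ij * d_ij, where W_ij lies in [0, L]
    and L (the first preset value) is greater than 1."""
    assert 0.0 <= w_ij <= first_preset
    return w_ij * d_ij

def weighted_similarity_sum(d_ij, w_ij):
    """Formula 2: D_ij = W_ij + d_ij, where W_ij lies in [-1, 1]."""
    assert -1.0 <= w_ij <= 1.0
    return w_ij + d_ij
```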
In step S150, it is determined whether the two faces belong to the same person by using the weighted similarity between the two faces.
And then, clustering the face pictures according to the face recognition result to obtain a clustering result.
And calculating the weighted similarity between every two faces, clustering the pictures by using a clustering algorithm, gathering the pictures belonging to the same person to obtain a cluster, and classifying the pictures of different persons into different clusters. And if the picture contains a plurality of faces, aggregating the picture into clusters corresponding to the faces. For example, if a picture includes a face a, a face B, and a face C, the picture is classified into clusters corresponding to the face a, the face B, and the face C.
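The patent does not name a specific clustering algorithm. One simple possibility, sketched here purely as an assumption, is union-find over the pairwise same-person judgments; each picture would then join the cluster of every face it contains, as in the example with faces A, B, and C above.

```python
def cluster_faces(n_faces, same_person_pairs):
    """Group face indices into clusters, given pairs judged to be
    the same person. Union-find is one simple choice; the patent
    leaves the clustering algorithm unspecified."""
    parent = list(range(n_faces))

    def find(x):
        # Follow parent links to the root, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in same_person_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for f in range(n_faces):
        clusters.setdefault(find(f), []).append(f)
    return list(clusters.values())
```

For example, with four faces and the judgments (0, 1) and (2, 3) same-person, two clusters result: {0, 1} and {2, 3}.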
The face recognition method provided by this embodiment extracts a first type of feature from a face picture and then obtains a distance weighting coefficient between two faces according to that feature. The weighted similarity between the two faces is obtained from the distance between the faces and the corresponding distance weighting coefficient; finally, the pictures are clustered according to the weighted similarities, aggregating the pictures belonging to the same person into one cluster and classifying the pictures of different persons into different clusters.
When this method recognizes the faces in a picture, it not only considers the similarity of the facial features and face shapes of the two faces but also measures face similarity according to the first type of feature contained in the picture. This increases the similarity between faces of the same person under different conditions and reduces the similarity between faces of different persons, thereby improving the accuracy of face recognition.
Fig. 2 is a flowchart illustrating a method of step S120 according to an exemplary embodiment, and as shown in fig. 2, step S120 may be implemented by:
in step S121, a feature value corresponding to the first type of feature is obtained.
Different features X_i take different feature values. Gender, for example, has only two values: the feature value for female can be 1 and for male 0, or the feature value for female 0 and for male 1; under either convention, the absolute difference between the gender feature values of two faces is the same.
The age feature value may range over [0, 100]. The feature value of clothes is a clothes-similarity value in the range [0, 1].
Race is generally divided into yellow, white and black; the three corresponding feature values can be 0, 1 and 2 respectively, and the specific values can be set as needed.
The glasses information can be represented in either of two ways. The first is to judge whether the face wears glasses: the feature value for wearing glasses is 1 and for not wearing glasses 0, or the feature value for wearing glasses 0 and for not wearing glasses 1.
The second is to judge the similarity of the two pairs of glasses; the corresponding feature value ranges over [0, 1].
The characteristic value corresponding to the position of the face in the picture is the coordinate distance between the positions of the face. And obtaining a characteristic value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures. For example, picture a includes a face a1 and a face a2, and picture B includes a face B1 and a face B2, where the face a1 corresponds to the face B1. Then the feature values of the positions in the picture corresponding to the face a1 and the face b1 are: the distance between the position of face a1 in picture a and the position of face B1 in picture B; similarly, the feature values of the positions in the picture corresponding to the face a2 and the face b2 are: the distance between the position of face a2 in picture a and the position of face B2 in picture B.
The feature value of the shooting time is the time difference between the shooting time and a shared start time, converted into seconds. For example, if the start time is 00:00:00 on October 1, 2014 and the picture was shot at 10:00:00 on October 1, 2014, the shooting-time feature value of the picture is 36000 s. In a specific implementation, the shooting-time feature value can be a 64-bit binary number, and the start time can be set according to the numeric range of that number. For convenience of calculation, a time closer to the picture's shooting time can be chosen as the start time, so that the shooting-time feature value of the picture is smaller.
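The worked example above (start time 2014-10-01 00:00:00, shooting time 2014-10-01 10:00:00, feature value 36000 s) can be reproduced as follows; the function name is illustrative.

```python
from datetime import datetime

def shooting_time_value(shot_at, start):
    """Feature value of the shooting time: the difference between the
    shooting time and a shared start time, converted into seconds."""
    return (shot_at - start).total_seconds()

start = datetime(2014, 10, 1, 0, 0, 0)
value = shooting_time_value(datetime(2014, 10, 1, 10, 0, 0), start)
# value is 36000.0 seconds, matching the example in the text
```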
In step S122, an absolute difference between feature values of the first class features of the two faces is calculated.
After the feature values of the first-class features are obtained, the absolute difference between the feature values of the first-class features of the two faces is calculated. For example, if face A is male and face B is also male, the absolute difference between their gender feature values is 0; if face C is female, the absolute difference between the gender feature values of face A and face C is 1.
In step S123, the absolute difference is normalized to a preset interval, so as to obtain a normalized feature difference.
The value ranges of the feature values differ across features, so the absolute differences calculated from them also fall in different ranges. The absolute differences for different features therefore need to be normalized to the same preset interval, for example [−1, 1].
For example, a sigmoid function can be used for the normalization, with different sigmoid parameters for each feature; for instance, the correspondence between the ordinate and abscissa of the sigmoid differs between the shooting time of the picture and the age of the face.
As shown in fig. 3, which is a curve of the normalization function corresponding to the shooting time: the vertical axis is the value after shooting-time normalization, with range [−1, 1]; the horizontal axis is the shooting time in seconds. The numerical values in the figure are only schematic and do not represent the actual value range.
As shown in fig. 4, which is a curve of the normalization function corresponding to age: the vertical axis is the value after age normalization, with range [−1, 1]; the horizontal axis is the age in years. The numerical values on the horizontal axis are only schematic and do not represent the actual value range.
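A sigmoid-style normalization onto the preset interval [−1, 1] might look like the following sketch. The per-feature `scale` parameter stands in for the differing sigmoid parameters mentioned above; it is an assumption for illustration, not a value from the patent.

```python
import math

def normalize_difference(abs_diff, scale):
    """Map an absolute feature difference onto (-1, 1) with a shifted
    sigmoid: 2 / (1 + exp(-d / scale)) - 1. A zero difference maps to
    0 and large differences approach 1; `scale` differs per feature
    (e.g. shooting time vs. age)."""
    return 2.0 / (1.0 + math.exp(-abs_diff / scale)) - 1.0
```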
In step S124, a distance weighting coefficient between the two faces is calculated according to the normalized feature difference of the first type of features and a corresponding feature coefficient obtained in advance.
Different first-class features may influence the distance weighting coefficient W_ij to different degrees. Therefore, for each first-class feature X_i a feature coefficient θ_i describing its influence on W_ij can be determined, where θ_i ranges over [0, 1]. The larger the value of θ_i, the greater the influence of the first-class feature X_i on the distance weighting coefficient W_ij; when θ_i equals 0, the first-class feature X_i has no influence on W_ij.
The value of the feature coefficient θ_i can be set empirically: according to the feature values of each first-class feature and the distance weighting coefficient W_ij, the feature coefficient θ_i corresponding to each first-class feature is determined. For example, all θ_i may be set to sum to 1; if only the influence of gender and age on the face distance is considered, and the feature coefficient for gender is 0.6, then the feature coefficient for age is 0.4.
In another embodiment of the present disclosure, the value of the feature coefficient θ_i can also be obtained by training on sample pictures. First, pictures of several persons are collected, with multiple pictures per person, and the normalized feature difference of each first-class feature is calculated. A picture pair whose two faces belong to the same person is a positive sample; a pair whose two faces do not belong to the same person is a negative sample. The feature coefficients corresponding to the features are then trained using linear regression, logistic classification, or an SVM.
Then, the distance weighting coefficient W_ij is calculated from the normalized feature differences of all the first-class features X_i of the two faces and the corresponding feature coefficients θ_i, as shown in formula 3:

W_ij = θ_0 + θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n (formula 3)

In formula 3, θ_0 is used to adjust the value of the distance weighting coefficient W_ij and can be obtained experimentally; x_i denotes the normalized feature difference of the first-class feature X_i; n is the number of first-class features.
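Formula 3 translates directly into code. This sketch assumes the normalized differences and trained coefficients are supplied as parallel lists; the function name is illustrative.

```python
def distance_weighting_coefficient(norm_diffs, thetas, theta0=0.0):
    """Formula 3: W_ij = theta_0 + theta_1*x_1 + ... + theta_n*x_n,
    where x_i is the normalized difference of the i-th first-class
    feature and theta_i its feature coefficient in [0, 1]."""
    assert len(norm_diffs) == len(thetas)
    return theta0 + sum(t * x for t, x in zip(thetas, norm_diffs))
```

With only gender and age considered (θ = 0.6 and 0.4, as in the example above), normalized differences of 0.5 and 1.0, and θ_0 = 0, this yields W_ij = 0.7.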
Fig. 5 is a block diagram illustrating a face recognition apparatus according to an exemplary embodiment, which is applied to a mobile terminal or a server. As shown in fig. 5, the apparatus includes: a first acquiring unit 510, a second acquiring unit 520, a third acquiring unit 530, a fourth acquiring unit 540, and a judging unit 550.
The first obtaining unit 510 is configured to obtain a first type of feature from a picture containing a human face.
The first type of features comprises any one or any combination of: gender, age, race, clothing, glasses information, the position of the face in the picture, the shooting time of the picture, and continuous-shooting information of the person.
When the first type of feature includes a position of a human face in a picture, the first obtaining unit 510 may include: a face position obtaining subunit and a distance obtaining subunit.
The face position obtaining subunit is configured to obtain the positions of the faces contained in two pictures when it is determined that the two pictures contain the same number of faces and that the difference between their shooting times is within a first preset range.
The distance obtaining subunit is configured to obtain a feature value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures.
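As one possible reading of this subunit, the sketch below turns the distance between face positions into a feature value by taking the Euclidean distance between the face centres and scaling by the picture dimensions; the normalization convention is an assumption, not stated in the disclosure.

```python
import math

def position_feature(center_a, center_b, width, height):
    """Feature value for 'position of the face in the picture': the
    distance between the face centres in the two pictures, scaled by the
    picture size so the value is resolution-independent (at most sqrt(2))."""
    dx = (center_a[0] - center_b[0]) / width
    dy = (center_a[1] - center_b[1]) / height
    return math.hypot(dx, dy)
```

Identically placed faces yield 0, while faces at opposite corners yield roughly 1.41, which can then be normalized like any other first-class feature difference.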
The second obtaining unit 520 is configured to obtain distance weighting coefficients between any two faces respectively belonging to two pictures according to the first class of features obtained by the first obtaining unit 510.
In an exemplary embodiment of the present disclosure, the second obtaining unit includes: the device comprises a first acquisition subunit, a first calculation subunit, a normalization subunit and a second calculation subunit.
The first obtaining subunit is configured to obtain a feature value corresponding to the first class of feature.
The first calculating subunit is configured to calculate an absolute difference between feature values of the first class features of the two faces obtained by the first obtaining subunit.
The normalization subunit is configured to normalize the absolute difference calculated by the first calculation subunit to a preset interval, so as to obtain a normalized feature difference.
The second calculating subunit is configured to calculate the distance weighting coefficient between the two faces according to the normalized feature difference obtained by the normalizing subunit and a feature coefficient obtained in advance.
Wherein the second calculating subunit includes: a third calculation subunit and a first accumulation subunit.
The third computing subunit is configured to compute a product between the normalized feature difference value and the corresponding feature coefficient corresponding to the same first type of feature.
The first accumulation subunit is configured to, when the first class of features includes a plurality of different features, obtain a distance weighting coefficient between the two faces according to an accumulated sum of products calculated by the third calculation subunit corresponding to the plurality of features.
The third obtaining unit 530 is configured to obtain a distance between the two faces.
The fourth obtaining unit 540 is configured to obtain the weighted similarity between the two faces according to the distance between the two faces obtained by the third obtaining unit 530 and the corresponding distance weighting coefficient obtained by the second obtaining unit 520.
In an exemplary embodiment of the present disclosure, the fourth obtaining unit 540 may include a multiplication subunit.
The multiplication sub-unit is configured to obtain the weighted similarity between the two faces according to the product of the distance between the two faces obtained by the third obtaining unit 530 and the corresponding first distance weighting coefficient.
In this embodiment, the first distance weighting coefficient is greater than or equal to 0 and less than or equal to a first preset value, where the first preset value is greater than 1. In practice, Wij takes values around 1, for example 0.8, 1.5, or 2. If Wij is greater than 1, the similarity of the two faces is increased; if Wij is less than 1, the similarity of the two faces is reduced; if Wij equals 1, the face similarity is not affected.
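The effect of this multiplicative weighting can be sketched as follows; the function names and the decision threshold are hypothetical.

```python
def weighted_similarity(similarity, w_ij):
    """First variant: weighted similarity as the product of the base face
    similarity and the first distance weighting coefficient Wij.
    Wij > 1 raises the similarity, Wij < 1 lowers it, Wij == 1 leaves it."""
    return similarity * w_ij

def same_person(similarity, w_ij, threshold=0.75):
    """Judge the two faces as the same person when the weighted similarity
    exceeds a decision threshold (the value 0.75 is an assumption)."""
    return weighted_similarity(similarity, w_ij) > threshold
```

For instance, a borderline base similarity of 0.6 is pushed above a 0.75 threshold by Wij = 1.5, but kept below it by Wij = 0.8.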
In another exemplary embodiment of the present disclosure, the fourth obtaining unit 540 may include a second accumulation subunit;
the second accumulating subunit is configured to obtain a weighted similarity between the two faces according to the sum of the distance between the two faces obtained by the third obtaining unit 530 and the corresponding second distance weighting coefficient.
The determining unit 550 is configured to determine whether the two faces belong to the same person by using the weighted similarity between the two faces obtained by the fourth obtaining unit 540.
When the face recognition apparatus provided by this embodiment recognizes the faces in pictures, it not only considers the similarity of the facial features and face shapes of the two faces, but also measures face similarity according to the first type of features contained in the pictures. This increases the similarity of the same person's faces under different conditions and reduces the similarity between different persons' faces, thereby improving the accuracy of face recognition.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating an apparatus 800 for face recognition according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
As shown in fig. 6, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the terminal device to perform a face recognition method, the method comprising:
acquiring a first type of feature from a picture containing a human face;
acquiring distance weighting coefficients between any two faces respectively belonging to the two pictures according to the first type of characteristics;
acquiring the distance between the two faces;
obtaining the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
judging whether the two faces belong to the same person or not by using the weighted similarity between the two faces;
the first type of characteristics comprise any one or any combination of a sex, an age, a race, clothes, glasses information, a position of a human face in a picture, a shooting time of the picture and continuous shooting information of the person.
Fig. 7 is a block diagram illustrating an apparatus 1900 for face recognition according to an example embodiment. For example, the apparatus 1900 may be provided as a server. As shown in fig. 7, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the method embodiments illustrated in fig. 1-2 described above.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (11)

CN201410592275.9A2014-10-292014-10-29Face identification method and deviceActiveCN104408402B (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN201410592275.9ACN104408402B (en)2014-10-292014-10-29Face identification method and device


Publications (2)

Publication NumberPublication Date
CN104408402Atrue CN104408402A (en)2015-03-11
CN104408402B CN104408402B (en)2018-04-24

Family

ID=52646033

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN201410592275.9AActiveCN104408402B (en)2014-10-292014-10-29Face identification method and device

Country Status (1)

CountryLink
CN (1)CN104408402B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN104732216A (en)*2015-03-262015-06-24江苏物联网研究发展中心Expression recognition method based on key points and local characteristics
CN105069083A (en)*2015-07-312015-11-18小米科技有限责任公司Determination method and device of associated user
CN105184253A (en)*2015-09-012015-12-23北京旷视科技有限公司Face identification method and face identification system
CN105574512A (en)*2015-12-212016-05-11小米科技有限责任公司Method and device for processing image
CN106446797A (en)*2016-08-312017-02-22腾讯科技(深圳)有限公司Image clustering method and device
CN108022274A (en)*2017-11-292018-05-11广东欧珀移动通信有限公司Image processing method, device, computer equipment and computer-readable recording medium
CN108154099A (en)*2017-12-202018-06-12北京奇艺世纪科技有限公司A kind of character recognition method, device and electronic equipment
CN108319943A (en)*2018-04-252018-07-24北京优创新港科技股份有限公司A method of human face recognition model performance under the conditions of raising is worn glasses
CN108664908A (en)*2018-04-272018-10-16深圳爱酷智能科技有限公司Face identification method, equipment and computer readable storage medium
CN109086720A (en)*2018-08-032018-12-25腾讯科技(深圳)有限公司A kind of face cluster method, apparatus and storage medium
CN109145844A (en)*2018-08-292019-01-04北京旷视科技有限公司Archive management method, device and electronic equipment for city safety monitoring
CN109784898A (en)*2018-12-202019-05-21深圳云天励飞技术有限公司Method of payment and relevant apparatus by bus
CN109815353A (en)*2019-01-212019-05-28成都考拉悠然科技有限公司A kind of face retrieval method and system based on class center
CN109948734A (en)*2019-04-022019-06-28北京旷视科技有限公司 Image clustering method, device and electronic device
CN105488467B (en)*2015-11-262019-07-23小米科技有限责任公司Picture classification method and device
CN111104970A (en)*2019-12-042020-05-05浙江大华技术股份有限公司Face clustering method and device
CN115471893A (en)*2022-09-162022-12-13北京百度网讯科技有限公司 Training face recognition model, method and device for face recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20070071288A1 (en)*2005-09-292007-03-29Quen-Zong WuFacial features based human face recognition method
CN1975759A (en)*2006-12-152007-06-06中山大学Human face identifying method based on structural principal element analysis
CN102004908A (en)*2010-11-302011-04-06汉王科技股份有限公司Self-adapting face identification method and device
US8184914B2 (en)*2007-11-292012-05-22Viewdle Inc.Method and system of person identification by facial image
CN103942705A (en)*2014-03-252014-07-23惠州Tcl移动通信有限公司Advertisement classified match pushing method and system based on human face recognition


Cited By (27)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN104732216A (en)*2015-03-262015-06-24江苏物联网研究发展中心Expression recognition method based on key points and local characteristics
US9892314B2 (en)2015-07-312018-02-13Xiaomi Inc.Method and device for determining associated user
CN105069083A (en)*2015-07-312015-11-18小米科技有限责任公司Determination method and device of associated user
EP3125188A1 (en)*2015-07-312017-02-01Xiaomi Inc.Method and device for determining associated user
CN105069083B (en)*2015-07-312019-03-08小米科技有限责任公司The determination method and device of association user
CN105184253A (en)*2015-09-012015-12-23北京旷视科技有限公司Face identification method and face identification system
CN105488467B (en)*2015-11-262019-07-23小米科技有限责任公司Picture classification method and device
CN105574512A (en)*2015-12-212016-05-11小米科技有限责任公司Method and device for processing image
CN106446797A (en)*2016-08-312017-02-22腾讯科技(深圳)有限公司Image clustering method and device
CN106446797B (en)*2016-08-312019-05-07腾讯科技(深圳)有限公司Image clustering method and device
CN108022274A (en)*2017-11-292018-05-11广东欧珀移动通信有限公司Image processing method, device, computer equipment and computer-readable recording medium
CN108154099B (en)*2017-12-202021-04-30北京奇艺世纪科技有限公司Figure identification method and device and electronic equipment
CN108154099A (en)*2017-12-202018-06-12北京奇艺世纪科技有限公司A kind of character recognition method, device and electronic equipment
CN108319943A (en)*2018-04-252018-07-24北京优创新港科技股份有限公司A method of human face recognition model performance under the conditions of raising is worn glasses
CN108664908A (en)*2018-04-272018-10-16深圳爱酷智能科技有限公司Face identification method, equipment and computer readable storage medium
CN109086720A (en)*2018-08-032018-12-25腾讯科技(深圳)有限公司A kind of face cluster method, apparatus and storage medium
CN109086720B (en)*2018-08-032021-05-07腾讯科技(深圳)有限公司Face clustering method, face clustering device and storage medium
CN109145844A (en)*2018-08-292019-01-04北京旷视科技有限公司Archive management method, device and electronic equipment for city safety monitoring
CN109784898A (en)*2018-12-202019-05-21深圳云天励飞技术有限公司Method of payment and relevant apparatus by bus
CN109815353B (en)*2019-01-212021-07-13成都考拉悠然科技有限公司Face retrieval method and system based on class center
CN109815353A (en)*2019-01-212019-05-28成都考拉悠然科技有限公司A kind of face retrieval method and system based on class center
CN109948734A (en)*2019-04-022019-06-28北京旷视科技有限公司 Image clustering method, device and electronic device
CN109948734B (en)*2019-04-022022-03-29北京旷视科技有限公司Image clustering method and device and electronic equipment
CN111104970A (en)*2019-12-042020-05-05浙江大华技术股份有限公司Face clustering method and device
CN111104970B (en)*2019-12-042023-04-18浙江大华技术股份有限公司Face clustering method and device
CN115471893A (en)*2022-09-162022-12-13北京百度网讯科技有限公司 Training face recognition model, method and device for face recognition
CN115471893B (en)*2022-09-162023-11-21北京百度网讯科技有限公司 Training face recognition model, method and device for face recognition

Also Published As

Publication numberPublication date
CN104408402B (en)2018-04-24

Similar Documents

PublicationPublication DateTitle
CN104408402B (en)Face identification method and device
CN104408404B (en)Face identification method and device
US11455491B2 (en)Method and device for training image recognition model, and storage medium
CN105631408B (en)Face photo album processing method and device based on video
US20170154208A1 (en)Image classification method and device
CN108197586B (en)Face recognition method and device
CN105631403B (en)Face identification method and device
RU2577188C1 (en)Method, apparatus and device for image segmentation
CN104850828B (en)Character recognition method and device
CN105654039B (en)The method and apparatus of image procossing
WO2020062969A1 (en)Action recognition method and device, and driver state analysis method and device
CN104156915A (en)Skin color adjusting method and device
CN109190449A (en)Age recognition methods, device, electronic equipment and storage medium
CN105654131A (en)Classification model training method and device
CN109063580A (en)Face identification method, device, electronic equipment and storage medium
CN107463903B (en)Face key point positioning method and device
CN107025441B (en) Skin color detection method and device
CN105654033A (en)Face image verification method and device
CN104077597B (en)Image classification method and device
CN103886284B (en)Character attribute information identifying method, device and electronic equipment
CN105335684A (en)Face detection method and device
CN104573642B (en)Face identification method and device
EP3098765A1 (en)Method and apparatus for recommending cloud card
CN110110742B (en)Multi-feature fusion method and device, electronic equipment and storage medium
CN111783517A (en)Image recognition method and device, electronic equipment and storage medium

Legal Events

DateCodeTitleDescription
C06Publication
PB01Publication
C10Entry into substantive examination
SE01Entry into force of request for substantive examination
GR01Patent grant
GR01Patent grant
