Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a face recognition method and device to improve the accuracy of face recognition. The embodiments of the disclosure disclose the following technical solutions:
according to a first aspect of the embodiments of the present disclosure, there is provided a face recognition method, including:
acquiring a first type of feature from a picture containing a human face;
acquiring distance weighting coefficients between any two faces respectively belonging to two pictures according to the first type of features;
acquiring the distance between the two faces;
obtaining the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
judging whether the two faces belong to the same person or not by using the weighted similarity between the two faces;
the first type of features comprises any one or any combination of gender, age, race, clothes, glasses information, the position of a human face in a picture, the shooting time of the picture, and continuous shooting information of the person.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the obtaining, according to the first class of features, distance weighting coefficients between any two faces respectively belonging to two pictures includes:
acquiring a characteristic value corresponding to the first type of characteristic;
calculating an absolute difference value between the feature values of the first class features of the two faces;
normalizing the absolute difference value to a preset interval to obtain a normalized feature difference value;
and calculating the distance weighting coefficient between the two human faces according to the normalized feature difference value of the first class of features and the corresponding feature coefficient obtained in advance.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the calculating the distance weighting coefficient between the two faces according to the normalized feature difference and a corresponding feature coefficient obtained in advance includes:
calculating the product between the normalized feature difference value corresponding to the same first type feature and the corresponding feature coefficient;
and when the first class of features comprises a plurality of different features, obtaining a distance weighting coefficient between the two faces according to the accumulated sum of the products corresponding to the plurality of features.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the distance weighting coefficients between the two faces include a first distance weighting coefficient and a second distance weighting coefficient; the first distance weighting coefficient is greater than or equal to 0 and less than or equal to a first preset value, and the first preset value is greater than 1; the second distance weighting coefficient is greater than or equal to -1 and less than or equal to 1;
the obtaining of the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient includes:
obtaining the weighted similarity between the two faces according to the product of the distance between the two faces and the corresponding first distance weighting coefficient;
or,
and obtaining the weighted similarity between the two faces according to the sum of the distance between the two faces and the corresponding second distance weighting coefficient.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, when the first type of feature includes a position of a face in a picture, the obtaining the first type of feature from the picture including the face includes:
when it is judged that the two pictures containing faces contain the same number of faces and that the difference between the shooting times of the two pictures is within a first preset range, acquiring the positions of the faces contained in each of the two pictures;
and obtaining a characteristic value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures.
According to a second aspect of the embodiments of the present disclosure, there is provided a face recognition apparatus including:
the first acquisition unit is used for acquiring a first type of characteristics from a picture containing a human face;
the second acquisition unit is used for acquiring distance weighting coefficients between any two faces respectively belonging to two pictures according to the first type of features;
a third obtaining unit, configured to obtain a distance between the two faces;
the fourth acquisition unit is used for acquiring the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
the judging unit is used for judging whether the two faces belong to the same person or not by utilizing the weighted similarity between the two faces;
the first type of features comprises any one or any combination of gender, age, race, clothes, glasses information, the position of a human face in a picture, the shooting time of the picture, and continuous shooting information of the person.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the second obtaining unit includes:
the first obtaining subunit is used for obtaining a characteristic value corresponding to the first class of characteristics;
the first calculating subunit is used for calculating the absolute difference value between the characteristic values of the first class of characteristics of the two human faces;
the normalization subunit is used for normalizing the absolute difference value to a preset interval to obtain a normalized feature difference value;
and the second calculating subunit is used for calculating the distance weighting coefficient between the two human faces according to the normalized feature difference of the first class of features and the corresponding feature coefficient obtained in advance.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the second computing subunit includes:
the third calculation subunit is used for calculating the product between the normalized feature difference value corresponding to the same first-class feature and the corresponding feature coefficient;
and the first accumulation subunit is used for obtaining a distance weighting coefficient between the two human faces according to the accumulated sum of the products corresponding to the plurality of characteristics when the first type of characteristics contain a plurality of different characteristics.
With reference to the second aspect, in a third possible implementation manner of the second aspect, the distance weighting coefficients between the two faces include a first distance weighting coefficient and a second distance weighting coefficient; the first distance weighting coefficient is greater than or equal to 0 and less than or equal to a first preset value, and the first preset value is greater than 1; the second distance weighting coefficient is greater than or equal to -1 and less than or equal to 1;
the fourth obtaining unit includes:
the multiplication operation subunit is used for obtaining the weighted similarity between the two faces according to the product of the distance between the two faces and the corresponding first distance weighting coefficient;
or,
and the second accumulation subunit is used for obtaining the weighted similarity between the two faces according to the sum of the distance between the two faces and the corresponding second distance weighting coefficient.
With reference to the second aspect, in a fourth possible implementation manner of the second aspect, when the first type of feature includes a position of a face in a picture, the first obtaining unit includes:
the face position acquiring subunit is configured to acquire the positions of the faces included in each of the two pictures when it is determined that the two pictures contain the same number of faces and that the difference between their shooting times is within a first preset range;
and the distance acquisition subunit is used for acquiring a characteristic value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
acquiring a first type of feature from a picture containing a human face;
acquiring distance weighting coefficients between any two faces respectively belonging to two pictures according to the first type of features;
acquiring the distance between the two faces;
obtaining the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
judging whether the two faces belong to the same person or not by using the weighted similarity between the two faces;
the first type of features comprises any one or any combination of gender, age, race, clothes, glasses information, the position of a human face in a picture, the shooting time of the picture, and continuous shooting information of the person.
The technical solutions provided by the embodiments of the disclosure can have the following beneficial effects. In the face recognition method provided in this embodiment, a first type of feature is obtained from a picture containing a face, a distance weighting coefficient between two faces is obtained according to the first type of feature, and the weighted similarity between the two faces is then obtained from the distance between the faces and the corresponding distance weighting coefficient. Finally, the pictures are clustered according to the weighted similarity between faces: pictures belonging to the same person are aggregated into one cluster, and pictures of different persons are classified into different clusters. When the faces in a picture are recognized with this method, not only the similarity between the facial features and face shapes of the two faces is considered; the similarity of the faces is also measured according to the first type of features contained in the picture, so that the similarity between faces of the same person under different conditions is increased and the similarity between faces of different persons is reduced. The accuracy of face recognition is thereby improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a face recognition method according to an exemplary embodiment. The method may be applied in a mobile terminal (e.g., a smartphone or a tablet computer) or a server. As shown in fig. 1, the method may include the following steps:
in step S110, a first type of feature is obtained from a picture containing a human face. The first type of features includes any one or any combination of gender, age, race, clothes, glasses information, the position of a human face in the picture, the shooting time of the picture, and continuous shooting information.
Gender and age recognition technology can be used to extract feature information such as the gender, age, and race of the face in the picture.
Glasses detection can be used to judge whether the face in the picture wears glasses; if it does, glasses recognition technology can further identify the type of glasses, such as sunglasses or other types.
When a picture is taken with a mobile terminal, the system records its shooting time, from which it can be judged whether the picture was taken in continuous shooting (burst) mode. If so, the faces contained in the multiple pictures obtained by the continuous shooting are the same.
When acquiring the position of a face in a picture, it is first judged whether the shooting times of the two pictures fall within a preset duration of each other; if so, it is then judged whether the two pictures contain the same number of people; and if they do, the position information of the people in the pictures is acquired.
In step S120, according to the first class of features, distance weighting coefficients between any two faces respectively belonging to the two pictures are obtained.
One picture may contain a plurality of faces; the distance weighting coefficient is calculated for each pair of faces respectively belonging to the two pictures.
Each piece of information (gender, age, race, clothes, glasses information, the position of the face in the picture, the shooting time of the picture, and so on) is treated as a feature X_i, where X_i denotes the i-th feature. Each feature X_i is a factor that influences the distance weighting coefficient W_ij; the value of W_ij is determined according to the feature values of the features X_i.
In step S130, the distance between the two faces is acquired.
Texture features of the two faces, such as Gabor features or LBP (Local Binary Pattern) features, are extracted, and the distance between the two faces, such as the Euclidean distance, is calculated; the larger the distance, the smaller the similarity between the two faces, and the smaller the distance, the larger the similarity. Alternatively, the distance between the faces can be represented by cosine similarity: the larger the cosine similarity, the greater the similarity between the two faces, and the smaller the cosine similarity, the smaller the similarity.
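To make this step concrete, here is a minimal sketch (not part of the disclosure; the feature vectors are fabricated stand-ins for, e.g., LBP histograms) computing both distance measures described above:

```python
import numpy as np

def euclidean_distance(f1, f2):
    """Euclidean distance: the larger the distance, the lower the similarity."""
    return float(np.linalg.norm(f1 - f2))

def cosine_similarity(f1, f2):
    """Cosine similarity: the larger the value, the higher the similarity."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

# Hypothetical texture feature vectors for two faces (e.g., LBP histograms).
face_a = np.array([0.12, 0.40, 0.05, 0.43])
face_b = np.array([0.10, 0.38, 0.09, 0.43])

print(euclidean_distance(face_a, face_b))  # small distance: similar faces
print(cosine_similarity(face_a, face_b))   # near 1: similar faces
```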
In step S140, a weighted similarity between two faces is obtained according to the distance between the two faces and the corresponding distance weighting coefficient.
In one embodiment of the present disclosure, the weighted similarity D_ij between two faces can be obtained as the product of the distance d_ij between the two faces and the corresponding distance weighting coefficient W_ij, as shown in Formula 1:
D_ij = W_ij · d_ij (Formula 1)
In Formula 1, the distance weighting coefficient W_ij has a value range of [0, L], where L is a first preset value that may be greater than 1. Optionally, W_ij may take a value around 1; its range may be [1 - m, 1 + m], where 1 + m is not greater than L, for example 0.8, 1.5, or 2. If W_ij > 1, the similarity of the two faces is increased; if W_ij < 1, the similarity of the two faces is reduced; if W_ij = 1, the similarity of the faces is not affected.
In another embodiment of the present disclosure, the weighted similarity D_ij between two faces can instead be obtained as the sum of the distance d_ij between the two faces and the corresponding distance weighting coefficient W_ij, as shown in Formula 2:
D_ij = W_ij + d_ij (Formula 2)
In Formula 2, the distance weighting coefficient W_ij has a value range of [-1, 1]. If W_ij is positive, the similarity between the two faces is increased; if W_ij is negative, the similarity between the two faces is reduced; if W_ij = 0, the similarity of the faces is not affected.
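Both weighting schemes reduce to one-line computations; the sketch below is only an illustration under the value ranges stated above, with hypothetical input values:

```python
def weighted_similarity_multiplicative(d_ij, w_ij, upper=2.0):
    """Formula 1: D_ij = W_ij * d_ij, with W_ij in [0, L]; L (here 2.0) > 1."""
    assert 0.0 <= w_ij <= upper
    return w_ij * d_ij

def weighted_similarity_additive(d_ij, w_ij):
    """Formula 2: D_ij = W_ij + d_ij, with W_ij in [-1, 1]."""
    assert -1.0 <= w_ij <= 1.0
    return w_ij + d_ij

d = 0.35  # distance between the two faces (hypothetical value)
print(weighted_similarity_multiplicative(d, 1.5))  # coefficient above 1
print(weighted_similarity_additive(d, -0.2))       # negative coefficient
```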
In step S150, it is determined whether the two faces belong to the same person by using the weighted similarity between the two faces.
And then, clustering the face pictures according to the face recognition result to obtain a clustering result.
The weighted similarity between every two faces is calculated, and the pictures are clustered using a clustering algorithm: pictures belonging to the same person are gathered into one cluster, and pictures of different persons are classified into different clusters. If a picture contains a plurality of faces, the picture is aggregated into the clusters corresponding to those faces. For example, if a picture includes face A, face B, and face C, the picture is classified into the clusters corresponding to face A, face B, and face C.
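The disclosure does not prescribe a particular clustering algorithm. As one possible sketch, the following groups faces by thresholding the pairwise weighted similarity and merging with union-find; the threshold and similarity values are assumptions:

```python
def cluster_faces(n_faces, pairwise_sim, threshold=0.5):
    """Group faces whose pairwise weighted similarity exceeds a threshold.

    pairwise_sim maps (i, j) face-index pairs to weighted similarity values.
    Returns a list of clusters, each a set of face indices (union-find).
    """
    parent = list(range(n_faces))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (i, j), sim in pairwise_sim.items():
        if sim >= threshold:               # similar enough: same person
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(n_faces):
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

# Hypothetical weighted similarities among four detected faces.
sims = {(0, 1): 0.9, (0, 2): 0.2, (1, 3): 0.8, (2, 3): 0.1}
print(cluster_faces(4, sims))  # faces 0, 1, 3 group together; face 2 alone
```

A picture containing several faces would then be attached to the cluster of each of its faces, as in the face A/B/C example above.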
The face recognition method provided by this embodiment extracts a first type of feature in a face picture, and then obtains a distance weighting coefficient between two faces according to the first type of feature. The method comprises the steps of obtaining weighted similarity between two faces by using the distance between the faces and corresponding distance weighted coefficients, finally clustering pictures according to the weighted similarity between the two faces to obtain a clustering result, aggregating the pictures belonging to the same person into a cluster, and classifying the pictures of different persons into different clusters.
When the faces in a picture are recognized with this face recognition method, not only the similarity between the facial features and face shapes of the two faces is considered; the similarity of the faces is also measured according to the first type of features contained in the picture, so that the similarity between faces of the same person under different conditions is increased and the similarity between faces of different persons is reduced. The accuracy of face recognition is thereby improved.
Fig. 2 is a flowchart illustrating a method of step S120 according to an exemplary embodiment, and as shown in fig. 2, step S120 may be implemented by:
in step S121, a feature value corresponding to the first type of feature is obtained.
Different features X_i have different feature values. For example, gender takes only two values, male and female: the feature value for female may be 1 and that for male 0, or the feature value for female may be 0 and that for male 1; in either case, the absolute difference between the gender feature values of the two faces is the same.
The age feature value may be in the range [0, 100]. The feature value for clothes is a clothing similarity value in the range [0, 1].
Race is generally classified as yellow, white, or black; the three corresponding feature values may be 0, 1, and 2, respectively, and the specific values can be set as required.
The glasses information may be handled in either of two ways. One is to judge whether the face wears glasses: the feature value for wearing glasses is 1 and for not wearing glasses is 0, or, alternatively, the feature value for wearing glasses is 0 and for not wearing glasses is 1.
The other is to judge the similarity of the two pairs of glasses, with a corresponding feature value in the range [0, 1].
The feature value corresponding to the position of a face in a picture is the coordinate distance between face positions: it is obtained from the distance between the positions of the corresponding faces in the two pictures. For example, suppose picture A contains face a1 and face a2, picture B contains face b1 and face b2, and face a1 corresponds to face b1. Then the feature value of the position for face a1 and face b1 is the distance between the position of face a1 in picture A and the position of face b1 in picture B; similarly, the feature value of the position for face a2 and face b2 is the distance between the position of face a2 in picture A and the position of face b2 in picture B.
The shooting time feature value is obtained by converting the difference between the shooting time and a common start time into seconds. For example, if the start time is 00:00:00 on October 1, 2014 and the picture was taken at 10:00:00 on October 1, 2014, the feature value of the shooting time is 36000 s. In a specific implementation, the feature value of the shooting time can be a 64-bit binary number, and the start time can be set according to the numerical range of that number. For convenience of calculation, a time close to the shooting time of the pictures can be chosen as the start time, so that the shooting time feature values of the pictures are smaller.
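The position and shooting time feature values described above reduce to simple arithmetic; here is a minimal sketch in which the coordinates and start time are made-up examples:

```python
import math
from datetime import datetime

def position_feature(pos_a, pos_b):
    """Coordinate distance between face positions in the two pictures."""
    return math.dist(pos_a, pos_b)

def shooting_time_feature(shot_at, start=datetime(2014, 10, 1)):
    """Seconds elapsed between the common start time and the shooting time."""
    return (shot_at - start).total_seconds()

# Positions (x, y) of corresponding faces in picture A and picture B, in pixels.
print(position_feature((120, 80), (132, 85)))  # 13.0: nearly the same spot
# Picture taken at 10:00:00 on 2014-10-01 gives 36000 s, as in the example.
print(shooting_time_feature(datetime(2014, 10, 1, 10, 0, 0)))
```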
In step S122, an absolute difference between feature values of the first class features of the two faces is calculated.
After the feature values of the first type of features are obtained, the absolute difference between the feature values of the first type of features corresponding to the two faces is calculated. For example, if face A is male and face B is also male, the absolute difference between their gender feature values is 0; if face C is female, the absolute difference between the gender feature values of face A and face C is 1.
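A small sketch of this step, using example encodings consistent with the value assignments described above (the specific numbers are illustrative, not mandated):

```python
# Example encodings for categorical first-class features.
GENDER = {"female": 1, "male": 0}
RACE = {"yellow": 0, "white": 1, "black": 2}
GLASSES = {"wearing": 1, "not_wearing": 0}

def absolute_difference(encoding, value_a, value_b):
    """Absolute difference between the feature values of two faces."""
    return abs(encoding[value_a] - encoding[value_b])

print(absolute_difference(GENDER, "male", "male"))    # 0: faces A and B
print(absolute_difference(GENDER, "male", "female"))  # 1: faces A and C
```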
In step S123, the absolute difference is normalized to a preset interval, so as to obtain a normalized feature difference.
The feature values of different features have different value ranges, so the absolute differences computed from them also have different value ranges. The absolute differences corresponding to different features therefore need to be normalized to the same preset interval, for example [-1, 1].
For example, a sigmoid function can be used for the normalization, with different parameters for each feature; for instance, the mapping between the abscissa and the ordinate of the sigmoid function differs between the shooting time of a picture and the age of a face.
Fig. 3 is a graph of the normalization function corresponding to shooting time. The vertical axis is the normalized shooting time value, with range [-1, 1]; the horizontal axis is the shooting time in seconds (s). The numerical values in the figure are only illustrative and do not represent the actual value range.
Fig. 4 is a graph of the normalization function corresponding to age. The vertical axis is the normalized age value, with range [-1, 1]; the horizontal axis is age in years. The numerical values on the horizontal axis are only illustrative and do not represent the actual value range.
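One way to realize such a normalization is to rescale a standard sigmoid into (-1, 1); the parametrization below is an assumption, since the disclosure only requires a sigmoid-shaped curve with per-feature parameters:

```python
import math

def normalize(abs_diff, scale=1.0, midpoint=0.0):
    """Map an absolute feature difference into (-1, 1) with a sigmoid curve.

    scale and midpoint stand in for the per-feature parameters: each
    first-class feature (age, shooting time, ...) would get its own values.
    """
    sigmoid = 1.0 / (1.0 + math.exp(-(abs_diff - midpoint) / scale))
    return 2.0 * sigmoid - 1.0  # rescale (0, 1) to (-1, 1)

# An age gap of 5 years and a one-hour shooting gap (in seconds), each
# normalized with its own assumed scale parameter.
print(normalize(5, scale=10.0))
print(normalize(3600, scale=7200.0))
```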
In step S124, a distance weighting coefficient between the two faces is calculated according to the normalized feature difference of the first type of features and a corresponding feature coefficient obtained in advance.
Different first-class features may influence the distance weighting coefficient W_ij to different degrees. Therefore, each first-class feature X_i can be assigned a feature coefficient θ_i describing its influence on W_ij, with θ_i in the range [0, 1]. The larger the value of θ_i, the greater the influence of the first-class feature X_i on the distance weighting coefficient W_ij; when θ_i = 0, the first-class feature X_i has no influence on W_ij.
The value of each feature coefficient θ_i can be set empirically: based on the feature values of each first-class feature and their influence on the distance weighting coefficient W_ij, the feature coefficient θ_i corresponding to each first-class feature is determined. For example, all θ_i may be required to sum to 1; if only the influence of gender and age on the face distance is considered and the feature coefficient for gender is 0.6, then the feature coefficient for age is 0.4.
In another embodiment of the present disclosure, the value of the feature coefficient θ_i can also be obtained by training on sample pictures. First, pictures of multiple persons are collected, with several pictures per person, and the normalized feature difference of each first-class feature is calculated for each pair of faces. A pair of faces belonging to the same person is a positive sample; a pair of faces belonging to different persons is a negative sample. The feature coefficients corresponding to the features are then trained using linear regression, logistic classification, or an SVM.
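As a sketch of this training procedure, using logistic classification (one of the options named above) via scikit-learn; the sample pairs and their normalized feature differences are fabricated for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds normalized feature differences (gender, age, shooting time)
# for one pair of faces; label 1 = same person (positive sample), 0 = not.
X = np.array([
    [0.0, 0.10, 0.05],  # same person: small differences
    [0.0, 0.15, 0.10],
    [1.0, 0.60, 0.80],  # different persons: large differences
    [1.0, 0.70, 0.40],
])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)
print(model.intercept_[0])  # plays the role of the bias term theta_0
print(model.coef_[0])       # plays the role of theta_1..theta_n
```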
Then, the distance weighting coefficient W_ij is calculated from the normalized feature differences of all first-class features X_i of the two faces and the corresponding feature coefficients θ_i, as shown in Formula 3:
W_ij = θ_0 + θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n (Formula 3)
In Formula 3, θ_0 is a bias term used to adjust the value of the distance weighting coefficient W_ij and can be determined experimentally; x_i denotes the normalized feature difference corresponding to the first-class feature X_i; and n is the number of first-class features.
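Formula 3 itself is a single weighted sum; a minimal sketch with assumed coefficient values:

```python
def distance_weighting_coefficient(theta_0, thetas, x_diffs):
    """Formula 3: W_ij = theta_0 + sum of theta_i * x_i over all features."""
    assert len(thetas) == len(x_diffs)
    return theta_0 + sum(t * x for t, x in zip(thetas, x_diffs))

# Assumed bias, gender/age coefficients (summing to 1, as in the example
# above), and normalized feature differences for a pair of similar faces.
w = distance_weighting_coefficient(0.1, [0.6, 0.4], [-0.9, -0.5])
print(w)  # -0.64 with these values, within the [-1, 1] range of Formula 2
```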
Fig. 5 is a block diagram illustrating a face recognition apparatus according to an exemplary embodiment, which is applied to a mobile terminal or a server. As shown in fig. 5, the apparatus includes: a first acquiring unit 510, a second acquiring unit 520, a third acquiring unit 530, a fourth acquiring unit 540, and a judging unit 550.
The first obtaining unit 510 is configured to obtain a first type of feature from a picture containing a human face.
The first type of characteristics comprise any one or any combination of a sex, an age, a race, clothes, glasses information, a position of a human face in a picture, a shooting time of the picture and continuous shooting information of the person.
When the first type of feature includes a position of a human face in a picture, the first obtaining unit 510 may include: a face position obtaining subunit and a distance obtaining subunit.
The face position obtaining subunit is configured to, when it is determined that the numbers of faces included in two pictures including faces are the same and a difference between shooting times of the two pictures is within a first preset range, obtain the positions of the faces included in the two pictures respectively.
The distance obtaining subunit is configured to obtain a feature value corresponding to the position of the face in the picture according to the distance between the positions of the faces respectively belonging to the two pictures.
The second obtaining unit 520 is configured to obtain distance weighting coefficients between any two faces respectively belonging to two pictures according to the first class of features obtained by the first obtaining unit 510.
In an exemplary embodiment of the present disclosure, the second obtaining unit includes: the device comprises a first acquisition subunit, a first calculation subunit, a normalization subunit and a second calculation subunit.
The first obtaining subunit is configured to obtain a feature value corresponding to the first class of feature.
The first calculating subunit is configured to calculate an absolute difference between feature values of the first class features of the two faces obtained by the first obtaining subunit.
The normalization subunit is configured to normalize the absolute difference calculated by the first calculation subunit to a preset interval, so as to obtain a normalized feature difference.
The second calculating subunit is configured to calculate the distance weighting coefficient between the two faces according to the normalized feature difference obtained by the normalizing subunit and a feature coefficient obtained in advance.
Wherein the second calculating subunit includes: a third calculation subunit and a first accumulation subunit.
The third computing subunit is configured to compute a product between the normalized feature difference value and the corresponding feature coefficient corresponding to the same first type of feature.
The first accumulation subunit is configured to, when the first class of features includes a plurality of different features, obtain a distance weighting coefficient between the two faces according to an accumulated sum of products calculated by the third calculation subunit corresponding to the plurality of features.
The third obtaining unit 530 is configured to obtain a distance between the two faces.
The fourth obtaining unit 540 is configured to obtain the weighted similarity between the two faces according to the distance between the two faces obtained by the third obtaining unit 530 and the corresponding distance weighting coefficient obtained by the second obtaining unit 520.
In an exemplary embodiment of the present disclosure, the fourth obtaining unit 540 may include a multiplication subunit.
The multiplication sub-unit is configured to obtain the weighted similarity between the two faces according to the product of the distance between the two faces obtained by the third obtaining unit 530 and the corresponding first distance weighting coefficient.
In this embodiment, the first distance weighting coefficient is greater than or equal to 0 and less than or equal to a first preset value, where the first preset value is a numerical value greater than 1. Optionally, W_ij may take a value around 1, for example 0.8, 1.5, or 2. If W_ij > 1, the similarity of the two faces is increased; if W_ij < 1, the similarity of the two faces is reduced; if W_ij = 1, the similarity of the faces is not affected.
In another exemplary embodiment of the present disclosure, the fourth obtaining unit 540 may include a second accumulation subunit;
the second accumulating subunit is configured to obtain a weighted similarity between the two faces according to the sum of the distance between the two faces obtained by the third obtaining unit 530 and the corresponding second distance weighting coefficient.
The determining unit 550 is configured to determine whether the two faces belong to the same person by using the weighted similarity between the two faces obtained by the fourth obtaining unit 540.
When the face recognition device provided by this embodiment recognizes the faces in a picture, not only the similarity between the facial features and face shapes of the two faces is considered; the similarity of the faces is also measured according to the first type of features contained in the picture, so that the similarity between faces of the same person under different conditions is increased and the similarity between faces of different persons is reduced. The accuracy of face recognition is thereby improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating an apparatus 800 for face recognition according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
As shown in fig. 6, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; it may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the terminal device to perform a face recognition method, the method comprising:
acquiring a first type of feature from a picture containing a human face;
acquiring distance weighting coefficients between any two faces respectively belonging to two pictures according to the first type of features;
acquiring the distance between the two faces;
obtaining the weighted similarity between the two faces according to the distance between the two faces and the corresponding distance weighting coefficient;
judging whether the two faces belong to the same person or not by using the weighted similarity between the two faces;
the first type of features comprises any one or any combination of gender, age, race, clothes, glasses information, the position of a human face in a picture, the shooting time of the picture, and continuous shooting information of the person.
Fig. 7 is a block diagram illustrating an apparatus 1900 for face recognition according to an example embodiment. For example, the apparatus 1900 may be provided as a server. As shown in fig. 7, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the method embodiments illustrated in fig. 1-2 described above.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.