A kind of face detection model training method, device and medium
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face detection model training method, device and medium.
Background technology
This part is intended to provide background or context for the embodiments of the present invention stated in the claims. The description herein is not admitted to be prior art merely by virtue of its inclusion in this part.
With the popularization of intelligent cameras, such cameras have come into huge numbers of families, and users expect a camera to be more than a device that merely records video: they want it to offer intelligent functions. The most common intelligent function, and the one users care about most, is the detection and recognition of faces, and face detection is also the first step of face recognition. It is generally necessary to update a face detection model through constant iteration so that the model adapts to different application scenarios.
The Haar+AdaBoost algorithm is a lightweight face detection algorithm with many practical applications. In practice, however, in order to improve the accuracy of face detection, improvements to the algorithm have focused mainly on detection performance, for example by using more complex features or by improving the AdaBoost algorithm itself.
These improved algorithms raise the accuracy of face detection to a certain extent, but none of them optimizes the training process of the face detection model. As the application scenarios of face detection diversify and grow more complex, it is often necessary to train or update a face detection model to adapt to different scenarios. Because the sample set used for training is very large, the training process is time-consuming, and after every training run the model must be tested to check the effect of training. If each training run consumes a substantial amount of time, this clearly hinders the iterative development of face detection algorithms.
The content of the invention
Embodiments of the present invention provide a face detection model training method, device and medium that improve the face detection model training process, so as to reduce the training time required by the face detection model and improve the efficiency of face detection model training.
In a first aspect, there is provided a face detection model training method, including:
for a given training sample set, performing multiple rounds of training using a preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and storing the set, wherein the weight of each sample in the training sample set is updated before each round of training;
according to the stored classification result set, if it is judged that the strong classifier does not meet a first preset condition, increasing the number of weak classifiers used for training the strong classifier and then retraining with the training sample set to obtain a new strong classifier, until the obtained strong classifier meets the first preset condition;
updating a cascade classifier with the obtained strong classifier;
ending the training if the false detection rate of the cascade classifier is not greater than a first preset threshold; otherwise, updating the training sample set and retraining until the false detection rate of the obtained cascade classifier is not greater than the first preset threshold.
Wherein, for the given training sample set, performing multiple rounds of training using the preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and storing the set, specifically includes:
for the given training sample set, extracting N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1;
classifying the training samples with each weak classifier respectively to obtain corresponding classification results;
determining, according to the classification results, the optimal weak classifier obtained by this round of training;
adding the obtained optimal weak classifier to the strong classifier, and adding the classification results of the optimal weak classifier obtained by this round of training to the classification result set of the strong classifier;
if the number of optimal weak classifiers included in the obtained strong classifier is less than a second preset threshold, updating the weight of each sample in the training sample set according to the classification results of the optimal weak classifier, then retraining to obtain a new optimal weak classifier and storing its classification results, until the number of optimal weak classifiers included in the strong classifier reaches the second preset threshold.
Alternatively, whether the strong classifier meets the first preset condition is judged according to the following procedure:
determining the classification threshold corresponding to the strong classifier according to the classification result set corresponding to the strong classifier;
determining the false detection rate corresponding to the strong classifier according to the classification threshold;
if the false detection rate is not greater than a preset false detection rate threshold, determining that the strong classifier meets the first preset condition; if the false detection rate is greater than the preset false detection rate threshold, determining that the strong classifier does not meet the first preset condition.
Alternatively, for each sample, the Haar features corresponding to the sample are extracted according to the following formula:
F = (Sum_b - Sum_w) / (Sum_b + Sum_w), wherein:
F represents any Haar feature corresponding to the sample;
Sum_b represents the sum of the pixel values in the first area;
Sum_w represents the sum of the pixel values in the second area.
In a second aspect, there is provided a face detection model training device, including:
a first training unit, configured to, for a given training sample set, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and store the set, wherein the weight of each sample in the training sample set is updated before each round of training;
a second training unit, configured to, according to the stored classification result set, if it is judged that the strong classifier does not meet a first preset condition, increase the number of weak classifiers used for training the strong classifier and then retrain with the training sample set to obtain a new strong classifier, until the obtained strong classifier meets the first preset condition;
an updating unit, configured to update a cascade classifier with the obtained strong classifier;
a third training unit, configured to end the training if the false detection rate of the cascade classifier is not greater than a first preset threshold, and otherwise to update the training sample set and retrain until the false detection rate of the obtained cascade classifier is not greater than the first preset threshold.
Alternatively, the first training unit includes:
an extraction subunit, configured to, for the given training sample set, extract N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1;
a classification subunit, configured to classify the training samples with each weak classifier respectively to obtain corresponding classification results;
a determination subunit, configured to determine, according to the classification results, the optimal weak classifier obtained by this round of training;
an adding subunit, configured to add the obtained optimal weak classifier to the strong classifier, and to add the classification results of the optimal weak classifier obtained by this round of training to the classification result set of the strong classifier;
a training subunit, configured to, if the number of optimal weak classifiers included in the obtained strong classifier is less than a second preset threshold, update the weight of each sample in the training sample set according to the classification results of the optimal weak classifier, then retrain to obtain a new optimal weak classifier and store its classification results, until the number of optimal weak classifiers included in the strong classifier reaches the second preset threshold.
Alternatively, the face detection model training device further includes:
a first determining unit, configured to determine the classification threshold corresponding to the strong classifier according to the classification result set corresponding to the strong classifier;
a second determining unit, configured to determine the false detection rate corresponding to the strong classifier according to the classification threshold;
a third determining unit, configured to, if the false detection rate is not greater than a preset false detection rate threshold, determine that the strong classifier meets the first preset condition, and, if the false detection rate is greater than the preset false detection rate threshold, determine that the strong classifier does not meet the first preset condition.
Alternatively, the extraction subunit is specifically configured to, for each sample, extract the Haar features corresponding to the sample according to the following formula: F = (Sum_b - Sum_w) / (Sum_b + Sum_w), wherein:
F represents any Haar feature corresponding to the sample;
Sum_b represents the sum of the pixel values in the first area;
Sum_w represents the sum of the pixel values in the second area.
In a third aspect, there is provided a computing device, including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform any of the steps of the face detection model training method described above.
In a fourth aspect, there is provided a computer-readable medium storing a computer program executable by a computing device, the program, when run on the computing device, causing the computing device to perform any of the steps of the face detection model training method described above.
In the face detection model training method, device and medium provided by the embodiments of the present invention, the classification results of the optimal weak classifier obtained by each round of training are stored during the training process. In this way, the classification results of the same optimal weak classifiers need not be repeatedly recomputed in every round while training the strong classifier, which reduces the training time of the face detection model and improves its training efficiency.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the present invention and form a part of it. The schematic embodiments of the invention and their description are used to explain the invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1a is a schematic diagram of a two-rectangle feature in an embodiment of the present invention;
Fig. 1b is a schematic diagram of a three-rectangle feature in an embodiment of the present invention;
Fig. 1c is a schematic diagram of a four-rectangle feature in an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of a face detection model training method according to an embodiment of the present invention;
Fig. 3 is a schematic flow diagram of obtaining the classification result set of the optimal weak classifiers according to an embodiment of the present invention;
Fig. 4 is a schematic flow diagram of a face detection model training method according to another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a face detection model training device according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
Embodiment
In order to reduce the time required for face detection model training and improve the efficiency of face detection model training, embodiments of the present invention provide a face detection model training method, device and associated medium.
The preferred embodiments of the present invention are described below in conjunction with the accompanying drawings of the specification. It should be understood that the preferred embodiments described here are only used to illustrate and explain the present invention and are not intended to limit it, and that, where no conflict arises, the embodiments of the invention and the features in the embodiments may be combined with each other.
Herein, the terms involved are to be understood as follows:
Detection rate and false detection rate of a strong classifier: the detection rate and the false detection rate can be specified in advance as needed. The classification threshold of the strong classifier can be determined from the detection rate, and the false detection rate of the strong classifier is then determined from that classification threshold. Thus, to decide whether a strong classifier meets the requirement, one only needs to compare its false detection rate with the pre-specified false detection rate threshold.
A classifier corresponding to a Haar feature is a weak classifier. The classifier trained with the AdaBoost algorithm (a set of several optimal weak classifiers) can be called a strong classifier or an AdaBoost classifier. Strong classifiers stand in contrast to weak classifiers: the classification ability of a weak classifier is weak, typically only slightly better than random guessing (i.e., a correct classification rate of 0.5), while the classification ability of a strong classifier is much higher than random guessing. A cascade classifier is composed of strong classifiers.
Assume the optimal weak classifiers are h1, h2, ..., hm; then the output of the strong classifier is R = a1·h1 + a2·h2 + ... + am·hm, where a1, a2, ..., am are the weights of the weak classifiers, and each weight is related to the classification rate of the corresponding weak classifier.
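For illustration only, a minimal Python sketch of this weighted combination (the function names and the final threshold comparison are assumptions for the sketch, not part of the invention):

```python
def strong_classify(sample, weak_classifiers, alphas, threshold):
    """Weighted vote R = a1*h1 + a2*h2 + ... + am*hm over the optimal
    weak classifiers; each h maps a sample to 0 (non-face) or 1 (face)."""
    r = sum(a * h(sample) for a, h in zip(alphas, weak_classifiers))
    return 1 if r >= threshold else 0  # compare R against the classification threshold
```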
Assume the specified detection rate and false detection rate threshold of the strong classifier are fd and fa respectively. The classification threshold T1 of the strong classifier is determined from fd, and the false detection rate fa_ of the strong classifier is obtained from T1 and the negative samples. If fa_ < fa, the strong classifier meets the requirement; otherwise it does not, and the number of weak classifiers must be increased and training repeated to obtain a new strong classifier, which is subjected to the same judgement.
Cascade classifier: the generalization ability of a single strong classifier is not strong, so several strong classifiers are usually joined together to form a cascade classifier.
In addition, any number of elements in the drawings is illustrative rather than limiting, and any naming is only used for distinction and carries no limiting meaning.
The inventors have found that, in existing methods that perform face detection based on the Haar+AdaBoost algorithm, the Haar features have been extended to make them more representative: the original 5 classes of Haar features were extended to 14 classes, so that Haar features can better characterize faces under various viewing angles. Face detection with the Haar+AdaBoost algorithm has thus been generalized from frontal faces to multiple viewing angles, but improved algorithms that address the training process of the face detection model are rare.
In view of this, embodiments of the present invention provide a face detection model training method that improves the efficiency of face detection model training by reducing the time consumed by the training.
The AdaBoost algorithm is an iterative algorithm. For a given training set, different training sets Si are obtained by changing the distribution probability of each sample in it; training on each Si yields a weak classifier Hi, and combining these classifiers according to different weights yields a strong classifier.
In the first round of training, the samples in the training set are uniformly distributed, and a classifier H0 is obtained by training. In this training set, the distribution probability of a correctly classified sample is reduced, and that of a misclassified sample is increased, so that the new training set S1 obtained in this way is mainly aimed at the samples that were poorly classified. S1 is then used for training to obtain a classifier H1, and the iteration continues in the same way; if the number of iterations is T, T classifiers are obtained. As for the weight of each classifier, the higher its classification accuracy, the higher its weight.
Haar features are templates of rectangle features; Fig. 1a, Fig. 1b and Fig. 1c are schematic diagrams of a two-rectangle feature, a three-rectangle feature and a four-rectangle feature respectively. For a given 24x24 window, more than 160,000 features can be produced according to different positions and different scalings. A weak classifier actually selects, from these 160,000+ features, one feature that distinguishes faces from non-faces with a relatively low error rate.
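A weak classifier of this kind is in essence a decision stump over a single Haar feature value. A minimal sketch under that reading (the polarity parameter and the names are assumptions):

```python
def weak_classify(feature_value, theta, polarity=1):
    """Decision stump on one Haar feature: predict face (1) when
    polarity * feature_value < polarity * theta, else non-face (0)."""
    return 1 if polarity * feature_value < polarity * theta else 0
```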
On this basis, an embodiment of the present invention provides a face detection model training method which, as shown in Fig. 2, may include the following steps:
S21: for a given training sample set, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and store the set, wherein the weight of each sample in the training sample set is updated before each round of training.
S22: according to the stored classification result set, if it is judged that the strong classifier does not meet a first preset condition, increase the number of weak classifiers used for training the strong classifier and then retrain with the training sample set to obtain a new strong classifier, until the obtained strong classifier meets the first preset condition.
S23: update a cascade classifier with the obtained strong classifier.
S24: end the training if the false detection rate of the cascade classifier is not greater than a first preset threshold; otherwise, update the training sample set and retrain until the false detection rate of the obtained cascade classifier is not greater than the first preset threshold.
In step S21, for the given training sample set, multiple rounds of training can be performed according to the flow shown in Fig. 3 to obtain a strong classifier and store the classification results of the weak classifiers it includes:
S31: obtain the given training sample set.
The training sample set may include several face samples (positive samples) and several non-face samples (negative samples); for example, the training sample set may include 2000 positive samples and 4000 negative samples.
S32: extract the N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature.
In this step, for the given training samples, the N Haar features of the samples are first computed, so as to obtain N weak classifiers.
In specific implementation, the Haar features of a sample can be calculated according to the following formula: F = (Sum_b - Sum_w) / (Sum_b + Sum_w), wherein:
F represents any Haar feature corresponding to the sample;
Sum_b represents the sum of the pixel values in the first area;
Sum_w represents the sum of the pixel values in the second area.
Taking the two-rectangle feature as an example, suppose the black and white rectangles shown in Fig. 1a each contain n pixels. According to the pixel values of the pixels included in each rectangle, the sum of the pixel values of each rectangular area is calculated; suppose the sum of all pixel values in the black area is Sum_b and the sum of all pixel values in the white rectangular area is Sum_w, then the Haar feature can be calculated according to the above formula. According to different positions of the rectangular areas and different scalings, N Haar features can be obtained, where N is a positive integer greater than or equal to 1, and correspondingly N weak classifiers can be obtained.
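A possible sketch of this feature computation (rectangle coordinates and helper names are illustrative assumptions; a production implementation would normally use an integral image to obtain the region sums in constant time):

```python
import numpy as np

def haar_feature(window, black_rect, white_rect):
    """F = (Sum_b - Sum_w) / (Sum_b + Sum_w) for one two-rectangle feature.

    window: 2-D array of pixel values, e.g. a 24x24 grayscale patch.
    black_rect / white_rect: (row, col, height, width) of the two areas.
    """
    def region_sum(rect):
        r, c, h, w = rect
        return float(window[r:r + h, c:c + w].sum())

    sum_b = region_sum(black_rect)
    sum_w = region_sum(white_rect)
    return (sum_b - sum_w) / (sum_b + sum_w)  # assumes a non-zero denominator
```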
S33: classify the training samples with each weak classifier respectively to obtain corresponding classification results.
In this step, the classifiers obtained in step S32 are used to classify the training samples respectively, and the classification results corresponding to each weak classifier can be obtained.
S34: determine, according to the classification results, the optimal weak classifier obtained by this round of training.
In this step, the weak classifier with the highest accuracy can be selected as the optimal weak classifier; that is, the optimal weak classifier is the weak classifier with the highest correct classification rate.
S35: add the obtained optimal weak classifier to the strong classifier, and add the classification results of the optimal weak classifier obtained by this round of training to the classification result set of the strong classifier.
S36: judge whether the number of optimal weak classifiers included in the obtained strong classifier is less than a second preset threshold; if so, perform step S37; if not, the flow ends.
S37: update the weight of each sample in the training sample set according to the classification results of the optimal weak classifier obtained by this round of training, and return to step S32.
In step S37, for each sample in the training sample set, the weight corresponding to the sample can be determined according to the following formula: w_{t+1,i} = w_{t,i} · β_t^(1-e_i), where β_t = ε_t / (1 - ε_t), wherein: w_{t,i} is the weight of the i-th sample in round t; when sample x_i is correctly classified, e_i = 0, otherwise e_i = 1; and ε_t is the misclassification rate of the optimal weak classifier in round t, i.e. the ratio of the number of misclassified samples to the total number of samples.
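A minimal sketch of this update (the vectorized form and the normalization step are assumptions consistent with the standard AdaBoost weight update):

```python
import numpy as np

def update_weights(weights, e, eps_t):
    """w_{t+1,i} = w_{t,i} * beta_t^(1 - e_i) with beta_t = eps_t / (1 - eps_t);
    e[i] = 0 for a correctly classified sample, 1 otherwise. After
    renormalization, misclassified samples carry relatively more weight."""
    beta_t = eps_t / (1.0 - eps_t)
    new_w = weights * beta_t ** (1 - e)
    return new_w / new_w.sum()  # renormalize to a probability distribution
```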
If the number of optimal weak classifiers included in the strong classifier is less than the second preset threshold, the weights of the samples are updated and the flow returns to step S32, until the number of optimal weak classifiers included in the obtained strong classifier reaches the second preset threshold. In specific implementation, the second preset threshold can be set based on empirical values, which is not limited in the embodiments of the present invention.
In this way, after multiple rounds of training, a strong classifier and the classification result set of each optimal weak classifier it includes can be obtained. The strong classifier contains the optimal weak classifier obtained in each round of training, and the classification result set of the strong classifier contains the classification results of the optimal weak classifier obtained in each round.
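Put together, one possible sketch of this round-by-round training with stored classification results (all names and data structures here are illustrative assumptions, not the claimed implementation):

```python
def train_strong_classifier(samples, labels, weights, stumps, num_weak):
    """Select num_weak optimal weak classifiers; each round's classification
    results are appended to result_set so that later stages can reuse them
    instead of recomputing them."""
    strong, result_set = [], []
    for _ in range(num_weak):
        # pick the stump with the lowest weighted misclassification rate
        best, best_preds, best_eps = None, None, float("inf")
        for stump in stumps:
            preds = [stump(s) for s in samples]
            eps = sum(w for w, p, y in zip(weights, preds, labels) if p != y)
            if eps < best_eps:
                best, best_preds, best_eps = stump, preds, eps
        strong.append(best)
        result_set.append(best_preds)  # stored once, reused later
        # weight update as in the formula above
        beta = max(best_eps, 1e-10) / (1.0 - best_eps)
        weights = [w * (beta if p == y else 1.0)
                   for w, p, y in zip(weights, best_preds, labels)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return strong, result_set
```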
After the strong classifier and its corresponding classification result set have been obtained, it is further necessary to judge whether the obtained strong classifier meets the first preset condition. Specifically, the classification threshold corresponding to the strong classifier is determined according to the classification result set of the weak classifiers included in the strong classifier; the false detection rate corresponding to the strong classifier is determined according to the classification threshold; if the false detection rate is not greater than the preset false detection rate threshold, it is determined that the strong classifier meets the first preset condition, and if the false detection rate is greater than the preset false detection rate threshold, it is determined that the strong classifier does not meet the first preset condition.
In specific implementation, the classification threshold corresponding to the obtained strong classifier can be determined as follows:
The classification threshold is calculated from the given detection rate. Suppose the given detection rate is d and the number of positive samples in the training sample set is n. Passing each positive sample through the strong classifier yields n values; these values are sorted in descending order, and the ⌈n·d⌉-th value is taken as the classification threshold T, where ⌈r⌉ denotes the smallest integer greater than r.
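A sketch of this threshold selection (the ceil-based indexing follows the description above and is otherwise an assumption):

```python
import math

def classification_threshold(positive_scores, d):
    """Sort the strong-classifier values of the n positive samples in
    descending order and take the ceil(n*d)-th value as the threshold T,
    so that a fraction d of the positives scores at or above T."""
    ranked = sorted(positive_scores, reverse=True)
    k = math.ceil(len(ranked) * d)
    return ranked[k - 1]
```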
In specific implementation, the false detection rate of the obtained strong classifier can be determined as follows:
The false detection rate is calculated from the classification threshold T obtained from the detection rate. The m negative samples are passed through the strong classifier in turn; an obtained value greater than the classification threshold T indicates that the sample is falsely detected. Suppose there are k such samples; then the false detection rate Rfa can be calculated as Rfa = k / m.
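Correspondingly, the false detection rate computation might look like the following sketch:

```python
def false_detection_rate(negative_scores, T):
    """Fraction of the m negative samples whose strong-classifier value
    exceeds T, i.e. the negatives falsely detected as faces."""
    return sum(1 for s in negative_scores if s > T) / len(negative_scores)
```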
It is judged whether the obtained false detection rate is greater than the preset false detection rate threshold. If it is, it is determined that the obtained strong classifier does not meet the first preset condition; in this case the number of weak classifiers needs to be increased before the strong classifier is trained again, until the strong classifier meets the first preset condition. If the false detection rate is not greater than the preset false detection rate threshold, it is determined that the obtained strong classifier meets the first preset condition, and the cascade classifier is updated with the strong classifier obtained in this round.
In specific implementation, the strong classifiers can be combined according to different weight values to obtain the cascade classifier, wherein the weight value corresponding to each strong classifier is related to its classification results: the higher the accuracy of the classification results, the higher the corresponding weight value.
It should be noted that if, in the first round, the strong classifier is determined to meet the first preset condition, the cascade classifier obtained from that strong classifier can be output directly; that is, the cascade classifier then contains only one strong classifier. It is then judged whether the false detection rate of the cascade classifier meets the requirement; if it does not, the samples need to be updated and training repeated to obtain a new strong classifier, which is combined, with a weight determined according to its classification results, with the previously trained strong classifiers to obtain a new cascade classifier, and so on, until the false detection rate of the obtained cascade classifier meets the requirement.
After the cascade classifier has been updated, it is also necessary to test whether its false detection rate is not greater than the first preset threshold. If it is not, the training can end and the obtained cascade classifier is output as the face detection model; otherwise, the samples need to be updated and the training repeated until the false detection rate of the trained cascade classifier is not greater than the first preset threshold.
In specific implementation, when the cascade classifier is updated, the set of optimal weak classifiers obtained in this round can be placed after the cascade classifier obtained in the previous round. In this way, when the obtained cascade classifier is tested, a sample is input to the optimal weak classifiers newly added in this round for detection only after it has passed the preceding weak classifiers.
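A minimal sketch of this staged evaluation (the stage representation is an assumption):

```python
def cascade_detect(sample, stages):
    """Pass the sample through the strong-classifier stages in order; it
    reaches the newly appended stage only after passing all earlier ones."""
    for stage in stages:  # each stage maps a sample to 0 (reject) or 1 (pass)
        if stage(sample) == 0:
            return 0  # rejected early as non-face
    return 1  # survived every stage: detected as a face
```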
For a better understanding of the embodiments of the present invention, the implementation process is described in detail below in conjunction with the implementation flow of the embodiment, which, as shown in Fig. 4, may include the following steps:
S41: obtain the given training sample set.
S42: extract M Haar features of the samples to obtain M weak classifiers.
S43: classify the training samples with each weak classifier respectively to obtain corresponding classification results.
In this step, M classification results can be output, wherein the classification results of a weak classifier can include the accuracy of its classification results.
S44: select the optimal weak classifier according to the classification results.
In specific implementation, the weak classifier with the highest classification accuracy can be selected as the optimal weak classifier obtained by this round of training.
S45: add the obtained optimal weak classifier to the strong classifier, and add the classification results of the obtained optimal weak classifier to the classification result set corresponding to the strong classifier and store them.
S46: judge whether the number of optimal weak classifiers included in the obtained strong classifier reaches the second preset threshold; if so, perform step S47; otherwise, perform step S48.
S47: judge whether the obtained strong classifier meets the first preset condition; if so, perform step S49; otherwise, perform step S412.
S48: update the weight corresponding to each sample included in the training sample set according to the classification results of the obtained optimal weak classifier, and perform step S42.
S49: update the cascade classifier with the obtained strong classifier.
S410: judge whether the false detection rate of the obtained cascade classifier is not greater than the preset false detection rate threshold; if so, the flow ends; otherwise, perform step S411.
S411: update the training sample set, and perform step S42.
S412: increase the number of weak classifiers, and perform step S43.
In the face detection model training method provided by the embodiments of the present invention, the classification results of the optimal weak classifier obtained by each round of training are stored during the training process. In this way, the classification results of the same optimal weak classifiers need not be repeatedly recomputed in every round of training, which reduces the training time of the face detection model and improves its training efficiency.
In the embodiments of the present invention, what is trained first is an optimal weak classifier: from the numerous Haar features (each feature corresponding to one weak classifier), the optimal one, namely the weak classifier with the lowest misclassification rate, is selected. Several optimal weak classifiers form a strong classifier (a set of optimal weak classifiers): the number of optimal weak classifiers included in a strong classifier can be specified in advance. At the very start of training, a value can be specified based on experience; if the false detection rate of the trained strong classifier does not meet the requirement, the number of weak classifiers in the strong classifier is increased, a strong classifier being composed of several weak classifiers.
Based on the same inventive concept, an embodiment of the present invention further provides a face detection model training device. Since the principle by which the device solves the problem is similar to that of the face detection model training method, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Fig. 5 is a schematic structural diagram of a face detection model training device provided by an embodiment of the present invention, which may include:
a first training unit 51, configured to, for a given training sample set, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and store the set, wherein the weight of each sample in the training sample set is updated before each round of training;
a second training unit 52, configured to, according to the stored classification result set, if it is judged that the strong classifier does not meet a first preset condition, increase the number of weak classifiers used for training the strong classifier and then retrain with the training sample set to obtain a new strong classifier, until the obtained strong classifier meets the first preset condition;
an updating unit 53, configured to update a cascade classifier with the obtained strong classifier;
a third training unit 54, configured to end the training if the false detection rate of the cascade classifier is not greater than a first preset threshold, and otherwise to update the training sample set and retrain until the false detection rate of the obtained cascade classifier is not greater than the first preset threshold.
Alternatively, the first training unit 51 includes:
an extraction subunit, configured to, for the given training sample set, extract N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1;
a classification subunit, configured to classify the training samples with each weak classifier respectively to obtain corresponding classification results;
a determination subunit, configured to determine, according to the classification results, the optimal weak classifier obtained by this round of training;
an adding subunit, configured to add the obtained optimal weak classifier to the strong classifier, and to add the classification results of the optimal weak classifier obtained by this round of training to the classification result set of the strong classifier;
a training subunit, configured to, if the number of optimal weak classifiers included in the obtained strong classifier is less than a second preset threshold, update the weight of each sample in the training sample set according to the classification results of the optimal weak classifier, then retrain to obtain a new optimal weak classifier and store its classification results, until the number of optimal weak classifiers included in the strong classifier reaches the second preset threshold.
Alternatively, the face detection model training device further includes:
a first determining unit, configured to determine the classification threshold corresponding to the strong classifier according to the classification result set corresponding to the strong classifier;
a second determining unit, configured to determine the false detection rate corresponding to the strong classifier according to the classification threshold;
a third determining unit, configured to, if the false detection rate is not greater than a preset false detection rate threshold, determine that the strong classifier meets the first preset condition, and, if the false detection rate is greater than the preset false detection rate threshold, determine that the strong classifier does not meet the first preset condition.
Alternatively, the extraction subunit is specifically configured to, for each sample, extract the Haar features corresponding to the sample according to the following formula: F = (Sum_b - Sum_w) / (Sum_b + Sum_w), wherein:
F represents any Haar feature corresponding to the sample;
Sum_b represents the sum of the pixel values in the first area;
Sum_w represents the sum of the pixel values in the second area.
For convenience of description, the parts above are divided into modules (or units) by function and described separately. Of course, when implementing the present invention, the functions of the modules (or units) may be realized in one or more pieces of software or hardware.
Having described the face detection model training method and device of the exemplary embodiments of the present invention, a computing device according to another exemplary embodiment of the invention is introduced next.
Those skilled in the art can understand that various aspects of the present invention can be implemented as a system, a method or a program product. Therefore, various aspects of the invention can be implemented in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may be collectively referred to here as a "circuit", a "module" or a "system".
In some possible embodiments, a computing device according to the present invention may include at least one processing unit and at least one storage unit, wherein the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps of the face detection model training method according to the various exemplary embodiments of the present invention described above in this specification. For example, the processing unit may perform step S21 shown in Fig. 2: for a given training sample set, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and store the set, wherein the weight of each sample in the training sample set is updated before each round of training; step S22: according to the stored classification result set, if it is judged that the strong classifier does not meet a first preset condition, increase the number of weak classifiers used for training the strong classifier and then retrain with the training sample set to obtain a new strong classifier, until the obtained strong classifier meets the first preset condition; step S23: update a cascade classifier with the obtained strong classifier; and step S24: end the training if the false detection rate of the cascade classifier is not greater than a first preset threshold, and otherwise update the training sample set and retrain until the false detection rate of the obtained cascade classifier is not greater than the first preset threshold.
A computing device 60 according to this embodiment of the present invention is described below with reference to Fig. 6. The computing device 60 shown in Fig. 6 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
As shown in Fig. 6, the computing device 60 takes the form of a general-purpose computing device. The components of the computing device 60 may include, but are not limited to: the above-mentioned at least one processing unit 61, the above-mentioned at least one storage unit 62, and a bus 63 connecting the different system components (including the storage unit 62 and the processing unit 61).
The bus 63 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus structures.
The storage unit 62 may include a computer-readable storage medium in the form of volatile memory, such as a random access memory (RAM) 621 and/or a cache memory 622, and may further include a read-only memory (ROM) 623.
The storage unit 62 may also include a program/utility 625 with a set (at least one) of program modules 624, such program modules 624 including but not limited to: an operating system, one or more application programs, other program modules and program data, each or some combination of these examples possibly including an implementation of a network environment.
The computing device 60 may also communicate with one or more external devices 64 (such as a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the computing device 60, and/or with any device (such as a router, a modem, etc.) that enables the computing device 60 to communicate with one or more other computing devices. Such communication can take place through an input/output (I/O) interface 65. Moreover, the computing device 60 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) through a network adapter 66. As shown, the network adapter 66 communicates with the other modules of the computing device 60 through the bus 63. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in combination with the computing device 60, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, various aspects of the face detection model training method provided by the present invention can also be implemented in the form of a program product that includes program code which, when the program product is run on a computing device, causes the computer equipment to perform the steps of the face detection model training method according to the various exemplary embodiments of the present invention described above in this specification. For example, the computer equipment may perform step S21 shown in Fig. 2: for a given training sample set, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and the classification result set of the weak classifiers it includes, and store the set, wherein the weight of each sample in the training sample set is updated before each round of training; step S22: according to the stored classification result set, if it is judged that the strong classifier does not meet a first preset condition, increase the number of weak classifiers used for training the strong classifier and then retrain with the training sample set to obtain a new strong classifier, until the obtained strong classifier meets the first preset condition; step S23: update a cascade classifier with the obtained strong classifier; and step S24: end the training if the false detection rate of the cascade classifier is not greater than a first preset threshold, and otherwise update the training sample set and retrain until the false detection rate of the obtained cascade classifier is not greater than the first preset threshold.
The program product can employ any combination of one or more readable media. A readable medium can be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of readable storage media (a non-exhaustive list) include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The program product for face detection model training of the embodiments of the present invention can employ a portable compact disc read-only memory (CD-ROM) including program code, and can run on a computing device. However, the program product of the present invention is not limited to this; in this document, a readable storage medium can be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal can take various forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium can also be any readable medium other than a readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
The program code contained on a readable medium can be transmitted with any appropriate medium, including, but not limited to, wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the present invention can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In the case involving a remote computing device, the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or subunits of the device are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more of the units described above can be embodied in one unit; conversely, the features and functions of one unit described above can be further divided so as to be embodied by multiple units.
In addition, although the operations of the method of the present invention are described in a particular order in the drawings, this does not require or imply that these operations must be performed in that particular order, or that all of the operations shown must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Those skilled in the art should understand that embodiments of the present invention can be provided as a method, a system or a computer program product. Therefore, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.