CN1758263A - Multi-modal identity recognition method based on score difference weighted fusion - Google Patents

Multi-modal identity recognition method based on score difference weighted fusion

Info

Publication number
CN1758263A
CN1758263A (application CN200510061359.0A)
Authority
CN
China
Prior art keywords
classifier
score
difference
score difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200510061359.0A
Other languages
Chinese (zh)
Other versions
CN100363938C (en)
Inventor
吴朝晖 (Wu Zhaohui)
杨莹春 (Yang Yingchun)
李东东 (Li Dongdong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CNB2005100613590A (granted as CN100363938C)
Publication of CN1758263A
Application granted
Publication of CN100363938C
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

(Translated from Chinese)

The invention relates to a multi-modal identity recognition method based on score difference weighted fusion. First, a set of speaker samples is passed through each of the original traditional single-modality classifiers to obtain, for every sample, the score of each speaker model in the template set. If the highest-scoring model and the sample belong to different speakers, the difference between the two scores is recorded. All such differences within a single classifier are then accumulated, and finally the score difference of each classifier determines the weight of its modality. The beneficial effects of the invention are: multiple biometric features are used for cross authentication, and a modified score-difference-based weighting algorithm, SDWS, fuses the two biometric authentication modalities, combining the results of the two authentications. Exploiting the strengths of the two kinds of biometric information improves fault tolerance, reduces uncertainty, overcomes the incompleteness of single biometric information, and strengthens the reliability of the recognition decision, giving the method broader security and adaptability.

Description

Multi-modal identity recognition method based on score difference weighted fusion
Technical field
The present invention relates to multiple-classifier fusion technology, and in particular to a multi-modal identity recognition method based on score difference weighted fusion.
Background technology
In real-life applications, identity discrimination is a demanding task: it must deliver very high performance together with strong robustness. Biometric authentication, which uses a person's own physical characteristics as the basis of authentication, is fundamentally different from traditional techniques based on "something you have" or "something you know"; it truly lets people represent themselves as the basis of authentication.
Among the many biometric technologies, identity discrimination based on voice and on facial images are two currently popular approaches. Voiceprint recognition, i.e. speaker recognition, cannot be lost, requires no memorization, and is convenient, economical, and accurate; face recognition is proactive, non-intrusive, and user-friendly. Used alone, however, each method is constrained by some performance ceiling or exhibits instability. Information fusion, which combines the advantages of the individual sub-modalities, is therefore an effective way to raise recognition reliability.
Almost all current multi-modal recognition methods fuse at the decision level. According to the fusion rule, decision-level fusion generally follows one of two strategies. One uses fixed parameters, such as the averaging method, the voting method, the sum rule, and so on; the other requires parameter training, such as Dempster-Shafer, behavior knowledge space, and naive Bayes methods.
Fixed-parameter fusion methods are strongly affected by how well the classifiers happen to pair with each other, while the quality and size of the training set often keep parameter-training decision-level fusion methods from reaching their theoretical fusion performance.
Summary of the invention
The present invention addresses the above defects of the existing technology and provides a multi-modal identity recognition method based on score difference weighted fusion. By studying the recognition scores of a single classifier and using the score difference between the recognized class and the true class as the basis for the weights, a new weight-training method, the Scores Difference-Based Weighted Sum rule (SDWS), is obtained and used to fuse a voiceprint classifier and a face classifier, thereby improving speaker recognition performance.
The technical solution adopted by the present invention is as follows. In this multi-modal identity recognition method based on score difference weighted fusion, a set of speaker samples is first passed through each of the original traditional single-modality classifiers to obtain, for every sample, the score of each speaker model in the template set. If the highest-scoring model and the sample belong to different speakers, the difference between the two scores is recorded. All such differences within a single classifier are then accumulated. Finally, the score difference of each classifier determines the weight of its modality.
The technical solution can be further refined. The traditional single-modality classifiers are a voiceprint recognition classifier and a face recognition classifier. The score is the classifier's support for the hypothesis that the input data belongs to a certain class. The score difference is, in the case of a classifier error, i.e. when the true class of the input data disagrees with the class the classifier hypothesizes, the difference between the classifier's support for those two classes. The score difference of a classifier is the sum, over all misidentified samples in that single classifier, of the differences between the score of the speaker model the sample truly belongs to and the top score. The score-difference-based weight of a classifier is the ratio of the reciprocal of that classifier's score difference to the sum of the reciprocals of all classifiers' score differences.
The beneficial effects of the present invention are: multiple biometric features (voiceprint and face) are used for cross authentication, and a modified score-difference-based weighting algorithm, SDWS, fuses the two biometric modalities, combining the results of the two authentications. Exploiting the advantages and applicable domains of the two kinds of biometric information improves fault tolerance, reduces uncertainty, overcomes the incompleteness of single biometric information, strengthens the reliability of the recognition decision, and gives the system broader security and adaptability.
Description of drawings
Fig. 1 is a block diagram of the multi-modal recognition system based on score difference weighted fusion (SDWS) of the present invention;
Fig. 2 is a schematic diagram of the topology of the dynamic Bayesian model of the present invention.
Embodiment
The invention is described further below with reference to the drawings and embodiments. The method of the present invention is divided into three steps.
Step 1: voiceprint recognition
Speaker recognition is divided into four parts: speech preprocessing, feature extraction, model training, and recognition.
1. Speech preprocessing
Speech preprocessing is divided into four parts: sampling and quantization, DC drift removal, pre-emphasis, and windowing.
A) Sampling and quantization
I. Filter the audio signal with an anti-aliasing filter so that its Nyquist frequency F_N is 4 kHz;
II. Set the audio sampling rate F = 2 F_N;
III. Sample the audio signal s_a(t) periodically to obtain the amplitude sequence s(n) = s_a(n/F) of the digital audio signal;
IV. Quantize s(n) with pulse code modulation (PCM) to obtain the quantized amplitude sequence s'(n).
B) DC drift removal
I. Compute the mean value of the quantized amplitude sequence;
II. Subtract the mean from each amplitude to obtain the zero-mean amplitude sequence s''(n).
C) Pre-emphasis
I. Set the pre-emphasis factor α in the transfer function H(z) = 1 - α z^{-1} of the digital filter; α takes a value slightly smaller than 1;
II. Pass s''(n) through the digital filter to obtain an amplitude sequence in which the high-, mid-, and low-frequency amplitudes of the audio signal are balanced.
D) Windowing
I. Compute the frame length N (32 milliseconds) and the frame shift T (10 milliseconds) of the speech frames, satisfying
N / F = 0.032
T / F = 0.010
where F is the speech sampling rate in Hz;
II. With frame length N and frame shift T, divide the signal into a series of speech frames F_m, each containing N speech samples;
III. Compute the Hamming window function:
w(n) = 0.54 - 0.46 \cos\!\left( \frac{2\pi n}{N-1} \right), \quad 0 \le n \le N-1
IV. Apply the Hamming window to each speech frame F_m.
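A minimal sketch of the framing and windowing step, assuming the 32 ms / 10 ms frame parameters above and the standard Hamming window:

    import numpy as np

    def frame_and_window(signal, F=8000, frame_s=0.032, shift_s=0.010):
        N = int(F * frame_s)   # frame length, from N / F = 0.032
        T = int(F * shift_s)   # frame shift, from T / F = 0.010
        n_frames = 1 + max(0, (len(signal) - N) // T)
        frames = np.stack([signal[m * T : m * T + N] for m in range(n_frames)])
        window = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))  # Hamming
        return frames * window  # one windowed speech frame F_m per row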
2. MFCC extraction
A) Set the order p of the Mel cepstral coefficients;
B) Apply the fast Fourier transform (FFT) to turn the time-domain signal s(n) into the frequency-domain signal X(k);
C) Compute the Mel-domain scale:
M_i = \frac{i}{p} \cdot 2595 \log_{10}\!\left(1 + \frac{8000/2.0}{700.0}\right), \quad i = 0, 1, 2, \dots, p
D) Compute the corresponding frequency-domain scale:
f_i = 700 \left( e^{\frac{M_i}{2595} \ln 10} - 1 \right), \quad i = 0, 1, 2, \dots, p
E) Compute the log energy spectrum on each Mel-domain channel φ_j:
E_j = \sum_{k=0}^{K/2-1} \phi_j(k) \, |X(k)|^2, \quad \text{where} \quad \sum_{k=0}^{K/2-1} \phi_j(k) = 1
F) Apply the discrete cosine transform (DCT).
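The MFCC computation above can be sketched as follows. The Mel and frequency scales follow the formulas in steps C) and D); the triangular shape of the channels φ_j, the use of p + 2 boundary points for p filters, and the order p = 13 are assumptions where the patent leaves details open.

    import numpy as np
    from scipy.fftpack import dct

    def mfcc(frames, F=8000, p=13, K=256):
        X = np.fft.rfft(frames, n=K)          # step B: time domain -> frequency domain
        power = np.abs(X) ** 2                # |X(k)|^2
        # steps C/D: Mel-domain scale and its frequency-domain counterpart
        M = np.arange(p + 2) / (p + 1) * 2595.0 * np.log10(1.0 + (F / 2.0) / 700.0)
        f = 700.0 * (10.0 ** (M / 2595.0) - 1.0)
        bins = np.floor((K + 1) * f / F).astype(int)
        fbank = np.zeros((p, K // 2 + 1))     # triangular channels phi_j (assumed shape)
        for j in range(p):
            lo, ctr, hi = bins[j], bins[j + 1], bins[j + 2]
            for k in range(lo, ctr):
                fbank[j, k] = (k - lo) / max(ctr - lo, 1)
            for k in range(ctr, hi):
                fbank[j, k] = (hi - k) / max(hi - ctr, 1)
            if fbank[j].sum() > 0:
                fbank[j] /= fbank[j].sum()    # normalize so sum_k phi_j(k) = 1
        E = np.log(np.maximum(power @ fbank.T, 1e-12))  # step E: log energy spectrum E_j
        return dct(E, type=2, axis=1, norm='ortho')     # step F: DCT -> cepstra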
3. DBN model training
The dynamic Bayesian network (DBN) model is, like the HMM, a generative model: one person's speech data alone suffices to build that person's model and carry out recognition.
The purpose of training is to make the model parameters describe, as well as possible, how the given speech data are distributed in feature space. The DBN training here focuses on the model parameters; the network topology is not learned.
A) If the likelihood has not converged and the number of iterations is below the preset limit, go to step B); otherwise go to step E).
Convergence is defined as:
\text{Converged} = \begin{cases} \text{TRUE}, & |\text{PreLogLik} - \text{CurLogLik}| < \theta \\ \text{FALSE}, & \text{otherwise} \end{cases}
Here PreLogLik is the likelihood of the previous iteration and CurLogLik that of the current iteration; both are obtained by the forward-backward traversal in step C). θ is a preset threshold, and the preset maximum iteration count MAXITER can be set arbitrarily. This test keeps the iteration from running without bound.
B) Clear the relevant statistics of every node.
The statistics must be emptied before the forward-backward traversal; "statistics" here means the data needed to learn the conditional probability distribution (CPD) of a node.
C) Collect the observations, perform the forward-backward traversal, and output the likelihood.
The forward-backward traversal of the network propagates the update of the observed nodes to the other nodes so that the local-consistency and global-consistency conditions are satisfied. This step implements the junction-style algorithm, diffusing probabilities through the intra-frame structure with COLLECT-EVIDENCE and DISTRIBUTE-EVIDENCE. The traversal outputs the log likelihood used in step A); the probability output used in recognition is obtained by the same traversal.
D) From the observations, compute the relevant statistics and update the probability distributions of the dependent nodes; go to A).
These updates are determined by the EM learning algorithm.
E) Save the model.
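A skeleton of the training loop in steps A)-E), assuming a DBN object whose hypothetical methods clear_statistics, forward_backward, and update_cpds stand in for the operations described above:

    import numpy as np

    def train_dbn(model, observations, theta=1e-4, MAXITER=50):
        prev_loglik = -np.inf                                  # PreLogLik
        for _ in range(MAXITER):                               # step A: bound the iterations
            model.clear_statistics()                           # step B: empty node statistics
            cur_loglik = model.forward_backward(observations)  # step C: traversal, log likelihood
            model.update_cpds()                                # step D: EM update of the CPDs
            if abs(prev_loglik - cur_loglik) < theta:          # convergence test from step A
                break
            prev_loglik = cur_loglik
        return model                                           # step E: save the model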
4. Recognition
After the user's speech is input, feature extraction yields a feature vector sequence C. By Bayes' rule, the likelihood of model M_i given the data C is
P(M_i \mid C) = \frac{P(C \mid M_i) \, P(M_i)}{P(C)}
Since no prior knowledge is available, we take all model priors to be equal, P(M_i) = 1/N, i = 1, 2, \dots, N; and for all speakers P(C) is an unconditional probability, likewise identical, so that
P(M_i \mid C) \propto P(C \mid M_i)
Finding the posterior probability of a model is thus converted into finding the likelihood of the data under the model. Speaker identification then amounts to computing
i^* = \arg\max_i P(C \mid M_i)
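In code, the identification rule reduces to an argmax over model likelihoods; log_likelihood is a hypothetical method of the trained speaker model:

    import numpy as np

    def identify_speaker(C, models):
        # equal priors P(M_i) = 1/N make the posterior proportional to P(C | M_i)
        scores = [m.log_likelihood(C) for m in models]
        return int(np.argmax(scores))   # i* = argmax_i P(C | M_i)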
Step 2: face recognition
A 2-D face recognition system mainly comprises three parts: image preprocessing, feature extraction, and classification.
1. Image preprocessing
The general aim of image preprocessing is to compensate for the differences in illumination and geometry among the original images and obtain normalized new images. Preprocessing comprises alignment and scaling of the images.
2. PCA feature extraction
Through the principal component transform, each face image is described in a low-dimensional (principal) subspace, so that components that interfere with classification are discarded while the discriminative information useful for classification is retained.
The preprocessed standard images form the training sample set, and the covariance matrix of this set serves as the generating matrix of the principal component transform:
\Sigma = \frac{1}{M} \sum_{i=0}^{M-1} (x_i - \mu)(x_i - \mu)^T
where x_i is the image vector of the i-th training sample, μ is the mean image vector of the training set, and M is the number of training samples. If the image size is K × L, the matrix Σ has dimension KL × KL. When the images are large, directly computing the eigenvalues and eigenvectors of the generating matrix is difficult; when the number of samples M is smaller than KL, the singular value decomposition (SVD) theorem converts the problem into a computation on an M-dimensional matrix.
Sort the eigenvalues in descending order, λ_0 ≥ λ_1 ≥ … ≥ λ_{R-1}, and let u_i denote their corresponding eigenvectors. Each face image can then be projected into the subspace spanned by u_0, u_1, …, u_{M-1}. Of the M eigenvectors obtained, the k largest are chosen such that
\frac{\sum_{i=0}^{k} \lambda_i}{\sum_{i=0}^{M-1} \lambda_i} = \alpha
where α, called the energy ratio, is the fraction of the sample set's total energy captured by the first k axes.
3. Classification
The nearest-neighbor classification method is used as the component classifier, with the Euclidean distance formula as the distance metric.
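Since the fusion stage below needs supports d_{i,j}(x) normalized to [0, 1], the nearest-neighbor classifier's Euclidean distances must be mapped to supports; the reciprocal-distance mapping in this sketch is one plausible choice, not specified by the patent:

    import numpy as np

    def nn_supports(x, class_templates):
        # one PCA feature template per class; smaller distance -> larger support
        d = np.array([np.linalg.norm(x - t) for t in class_templates])
        sim = 1.0 / (d + 1e-12)
        return sim / sim.sum()   # d_{i,j}(x) in [0,1], summing to 1 over classes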
Step 3: multiple classifier fusion based on score difference weighting
The multiple classifier fusion algorithm based on score difference weighting is divided into three parts: formal description of the classifiers, training, and decision.
1. Formal description of the classifiers
A) Classifier description: let D = \{D_1, D_2, \dots, D_L\} denote a group of component classifiers;
B) Class description: let \Omega = \{\omega_1, \dots, \omega_c\} denote the set of class labels, i.e. all possible classification results;
C) Input: a feature vector x \in \mathbb{R}^n;
D) Output: a vector of length c, D_i(x) = [d_{i,1}(x), d_{i,2}(x), \dots, d_{i,c}(x)]^T, where d_{i,j}(x) denotes the support of D_i for the hypothesis that x belongs to \omega_j. Each d_{i,j}(x) is a component classifier output normalized to the interval [0, 1], with
\sum_{j=1}^{c} d_{i,j}(x) = 1
E) The outputs of all classifiers can be assembled into a decision profile (DP) matrix:
DP(x) = \begin{bmatrix} d_{1,1}(x) & d_{1,2}(x) & \cdots & d_{1,c}(x) \\ \vdots & \vdots & & \vdots \\ d_{i,1}(x) & d_{i,2}(x) & \cdots & d_{i,c}(x) \\ \vdots & \vdots & & \vdots \\ d_{L,1}(x) & d_{L,2}(x) & \cdots & d_{L,c}(x) \end{bmatrix}
In this matrix, the i-th row is the output D_i(x) of component classifier D_i, and the j-th column holds the support of every component classifier for class \omega_j.
2. Training
A) Training samples: a training set X = \{x_1, x_2, \dots, x_N\} with N elements;
B) Recognition results of the classifiers on the samples:
S(X) = \begin{bmatrix} s_{1,1}(X) & \cdots & s_{1,L}(X) \\ \vdots & & \vdots \\ s_{j,1}(X) & \cdots & s_{j,L}(X) \\ \vdots & & \vdots \\ s_{N,1}(X) & \cdots & s_{N,L}(X) \end{bmatrix}
where s_{j,i} is the class assigned by classifier D_i to sample element x_j, and
s_{j,i} = D_i(x_j) = \omega_s \iff d_{i,s}(x_j) = \max_{o=1,2,\dots,c} \{ d_{i,o}(x_j) \}
Here j = 1, \dots, N indexes the elements of the training set, i = 1, \dots, L indexes the classifiers, and c is the number of classes, here the number of people to be identified.
C) True classes of the samples: L(X) = [k_1, \dots, k_N]^T, with \omega_{k_j} \in \Omega;
D) The score difference SD_i(X) of the i-th classifier is:
SD_i(X) = \sum_{j=1}^{N} SD_{ij}(x_j) = \sum_{j=1}^{N} \; \sum_{s_{j,i} \neq k_j} \left| d_{i,k_j}(x_j) - d_{i,s_{j,i}}(x_j) \right|
SD_i(X) accumulates, over the error cases of the classifier (where the true class of the input disagrees with the class the classifier hypothesizes, s_{j,i} ≠ k_j), the difference between the classifier's support for those two classes; d_{i,j}(x) is an element of the DP(x) matrix.
E) The score-difference-based weight of each classifier:
W_i = \frac{SD_i(X)^{-1}}{\sum_{i=1}^{L} SD_i(X)^{-1}}
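The training part of SDWS, as a sketch: given the supports of every classifier on every training sample, accumulate the score differences of the error cases and convert them into weights.

    import numpy as np

    def sdws_weights(DP_train, labels):
        # DP_train: (N, L, c) array, DP_train[j, i, o] = d_{i,o}(x_j); labels[j] = k_j (0-based)
        N, L, c = DP_train.shape
        SD = np.zeros(L)
        for i in range(L):
            for j in range(N):
                s = int(np.argmax(DP_train[j, i]))           # class picked by classifier i
                if s != labels[j]:                           # error case s_{j,i} != k_j
                    SD[i] += abs(DP_train[j, i, labels[j]] - DP_train[j, i, s])
        inv = 1.0 / np.maximum(SD, 1e-12)                    # smaller SD_i -> larger weight
        return inv / inv.sum()                               # W_i = SD_i^-1 / sum_i SD_i^-1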
3. Decision
Using the weights, the fused support for each class in the multi-modal state is recomputed:
D(x) = [d_1(x), d_2(x), \dots, d_c(x)]^T = \left[ \sum_{i=1}^{L} W_i \, d_{i,1}(x), \; \sum_{i=1}^{L} W_i \, d_{i,2}(x), \; \dots, \; \sum_{i=1}^{L} W_i \, d_{i,c}(x) \right]^T
The ensemble of classifiers classifies the test vector x as \omega_s if and only if s = \arg\max_{i=1,\dots,c} d_i(x).
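The decision step then reduces to a weighted sum of the classifiers' support vectors followed by an argmax, e.g.:

    import numpy as np

    def sdws_decide(DP_x, W):
        # DP_x: (L, c) supports of the L classifiers for one test vector x
        D = W @ DP_x                  # d_j(x) = sum_i W_i * d_{i,j}(x)
        return int(np.argmax(D))      # class omega_s with s = argmax_j d_j(x)

    # usage: W = sdws_weights(DP_train, labels); s = sdws_decide(DP_x, W)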
Experimental results
The system was tested on a multi-modal database containing the face and voiceprint information of 54 users. The database collects the face and voiceprint data of 54 Zhejiang University students (37 male, 17 female), recorded in a quiet, well-lit environment. In the speech part, each person was asked to say his or her personal information 3 times and to read Mandarin digit strings, dialect digit strings, English digit strings, Mandarin word strings, and picture-talk utterances, 10 of each, plus one short essay. The audio files are in wav/nist format, all normalized to an 8000 Hz sampling rate with 16-bit data. The experiments use the short essay and the personal information for training and the remaining 50 utterances for testing. In the face image part, each person provided 4 photographs, two frontal and two profile; one frontal photograph is used for training and the other for testing.
On this database we ran single-modality voiceprint recognition, single-modality face recognition, and several common decision-level fusion algorithms (sum, weighted sum, voting, and the behavior knowledge space method) in the same experiment, for comparison against our system (SDWS, the fusion algorithm based on score difference weighting). Voiceprint recognition is based on a person's speech features, face recognition on the facial features, and the fusion algorithms combine the two kinds of features. Sum and voting belong to the fixed-parameter fusion methods; weighted sum and the behavior knowledge space method are fusion algorithms that require parameter training.
The single-modality voiceprint speaker recognition method follows Step 1 of this description: after preprocessing the speech, Mel cepstral features are extracted and a dynamic Bayesian model is built for each speaker. The topology of the dynamic Bayesian model is the structure shown in Fig. 2, in which q_{ij}, i = 1, 2, 3, j = 1, 2, \dots, T are hidden node variables, each assumed to take two discrete values, and o_{ij}, i = 1, 2, 3, j = 1, 2, \dots, T are observation nodes corresponding to the observation vectors; conditioned on their discretely distributed parent nodes q_{ij}, they follow Gaussian distributions. Likewise, a test utterance, after preprocessing and Mel cepstral feature extraction, is matched against the trained speaker models, and the speaker corresponding to the highest-scoring model is taken as the identified speaker.
The single-modality face recognition follows Step 2 of this description: after the face image is manually located according to the eyes, PCA features are extracted, the Euclidean distances between PCA features are compared, and the person corresponding to the nearest feature is taken as the identified person.
For the sum rule, the idea can be expressed by the formula
\mu_i(x) = F\big(d_{1,i}(x), \dots, d_{L,i}(x)\big), \quad i = 1, \dots, c
where F denotes the sum operation, and the final classification result is the \omega_i whose index i maximizes \mu_i.
The weighted-sum algorithm grows out of the sum rule; weights express the difference in quality between the classifiers. Here the equal error rate of each classifier is adopted as its weight.
The basic idea of voting is majority rule. The voters are all the component classifiers and the candidates are all possible classification results; each voter votes for the candidate it supports, and the candidate with the most votes wins.
The behavior knowledge space method estimates posterior probabilities given the component classifiers' classification results. It must count how many samples of each class fall into each unit of the behavior knowledge space. When this method is used, the samples of the training set are divided into different units, defined by the possible combinations of the classification results of all component classifiers. When an unknown sample is to be classified, the combination of all the component classifiers' results is known and identifies the corresponding unit; the unknown sample is then assigned to the class that occurs most often among the training samples in that unit.
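For reference, the fixed-parameter baselines reduce to a few lines each; these sketches assume the same (L, c) support matrix used in the SDWS sketches above:

    import numpy as np

    def sum_rule(DP_x):
        return int(np.argmax(DP_x.sum(axis=0)))       # mu_i(x) with F = Sum

    def majority_vote(DP_x, n_classes):
        votes = np.argmax(DP_x, axis=1)               # each classifier votes its top class
        return int(np.bincount(votes, minlength=n_classes).argmax())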
We assessed single-modality recognition and the fusion algorithms above on speech sets that differ in content and language.
For performance assessment, the identification rate (IR) is used as the evaluation criterion of the speaker recognition system. The identification rate IR is computed as:
IR = (number of correctly identified test samples / total number of test samples) × 100%
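Computing the identification rate from predictions is then a one-liner, as in this sketch:

    import numpy as np

    def identification_rate(predicted, truth):
        # IR = correctly identified test samples / total test samples * 100%
        return 100.0 * np.mean(np.asarray(predicted) == np.asarray(truth))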
The experimental results are as follows (identification rate, %):

Fusion method               Mandarin   Dialect   English   Vocabulary   Picture talk
Voiceprint recognition      84.63      85.55     91.11     87.78        87.78
Face recognition            85.18      85.18     85.18     85.18        85.18
Sum                         85.37      85.18     86.11     85.18        85
Weighted sum                85.37      85.18     86.67     85.18        85
SDWS                        97.96      97.98     98.89     99.26        98.33
Voting                      85.18      85.18     85.18     85.18        85.18
Behavior knowledge space    89.15      89.68     92.33     90.21        88.10

(Face recognition uses images only, so its rate does not vary with the speech set.)
The experimental results show that a single-modality biometric authentication method cannot reach a satisfactory identification rate and cannot meet the requirements of security and robustness.
When two classifiers are fused, the sum and weighted-sum methods, because they ignore the score distributions of the classifiers, tend instead to let the advantages of the two classifiers cancel each other out.
Voting considers only the class labels output by each classifier and not their error rates, which to some extent wastes the information in the training samples.
Although the behavior knowledge space method is a direct statistic of the multi-dimensional distribution of the classifiers' decisions and can combine the component classifiers' decisions to obtain the best result, the behavior knowledge space is too large relative to the number of training samples, so poorly populated units appear easily: no training set can be large enough to fill every unit to sufficient density.
By analyzing the classifier scores, this recognition algorithm uses, for the cases where a classifier errs, the difference between the score of the model the classifier chooses and the score of the model the sample actually belongs to as the basis of the classifier's weight, and fuses the classifiers at the decision level with a simple and effective weighting method. The two kinds of classifiers thus complement each other, improving system performance far beyond the other fusion methods, about 7.8-13.3% over the single-modality methods, and thereby improving speaker recognition performance.

Claims (7)

(Translated from Chinese)
1. A multi-modal identity recognition method based on score difference weighted fusion, characterized in that: first, a set of speaker samples is passed through each of the original traditional single-modality classifiers to obtain, for every sample, the score of each speaker model in the template set; if the highest-scoring model and the sample belong to different speakers, the difference between the two scores is recorded; all such differences within a single classifier are then accumulated; finally, the score difference of each classifier determines the weight of its modality.

2. The multi-modal identity recognition method based on score difference weighted fusion according to claim 1, characterized in that: the traditional single-modality classifiers are a voiceprint recognition classifier and a face recognition classifier.

3. The method according to claim 1, characterized in that: the score is the classifier's support for the hypothesis that the input data belongs to a certain class.

4. The method according to claim 1, characterized in that: the score difference is, in the case of a classifier error, when the true class of the input data disagrees with the class of the input data hypothesized by the classifier, the difference between the classifier's support for those two classes.

5. The method according to claim 1, characterized in that: the score difference of a classifier is the sum, over all misidentified samples in that single classifier, of the differences between the score of the speaker model the sample belongs to and the highest score.

6. The method according to claim 1, characterized in that: the score-difference-based weight of a classifier is the ratio of the reciprocal of that classifier's score difference to the sum of the reciprocals of all classifiers' score differences.

7. The method according to claim 1, 2, 3, 4, 5 or 6, characterized in that the classifier fusion algorithm based on score difference weighting is divided into three parts: formal description of the classifiers, training, and decision;
1) Formal description of the classifiers
A) Classifier description: let D = \{D_1, D_2, \dots, D_L\} denote a group of component classifiers;
B) Class description: let \Omega = \{\omega_1, \dots, \omega_c\} denote the set of class labels, i.e. all possible classification results;
C) Input: a feature vector x \in \mathbb{R}^n;
D) Output: a vector of length c, D_i(x) = [d_{i,1}(x), d_{i,2}(x), \dots, d_{i,c}(x)]^T, where d_{i,j}(x) denotes the support of D_i for the hypothesis that x belongs to \omega_j; d_{i,j}(x) is a component classifier output normalized to the interval [0, 1], with
\sum_{j=1}^{c} d_{i,j}(x) = 1;
E) The outputs of all classifiers are assembled into a DP matrix:
DP(x) = \begin{bmatrix} d_{1,1}(x) & \cdots & d_{1,c}(x) \\ \vdots & & \vdots \\ d_{i,1}(x) & \cdots & d_{i,c}(x) \\ \vdots & & \vdots \\ d_{L,1}(x) & \cdots & d_{L,c}(x) \end{bmatrix}
in which the i-th row is the output D_i(x) of component classifier D_i, and the j-th column holds the support of every component classifier for \omega_j;
2) Training
A) Training samples: a training set X = \{x_1, x_2, \dots, x_N\} with N elements;
B) Recognition results of the classifiers on the samples:
S(X) = \begin{bmatrix} s_{1,1}(X) & \cdots & s_{1,L}(X) \\ \vdots & & \vdots \\ s_{j,1}(X) & \cdots & s_{j,L}(X) \\ \vdots & & \vdots \\ s_{N,1}(X) & \cdots & s_{N,L}(X) \end{bmatrix}
where s_{j,i} is the class assigned by classifier D_i to sample element x_j, if and only if
s_{j,i} = D_i(x_j) = \omega_s \iff d_{i,s}(x_j) = \max_{o=1,2,\dots,c} \{ d_{i,o}(x_j) \};
here j = 1, \dots, N indexes the elements of the training set, i = 1, \dots, L indexes the classifiers, and c is the number of classes, here the number of people to be identified;
C) True classes of the samples: L(X) = [k_1, \dots, k_N]^T, with \omega_{k_j} \in \Omega;
D) The score difference SD_i(X) of the i-th classifier:
SD_i(X) = \sum_{j=1}^{N} SD_{ij}(x_j) = \sum_{j=1}^{N} \; \sum_{s_{j,i} \neq k_j} \left| d_{i,k_j}(x_j) - d_{i,s_{j,i}}(x_j) \right|,
i.e. the accumulated difference, over the error cases s_{j,i} ≠ k_j in which the true class of the input disagrees with the class hypothesized by the classifier, between the classifier's support for those two classes, d_{i,j}(x) being an element of the DP(x) matrix;
E) The score-difference-based weight of each classifier:
W_i = \frac{SD_i(X)^{-1}}{\sum_{i=1}^{L} SD_i(X)^{-1}};
3) Decision
Using the weights, the fused support of each class in the multi-modal state is recomputed:
D(x) = [d_1(x), d_2(x), \dots, d_c(x)]^T = \left[ \sum_{i=1}^{L} W_i \, d_{i,1}(x), \; \sum_{i=1}^{L} W_i \, d_{i,2}(x), \; \dots, \; \sum_{i=1}^{L} W_i \, d_{i,c}(x) \right]^T;
the ensemble of classifiers classifies the test vector x as \omega_s if and only if s = \arg\max_{i=1,\dots,c} d_i(x).
CNB2005100613590A | 2005-10-31 | Multimodal identity recognition method based on score difference weighted fusion | Expired - Fee Related | CN100363938C (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CNB2005100613590A | 2005-10-31 | 2005-10-31 | Multimodal identity recognition method based on score difference weighted fusion (CN100363938C)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CNB2005100613590A | 2005-10-31 | 2005-10-31 | Multimodal identity recognition method based on score difference weighted fusion (CN100363938C)

Publications (2)

Publication Number | Publication Date
CN1758263A | 2006-04-12
CN100363938C | 2008-01-23

Family

ID=36703632

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CNB2005100613590A (Expired - Fee Related, CN100363938C) | Multimodal identity recognition method based on score difference weighted fusion | 2005-10-31 | 2005-10-31

Country Status (1)

Country | Link
CN (1) | CN100363938C (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102810154A (en) * | 2011-06-02 | 2012-12-05 | 国民技术股份有限公司 | Method and system for biological characteristic acquisition and fusion based on trusted module
CN104183240A (en) * | 2014-08-19 | 2014-12-03 | 中国联合网络通信集团有限公司 | Vocal print feature fusion method and device
CN104598795A (en) * | 2015-01-30 | 2015-05-06 | 科大讯飞股份有限公司 | Authentication method and system
CN104598796A (en) * | 2015-01-30 | 2015-05-06 | 科大讯飞股份有限公司 | Method and system for identifying identity
CN105810199A (en) * | 2014-12-30 | 2016-07-27 | 中国科学院深圳先进技术研究院 | Identity verification method and device for speakers
CN106127156A (en) * | 2016-06-27 | 2016-11-16 | 上海元趣信息技术有限公司 | Robot interaction method based on voiceprint and face recognition
CN106303797A (en) * | 2016-07-30 | 2017-01-04 | 杨超坤 | An automobile audio system with a control system
WO2017067136A1 (en) * | 2015-10-20 | 2017-04-27 | 广州广电运通金融电子股份有限公司 | Method and device for authenticating identity by means of fusion of multiple biological characteristics
CN107249434A (en) * | 2015-02-12 | 2017-10-13 | 皇家飞利浦有限公司 | Robust classifier
CN110008676A (en) * | 2019-04-02 | 2019-07-12 | 合肥智查数据科技有限公司 | A system and method for multi-dimensional personnel checking and true identity discrimination
CN110378414A (en) * | 2019-07-19 | 2019-10-25 | 中国计量大学 | Identity recognition method based on multi-modal biometric fusion with an evolution strategy
CN111144167A (en) * | 2018-11-02 | 2020-05-12 | 银河水滴科技(北京)有限公司 | Gait information identification optimization method, system and storage medium
CN112990252A (en) * | 2019-12-18 | 2021-06-18 | 株式会社东芝 | Information processing apparatus, information processing method, and program
CN114841293A (en) * | 2022-07-04 | 2022-08-02 | 国网信息通信产业集团有限公司 | Multimode data fusion analysis method and system for power Internet of things

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1304114A * | 1999-12-13 | 2001-07-18 | 中国科学院自动化研究所 | Identity identification method based on multiple biological characteristics
CN1172260C * | 2001-12-29 | 2004-10-20 | 浙江大学 | Cross-authentication method based on fingerprint and voiceprint
US6944566B2 * | 2002-03-26 | 2005-09-13 | Lockheed Martin Corporation | Method and system for multi-sensor data fusion using a modified Dempster-Shafer theory

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102810154A (en) * | 2011-06-02 | 2012-12-05 | 国民技术股份有限公司 | Method and system for biological characteristic acquisition and fusion based on trusted module
CN102810154B * | 2011-06-02 | 2016-05-11 | 国民技术股份有限公司 | Method and system for biological characteristic acquisition and fusion based on a trusted module
CN104183240A * | 2014-08-19 | 2014-12-03 | 中国联合网络通信集团有限公司 | Vocal print feature fusion method and device
CN105810199A * | 2014-12-30 | 2016-07-27 | 中国科学院深圳先进技术研究院 | Identity verification method and device for speakers
CN104598795A * | 2015-01-30 | 2015-05-06 | 科大讯飞股份有限公司 | Authentication method and system
CN104598796A * | 2015-01-30 | 2015-05-06 | 科大讯飞股份有限公司 | Method and system for identifying identity
CN107249434B * | 2015-02-12 | 2020-12-18 | 皇家飞利浦有限公司 | Robust classifier
CN107249434A * | 2015-02-12 | 2017-10-13 | 皇家飞利浦有限公司 | Robust classifier
WO2017067136A1 * | 2015-10-20 | 2017-04-27 | 广州广电运通金融电子股份有限公司 | Method and device for authenticating identity by means of fusion of multiple biological characteristics
US10346602B2 | 2015-10-20 | 2019-07-09 | Grg Banking Equipment Co., Ltd. | Method and device for authenticating identity by means of fusion of multiple biological characteristics
CN106127156A * | 2016-06-27 | 2016-11-16 | 上海元趣信息技术有限公司 | Robot interaction method based on voiceprint and face recognition
CN106303797A * | 2016-07-30 | 2017-01-04 | 杨超坤 | An automobile audio system with a control system
CN111144167A * | 2018-11-02 | 2020-05-12 | 银河水滴科技(北京)有限公司 | Gait information identification optimization method, system and storage medium
CN110008676A * | 2019-04-02 | 2019-07-12 | 合肥智查数据科技有限公司 | A system and method for multi-dimensional personnel checking and true identity discrimination
CN110008676B * | 2019-04-02 | 2022-09-16 | 合肥智查数据科技有限公司 | System and method for multi-dimensional identity checking and real identity discrimination of personnel
CN110378414A * | 2019-07-19 | 2019-10-25 | 中国计量大学 | Identity recognition method based on multi-modal biometric fusion with an evolution strategy
CN110378414B * | 2019-07-19 | 2021-11-09 | 中国计量大学 | Multi-modal biological characteristic fusion identity recognition method based on an evolution strategy
CN112990252A * | 2019-12-18 | 2021-06-18 | 株式会社东芝 | Information processing apparatus, information processing method, and program
CN114841293A * | 2022-07-04 | 2022-08-02 | 国网信息通信产业集团有限公司 | Multimode data fusion analysis method and system for power Internet of things

Also Published As

Publication number | Publication date
CN100363938C (en) | 2008-01-23

Similar Documents

Publication | Title
CN1236423C | Background learning of speaker voices
CN1162839C | Method and apparatus for generating an acoustic model
CN1296886C | Speech recognition system and method
CN105469784B | A speaker clustering method and system based on the probabilistic linear discriminant analysis model
CN101136199B | Voice data processing method and equipment
Zhang et al. | Automatic mispronunciation detection for Mandarin
CN108281137A | A universal voice wake-up recognition method and system under a whole-phoneme framework
US20080195389A1 | Text-dependent speaker verification
CN1188804C | Method for recognizing voiceprint
CN1758263A | Multi-modal identity recognition method based on score difference weighted fusion
CN1302427A | Model adaptation system and method for speaker verification
WO2010047019A1 | Statistical model learning device, statistical model learning method, and program
Gold et al. | Examining long-term formant distributions as a discriminant in forensic speaker comparisons under a likelihood ratio framework
CN1197526A | Speaker verification system
JP6996627B2 | Information processing equipment, control methods, and programs
KR20060070603A | Method and device for two-stage speech verification in a speech recognition system
Ge et al. | Neural network based speaker classification and verification systems with enhanced features
CN103514170A | Speech-recognition text classification method and device
CN1787076A | Speaker recognition method based on a hybrid support vector machine
Sadıç et al. | Common vector approach and its combination with GMM for text-independent speaker recognition
CN1253851C | Speaker verification and speaker identification system and method based on prior knowledge
CN119993161A | A conference recording method based on the Internet of Things
CN1298533A | Adaptation of a speech recognizer for dialectal and linguistic domain variations
CN1787075A | Speaker recognition method using a support vector machine model based on an embedded GMM kernel
CN102419976A | Audio indexing method based on quantum learning optimization decision

Legal Events

Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20080123; Termination date: 20211031
