CN110444223A - Speaker separation method and device based on recurrent neural network and acoustic features - Google Patents

Speaker separation method and device based on recurrent neural network and acoustic features

Info

Publication number
CN110444223A
CN110444223A (application CN201910561692.XA; granted as CN110444223B)
Authority
CN
China
Prior art keywords
speaker
word
result
identified
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910561692.XA
Other languages
Chinese (zh)
Other versions
CN110444223B (en)
Inventor
王健宗
贾雪丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910561692.XA (patent CN110444223B)
Publication of CN110444223A
Priority to PCT/CN2019/117805 (WO2020258661A1)
Application granted
Publication of CN110444223B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a speaker separation method and device based on a recurrent neural network and acoustic features. The method includes: obtaining the word-vector set of the voice data to be identified by speech recognition, and extracting the MFCC feature-vector set of the voice data to be identified; fully connecting the two sets to obtain fused feature vectors; encoding the fused feature vectors to obtain an encoding result; decoding the encoding result to obtain a segmentation result corresponding to the fused feature vectors; predicting speaker changes on the segmentation result to obtain the speaker recognition results corresponding to the speaker-change symbols; clustering the speaker recognition results to obtain a speaker classification result; and sending the speaker classification result to the uploading terminal corresponding to the voice data to be identified. The method performs speaker separation with a sequence-to-sequence model that fuses lexical and acoustic feature information, and can capture the encoded information before and after each speaker-change point.

Description

Speaker separation method and device based on recurrent neural network and acoustic features
Technical field
The present invention relates to the technical field of speech classification, and in particular to a speaker separation method and device based on a recurrent neural network and acoustic features.
Background art
For a complete speech recognition system (ASR, Automatic Speech Recognition) that contains multiple speakers, speaker separation is a very important pre-processing step, and the speaker separation information is also essential for downstream speech analysis such as role conversion analysis.
A typical speaker separation system consists of two parts: segmentation and clustering. The purpose of segmentation is to find all the speaker-change points; the most commonly used approaches are segmentation methods based on the Bayesian information criterion. Recently, speaker separation using recurrent neural networks, joint factor analysis, and pre-trained deep neural networks combined with supervised and unsupervised learning has achieved good results. However, few algorithms attempt to mine lexical information, and most research that does involve lexical information targets the identity or role of the speaker; that is, the text obtained by speech recognition is not applied to separation itself. One possible reason is that running ASR before separation introduces extra noise.
Summary of the invention
Embodiments of the invention provide a speaker separation method, device, computer equipment and storage medium based on a recurrent neural network and acoustic features, intended to solve the following problem of the prior art: typical speaker separation systems use recurrent neural networks, joint factor analysis, and pre-trained deep neural networks with supervised and unsupervised learning, but because running speech recognition before separation would introduce extra noise, the text obtained by speech recognition is not applied to speaker separation.
In a first aspect, an embodiment of the invention provides a speaker separation method based on a recurrent neural network and acoustic features, comprising:
receiving the voice data to be identified sent by an uploading terminal;
obtaining the word-vector set of the voice data to be identified by speech recognition, obtaining the MFCC feature-vector set of the voice data to be identified by speech recognition, and fully connecting the word-vector set and the MFCC feature-vector set to obtain fused feature vectors;
inputting the fused feature vectors into an encoder for encoding, to obtain an encoding result;
decoding the encoding result as the input of a decoder, to obtain a segmentation result corresponding to the fused feature vectors, wherein the segmentation result includes a word sequence and speaker-change symbols;
assigning the speaker-change symbol nearest to each word in the segmentation result to the corresponding word, so as to predict speaker changes on the segmentation result and obtain the speaker recognition results corresponding to the speaker-change symbols;
clustering the speaker recognition results to obtain a speaker classification result; and
sending the speaker classification result to the uploading terminal corresponding to the voice data to be identified.
In a second aspect, an embodiment of the invention provides a speaker separation device based on a recurrent neural network and acoustic features, comprising:
a voice receiving unit, for receiving the voice data to be identified sent by an uploading terminal;
a feature fusion unit, for obtaining the word-vector set of the voice data to be identified by speech recognition, obtaining the MFCC feature-vector set of the voice data to be identified by speech recognition, and fully connecting the word-vector set and the MFCC feature-vector set to obtain fused feature vectors;
an encoding unit, for inputting the fused feature vectors into an encoder for encoding, to obtain an encoding result;
a decoding unit, for decoding the encoding result as the input of a decoder, to obtain a segmentation result corresponding to the fused feature vectors, wherein the segmentation result includes a word sequence and speaker-change symbols;
a speaker prediction unit, for assigning the speaker-change symbol nearest to each word in the segmentation result to the corresponding word, so as to predict speaker changes on the segmentation result and obtain the speaker recognition results corresponding to the speaker-change symbols;
a speaker classification unit, for clustering the speaker recognition results to obtain a speaker classification result; and
a result sending unit, for sending the speaker classification result to the uploading terminal corresponding to the voice data to be identified.
In a third aspect, an embodiment of the invention further provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the speaker separation method based on a recurrent neural network and acoustic features described in the first aspect.
In a fourth aspect, an embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the speaker separation method based on a recurrent neural network and acoustic features described in the first aspect.
Embodiments of the invention thus provide a speaker separation method, device, computer equipment and storage medium based on a recurrent neural network and acoustic features. The method includes: receiving the voice data to be identified sent by an uploading terminal; obtaining the word-vector set of the voice data to be identified by speech recognition, obtaining the MFCC feature-vector set of the voice data to be identified by speech recognition, and fully connecting the word-vector set and the MFCC feature-vector set to obtain fused feature vectors; inputting the fused feature vectors into an encoder for encoding, to obtain an encoding result; decoding the encoding result as the input of a decoder, to obtain a segmentation result corresponding to the fused feature vectors, wherein the segmentation result includes a word sequence and speaker-change symbols; assigning the speaker-change symbol nearest to each word in the segmentation result to the corresponding word, so as to predict speaker changes on the segmentation result and obtain the speaker recognition results corresponding to the speaker-change symbols; clustering the speaker recognition results to obtain a speaker classification result; and sending the speaker classification result to the uploading terminal corresponding to the voice data to be identified. The method performs speaker separation with a sequence-to-sequence model that fuses lexical and acoustic feature information, and can capture the encoded information before and after each speaker-change point.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a speaker separation method based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 2 is a schematic sub-flowchart of the speaker separation method based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 3 is another schematic sub-flowchart of the speaker separation method based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of the decoder output and the overlapping speaker-change vectors in the speaker separation method based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 5 is a schematic block diagram of a speaker separation device based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 6 is a schematic block diagram of sub-units of the speaker separation device based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 7 is another schematic block diagram of sub-units of the speaker separation device based on a recurrent neural network and acoustic features provided by an embodiment of the invention;
Fig. 8 is a schematic block diagram of a computer device provided by an embodiment of the invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort shall fall within the protection scope of the invention.
It should be understood that the terms "include" and "comprise", when used in this specification and the appended claims, indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to Fig. 1, which is a schematic flowchart of the speaker separation method based on a recurrent neural network and acoustic features provided by an embodiment of the invention. The speaker separation method based on a recurrent neural network and acoustic features is applied in a server and is executed by application software installed in the server.
As shown in Fig. 1, the method comprises steps S110 to S170.
S110: receiving the voice data to be identified sent by the uploading terminal.
In this embodiment, after voice data to be identified has been collected with the recording or video-recording function of the uploading terminal, the uploading terminal sends the voice data to be identified to the server in order to perform speaker separation on it; the server then processes the voice data to be identified to obtain the speaker classification result.
S120: obtaining the word-vector set of the voice data to be identified by speech recognition, obtaining the MFCC feature-vector set of the voice data to be identified by speech recognition, and fully connecting the word-vector set and the MFCC feature-vector set to obtain fused feature vectors.
In this embodiment, in order to perform speaker separation on the voice data to be identified, the word-vector set and the MFCC feature-vector set corresponding to the voice data to be identified must first be extracted.
The word-vector set is obtained from the text data corresponding to the voice data to be identified. In one embodiment, as shown in Fig. 2, step S120 includes:
S121: obtaining, by speech recognition, the one-hot encoded word vectors corresponding to the word segments in the voice data to be identified;
S122: converting each one-hot encoded word vector corresponding to the voice to be identified through a Word2Vec model, which turns words into vectors, to obtain the word-vector set corresponding to the voice data to be identified.
That is, a linear layer first maps the one-hot encoded word vectors corresponding to the word segments in the voice data to be identified into the word embedding layer of the encoder; the Word2Vec model in the word embedding layer then converts each one-hot encoded word vector to obtain the word-vector set corresponding to the voice data to be identified.
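As a concrete illustration of this lookup, the following is a minimal sketch, not the patent's exact implementation (PyTorch, the vocabulary size, and all names here are assumptions chosen for illustration). An embedding layer plays the role of the linear word-embedding layer, since multiplying a one-hot vector by a weight matrix is equivalent to a row lookup.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 10000   # assumed vocabulary size of the ASR output
EMBED_DIM = 256      # assumed width, matching the 256-unit layers mentioned below

# nn.Embedding performs exactly the one-hot-times-weight-matrix product,
# so it stands in for the linear word-embedding layer of the encoder.
embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)

word_ids = torch.tensor([[12, 7, 431, 9]])  # token ids of one recognized utterance
word_vectors = embedding(word_ids)          # word-vector set, shape (1, 4, 256)
print(word_vectors.shape)
```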
The MFCC feature-vector set is obtained directly from the voice data to be identified. In one embodiment, step S120 includes:
performing feature extraction on the voice to be identified according to a preset feature window, to obtain the MFCC feature vectors corresponding to the voice to be identified; wherein the window length of the feature window is a preset first time value, and the moving distance of the feature window is a preset second time value.
In this embodiment, when extracting the MFCC feature-vector set from the voice to be identified (MFCC stands for Mel-scale Frequency Cepstral Coefficients), the window length of the feature window may be preset to 25 ms and the moving distance of the feature window to 10 ms. Thirteen-dimensional MFCC features are extracted from windows of length 25 ms moved in steps of 10 ms and are then averaged over each word span, so that each word yields one 13x1 vector; these vectors form the MFCC feature-vector set. Extracting the MFCC feature vectors corresponding to the voice to be identified effectively yields a group of feature vectors that carry the physical information of the speech (spectral envelope and details) into the encoding operation.
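A minimal sketch of this extraction step follows (assumptions: the librosa library, a 16 kHz sampling rate, the file name, and the word boundaries are all placeholders; the patent names neither a library nor a sampling rate):

```python
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)   # placeholder file, assumed 16 kHz
win = int(0.025 * sr)    # 25 ms window length
hop = int(0.010 * sr)    # 10 ms moving distance
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=win, hop_length=hop)

# Word time spans (seconds) would come from the ASR alignment;
# the values here are placeholders for illustration only.
word_spans = [(0.00, 0.42), (0.42, 0.80)]
word_mfcc = []
for start, end in word_spans:
    f0, f1 = int(start * sr / hop), int(end * sr / hop)
    # average the frame-level MFCCs over the word span: one 13-d vector per word
    word_mfcc.append(mfcc[:, f0:max(f1, f0 + 1)].mean(axis=1))
word_mfcc = np.stack(word_mfcc)   # MFCC feature-vector set, shape (num_words, 13)
```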
In one embodiment, the word-vector set and the MFCC feature-vector set are input into GRU models with the same number of hidden layers and fully connected, to obtain the fused feature vectors.
For the MFCC feature vectors, a hidden layer of 256 hidden units, word vectors of size 256, and an output layer of size 256 are used. The number of hidden layers used for the MFCC features is kept consistent with the number of hidden layers of the word embedding layer; only then does the model deliver its best performance.
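The following sketch shows one way this fusion could look (assumptions: PyTorch, single-layer GRUs, a tanh on the fused output; the class and variable names are placeholders, not the patent's):

```python
import torch
import torch.nn as nn

class FusionFrontEnd(nn.Module):
    """Runs word vectors and per-word MFCCs through GRUs of equal depth,
    then fully connects the two streams into one fused feature per word."""
    def __init__(self, embed_dim=256, mfcc_dim=13, hidden=256):
        super().__init__()
        self.word_gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.mfcc_gru = nn.GRU(mfcc_dim, hidden, batch_first=True)  # same depth
        self.fuse = nn.Linear(2 * hidden, hidden)   # the full connection

    def forward(self, word_vecs, mfcc_vecs):
        w, _ = self.word_gru(word_vecs)   # (batch, words, 256)
        m, _ = self.mfcc_gru(mfcc_vecs)   # (batch, words, 256)
        return torch.tanh(self.fuse(torch.cat([w, m], dim=-1)))

frontend = FusionFrontEnd()
fused = frontend(torch.randn(1, 32, 256), torch.randn(1, 32, 13))
print(fused.shape)   # torch.Size([1, 32, 256]), the fused feature vectors
```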
S130: inputting the fused feature vectors into the encoder for encoding, to obtain the encoding result.
In this embodiment, the fusion itself takes place in the encoder: a linear layer in the encoder fully connects the MFCC feature vectors and the word vectors to produce the fused feature vectors. The fused feature vectors are then fed into the GRU model (i.e. gated recurrent unit) used in the encoder, which yields the encoding result.
In one embodiment, step S130 includes:
inputting the fused feature vectors into the encoder and applying a nonlinear transformation to obtain the intermediate semantics;
obtaining, through an attention mechanism, the attention allocation probability distribution of each word segment over the intermediate semantics, to obtain the encoding result corresponding to the intermediate semantics.
In this embodiment, the attention mechanism is the one commonly used in the encoder-decoder framework. The encoder, as its name suggests, encodes the input sentence Source (for example, a sentence in the text corresponding to the voice data to be identified), converting it through a nonlinear transformation into the intermediate semantic representation C, where C = F(x_1, x_2, ..., x_m). The task of the decoder is to generate the word y_i at time i from the intermediate semantic representation C of the input sentence Source and the history y_1, y_2, ..., y_{i-1} generated so far, i.e. y_i = f(C_i, y_1, y_2, ..., y_{i-1}), where C_i is the attention allocation probability distribution of word segment i over the intermediate semantics. That is, an attention model is introduced between the encoder and the decoder, so that the process above becomes y_1 = f(C_1), y_2 = f(C_2, y_1), y_3 = f(C_3, y_1, y_2); in other words, each C_i may correspond to a different attention allocation probability distribution over the source-sentence words. It is this attention mechanism that helps the model capture the most important parts of the speaker's features.
The attention mechanism brings a large improvement in sequence-learning tasks. In the encoder-decoder framework, adding an attention model on the encoding side to apply a weighted transformation to the source data sequence, or introducing one on the decoding side to apply a weighted variation to the target data, effectively improves system performance in the natural sequence-to-sequence setting.
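A minimal dot-product attention sketch is given below (an assumption: the patent does not fix the attention variant, and the function and variable names are illustrative). It computes the step-specific context C_i as a probability-weighted sum of encoder outputs:

```python
import torch
import torch.nn.functional as F

def attention_context(decoder_state, encoder_outputs):
    # decoder_state: (batch, hidden); encoder_outputs: (batch, words, hidden)
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(-1)).squeeze(-1)
    weights = F.softmax(scores, dim=-1)   # attention allocation distribution
    # C_i: weighted sum of the encoder outputs under that distribution
    context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
    return context, weights

ctx, w = attention_context(torch.randn(1, 256), torch.randn(1, 32, 256))
print(ctx.shape, w.shape)   # torch.Size([1, 256]) torch.Size([1, 32])
```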
S140: decoding the encoding result as the input of the decoder, to obtain the segmentation result corresponding to the fused feature vectors; wherein the segmentation result includes the word sequence and speaker-change symbols.
In this embodiment, the decoder outputs a segmentation result that includes the word sequence and speaker-change symbols. For example, if the text corresponding to the voice data to be identified is "hello hi my name is James hi James", the segmentation result output by the decoder is "hello#A hi#B my name is James#A hi James".
When computing the decoder's loss function, the speaker IDs are ignored and only how the speakers are grouped matters. For example, the change-symbol sequence ABA is considered equivalent to BAB: during the computation, both the original version and the flipped version of the change-symbol sequence are scored, and the smaller of the two losses is selected as the loss value. This loss function also avoids learning spurious probabilities between change symbols and words in the training target sequences.
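A sketch of such a flip-invariant loss follows (the cross-entropy form is an assumption; the text only states that the original and flipped change-symbol sequences are both scored and the smaller loss is kept):

```python
import torch
import torch.nn.functional as F

A, B = 0, 1   # the two speaker-change symbols

def flip(labels):
    # swap every A for B and vice versa, i.e. ABA -> BAB
    return torch.where(labels == A, torch.tensor(B), torch.tensor(A))

def flip_invariant_loss(logits, labels):
    # logits: (T, 2) decoder scores for the symbols; labels: (T,) targets
    loss_orig = F.cross_entropy(logits, labels)
    loss_flip = F.cross_entropy(logits, flip(labels))
    return torch.minimum(loss_orig, loss_flip)   # keep the smaller loss

logits = torch.randn(3, 2)
labels = torch.tensor([A, B, A])   # "ABA" scores the same as "BAB"
print(flip_invariant_loss(logits, labels))
```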
S150: assigning the speaker-change symbol nearest to each word in the segmentation result to the corresponding word, so as to predict speaker changes on the segmentation result and obtain the speaker recognition results corresponding to the speaker-change symbols.
In this embodiment, to maximize the accuracy of speaker-change prediction, a shift-and-overlap design is used. For example, a window 32 words long is swept over the entire passage from beginning to end, and for each window the trained sequence-to-sequence model predicts a change-symbol sequence.
In one embodiment, as shown in Fig. 3, step S150 includes:
S151: obtaining the segmentation result in the decoder;
S152: obtaining the word in first position pointed to by the marker in the segmentation result, as the current starting word;
S153: assigning the speaker-change symbol nearest to each word in the segmentation result to every word in the segmentation result, to establish a change vector;
S154: storing the change vector into the speaker-change sequence matrix;
S155: moving the position pointed to by the marker one word to the right, to update the current starting word;
S156: judging whether the current starting word is the last word in the segmentation result; if the current starting word is not the last word in the segmentation result, returning to step S153; if the current starting word is the last word in the segmentation result, executing step S157;
S157: ending the speaker-change prediction process.
That is, during prediction, the word vectors of 32 words and the corresponding 32 MFCC feature vectors are extracted from the text and the audio file respectively. The concrete prediction of the speaker-change sequence proceeds through Fig. 4 and the following steps (a runnable sketch is given after these steps):
1) obtaining the segmentation result from the decoder;
2) establishing the change vector by assigning the change symbol nearest to each word in the segmentation result to that word;
3) accumulating the change vector into a speaker-change sequence matrix;
4) moving one word's distance to the right and inputting the next group of 32 word vectors and 32 MFCC feature vectors into the encoder.
After the window has moved to the end, the speaker-change symbol of each word is determined by majority voting; in this way, every decision is backed by up to 32 different predictions.
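The sketch below captures this sliding-window voting (the predict callable stands in for the trained sequence-to-sequence model and is a placeholder, as are all names here):

```python
from collections import Counter

WINDOW = 32   # window length in words

def sliding_window_changes(words, mfcc_vecs, predict):
    # votes[i] collects the change symbol predicted for word i by every
    # window covering it, so each word receives up to WINDOW votes.
    votes = [[] for _ in words]
    for start in range(max(1, len(words) - WINDOW + 1)):
        chunk_words = words[start:start + WINDOW]
        chunk_mfcc = mfcc_vecs[start:start + WINDOW]
        symbols = predict(chunk_words, chunk_mfcc)   # one symbol per word
        for offset, sym in enumerate(symbols):
            votes[start + offset].append(sym)
    # majority voting decides the final change symbol of each word
    return [Counter(v).most_common(1)[0][0] for v in votes]
```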
S160: clustering the speaker recognition results to obtain the speaker classification result.
In this embodiment, a clustering method based on the Bayesian information criterion (BIC) is used, and the features used are frame-level MFCC features. In multi-speaker speech separation, clustering the speaker recognition results is a clustering process over a voice stream: one voice stream is clustered into the voice streams of multiple speakers.
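A sketch of BIC-based agglomerative clustering on frame-level MFCCs is shown below (the penalty weight lam and the full-covariance Gaussian model are assumptions; the text only names Bayesian-information-criterion clustering):

```python
import numpy as np

def delta_bic(x, y, lam=1.0):
    # x, y: (n_frames, dim) MFCC frames of two segments;
    # a negative delta-BIC suggests the two segments share one speaker.
    z = np.vstack([x, y])
    n, d = z.shape
    def logdet(a):
        _, val = np.linalg.slogdet(np.cov(a, rowvar=False) + 1e-6 * np.eye(d))
        return val
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (n * logdet(z) - len(x) * logdet(x) - len(y) * logdet(y)) - penalty

def bic_cluster(segments):
    # greedily merge the pair of segments with the lowest delta-BIC
    segs = list(segments)
    while len(segs) > 1:
        best, i, j = min((delta_bic(segs[i], segs[j]), i, j)
                         for i in range(len(segs))
                         for j in range(i + 1, len(segs)))
        if best >= 0:        # no pair looks like the same speaker: stop
            break
        segs[i] = np.vstack([segs[i], segs[j]])
        del segs[j]
    return segs              # one array of MFCC frames per detected speaker
```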
S170: sending the speaker classification result to the uploading terminal corresponding to the voice data to be identified.
In this embodiment, after the speaker classification result has been obtained, it is sent to the uploading terminal corresponding to the voice data to be identified, which completes the online speaker separation of the voice data to be identified on the server.
The method thus performs speaker separation with a sequence-to-sequence model that fuses lexical and acoustic feature information, and can capture the encoded information before and after each speaker-change point.
An embodiment of the invention further provides a speaker separation device based on a recurrent neural network and acoustic features, which is used to execute any embodiment of the aforementioned speaker separation method based on a recurrent neural network and acoustic features. Specifically, referring to Fig. 5, which is a schematic block diagram of the speaker separation device based on a recurrent neural network and acoustic features provided by an embodiment of the invention. The speaker separation device 100 based on a recurrent neural network and acoustic features can be configured in a server.
As shown in Fig. 5, the speaker separation device 100 based on a recurrent neural network and acoustic features includes a voice receiving unit 110, a feature fusion unit 120, an encoding unit 130, a decoding unit 140, a speaker prediction unit 150, a speaker classification unit 160, and a result sending unit 170.
The voice receiving unit 110 is used to receive the voice data to be identified sent by the uploading terminal.
In this embodiment, after voice data to be identified has been collected with the recording or video-recording function of the uploading terminal, the uploading terminal sends the voice data to be identified to the server in order to perform speaker separation on it; the server then processes the voice data to be identified to obtain the speaker classification result.
The feature fusion unit 120 is used to obtain the word-vector set of the voice data to be identified by speech recognition, obtain the MFCC feature-vector set of the voice data to be identified by speech recognition, and fully connect the word-vector set and the MFCC feature-vector set to obtain the fused feature vectors.
In this embodiment, in order to perform speaker separation on the voice data to be identified, the word-vector set and the MFCC feature-vector set corresponding to the voice data to be identified must first be extracted.
The word-vector set is obtained from the text data corresponding to the voice data to be identified. In one embodiment, as shown in Fig. 6, the feature fusion unit 120 includes:
a one-hot encoded word-vector obtaining unit 121, for obtaining, by speech recognition, the one-hot encoded word vectors corresponding to the word segments in the voice data to be identified;
a word-vector set obtaining unit 122, for converting each one-hot encoded word vector corresponding to the voice to be identified through the Word2Vec model, which turns words into vectors, to obtain the word-vector set corresponding to the voice data to be identified.
That is, a linear layer first maps the one-hot encoded word vectors corresponding to the word segments in the voice data to be identified into the word embedding layer of the encoder; the Word2Vec model in the word embedding layer then converts each one-hot encoded word vector to obtain the word-vector set corresponding to the voice data to be identified.
The MFCC feature-vector set is obtained directly from the voice data to be identified. In one embodiment, the feature fusion unit 120 is also used to:
perform feature extraction on the voice to be identified according to a preset feature window, to obtain the MFCC feature vectors corresponding to the voice to be identified; wherein the window length of the feature window is a preset first time value, and the moving distance of the feature window is a preset second time value.
In this embodiment, when extracting the MFCC feature-vector set from the voice to be identified (MFCC stands for Mel-scale Frequency Cepstral Coefficients), the window length of the feature window may be preset to 25 ms and the moving distance of the feature window to 10 ms. Thirteen-dimensional MFCC features are extracted from windows of length 25 ms moved in steps of 10 ms and are then averaged over each word span, so that each word yields one 13x1 vector; these vectors form the MFCC feature-vector set. Extracting the MFCC feature vectors corresponding to the voice to be identified effectively yields a group of feature vectors that carry the physical information of the speech (spectral envelope and details) into the encoding operation.
In one embodiment, the word-vector set and the MFCC feature-vector set are input into GRU models with the same number of hidden layers and fully connected, to obtain the fused feature vectors.
For the MFCC feature vectors, a hidden layer of 256 hidden units, word vectors of size 256, and an output layer of size 256 are used. The number of hidden layers used for the MFCC features is kept consistent with the number of hidden layers of the word embedding layer; only then does the model deliver its best performance.
The encoding unit 130 is used to input the fused feature vectors into the encoder for encoding, to obtain the encoding result.
In this embodiment, the fusion itself takes place in the encoder: a linear layer in the encoder fully connects the MFCC feature vectors and the word vectors to produce the fused feature vectors. The fused feature vectors are then fed into the GRU model (i.e. gated recurrent unit) used in the encoder, which yields the encoding result.
In one embodiment, the encoding unit 130 includes:
an intermediate semantics obtaining unit, for inputting the fused feature vectors into the encoder and applying a nonlinear transformation to obtain the intermediate semantics;
an attention mechanism processing unit, for obtaining, through the attention mechanism, the attention allocation probability distribution of each word segment over the intermediate semantics, to obtain the encoding result corresponding to the intermediate semantics.
In this embodiment, the attention mechanism is the one commonly used in the encoder-decoder framework. The encoder, as its name suggests, encodes the input sentence Source (for example, a sentence in the text corresponding to the voice data to be identified), converting it through a nonlinear transformation into the intermediate semantic representation C, where C = F(x_1, x_2, ..., x_m). The task of the decoder is to generate the word y_i at time i from the intermediate semantic representation C of the input sentence Source and the history y_1, y_2, ..., y_{i-1} generated so far, i.e. y_i = f(C_i, y_1, y_2, ..., y_{i-1}), where C_i is the attention allocation probability distribution of word segment i over the intermediate semantics. That is, an attention model is introduced between the encoder and the decoder, so that the process above becomes y_1 = f(C_1), y_2 = f(C_2, y_1), y_3 = f(C_3, y_1, y_2); in other words, each C_i may correspond to a different attention allocation probability distribution over the source-sentence words. It is this attention mechanism that helps the model capture the most important parts of the speaker's features.
The attention mechanism brings a large improvement in sequence-learning tasks. In the encoder-decoder framework, adding an attention model on the encoding side to apply a weighted transformation to the source data sequence, or introducing one on the decoding side to apply a weighted variation to the target data, effectively improves system performance in the natural sequence-to-sequence setting.
The decoding unit 140 is used to decode the encoding result as the input of the decoder, to obtain the segmentation result corresponding to the fused feature vectors; wherein the segmentation result includes the word sequence and speaker-change symbols.
In this embodiment, the decoder outputs a segmentation result that includes the word sequence and speaker-change symbols. For example, if the text corresponding to the voice data to be identified is "hello hi my name is James hi James", the segmentation result output by the decoder is "hello#A hi#B my name is James#A hi James".
When computing the decoder's loss function, the speaker IDs are ignored and only how the speakers are grouped matters. For example, the change-symbol sequence ABA is considered equivalent to BAB: during the computation, both the original version and the flipped version of the change-symbol sequence are scored, and the smaller of the two losses is selected as the loss value. This loss function also avoids learning spurious probabilities between change symbols and words in the training target sequences.
The speaker prediction unit 150 is used to assign the speaker-change symbol nearest to each word in the segmentation result to the corresponding word, so as to predict speaker changes on the segmentation result and obtain the speaker recognition results corresponding to the speaker-change symbols.
In this embodiment, to maximize the accuracy of speaker-change prediction, a shift-and-overlap design is used. For example, a window 32 words long is swept over the entire passage from beginning to end, and for each window the trained sequence-to-sequence model predicts a change-symbol sequence.
In one embodiment, as shown in Fig. 7, the speaker prediction unit 150 includes:
a segmentation result obtaining unit 151, for obtaining the segmentation result in the decoder;
a current starting word obtaining unit 152, for obtaining the word in first position pointed to by the marker, as the current starting word;
a change vector obtaining unit 153, for assigning the speaker-change symbol nearest to each word in the segmentation result to every word in the segmentation result, to establish the change vector;
a change vector storage unit 154, for storing the change vector into the speaker-change sequence matrix;
a starting word updating unit 155, for moving the position pointed to by the marker one word to the right, to update the current starting word;
a last word judging unit 156, for judging whether the current starting word is the last word in the segmentation result; if the current starting word is not the last word in the segmentation result, returning to the step of assigning the speaker-change symbol nearest to each word in the segmentation result to every word in the segmentation result to establish the change vector; if the current starting word is the last word in the segmentation result, executing the step of ending the speaker-change prediction process;
a process ending unit 157, for ending the speaker-change prediction process.
That is, during prediction, the word vectors of 32 words and the corresponding 32 MFCC feature vectors are extracted from the text and the audio file respectively. The concrete prediction of the speaker-change sequence proceeds through Fig. 4 and the following steps:
1) obtaining the segmentation result from the decoder;
2) establishing the change vector by assigning the change symbol nearest to each word in the segmentation result to that word;
3) accumulating the change vector into a speaker-change sequence matrix;
4) moving one word's distance to the right and inputting the next group of 32 word vectors and 32 MFCC feature vectors into the encoder.
After the window has moved to the end, the speaker-change symbol of each word is determined by majority voting; in this way, every decision is backed by up to 32 different predictions.
The speaker classification unit 160 is used to cluster the speaker recognition results to obtain the speaker classification result.
In this embodiment, a clustering method based on the Bayesian information criterion (BIC) is used, and the features used are frame-level MFCC features. In multi-speaker speech separation, clustering the speaker recognition results is a clustering process over a voice stream: one voice stream is clustered into the voice streams of multiple speakers.
The result sending unit 170 is used to send the speaker classification result to the uploading terminal corresponding to the voice data to be identified.
In this embodiment, after the speaker classification result has been obtained, it is sent to the uploading terminal corresponding to the voice data to be identified, which completes the online speaker separation of the voice data to be identified on the server.
The device thus performs speaker separation with a sequence-to-sequence model that fuses lexical and acoustic feature information, and can capture the encoded information before and after each speaker-change point.
The above speaker separation device based on a recurrent neural network and acoustic features can be implemented in the form of a computer program, and the computer program can run on a computer device as shown in Fig. 8.
Referring to Fig. 8, which is a schematic block diagram of a computer device provided by an embodiment of the invention. The computer device 500 is a server; the server can be a standalone server or a server cluster composed of multiple servers.
As shown in Fig. 8, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory can include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. When the computer program 5032 is executed, it can cause the processor 502 to perform the speaker separation method based on a recurrent neural network and acoustic features.
The processor 502 is used to provide computing and control capability and to support the operation of the entire computer device 500.
The internal memory 504 provides an environment for the running of the computer program 5032 in the non-volatile storage medium 503. When the computer program 5032 is executed by the processor 502, it can cause the processor 502 to perform the speaker separation method based on a recurrent neural network and acoustic features.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art can understand that the structure shown in Fig. 8 is only a block diagram of the part of the structure relevant to the solution of the invention and does not constitute a limitation on the computer device 500 to which the solution of the invention is applied; a specific computer device 500 can include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The processor 502 is used to run the computer program 5032 stored in the memory, so as to implement the speaker separation method based on a recurrent neural network and acoustic features in the embodiments of the application.
Those skilled in the art can understand that the embodiment of the computer device shown in Fig. 8 does not constitute a limitation on the specific composition of the computer device; in other embodiments, the computer device can include more or fewer components than illustrated, combine certain components, or arrange the components differently. For example, in some embodiments the computer device can include only a memory and a processor; in such embodiments, the structure and function of the memory and the processor are consistent with the embodiment shown in Fig. 8 and are not repeated here.
It should be understood that, in embodiments of the invention, the processor 502 can be a central processing unit (Central Processing Unit, CPU), and the processor can also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer-readable storage medium can be a non-volatile computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the speaker separation method based on a recurrent neural network and acoustic features in the embodiments of the application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated here. Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods for each specific application to realize the described functions, but such implementations should not be considered beyond the scope of the invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed units and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division of the units is only a division by logical function, and there may be other division manners in actual implementation; units with the same function may also be combined into one unit. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the invention.
In addition, the functional units in the embodiments of the invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a storage medium. Based on this understanding, the technical solution of the invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a magnetic disk, or an optical disc.
The above are only specific embodiments of the invention, but the protection scope of the invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the invention, and such modifications or substitutions shall all be covered by the protection scope of the invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.

Claims (10)

CN201910561692.XA | 2019-06-26 | 2019-06-26 | Speaker separation method and device based on cyclic neural network and acoustic characteristics | Active | CN110444223B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201910561692.XA (CN110444223B) | 2019-06-26 | 2019-06-26 | Speaker separation method and device based on cyclic neural network and acoustic characteristics
PCT/CN2019/117805 (WO2020258661A1) | 2019-06-26 | 2019-11-13 | Speaking person separation method and apparatus based on recurrent neural network and acoustic features

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910561692.XA (CN110444223B) | 2019-06-26 | 2019-06-26 | Speaker separation method and device based on cyclic neural network and acoustic characteristics

Publications (2)

Publication Number | Publication Date
CN110444223A | 2019-11-12
CN110444223B | 2023-05-23

Family

ID=68428733

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910561692.XA (CN110444223B, Active) | 2019-06-26 | 2019-06-26 | Speaker separation method and device based on cyclic neural network and acoustic characteristics

Country Status (2)

Country | Link
CN (1) | CN110444223B (en)
WO (1) | WO2020258661A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113642422B (en)* | 2021-07-27 | 2024-05-24 | 东北电力大学 | Continuous Chinese sign language recognition method
CN113555034B (en)* | 2021-08-03 | 2024-03-01 | 京东科技信息技术有限公司 | Compressed audio identification method, device and storage medium
CN115841813A (en)* | 2021-09-18 | 2023-03-24 | 北京猿力未来科技有限公司 | Voice recognition method, device, equipment and storage medium
CN113822276B (en)* | 2021-09-30 | 2024-06-14 | 中国平安人寿保险股份有限公司 | Picture correction method, device, equipment and medium based on neural network
CN114330474B (en)* | 2021-10-20 | 2024-04-26 | 腾讯科技(深圳)有限公司 | Data processing method, device, computer equipment and storage medium
CN119252260B (en)* | 2024-12-02 | 2025-02-11 | 武汉纺织大学 | Non-flow model flow type speech recognition method based on attention and boundary detection

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106683661A (en)* | 2015-11-05 | 2017-05-17 | 阿里巴巴集团控股有限公司 | Role separation method and device based on voice
US20170178666A1 (en)* | 2015-12-21 | 2017-06-22 | Microsoft Technology Licensing, Llc | Multi-speaker speech separation
CN107731233A (en)* | 2017-11-03 | 2018-02-23 | 王华锋 | A kind of method for recognizing sound-groove based on RNN
CN108766440A (en)* | 2018-05-28 | 2018-11-06 | 平安科技(深圳)有限公司 | Speaker's disjunctive model training method, two speaker's separation methods and relevant device
CN109147758A (en)* | 2018-09-12 | 2019-01-04 | 科大讯飞股份有限公司 | A kind of speaker's sound converting method and device
US20190066713A1 (en)* | 2016-06-14 | 2019-02-28 | The Trustees Of Columbia University In The City Of New York | Systems and methods for speech separation and neural decoding of attentional selection in multi-speaker environments
CN109584903A (en)* | 2018-12-29 | 2019-04-05 | 中国科学院声学研究所 | A kind of multi-person speech separation method based on deep learning
US20190156837A1 (en)* | 2017-11-23 | 2019-05-23 | Samsung Electronics Co., Ltd. | Neural network device for speaker recognition, and method of operation thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6895376B2 (en)* | 2001-05-04 | 2005-05-17 | Matsushita Electric Industrial Co., Ltd. | Eigenvoice re-estimation technique of acoustic models for speech recognition, speaker identification and speaker verification
CN105427858B (en)* | 2015-11-06 | 2019-09-03 | 科大讯飞股份有限公司 | Realize the method and system that voice is classified automatically
CN108320732A (en)* | 2017-01-13 | 2018-07-24 | 阿里巴巴集团控股有限公司 | The method and apparatus for generating target speaker's speech recognition computation model
CN109036454A (en)* | 2018-06-06 | 2018-12-18 | 安徽继远软件有限公司 | The isolated method and system of the unrelated single channel recording of speaker based on DNN
CN110444223B (en)* | 2019-06-26 | 2023-05-23 | 平安科技(深圳)有限公司 | Speaker separation method and device based on cyclic neural network and acoustic characteristics

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2020258661A1 (en)* | 2019-06-26 | 2020-12-30 | 平安科技(深圳)有限公司 | Speaking person separation method and apparatus based on recurrent neural network and acoustic features
CN112951270A (en)* | 2019-11-26 | 2021-06-11 | 新东方教育科技集团有限公司 | Voice fluency detection method and device and electronic equipment
CN112951270B (en)* | 2019-11-26 | 2024-04-19 | 新东方教育科技集团有限公司 | Voice fluency detection method and device and electronic equipment
CN110931013A (en)* | 2019-11-29 | 2020-03-27 | 北京搜狗科技发展有限公司 | Voice data processing method and device
CN111128223A (en)* | 2019-12-30 | 2020-05-08 | 科大讯飞股份有限公司 | Text information-based auxiliary speaker separation method and related device
CN111128223B (en)* | 2019-12-30 | 2022-08-05 | 科大讯飞股份有限公司 | Text information-based auxiliary speaker separation method and related device
WO2021139772A1 (en)* | 2020-01-10 | 2021-07-15 | 阿里巴巴集团控股有限公司 | Audio information processing method and apparatus, electronic device, and storage medium
US12154545B2 | - | 2024-11-26 | Alibaba Group Holding Limited | Audio information processing method, audio information processing apparatus, electronic device, and storage medium
CN111261186A (en)* | 2020-01-16 | 2020-06-09 | 南京理工大学 | Audio sound source separation method based on improved self-attention mechanism and cross-frequency band characteristics
CN111276131B (en)* | 2020-01-22 | 2021-01-12 | 厦门大学 | Multi-class acoustic feature integration method and system based on deep neural network
US11217225B2 | - | 2022-01-04 | Xiamen University | Multi-type acoustic feature integration method and system based on deep neural networks
CN111276131A (en)* | 2020-01-22 | 2020-06-12 | 厦门大学 | Multi-class acoustic feature integration method and system based on deep neural network
CN111461173A (en)* | 2020-03-06 | 2020-07-28 | 华南理工大学 | A multi-speaker clustering system and method based on attention mechanism
CN111461173B (en)* | 2020-03-06 | 2023-06-20 | 华南理工大学 | A multi-speaker clustering system and method based on attention mechanism
CN111223476A (en)* | 2020-04-23 | 2020-06-02 | 深圳市友杰智新科技有限公司 | Method and device for extracting voice feature vector, computer equipment and storage medium
CN111524527B (en)* | 2020-04-30 | 2023-08-22 | 合肥讯飞数码科技有限公司 | Speaker separation method, speaker separation device, electronic device and storage medium
CN111524527A (en)* | 2020-04-30 | 2020-08-11 | 合肥讯飞数码科技有限公司 | Speaker separation method, device, electronic equipment and storage medium
CN111640450A (en)* | 2020-05-13 | 2020-09-08 | 广州国音智能科技有限公司 | Multi-person audio processing method, device, equipment and readable storage medium
CN111640456A (en)* | 2020-06-04 | 2020-09-08 | 合肥讯飞数码科技有限公司 | Overlapped sound detection method, device and equipment
CN111640456B (en)* | 2020-06-04 | 2023-08-22 | 合肥讯飞数码科技有限公司 | Method, device and equipment for detecting overlapping sound
CN111883165A (en)* | 2020-07-02 | 2020-11-03 | 中移(杭州)信息技术有限公司 | Speaker voice segmentation method, device, electronic equipment and storage medium
CN112201275A (en)* | 2020-10-09 | 2021-01-08 | 深圳前海微众银行股份有限公司 | Voiceprint segmentation method, voiceprint segmentation device, voiceprint segmentation equipment and readable storage medium
CN112201275B (en)* | 2020-10-09 | 2024-05-07 | 深圳前海微众银行股份有限公司 | Voiceprint segmentation method, voiceprint segmentation device, voiceprint segmentation equipment and readable storage medium
CN112233668A (en)* | 2020-10-21 | 2021-01-15 | 中国人民解放军海军工程大学 | Voice instruction and identity recognition method based on neural network
CN112233668B (en)* | 2020-10-21 | 2023-04-07 | 中国人民解放军海军工程大学 | Voice instruction and identity recognition method based on neural network
CN112992175A (en)* | 2021-02-04 | 2021-06-18 | 深圳壹秘科技有限公司 | Voice distinguishing method and voice recording device thereof
WO2022166219A1 (en)* | 2021-02-04 | 2022-08-11 | 深圳壹秘科技有限公司 | Voice diarization method and voice recording apparatus thereof
CN112992175B (en)* | 2021-02-04 | 2023-08-11 | 深圳壹秘科技有限公司 | Voice distinguishing method and voice recording device thereof
CN113723166A (en)* | 2021-03-26 | 2021-11-30 | 腾讯科技(北京)有限公司 | Content identification method and device, computer equipment and storage medium
CN113707130A (en)* | 2021-08-16 | 2021-11-26 | 北京搜狗科技发展有限公司 | Voice recognition method and device for voice recognition
CN114927124A (en)* | 2022-03-04 | 2022-08-19 | 上海交通大学 | Laboratory voice monitoring system based on voice recognition and natural language processing
CN117037801A (en)* | 2023-05-18 | 2023-11-10 | 武汉天天互动科技有限公司 | Method for detecting speech wheel and identifying speaker in real teaching environment based on multiple modes
WO2025007610A1 (en)* | 2023-07-06 | 2025-01-09 | 腾讯科技(深圳)有限公司 | Model determination method, model application method, and related device
CN118072734A (en)* | 2024-01-30 | 2024-05-24 | 中电信人工智能科技(北京)有限公司 | Speech recognition method, device, processor, memory and electronic device
CN118918883A (en)* | 2024-10-10 | 2024-11-08 | 世优(北京)科技股份有限公司 | Scene-based voice recognition method and device
CN118918883B (en)* | 2024-10-10 | 2024-12-13 | 世优(北京)科技股份有限公司 | Scene-based voice recognition method and device

Also Published As

Publication number | Publication date
CN110444223B (en) | 2023-05-23
WO2020258661A1 (en) | 2020-12-30

Similar Documents

Publication | Title
CN110444223A (en) | Speaker's separation method and device based on Recognition with Recurrent Neural Network and acoustic feature
CN109313910B (en) | Permutation invariant training for speaker independent multi-speaker speech separation
KR102294638B1 (en) | Combined learning method and apparatus using deepening neural network based feature enhancement and modified loss function for speaker recognition robust to noisy environments
CN113782048B (en) | Multi-mode voice separation method, training method and related device
CN111429889B (en) | Method, apparatus, device and computer readable storage medium for real-time speech recognition based on truncated attention
CN104700828B (en) | The construction method of depth shot and long term memory Recognition with Recurrent Neural Network acoustic model based on selective attention principle
CN111754992A (en) | A noise-robust audio and video dual-modal speech recognition method and system
CN111178157A (en) | A Tone-Based Cascaded Sequence-to-Sequence Model for Chinese Lip Language Recognition
CN113178206B (en) | AI (Artificial intelligence) composite anchor generation method, electronic equipment and readable storage medium
CN111461173A (en) | A multi-speaker clustering system and method based on attention mechanism
CN104036774A (en) | Method and system for recognizing Tibetan dialects
CN114360491A (en) | Speech synthesis method, speech synthesis device, electronic equipment and computer-readable storage medium
CN116189034A (en) | Head posture driving method and device, equipment, medium and product thereof
CN112133294B (en) | Speech recognition method, device and system and storage medium
CN112420079A (en) | Voice endpoint detection method and device, storage medium and electronic equipment
CN103985381A (en) | Voice frequency indexing method based on parameter fusion optimized decision
US12300220B1 (en) | Pitch-based speech conversion model training method and speech conversion system
CN114360485B (en) | Voice processing method, system, device and medium
Maas et al. | Recurrent neural network feature enhancement: The 2nd CHiME challenge
CN113555027B (en) | Voice emotion conversion method and device, computer equipment and storage medium
CN114373443A (en) | Speech synthesis method and apparatus, computing device, storage medium, and program product
CN112951201A (en) | End-to-end emotion voice synthesis method under business hall environment
EP3847646B1 (en) | An audio processing apparatus and method for audio scene classification
CN114495914B (en) | Speech recognition method, speech recognition model training method and related device
CN119968677A (en) | Media segment representation using fixed weights

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
