CN106373592B - Noise-tolerant audio sentence-segmentation processing method and system - Google Patents

Noise-tolerant audio sentence-segmentation processing method and system

Info

Publication number
CN106373592B
CN106373592B (application CN201610799384.7A)
Authority
CN
China
Prior art keywords
frame
sentence
energy
independent
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610799384.7A
Other languages
Chinese (zh)
Other versions
CN106373592A (en)
Inventor
胡飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUAKEFEIYANG Co Ltd
Original Assignee
HUAKEFEIYANG Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HUAKEFEIYANG Co Ltd
Priority to CN201610799384.7A
Publication of CN106373592A
Application granted
Publication of CN106373592B
Legal status: Active (current)
Anticipated expiration


Abstract

A noise-tolerant audio sentence-segmentation processing method and system, comprising: obtaining multiple frame segments from the audio; deriving an energy threshold from the energy value of each frame segment; according to that threshold, selecting from the frame segments those whose energy value exceeds the set energy threshold Et; then, treating each selected segment as a mid-sentence frame, scanning its preceding and following frames, and if the energy of a preceding or following frame is below the set energy threshold Et, merging the scanned frames with the mid-sentence frame in frame order into an independent sentence; finally, performing spectral-entropy analysis on each independent sentence to obtain the final parsed sentences. This solves the problem that existing subtitle-alignment workflows cannot place sentence breaks automatically. The invention can process both recorded and live audio/video; for live network streams it cuts the speech automatically, so that downstream steps such as dictation can run in parallel and processing is faster.

Description

Noise-tolerant audio sentence-segmentation processing method and system
Technical field
The present invention relates to the field of speech and subtitle processing technology, and more particularly to a noise-tolerant audio sentence-segmentation processing method and system.
Background technique
In current subtitle production, sentence breaks are mainly placed manually. Manual speech segmentation requires listening through the entire recording and marking the start and end point of each utterance by tapping a shortcut key while dictating. Because of tapping delay, the marked start and end points are misaligned and must be adjusted by hand, and the whole workflow consumes a great deal of time: for example, a 30-minute recording needs 40 minutes to an hour just for segmentation, so productivity is extremely low. In network live streaming, manual dictation without prior segmentation is hard to parallelize, and since people dictate more slowly than the stream plays, real-time broadcasting with synchronized text is impossible; manual segmentation is likewise slower than playback speed and also prevents real-time captioned broadcasting.
Summary of the invention
In view of the above defects in the prior art, the object of the present invention is to provide a noise-tolerant audio sentence-segmentation processing method and system, solving the problems that existing subtitle-alignment workflows cannot place sentence breaks automatically and suffer from high noise.
Aimed at classroom lecture recording and network live streaming, the present invention proposes an intelligent speech-segmentation method. Through speech-analysis techniques it can quickly and automatically analyze recorded or captured audio data and detect the speech segments that meet subtitle specifications, saving time in audio/video subtitle production.
To achieve the above object, the present invention provides the following technical scheme:
A noise-tolerant audio sentence-segmentation processing method, comprising:
Step S101: obtain multiple frame segments from the audio;
Step S102: obtain an energy threshold Ek from the energy value of each frame segment;
Step S103: according to the energy threshold Ek, select from the frame segments those whose energy value exceeds the set energy threshold Et; then, treating each such segment as a mid-sentence frame, scan its preceding and following frames; if the energy of a preceding or following frame is below the set energy threshold Et, merge the scanned frames with the mid-sentence frame in frame order into an independent sentence;
Step S104: search forward and backward from the two boundary frames of each sentence. If the next frame found belongs to another sentence, merge the two sentences. If the energy of the next frame is below the set energy threshold Et and the frame does not belong to another sentence, apply a Fourier transform to the frame, take the amplitudes from 0 to 4000 Hz, and divide them into z fixed-width spectral bands with intensities Vi, i = 1, 2, …, z, and total intensity Vsum. The probability Pi of each band is:

Pi = Vi / Vsum

The spectral entropy of the frame is then:

H = −Σ(i=1..z) Pi · log(Pi)

The ratio of a frame's energy to its spectral entropy is the energy-entropy ratio, denoted R. An energy-entropy-ratio threshold Rt is set; if the frame's energy-entropy ratio is not less than Rt, the frame is added to the sentence. If the scan reaches the beginning or end of the speech stream, the scan stops;
Step S105: judge whether the frame length of the independent sentence falls within the set short-sentence range; if so, compare the stored historical short-sentence samples with the current independent sentence, and if the match score is below a set value, mark the independent sentence as a noise sentence;
Step S106: take the independent sentences of each frame segment of the audio that were not marked as noise sentences as the segmentation of the audio.
In a preferred embodiment, step S101 comprises:
Step S1011: receive an audio file;
Step S1012: split the audio file according to the set slicing interval to obtain multiple frame segments.
In a preferred embodiment, step S102 comprises: obtaining the energy threshold Ek from the average of the energy values of the frame segments.
In a preferred embodiment, the step in S103 of "if the energy of a preceding or following frame is below the set energy threshold Et, merging the frames with the mid-sentence frame in frame order into an independent sentence unit" comprises: if the energy of the preceding or following frame is below the set energy threshold Et, judging whether the interval between the current frame and the next frame is shorter than the set interval time, and if so, merging the frames with the mid-sentence frame in frame order into an independent sentence.
In a preferred embodiment, after step S103 the method further comprises:
Step S1031: if the frame length of an independent sentence exceeds the set independent-sentence length, compute the spectral-entropy ratio of every frame of the sentence and split it into two independent sentences at the frame with the lowest spectral-entropy ratio.
The present invention also provides an automatic splitting system for audio segmentation, comprising: a framing unit, an energy-threshold acquiring unit, an independent-sentence acquiring unit, a spectral-entropy analysis unit, a noise-sentence judging unit, and a segmentation acquiring unit.
The framing unit is configured to obtain multiple frame segments from the audio;
The energy-threshold acquiring unit is configured to obtain the energy threshold Ek from the energy value of each frame segment;
The independent-sentence acquiring unit is configured to, according to the energy threshold Ek, select from the frame segments those whose energy value exceeds the set energy threshold Et and then, treating each such segment as a mid-sentence frame, scan its preceding and following frames; if the energy of a preceding or following frame is below the set energy threshold Et, the scanned frames are merged with the mid-sentence frame in frame order into an independent sentence;
The spectral-entropy analysis unit is configured to search forward and backward from the two boundary frames of each sentence. If the next frame found belongs to another sentence, the two sentences are merged. If the energy of the next frame is below the set energy threshold Et and the frame does not belong to another sentence, a Fourier transform is applied to the frame, the amplitudes from 0 to 4000 Hz are taken and divided into z fixed-width spectral bands with intensities Vi, i = 1, 2, …, z, and total intensity Vsum. The probability Pi of each band is:

Pi = Vi / Vsum

The spectral entropy of the frame is then:

H = −Σ(i=1..z) Pi · log(Pi)

The ratio of a frame's energy to its spectral entropy is the energy-entropy ratio, denoted R. An energy-entropy-ratio threshold Rt is set; if the frame's energy-entropy ratio is not less than Rt, the frame is added to the sentence. If the scan reaches the beginning or end of the speech stream, the scan stops.
The noise-sentence judging unit is configured to judge whether the frame length of an independent sentence falls within the set short-sentence range and, if so, to compare stored historical short-sentence samples with the current independent sentence; if the match score is below a set value, the independent sentence is marked as a noise sentence;
The segmentation acquiring unit is configured to take the independent sentences of each frame segment of the audio that were not marked as noise sentences as the segmentation of the audio.
In a preferred embodiment, the framing unit is further configured to receive an audio file and split it according to the set slicing interval to obtain multiple frame segments.
In a preferred embodiment, the energy-threshold acquiring unit is further configured to obtain the energy threshold Ek from the average of the energy values of the frame segments.
In a preferred embodiment, the independent-sentence acquiring unit is further configured to, if the energy of a preceding or following frame is below the set energy threshold Et, judge whether the interval between the current frame and the next frame is shorter than the set interval time and, if so, merge the frames with the mid-sentence frame in frame order into an independent sentence.
In a preferred embodiment, the system further comprises a long-sentence judging unit, configured to, if the frame length of an independent sentence exceeds the set independent-sentence length, compute the spectral-entropy ratio of every frame of the sentence and split it into two independent sentences at the frame with the lowest spectral-entropy ratio.
The benefits of the invention are as follows. The main computation of the method is carried out in the time domain, so it is fast. For the limited local regions that may be either consonants or noise, time-domain and frequency-domain analysis are combined, increasing cutting accuracy. Only a few frames require the time-consuming spectral analysis, so cutting is both fast and accurate, and strongly noise-resistant. By automatically generating the cut points of the speech, the workload of audio/video subtitle editing is reduced. A long-sentence cutting method is designed that reuses existing computed results instead of computing secondary features, so long sentences can be cut quickly and over-long sentences are avoided, meeting the needs of subtitle production. Machine learning is applied to short sentences to decide whether they are human voice or noise; noise is discarded, further improving accuracy. The method can process both recorded and live audio/video; for live network streams it cuts the speech automatically, so that downstream steps such as dictation can run in parallel and processing is faster.
Detailed description of the invention
In order to explain the embodiments of the invention or the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the noise-tolerant audio sentence-segmentation processing method in one embodiment of the present invention;
Fig. 2 is a logical-connection diagram of the noise-tolerant audio sentence-segmentation processing system in one embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
The noise-tolerant audio sentence-segmentation processing method of the present invention, as shown in Fig. 1, comprises:
Step S101: obtain multiple frame segments from the audio.
The invention may be installed on a server, a personal computer, or a mobile computing device; "computing terminal" below refers to any of these. First, an audio/video file is uploaded to the server, or opened on the personal computer or mobile computing device. The computing device then extracts the audio stream from the file and converts it to signed single-channel (mono) data at a fixed sampling frequency. Finally, the data is framed using preset framing parameters.
Step S1011: receive an audio file. Step S1012: split the audio file according to the set slicing interval to obtain multiple frame segments.
The audio is framed. Frame lengths range from 10 ms to 500 ms. In speech recognition, adjacent frames must overlap for accurate recognition; since the purpose of the invention is not speech recognition, frames may overlap, may be contiguous, or may even be separated by gaps of 0 ms to 500 ms. Framing the speech this way produces fewer frames than speech recognition requires, reducing computation and improving speed. The frames are denoted F1, F2, …, Fm; each frame has n samples sk1, sk2, …, skn with amplitude values fk1, fk2, …, fkn, and each frame records its start and end times.
Speech data is a string of real numbers obtained by sampling the sound at a fixed rate; a 16 kHz sample rate means 16000 samples per second. Framing means treating this stream as analysis units of fixed duration: with 100 ms frames at a 16 kHz rate, one frame contains 1600 samples. Framing determines the granularity of control. In this patent, 100 ms frames are typically used, so N seconds of video are divided into 10N frames. Frames need not be adjacent: with a 100 ms gap between frames, N seconds of video yield 5N frames. Increasing the gap between frames reduces the total frame count and speeds up analysis, at the cost of lower time accuracy.
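The frame arithmetic above can be sketched as follows; this is a minimal illustration assuming mono samples in a plain Python list, and the function name and record layout are illustrative, not from the patent:

```python
def frame_audio(samples, sample_rate=16000, frame_ms=100, hop_ms=100):
    """Split a mono sample stream into fixed-length frames.

    With a 16 kHz sample rate and 100 ms frames, each frame holds
    1600 samples, so N seconds of audio yields 10*N frames.
    Setting hop_ms > frame_ms leaves a gap between frames, as the
    text allows; hop_ms < frame_ms gives overlapping frames.
    """
    frame_len = sample_rate * frame_ms // 1000
    hop_len = sample_rate * hop_ms // 1000
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop_len):
        # each frame records its start and end times (in seconds)
        frames.append({
            "start": start / sample_rate,
            "end": (start + frame_len) / sample_rate,
            "samples": samples[start:start + frame_len],
        })
    return frames

# 2 seconds of audio at 16 kHz -> 20 contiguous frames of 1600 samples
frames = frame_audio([0.0] * 32000)
```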
Step S102: obtain the energy threshold Ek from the energy value of each frame segment.
In this step, the energy Ek of each frame is computed. Energy may be defined as, but is not limited to, the sum of squared amplitudes or the sum of absolute amplitudes.
The energy formula based on squared amplitudes is:

Ek = Σ(i=1..n) fki²

The energy formula based on absolute values is:

Ek = Σ(i=1..n) |fki|
An energy threshold Et is set, and runs of adjacent frames whose energy exceeds Et are found, giving the speech sentences S1, S2, …, Sj. That is:

Si = { Fk | k = a, a+1, a+2, …, a+b; Ek ≥ Et, E(a−1) < Et, and E(a+b+1) < Et }.
In another embodiment, step S102 comprises obtaining the energy threshold Ek from the average of the energy values of the frame segments; that is, the energy value obtained in the previous step is divided by the number of samples to give the average energy. The energy threshold is a threshold on per-frame average energy; it is usually set from experience, commonly some value between 0.001 and 0.01, and the user can adjust it manually.
Step S103: merge frames into independent sentences.
According to the energy threshold Ek, the frame segments whose energy value exceeds the set energy threshold Et are selected; then, treating each such segment as a mid-sentence frame, its preceding and following frames are scanned, and if the energy of a preceding or following frame is below the set energy threshold Et, the scanned frames are merged with the mid-sentence frame in frame order into an independent sentence.
The step in S103 of "if the energy of a preceding or following frame is below the set energy threshold Et, merging the frames with the mid-sentence frame in frame order into an independent sentence unit" comprises: if the energy of the preceding or following frame is below the set energy threshold Et, judging whether the interval between the current frame and the next frame is shorter than the set interval time, and if so, merging the frames with the mid-sentence frame in frame order into an independent sentence.
Scanning proceeds forward and backward from the two boundary frames of each sentence. If the next frame found belongs to another sentence, the two sentences are merged. If the energy of the next frame is below the set energy threshold Et and the frame does not belong to another sentence, a Fourier transform is applied to the frame, the amplitudes from 0 to 4000 Hz are taken and divided into z fixed-width spectral bands with intensities Vi, i = 1, 2, …, z, and total intensity Vsum. The probability Pi of each band is:

Pi = Vi / Vsum

The spectral entropy of the frame is then:

H = −Σ(i=1..z) Pi · log(Pi)

The ratio of a frame's energy to its spectral entropy is the energy-entropy ratio, denoted R. An energy-entropy-ratio threshold Rt is set; if the frame's energy-entropy ratio is not less than Rt, the frame is added to the sentence. If the scan reaches the beginning or end of the speech stream, the scan stops.
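The spectral-entropy computation can be sketched with NumPy's FFT. This is a hedged sketch: the band width (100 Hz, giving z = 40 bands over 0 to 4000 Hz) and the natural-log base are assumptions, since the patent's formulas appear only as figures:

```python
import numpy as np

def energy_entropy_ratio(frame, sample_rate=16000, band_hz=100):
    """Energy / spectral-entropy ratio R for one frame.

    Takes FFT magnitudes up to 4000 Hz, groups them into fixed-width
    bands, forms band probabilities P_i = V_i / V_sum, computes
    H = -sum(P_i * log(P_i)), and returns frame energy divided by H.
    """
    frame = np.asarray(frame, dtype=float)
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    amps = spectrum[freqs <= 4000.0]
    z = int(4000 // band_hz)                 # number of spectral bands
    bands = np.array_split(amps, z)
    v = np.array([b.sum() for b in bands])   # per-band intensity V_i
    p = v / v.sum()                          # band probabilities P_i
    h = -np.sum(p[p > 0] * np.log(p[p > 0]))  # spectral entropy H
    energy = float(np.sum(frame ** 2))
    return energy / h if h > 0 else float("inf")

# A pure tone concentrates energy in one band (low entropy, high R);
# broadband noise spreads it over many bands (high entropy, lower R).
tone = np.sin(2 * np.pi * 440 * np.arange(1600) / 16000.0)
noise = np.random.default_rng(0).uniform(-1.0, 1.0, 1600)
r_tone, r_noise = energy_entropy_ratio(tone), energy_entropy_ratio(noise)
```

This matches the intuition behind the threshold Rt: voiced speech frames behave more like the tone and score a high ratio, while noise frames score low.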
For example, suppose there are 10 speech frames with per-frame energies:
0.05, 0.12, 0.002, 0.004, 0.1, 0.2, 0.4, 0.5, 0.001, 0.12
With 0.003 as the threshold, step S103 yields three sentences:
Sentence 1: 0.05, 0.12
Sentence 2: 0.004, 0.1, 0.2, 0.4, 0.5
Sentence 3: 0.12
Take sentence 2 as an example and scan forward. The frame before it has energy 0.002; it belongs to no sentence, and its energy is below the threshold 0.003, so a Fourier transform is applied to it and its energy-entropy ratio is computed. If the ratio is below the ratio threshold, the frame is considered not to belong to sentence 2 and the forward scan ends. If the ratio is not below the threshold, the frame is added to sentence 2 and the scan continues with the next frame forward. That next frame has energy 0.12 and belongs to sentence 1, so sentences 1 and 2 are merged. After merging, the frontmost frame (0.05) is the first frame, so no further forward scan is possible and the forward scan ends. The backward scan follows the same logic as the forward scan: when a frame's energy is below the energy threshold, its energy-entropy ratio is computed; if the ratio is below the energy-entropy-ratio threshold the scan ends, otherwise it continues. When another sentence is encountered, the sentences are merged and scanning continues.
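The initial grouping pass of this example can be reproduced with a small sketch (illustrative names; the eighth energy in the listing is read as 0.5, taking the "0,5" as a typo):

```python
def group_frames(energies, e_t):
    """Group runs of consecutive frames with energy >= e_t into sentences,
    returning lists of frame indices (the initial pass of step S103)."""
    sentences, current = [], []
    for i, e in enumerate(energies):
        if e >= e_t:
            current.append(i)
        elif current:
            sentences.append(current)
            current = []
    if current:
        sentences.append(current)
    return sentences

energies = [0.05, 0.12, 0.002, 0.004, 0.1, 0.2, 0.4, 0.5, 0.001, 0.12]
sentences = group_frames(energies, 0.003)
# -> [[0, 1], [3, 4, 5, 6, 7], [9]] : the three sentences of the example
```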
Afterward, nearby sentences are merged: for adjacent sentences, the interval between them is computed, and if the interval is below a specified time threshold, the two sentences are merged.
This step merges further. For example, assume 100 ms frames; sentence 1 contains frames 22, 23, 24, 25, 26 (5 frames) and sentence 2 contains frames 29, 30, 31, 32, 33, 34, 35 (7 frames), with no other sentence between them. The two sentences are separated by 2 frames, i.e. 200 ms. Assume the specified time threshold is 300 ms; since 200 ms is less than 300 ms, sentences 1 and 2 are merged into one sentence. Frames 27 and 28 between them are absorbed as well, so the merged sentence contains frames 22 through 35, 14 frames in total.
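This gap-based merge can be sketched as follows, under the same assumptions as the worked example (100 ms frames, 300 ms threshold; sentences are lists of frame indices, and the in-between frames are absorbed on merge):

```python
def merge_close_sentences(sentences, frame_ms=100, gap_threshold_ms=300):
    """Merge adjacent sentences whose separating gap is shorter than the
    time threshold, absorbing the frames that lie between them."""
    merged = [list(sentences[0])]
    for sent in sentences[1:]:
        gap_frames = sent[0] - merged[-1][-1] - 1
        if gap_frames * frame_ms < gap_threshold_ms:
            # absorb the gap frames and the following sentence
            merged[-1].extend(range(merged[-1][-1] + 1, sent[-1] + 1))
        else:
            merged.append(list(sent))
    return merged

s1 = list(range(22, 27))   # frames 22-26
s2 = list(range(29, 36))   # frames 29-35, gap of 2 frames = 200 ms
merged = merge_close_sentences([s1, s2])
# -> one sentence covering frames 22-35 (14 frames)
```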
Step S104: perform spectral-entropy analysis on every sentence.
In this step, scanning proceeds forward and backward from the two boundary frames of each sentence; if the next frame found belongs to another sentence, the two sentences are merged. If the energy of the next frame is below the set energy threshold Et and the frame does not belong to another sentence, a Fourier transform is applied to the frame, the amplitudes from 0 to 4000 Hz are taken and divided into z fixed-width spectral bands with intensities Vi, i = 1, 2, …, z, and total intensity Vsum. The probability Pi of each band is:

Pi = Vi / Vsum

The spectral entropy of the frame is then:

H = −Σ(i=1..z) Pi · log(Pi)

The ratio of a frame's energy to its spectral entropy is the energy-entropy ratio, denoted R. An energy-entropy-ratio threshold Rt is set; if the frame's energy-entropy ratio is not less than Rt, the frame is added to the sentence. If the scan reaches the beginning or end of the speech stream, the scan stops.
Step S105: identify noise sentences. Judge whether the frame length of the independent sentence falls within the set short-sentence range; if so, compare the stored historical short-sentence samples with the current independent sentence, and if the match score is below a set value, mark the independent sentence as a noise sentence. Machine learning is applied to short sentences to decide whether they are human voice or noise; noise is discarded, further improving accuracy.
Step S106: obtain the segmentation. The independent sentences of each frame segment of the audio that were not marked as noise sentences are taken as the segmentation of the audio.
In a preferred embodiment, after step S103 the method further comprises:
Step S1031: if the frame length of an independent sentence exceeds the set independent-sentence length, compute the spectral-entropy ratio of every frame of the sentence and split it into two independent sentences at the frame with the lowest spectral-entropy ratio.
Over-long sentences are split. If a sentence is longer than a specified time threshold, it is split as follows: a certain proportion of frames at each end of the sentence is ignored, and the remaining frames are traversed. If a frame's spectral-entropy ratio has already been computed, it is used as the weight W; otherwise the frame energy is used as W. For each frame, with Nleft frames to its left and Nright frames to its right within the sentence, a split coefficient WS is defined in terms of W, Nleft, and Nright; traversal finds the frame minimizing WS for the sentence, and the sentence is split into left and right parts at that frame. If either part is still too long, it is split again by the same method until no over-long sentence remains. Too-short meaningless sentences are then filtered: a time threshold is specified, and a sentence shorter than it may not be a person speaking. For such a sentence, its highest-energy frame is taken and its mel-cepstral coefficients (MFCCs) are computed; a pre-trained support vector machine (SVM) classifier then judges whether it is a human voice, and if not, the sentence is discarded. The SVM classifier is trained as follows: several human-voice samples collected from lecture videos and network live-stream videos serve as positive samples, and several typical non-voice samples serve as negative samples; training on mel-cepstral features yields the model parameters. Other machine-learning methods, such as deep neural networks, can also be used for this classification.
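The recursive long-sentence split can be sketched as follows. Note the hedge: the patent's split coefficient WS appears only in a figure, so this illustration substitutes a simple heuristic, splitting at the lowest-weight interior frame while ignoring a proportion of frames at each end, purely as an assumption:

```python
def split_long_sentence(weights, max_len, edge_ratio=0.1):
    """Recursively split a sentence (given per-frame weights W, i.e. the
    spectral-entropy ratio where computed, else the frame energy) until
    no piece exceeds max_len frames. Splits at the minimum-weight frame,
    skipping edge_ratio of the frames at each end of the sentence."""
    if len(weights) <= max_len:
        return [weights]
    margin = max(1, int(len(weights) * edge_ratio))
    interior = weights[margin:len(weights) - margin]
    cut = margin + interior.index(min(interior))
    return (split_long_sentence(weights[:cut], max_len)
            + split_long_sentence(weights[cut:], max_len))

w = [0.5, 0.4, 0.6, 0.01, 0.7, 0.5, 0.6, 0.4]   # 8 frames, limit 5
pieces = split_long_sentence(w, max_len=5)
# splits at the low-weight 0.01 frame: [0.5, 0.4, 0.6] and the rest
```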
The present invention also provides an automatic splitting system for audio segmentation, shown in Fig. 2, comprising: a framing unit 101, an energy-threshold acquiring unit 201, an independent-sentence acquiring unit 301, a spectral-entropy analysis unit 401, a noise-sentence judging unit 501, and a segmentation acquiring unit 601.
The framing unit 101 is configured to obtain multiple frame segments from the audio;
The energy-threshold acquiring unit 201 is configured to obtain the energy threshold Ek from the energy value of each frame segment;
The independent-sentence acquiring unit 301 is configured to, according to the energy threshold Ek, select from the frame segments those whose energy value exceeds the set energy threshold Et and then, treating each such segment as a mid-sentence frame, scan its preceding and following frames; if the energy of a preceding or following frame is below the set energy threshold Et, the scanned frames are merged with the mid-sentence frame in frame order into an independent sentence.
The spectral-entropy analysis unit 401 is configured to search forward and backward from the two boundary frames of each sentence. If the next frame found belongs to another sentence, the two sentences are merged. If the energy of the next frame is below the set energy threshold Et and the frame does not belong to another sentence, a Fourier transform is applied to the frame, the amplitudes from 0 to 4000 Hz are taken and divided into z fixed-width spectral bands with intensities Vi, i = 1, 2, …, z, and total intensity Vsum. The probability Pi of each band is:

Pi = Vi / Vsum

The spectral entropy of the frame is then:

H = −Σ(i=1..z) Pi · log(Pi)

The ratio of a frame's energy to its spectral entropy is the energy-entropy ratio, denoted R. An energy-entropy-ratio threshold Rt is set; if the frame's energy-entropy ratio is not less than Rt, the frame is added to the sentence. If the scan reaches the beginning or end of the speech stream, the scan stops.
The noise-sentence judging unit 501 is configured to judge whether the frame length of an independent sentence falls within the set short-sentence range and, if so, to compare stored historical short-sentence samples with the current independent sentence; if the match score is below a set value, the independent sentence is marked as a noise sentence;
The segmentation acquiring unit 601 is configured to take the independent sentences of each frame segment of the audio that were not marked as noise sentences as the segmentation of the audio.
In a preferred embodiment, the framing unit 101 is further configured to receive an audio file and split it according to the set slicing interval to obtain multiple frame segments.
In a preferred embodiment, the energy-threshold acquiring unit 201 is further configured to obtain the energy threshold Ek from the average of the energy values of the frame segments.
In a preferred embodiment, the independent-sentence acquiring unit 301 is further configured to, if the energy of a preceding or following frame is below the set energy threshold Et, judge whether the interval between the current frame and the next frame is shorter than the set interval time and, if so, merge the frames with the mid-sentence frame in frame order into an independent sentence.
In a preferred embodiment, the system comprises: a long sentence judging unit 3011;

The long sentence judging unit is configured to, if the frame length of an independent sentence exceeds the set independent frame length, calculate the energy-entropy ratio of every frame of the independent sentence, and split the independent sentence into two independent sentences using the frame with the lowest energy-entropy ratio as the cut point.
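A sketch of this splitting rule follows. The frame list, per-frame ratios, and length limit are illustrative, and applying the rule recursively when a half is still too long is an assumption beyond the single split described above.

```python
def split_long_sentence(frames, ratios, max_len):
    """Split an over-long independent sentence at the frame with the
    lowest energy-entropy ratio. `frames` is the sentence's frame
    sequence, `ratios[i]` is the energy-entropy ratio R of frames[i],
    and `max_len` is the set independent frame length."""
    if len(frames) <= max_len:
        return [frames]
    # Exclude the first and last frame so both halves are non-empty.
    cut = min(range(1, len(frames) - 1), key=lambda i: ratios[i])
    # Recurse on both halves in case one still exceeds max_len.
    return (split_long_sentence(frames[:cut], ratios[:cut], max_len)
            + split_long_sentence(frames[cut:], ratios[cut:], max_len))
```

Cutting at the minimum energy-entropy ratio places the boundary at the frame that looks least speech-like, typically a pause or breath inside the long sentence.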
The above description is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

CN201610799384.7A 2016-08-31 2016-08-31 Noise-robust audio sentence segmentation processing method and system Active CN106373592B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610799384.7A | 2016-08-31 | 2016-08-31 | Noise-robust audio sentence segmentation processing method and system


Publications (2)

Publication Number | Publication Date
CN106373592A (en) | 2017-02-01
CN106373592B (en) | 2019-04-23

Family

ID=57899361

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610799384.7A (Active; granted as CN106373592B) | Noise-robust audio sentence segmentation processing method and system | 2016-08-31 | 2016-08-31

Country Status (1)

Country | Link
CN (1) | CN106373592B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107424628A (en) * | 2017-08-08 | 2017-12-01 | Harbin University of Science and Technology | A method for searching specific-target speech endpoints in a noisy environment
CN109389999B (en) * | 2018-09-28 | 2020-12-11 | Beijing Yimu Information Technology Co., Ltd. | High-performance audio and video automatic sentence-breaking method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2000132177A (en) * | 1998-10-20 | 2000-05-12 | Canon Inc | Audio processing device and method
CN1622193A (en) * | 2004-12-24 | 2005-06-01 | Vimicro Corporation (Beijing) | Voice signal detection method
CN101625862A (en) * | 2008-07-10 | 2010-01-13 | China Digital Video (Beijing) Ltd. | Method for detecting voice interval in automatic caption generating system
CN103345922A (en) * | 2013-07-05 | 2013-10-09 | Zhang Wei | Large-length voice full-automatic segmentation method
CN103426440A (en) * | 2013-08-22 | 2013-12-04 | Xiamen University | Voice endpoint detection device and voice endpoint detection method utilizing energy spectrum entropy spatial information
CN107424628A (en) * | 2017-08-08 | 2017-12-01 | Harbin University of Science and Technology | A method for searching specific-target speech endpoints in a noisy environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUN Yiming et al., "Voice activity detection based on the improved dual-threshold method", 2015 International Conference on Intelligent Transportation, 2015-12-31, pp. 996-999
WANG Yang et al., "Noisy speech endpoint detection algorithm based on time-frequency combination", Journal of Natural Science of Heilongjiang University, vol. 33, no. 3, 2016-06-30, pp. 410-415



Legal Events

Code | Title
C06 | Publication
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
