US9275631B2 - Speech synthesis system, speech synthesis program product, and speech synthesis method - Google Patents


Info

Publication number
US9275631B2
US9275631B2 (application US13/731,268; US201213731268A)
Authority
US
United States
Prior art keywords
cost
speech
speech segment
prosody
segment sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/731,268
Other versions
US20130268275A1 (en)
Inventor
Ryuki Tachibana
Masafumi Nishimura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cerence Operating Co
Original Assignee
Nuance Communications Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13/731,268
Application filed by Nuance Communications Inc
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignors: NISHIMURA, MASAFUMI; TACHIBANA, RYUKI
Assigned to NUANCE COMMUNICATIONS, INC. Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Publication of US20130268275A1
Publication of US9275631B2
Application granted
Assigned to CERENCE INC. (intellectual property agreement). Assignors: NUANCE COMMUNICATIONS, INC.
Assigned to CERENCE OPERATING COMPANY (corrective assignment to correct the assignee name previously recorded at reel 050836, frame 0191; assignor confirms the intellectual property agreement). Assignors: NUANCE COMMUNICATIONS, INC.
Assigned to BARCLAYS BANK PLC (security agreement). Assignors: CERENCE OPERATING COMPANY
Assigned to CERENCE OPERATING COMPANY (release by secured party). Assignors: BARCLAYS BANK PLC
Assigned to WELLS FARGO BANK, N.A. (security agreement). Assignors: CERENCE OPERATING COMPANY
Assigned to CERENCE OPERATING COMPANY (corrective assignment to replace the conveyance document with the new assignment previously recorded at reel 050836, frame 0191; assignor confirms the assignment). Assignors: NUANCE COMMUNICATIONS, INC.
Assigned to CERENCE OPERATING COMPANY (release, reel 052935 / frame 0584). Assignors: WELLS FARGO BANK, NATIONAL ASSOCIATION
Legal status: Active
Adjusted expiration

Abstract

Waveform concatenation speech synthesis with high sound quality. Prosody with both high accuracy and high sound quality is achieved by performing a two-path search including a speech segment search and a prosody modification value search. An accurate accent is secured by evaluating the consistency of the prosody by using a statistical model of prosody variations (the slope of the fundamental frequency) for both of the two paths, the speech segment selection and the modification value search. In the prosody modification value search, a prosody modification value sequence that minimizes a modified prosody cost is searched for. This allows a search for a modification value sequence that raises the likelihood of the absolute values or variations of the prosody under the statistical model as much as possible with minimum modification values.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This Application claims the benefit under 35 U.S.C. §120 and is a continuation of U.S. application Ser. No. 12/192,510, entitled “SPEECH SYNTHESIS SYSTEM, SPEECH SYNTHESIS PROGRAM PRODUCT, AND SPEECH SYNTHESIS METHOD” filed on Aug. 15, 2008, which claims foreign priority benefits under 35 U.S.C. §119(a)-(d) or 35 U.S.C. §365(b) of Japanese application number 2007-232395, entitled “SPEECH SYNTHESIS SYSTEM, SPEECH SYNTHESIS PROGRAM PRODUCT, AND SPEECH SYNTHESIS METHOD” filed Sep. 7, 2007, both of which are herein incorporated by reference in their entirety.
TECHNICAL FIELD
The present invention relates to a speech synthesis technology for synthesizing speech by computer processing, and particularly to a technology for synthesizing speech with high sound quality.
BACKGROUND
Synthesizing speech with an accurate and natural accent is important in speech synthesis. One known approach is concatenative (waveform concatenation) speech synthesis. This technology generates synthesized speech by selecting, from a speech segment database, speech segments whose prosody is similar to the target prosody predicted using a prosody model, and concatenating them. The first advantage of this technology is that it can provide high sound quality and naturalness close to those of a recorded human voice in portions where appropriate speech segments are selected. In particular, fine tuning (smoothing) of the prosody is unnecessary in portions where segments that were originally continuous in the speaker's original speech (continuous speech segments) can be used directly in the concatenated sequence, and therefore the best sound quality with a natural accent is achieved there.
In waveform concatenation speech synthesis, however, accurate and natural prosody cannot always be produced, because the consistency of the prosody may be lost when speech segments selected by cost minimization are concatenated. Particularly in Japanese, the relationship in pitch between moras is perceived as a pitch accent; unless the prosody resulting from concatenating the speech segments is consistent as a whole, the naturalness of the synthesized speech is lost. In addition, a highly natural accent is not always obtained even when continuous speech segments are used in the synthesized speech. An accent depends on its context: the pitch may differ according to the context even if the accent type is the same, and the prosody may become unnatural at the accent joins when the continuous speech segments are poorly consistent with the surrounding portions.
Japanese Unexamined Patent Publication (Kokai) No. 2005-292433 discloses a technology for: acquiring a prosody sequence for target speech to be speech-synthesized with respect to a plurality of respective segments, each of which is a synthesis unit of speech synthesis; associating a fused speech segment obtained by fusing a plurality of speech segments, which are intended for the same speech unit and different in prosody of the speech unit from each other, with fused speech segment prosody information indicating the prosody of the fused speech segment and holding them; estimating a degree of distortion between segment prosody information indicating the prosody of segments obtained by division and the fused speech segment prosody information; selecting a fused speech segment based on the degree of the estimated distortion; and generating synthesized speech by concatenating the fused speech segments selected for the respective segments. Japanese Unexamined Patent Publication (Kokai) No. 2005-292433, however, does not suggest a technique for treating continuous speech segments.
The following document [1] discloses that a speech segment sequence having the maximum likelihood is obtained by learning the distribution of absolute values and relative values of the fundamental frequency (F0) in a prosody model for use in waveform concatenation speech synthesis. In the technique disclosed in this document, however, unnatural prosody is still produced when no suitable speech segments are available. Although it is possible to force the use of an F0 curve having the maximum likelihood as the prosody of the synthesized speech, the naturalness that only waveform concatenation speech synthesis can provide is then lost.
On the other hand, the following document [2] discloses that the speech segment prosody is used directly for continuous speech segments, since discontinuity never occurs within continuous speech segments. In this technique, in the portions other than the continuous speech segments, the speech segment prosody is smoothed before being used for the synthesized speech.
Patent Document 1
Japanese Unexamined Patent Publication (Kokai) No. 2005-292433
Nonpatent Document 1
[1] Xi jun Ma, Wei Zhang, Weibin Zhu, Qin Shi and Ling Jin, “PROBABILITY BASED PROSODY MODEL FOR UNIT SELECTION,” proc. ICASSP, Montreal, 2004.
Nonpatent Document 2
[2] E. Eide, A. Aaron, R. Bakis, P. Cohen, R. Donovan, W. Hamza, T. Mathes, M. Picheny, M. Polkosky, M. Smith, and M. Viswanathan, "Recent improvements to the IBM trainable speech synthesis system," in Proc. of ICASSP, 2003, pp. 1-708-1-711.
SUMMARY
In waveform concatenation speech synthesis, it is preferable that synthesized speech with high sound quality and naturally connected accents be produced when a large quantity of suitable speech segments is available, and that synthesized speech with accurate accents be produced even when it is not. Stated another way, a sentence whose content is similar to the recorded speaker's speech should preferably be synthesized with high sound quality, while any other sentence should still be synthesized with accurate accents. With the above conventional technology, however, it is difficult to synthesize speech of natural quality in some cases.
Therefore, it is an object of the present invention to provide a speech synthesis technology that not only allows a sentence having a similar content to recorded speaker's speech to be synthesized with high quality, but allows a sentence having a dissimilar content to the recorded speaker's speech to be synthesized with stable quality.
The present invention has been provided to solve the above problem. It provides prosody with high accuracy and high sound quality by performing a two-path search including a speech segment search and a prosody modification value search. In the preferred embodiment of the present invention, an accurate accent is secured by evaluating the consistency of the prosody by using a statistical model of prosody variations (the slope of the fundamental frequency) for both of the two paths, the speech segment selection and the modification value search. In the prosody modification value search, a prosody modification value sequence that minimizes a modified prosody cost is searched for. This allows a search for a modification value sequence that raises the likelihood of the absolute values or variations of the prosody under the statistical model as much as possible with minimum modification values.
With regard to the continuous speech segments, the same statistical model of prosody variations is used to evaluate whether they maintain consistency, and only consistent continuous speech segments are treated on a priority basis. The term "treated on a priority basis" means, first, that the best sound quality is achieved by leaving the fine tuning undone in the corresponding portion. In addition, the prosody of the other speech segments is modified, with the priority continuous speech segments given a particularly large weight in the modification value search, so as to ensure that the other speech segments are correctly consistent with the priority continuous speech segments.
The consistency of the fundamental frequency is evaluated by modeling the slope of the fundamental frequency using the statistical model and calculating the likelihood under that model. By using the slope obtained by linearly approximating the fundamental frequency within a certain time interval, instead of a difference from the fundamental frequency at a position in an adjacent mora, stable values can be observed independently of the mora length, and the consistency can be evaluated in consideration of all parts of the fundamental frequency within the range, which contributes to the reproduction of an accent that sounds accurate to a human ear. During learning, the slope of the fundamental frequency is calculated, for example, by linearly approximating a curve generated by first interpolating pitch marks in silent sections by linear interpolation and then smoothing the entire curve, preferably within a range from a point obtained by equally dividing each mora back to a point a certain time period earlier.
According to the present invention, high-quality speech synthesis is achieved by detecting and advantageously utilizing originally continuous speech segments where they exist and, where they do not, by evaluating the consistency of the prosody using a statistical model of prosody variations to secure accurate accents.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an outline block diagram illustrating a learning process which is the premise of the present invention and an entire speech synthesis process;
FIG. 2 is a block diagram of hardware for practicing the present invention;
FIG. 3 is a flowchart of the main process of the present invention;
FIG. 4 is a diagram illustrating an example of a decision tree;
FIG. 5 is a flowchart of the process for determining priority continuous speech segments;
FIG. 6 is a diagram illustrating the state of applying prosody modification values to speech segments; and
FIG. 7 is a diagram illustrating a difference in the process between the case where continuous speech segments are priority continuous speech segments and a case other than that.
DETAILED DESCRIPTION
Hereinafter, the present invention will be described by way of embodiments with reference to accompanying drawings. Unless otherwise indicated, the same reference numerals will be used to refer to the same elements in the entire description below.
Referring to FIG. 1, there is shown an outline block diagram illustrating the overview of the speech processing which is the premise of the present invention. The left part of FIG. 1 is a processing block diagram illustrating a learning step of preparing the information necessary for speech synthesis, such as a speech segment database and a prosody model. The right part of FIG. 1 is a processing block diagram illustrating a speech synthesis step.
In the learning process, a recorded script 102 includes at least several hundred sentences corresponding to various fields and situations in a text file format.
On the other hand, the recorded script 102 is read aloud by a plurality of narrators, preferably including men and women; the read-out speech is converted to a speech analog signal through a microphone (not shown) and then A/D-converted, and the A/D-converted speech is stored, preferably in PCM format, on the hard disk of a computer. Thus, a recording process 104 is performed. The digital speech signals stored on the hard disk constitute a speech corpus 106. The speech corpus 106 can include analytical data such as classes of the recorded speeches.
At the same time, a language processing unit 108 performs processing specific to the language of the recorded script 102. More specifically, it obtains the reading (phonemes), accents, and word classes of the input text. Since no space is left between words in some languages, there may also be a need to divide each sentence into word units. Therefore, a parsing technique is used, if necessary.
In a text analysis result block 110, a reading and an accent are assigned to each of the divided words. This is performed with reference to a prepared dictionary in which a reading is associated with an accent for each word.
In a building block 112 by the waveform editing and synthesis unit, the speech is divided into speech segments (an alignment of speech segments is obtained).
The waveform editing and synthesis unit 114 observes the fundamental frequency, preferably at three equally spaced points of each mora, on the basis of the speech segment data generated in the building block 112 and constructs a decision tree for predicting it. Furthermore, the distribution is modeled by a Gaussian mixture model (GMM) for each node of the decision tree. More specifically, the decision tree is used to cluster the input feature values so as to associate the probability distribution determined by the Gaussian mixture model with each cluster. A speech segment database 116 and a prosody model 118 constructed as described above are stored on the hard disk of the computer. The data of the speech segment database 116 and of the prosody model 118 prepared in this manner can be copied to another speech synthesis system and used for an actual speech synthesis process.
Note that the above approach of observing the fundamental frequency at three equally spaced points of each mora is appropriate for Japanese; in other languages such as English and Chinese, it may be more appropriate in some cases to determine the observation points in consideration of syllables or other elements.
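As an illustration of the observation step just described, the following sketch shows one way F0 values could be sampled at three equally spaced points within a mora, given an F0 contour and the mora boundaries from the alignment. The data layout and the choice of sampling positions (quarter points of the mora) are assumptions for illustration, not the patent's exact procedure.

```python
# Illustrative sketch only: collecting F0 observations at three equally
# spaced points per mora during learning. Data structures are assumptions.
import numpy as np

def observe_mora_f0(f0_contour, sample_times, mora_start, mora_end, n_points=3):
    """Return F0 values at n_points equally spaced positions inside one mora.

    f0_contour   : array of F0 values (Hz) for one utterance
    sample_times : array of time stamps (s) aligned with f0_contour
    mora_start, mora_end : mora boundaries (s) from the segment alignment
    """
    # Equally spaced observation points strictly inside the mora,
    # e.g. at 1/4, 2/4, 3/4 of its duration when n_points == 3.
    fractions = np.arange(1, n_points + 1) / (n_points + 1)
    points = mora_start + fractions * (mora_end - mora_start)
    # Interpolate the F0 contour at those observation points.
    return np.interp(points, sample_times, f0_contour)
```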
Subsequently, the speech synthesis process will be described with reference to FIG. 1. The speech synthesis process is basically to read aloud a sentence provided in text format via text-to-speech (TTS). This type of input text 120 is typically generated by an application program of the computer. For example, a typical computer application program displays a message for the user in a popup window, and that message can be used as an input text. For a car navigation system, an instruction such as "Turn to the right at the intersection located 200 meters ahead" is used as the text to be read aloud.
Subsequently, a language processing unit 122 obtains the reading (phonemes), accents, and word classes of the input text, similarly to the above processing of the language processing unit 108. In the case of a Japanese input text, the sentence is divided into words in this process, too.
Subsequently, in a text analysis result block 124, a reading and an accent are assigned to each of the divided words, similarly to the text analysis result block 110, in response to the processing output of the language processing unit 122.
In a synthesis block 126 by the waveform editing and synthesis unit, typically the following processes are sequentially performed:
    • Obtaining prosody modification values using the prosody model 118;
    • Reading candidates of speech segments from the speech segment database 116;
    • Getting a speech segment sequence;
    • Applying prosody modification appropriately; and
    • Generating synthesized speech by concatenating speech segments.
Thus, the synthesized speech 128 is obtained. The signal of the synthesized speech 128 is converted to an analog signal by DA conversion and is output from a speaker.
Referring to FIG. 2, there is shown a block diagram illustrating a basic structure of the speech synthesis system (text-to-speech synthesis system) according to the present invention. Although this embodiment will be described under the assumption that the configuration in FIG. 2 is applied to a car navigation system, it should be appreciated that the present invention is not limited thereto, but may be applied to an arbitrary information processor having a speech synthesis function, such as a vending machine, any other built-in device, or an ordinary personal computer.
In FIG. 2, a bus 202 is connected to a CPU 204, a main storage (RAM) 206, a hard disk drive (HDD) 208, a DVD drive 210, a keyboard 212, a display 214, and a DA converter 216. The DA converter 216 is connected to the speaker 218, and thus speech synthesized by the speech synthesis system according to the present invention is output from the speaker 218. In addition, the car navigation system is equipped with a GPS function and a GPS antenna, though they are not shown.
Furthermore, in FIG. 2, the CPU 204 has a 32-bit or 64-bit architecture that enables the execution of an operating system such as TRON, Windows® Automotive, or Linux®.
The HDD 208 stores the data of the speech segment database 116 generated by the learning process in FIG. 1 and the data of the prosody model 118. The HDD 208 further stores an operating system, a program for generating information related to a location detected by the GPS function or other text data to be speech-synthesized, and a speech synthesis program according to the present invention. Alternatively, these programs can be stored in an EEPROM (not shown) so as to be loaded into the main storage 206 from the EEPROM at power-on.
The DVD drive 210 is for use in mounting a DVD having map information for navigation. The DVD can store a text file to be read aloud by the speech synthesis function. The keyboard 212 substantially includes the operation buttons provided on the front of the car navigation system.
The display 214 is preferably a liquid crystal display and is used for displaying a navigation map in conjunction with the GPS function. Moreover, the display 214 appropriately displays a control panel or a control menu to be operated through the keyboard 212.
The DA converter 216 is for use in converting a digital signal of the speech synthesized by the speech synthesis system according to the present invention to an analog signal for driving the speaker 218.
Referring to FIG. 3, there is shown a flowchart illustrating the processing of the speech segment search and the prosody modification value search according to the present invention. A processing module for this processing is included in the synthesis block 126 by the waveform editing and synthesis unit in the configuration shown in FIG. 1. Moreover, in FIG. 2, it is stored in the hard disk drive 208 and loaded into the RAM 206 for execution. Prior to describing the flowchart shown in FIG. 3, the plurality of types of prosody used during processing will be described below.
1. Speech Segment Prosody.
Prosody indigenous to the speaker's original speech.
2. Target Prosody.
Prosody predicted using a prosody model for an input sentence at run time in a conventional approach. Generally, in the conventional approach, speech segments having speech segment prosody close to this value are selected. Note, however, that the target prosody is basically not used in the approach of the present invention. More specifically, speech segments are selected because their speech segment prosody has a high likelihood under the model stochastically representing the features of the speaker's prosody, not because their prosody is similar to the target prosody.
3. Final Prosody.
Prosody finally assigned to the synthesized speech. There are a plurality of options available for this value.
3-1. Directly Using Speech Segment Prosody.
Since speech segments are used without modification in this option, the best sound quality may be achieved. Discontinuous prosody, however, may occur between the speech segments and the speech segments adjacent to them, which can instead degrade the sound quality in some cases. Since such discontinuous prosody never occurs within continuous speech segments, this method is used only in such portions in the conventional approach.
3-2. Using Smoothed Speech Segment Prosody.
In this option, the speech segment prosody is smoothed across adjacent speech segments to obtain the final prosody. This eliminates discontinuity in the accent, and the speech therefore sounds smooth. In the conventional approach, this method is generally used in the portions other than the continuous speech segments. In that case, however, an inaccurate accent may be produced unless there are speech segments whose speech segment prosody is similar to the target prosody.
3-3. Using Target Prosody.
In this option, the target prosody, predicted using the prosody model for the input sentence as described above, is forcibly used. If this method is used, a major modification is required for the speech segments in portions where there are no speech segments whose speech segment prosody is similar to the target prosody, and the sound quality significantly deteriorates in those portions. Although this method is one of the conventional technologies, it is undesirable since it impairs the advantage of the high sound quality of waveform concatenation speech synthesis.
3-4. Using Speech Segment Prosody with Partial Modification.
In this option, the speech segment prosody is basically used, but the likelihood is evaluated so that the final prosody is calculated differently for each part. In this technique, the speech segment prosody is used directly, as in 3-1, for portions of continuous speech segments whose likelihood is sufficiently high (priority continuous speech segments); the best sound quality is achieved by directly using the speech segment prosody in such portions. A portion of continuous speech segments whose likelihood is low is treated as if it were not continuous speech segments, and the following process is performed. Specifically, for speech segments other than the continuous speech segments whose likelihood is relatively high, the speech segment prosody is smoothed before use, as in 3-2, and considerably high sound quality is obtained. For portions whose likelihood is relatively low, the prosody is modified with the minimum modification values needed to increase the likelihood, and the modified prosody is used as the final prosody. The sound quality in this case is not as high as the above, and this case is similar to the case of 3-3.
Now, returning to the flowchart shown in FIG. 3, in step 302 the GMM (Gaussian mixture model) decision is made using a decision tree. The decision tree is, for example, as shown in FIG. 4, and questions are associated with the respective nodes. The control reaches an end point by following the tree according to yes/no determinations made on the basis of the input feature values. FIG. 4 illustrates an example of a decision tree based on questions related to the positions of moras within a sentence. As described above, the decision tree is used for the GMM decision, and a GMM ID number is associated with each end point. The GMM parameters are obtained by looking up a table using the ID number. A GMM, namely a Gaussian mixture distribution, is the superposition of a plurality of weighted normal distributions, and the GMM parameters include means, variances, and weighting factors.
According to the present invention, the input feature values to the decision tree include a word class, the type of speech segment, and the position of mora within the sentence. On the other hand, the term “output parameter” means a GMM parameter of a frequency slope or an absolute frequency. The combination of the decision tree and GMM is used to predict the output parameter based on the input feature values. The related technology is conventionally known and therefore a more detailed description is omitted here. For example, refer to the above document [1] or the specification of Japanese Patent Application No. 2006-320890 filed by the present applicant.
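The following sketch illustrates the mechanism described above: a decision tree whose internal nodes hold yes/no questions over the input feature values (word class, speech segment type, mora position) and whose leaves hold a GMM ID, which is then used to look up GMM parameters (weights, means, variances) in a table. The node structure, feature names, and numeric values here are invented for illustration only.

```python
# Sketch of a decision-tree-to-GMM lookup; not the patent's actual code.
from dataclasses import dataclass

@dataclass
class Node:
    question: callable = None   # predicate over the feature dict (internal node)
    yes: "Node" = None
    no: "Node" = None
    gmm_id: int = None          # set only at leaf nodes

def decide_gmm(node, features):
    """Follow yes/no questions down the tree and return the GMM ID at the leaf."""
    while node.gmm_id is None:
        node = node.yes if node.question(features) else node.no
    return node.gmm_id

# GMM parameter table: id -> (weights, means, variances) of the mixture.
gmm_table = {
    0: ([0.6, 0.4], [4.9, 5.3], [0.02, 0.05]),   # e.g. log-F0 distributions
    1: ([1.0], [5.1], [0.03]),
}

features = {"mora_position": 2, "word_class": "noun", "segment_type": "a"}
root = Node(question=lambda f: f["mora_position"] <= 3,
            yes=Node(gmm_id=0), no=Node(gmm_id=1))
weights, means, variances = gmm_table[decide_gmm(root, features)]
```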
When the GMM parameters have been obtained in step 304, speech segments are searched for by using the GMM parameters in step 306. The speech segment database 116 contains a speech segment list and the actual voice of each speech segment. Moreover, in the speech segment database 116, each speech segment is associated with information such as its start-edge frequency, end-edge frequency, sound volume, length, and tone (cepstrum vector) at the start edge or end edge. In step 306, this information is used to obtain a speech segment sequence having the minimum cost.
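A minimal sketch of the kind of per-segment record the speech segment database is described as holding follows; the field names and types are assumptions for illustration, not the actual database schema.

```python
# Hypothetical per-segment record mirroring the fields listed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpeechSegment:
    phoneme: str                     # unit label
    start_f0: float                  # fundamental frequency at the start edge (Hz)
    end_f0: float                    # fundamental frequency at the end edge (Hz)
    volume: float                    # sound volume
    duration: float                  # length in seconds
    start_cepstrum: List[float] = field(default_factory=list)  # tone at start edge
    end_cepstrum: List[float] = field(default_factory=list)    # tone at end edge
    utterance_id: int = 0            # original recording; together with position,
    position: int = 0                # lets originally continuous segments be detected
```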
In this situation, it is necessary to clarify what kind of cost should be employed.
In the typical conventional technology, a speech segment sequence is selected which minimizes the sum of the costs described below. The costs in the conventional technology are basically based on the disclosure of the above document [2].
1. Spectrum Continuity Cost
The spectrum continuity cost is applied as a cost (penalty) to a difference across the spectrum so that the tones (spectrum) are smoothly connected in the selection of the speech segments.
2. Frequency Continuity Cost
The frequency continuity cost is applied as a cost to a difference of the fundamental frequency so that the fundamental frequencies are smoothly connected in the selection of the speech segments.
3. Duration Error Cost
The duration error cost is applied as a cost to a difference between target duration and speech segment duration so that the speech segment duration (length) is close to duration predicted using the prosody model in the selection of the speech segments.
4. Volume Error Cost
The volume error cost is applied as a cost to a difference between a target sound volume and a speech segment volume.
5. Frequency Error Cost
The frequency error cost is applied as a cost to an error of a speech segment frequency (speech segment prosody) from a target frequency, where the target frequency (target prosody) is previously obtained.
In the present invention, the frequency error cost and the frequency continuity cost are omitted among the above costs as a result of reconsidering the costs of the conventional technology. Instead, an absolute frequency likelihood cost (Cla), a frequency slope likelihood cost (Cld), and a frequency linear approximation error cost (Cf) are introduced.
The absolute frequency likelihood cost (Cla) will be described below. In the case of Japanese, the fundamental frequency is preferably observed at three equally spaced points of each mora, and a decision tree for predicting it is constructed during learning. Furthermore, the distribution is modeled by a Gaussian mixture model (GMM) for the nodes of the decision tree. Thus, at run time, the decision tree and GMM are used to calculate the likelihood of the speech segment prosody of the speech segments currently under consideration. The log likelihood is then sign-reversed, and an external weighting factor is applied to it to obtain the cost. The reason why the frequency likelihood is used instead of a target frequency is that approximation to one particular frequency is not indispensable for producing a Japanese accent as long as there is consistency with the adjacent speech segments. Therefore, a GMM is employed here with the aim of increasing the choices of speech segments.
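A small sketch of how such a cost could be computed: the observed (log-)F0 value is scored under the GMM selected by the decision tree, and the negative log likelihood is scaled by an external weight. The one-dimensional GMM layout is an assumption; the patent observes F0 at several points per mora.

```python
# Sketch of the absolute frequency likelihood cost Cla (illustrative only).
import math

def gmm_log_likelihood(x, weights, means, variances):
    """Log of a weighted sum of 1-D Gaussian densities."""
    total = sum(w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                for w, m, v in zip(weights, means, variances))
    return math.log(total)

def absolute_frequency_likelihood_cost(observed_log_f0, gmm_params, weight=1.0):
    weights, means, variances = gmm_params
    # Sign-reversed log likelihood, scaled by an external weighting factor.
    return -weight * gmm_log_likelihood(observed_log_f0, weights, means, variances)
```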
The frequency slope likelihood cost (Cld) will be described below. During learning, the slope of the fundamental frequency is preferably observed at three equally spaced points of each mora, and a decision tree for predicting it is constructed. Moreover, the distribution is modeled by a GMM for the nodes of the decision tree. At run time, the decision tree and GMM are used to calculate the likelihood of the slope of the speech segment sequence currently under consideration. The log likelihood is then sign-reversed, and an external weighting factor is applied to it to obtain the cost. During learning, the slope is calculated within a range from the position under consideration to a point going back, for example, 0.15 sec. At run time, the slope of the speech segments is likewise calculated within a range from the speech segment under consideration to a point going back 0.15 sec, in order to calculate the likelihood. The slope is calculated by obtaining the approximate straight line having the minimum square error.
The frequency linear approximation error cost (Cf) will be described below. While a change in the log frequency within the above range of 0.15 sec is approximated by a straight line when the frequency slope likelihood is calculated, the external weighting factor is applied to its approximation error to obtain the frequency linear approximation error cost (Cf). This cost is used due to the following two reasons: (1) If the approximation error is too large, the calculation of the frequency slope cost becomes meaningless; and (2) The prosody of the concatenated speech segments should change smoothly to the extent that the change can be approximated by the first-order approximation during the short time period of 0.15 sec.
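The following sketch shows one way the slope and its linear approximation error could be obtained: log-F0 samples within the preceding 0.15 s window are fitted with a least-squares line, the fitted slope feeding the frequency slope likelihood cost (Cld) and the residual error feeding the frequency linear approximation error cost (Cf). The windowing details and the RMS form of the error are assumptions.

```python
# Sketch: least-squares slope and approximation error over a 0.15 s window.
import numpy as np

def slope_and_approx_error(times, log_f0, t_end, window=0.15):
    """Fit a line to the log-F0 samples in (t_end - window, t_end]."""
    times = np.asarray(times)
    log_f0 = np.asarray(log_f0)
    mask = (times > t_end - window) & (times <= t_end)
    t, y = times[mask], log_f0[mask]
    if len(t) < 2:
        return 0.0, 0.0
    # First-order least-squares fit: y ~= slope * t + intercept.
    slope, intercept = np.polyfit(t, y, 1)
    residuals = y - (slope * t + intercept)
    approx_error = float(np.sqrt(np.mean(residuals ** 2)))  # RMS error of the fit
    return float(slope), approx_error
```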
Summarizing the above, in this embodiment of the present invention the speech segment sequence is determined by a beam search so as to minimize the spectrum continuity cost, the duration error cost, the volume error cost, the absolute frequency likelihood cost, the frequency slope likelihood cost, and the frequency linear approximation error cost. The beam search limits the number of hypotheses retained at each step of a best-first search in order to keep the search space tractable. Thus, in step 308, the speech segment sequence is determined.
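A compact, generic beam-search sketch of this segment search step follows. It keeps only the cheapest partial segment sequences at each unit position; the cost function is passed in, standing in for the sum of the six costs listed above. The candidate representation and the toy cost in the usage example are assumptions for illustration.

```python
# Generic beam search over per-unit segment candidates (illustrative sketch).

def beam_search(candidate_lists, transition_cost, beam_width=10):
    """candidate_lists[i] is the list of candidate segments for unit i.

    transition_cost(prev_sequence, candidate) returns the incremental cost of
    appending `candidate` after the partial sequence `prev_sequence`.
    """
    beam = [([], 0.0)]  # (partial segment sequence, accumulated cost)
    for candidates in candidate_lists:
        expanded = []
        for sequence, cost in beam:
            for cand in candidates:
                expanded.append((sequence + [cand],
                                 cost + transition_cost(sequence, cand)))
        # Keep only the beam_width cheapest hypotheses.
        expanded.sort(key=lambda item: item[1])
        beam = expanded[:beam_width]
    best_sequence, best_cost = beam[0]
    return best_sequence, best_cost

# Toy usage with a dummy cost that only penalizes F0 jumps between segments:
if __name__ == "__main__":
    units = [[100.0, 120.0], [110.0, 150.0], [115.0, 90.0]]  # candidate F0s per unit
    def jump_cost(seq, cand):
        return 0.0 if not seq else abs(cand - seq[-1])
    print(beam_search(units, jump_cost, beam_width=2))
```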
In this embodiment, different decision trees are used for the spectrum continuity cost, the duration error cost, the volume error cost, the absolute frequency likelihood cost, the frequency slope likelihood cost, and the frequency linear approximation error cost, respectively. Alternatively, however, for example, the volume, frequency, and duration are combined as a vector and a value of the vector can be estimated at a time using a single decision tree.
The likelihood evaluation in step 310 is applied to continuous speech segment portions, that is, portions of the selected speech segment sequence in which the number of consecutive continuous speech segments exceeds an externally provided threshold value Tc. The frequency slope likelihood cost Cld of such a portion is compared with another externally provided threshold value Td, and only portions passing this comparison are handled as "priority continuous speech segments," as shown in step 312, in the subsequent processes. Handling of the priority continuous speech segments will be described later with reference to the flowchart of FIG. 5.
Subsequently, the prosody modification value search in step 314 will be described. In this step, an appropriate modification value sequence for the speech segment prosody sequence is obtained by a Viterbi search. Specifically, the Viterbi search finds the prosody modification value sequence that maximizes the likelihood of the speech segment prosody sequence through dynamic programming. The GMM parameters obtained in step 304 are also used in this process. Alternatively, a beam search can be used instead of the Viterbi search to obtain the prosody modification value sequence in this step, too. One modification value is selected from candidates determined discretely within a previously determined range from a lower limit to an upper limit (for example, from −100 Hz to +100 Hz at intervals of 10 Hz). The modified speech segment prosody is evaluated by the sum of the following costs, namely the modified prosody cost:
  • 1. Absolute frequency likelihood cost (Cla)
  • 2. Frequency slope likelihood cost (Cld)
  • 3. Frequency linear approximation error cost (Cf)
  • 4. Prosody modification cost (Cm)
Note here that the terms, “absolute frequency likelihood cost,” “frequency slope likelihood cost,” and “frequency linear approximation error cost” are the same as those of the above speech segment search, but different decision trees from those of the calculation of the costs for the speech segment search are used to calculate the modified prosody cost. Input variables used for the decision trees, however, are the same as existing input variables used for the decision tree of the frequency error cost. Note here that it is also possible to estimate a two-dimensional vector which is the combination of the absolute frequency likelihood cost and the frequency slope likelihood cost through one decision tree at a time.
The prosody modification cost is a cost (penalty) on the modification value applied to a speech segment F0. It is called a penalty because the sound quality deteriorates as the modification value increases. The prosody modification cost is calculated by multiplying the modification value of the prosody by an external weight. Note, however, that for the priority continuous speech segments, the prosody modification cost is calculated by multiplying the cost by another, large external weight, or the cost is set to an extremely large constant, to inhibit any modification value other than zero. Thereby, in the vicinity of the priority continuous speech segments, modification values are selected so as to be consistent with the prosody of the priority continuous speech segments. Thus, in step 316, the prosody modification value for each speech segment is determined.
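The sketch below shows the modification value search as a Viterbi (dynamic programming) pass over the discrete candidates mentioned above (for example −100 Hz to +100 Hz in 10 Hz steps). It assumes the per-position cost can be written as a function of the current and previous modified F0 values, which is what makes the Viterbi recursion applicable; this first-order decomposition and the function names are assumptions rather than the patent's exact cost formulation.

```python
# Viterbi search over discrete prosody modification values (illustrative sketch).

def viterbi_modification_search(segment_f0s, local_cost,
                                candidates=range(-100, 101, 10)):
    """Return one modification value per segment minimizing the summed cost.

    local_cost(i, prev_modified_f0, modified_f0) scores position i given the
    previous and current modified frequencies (prev is None at i == 0).
    """
    candidates = list(candidates)
    n = len(segment_f0s)
    # best[i][j] = minimum cost of a path ending with candidate j at position i
    best = [[0.0] * len(candidates) for _ in range(n)]
    back = [[0] * len(candidates) for _ in range(n)]
    for j, d in enumerate(candidates):
        best[0][j] = local_cost(0, None, segment_f0s[0] + d)
    for i in range(1, n):
        for j, d in enumerate(candidates):
            cur = segment_f0s[i] + d
            scores = [best[i - 1][k]
                      + local_cost(i, segment_f0s[i - 1] + candidates[k], cur)
                      for k in range(len(candidates))]
            back[i][j] = min(range(len(candidates)), key=scores.__getitem__)
            best[i][j] = scores[back[i][j]]
    # Trace back from the cheapest final state.
    j = min(range(len(candidates)), key=best[-1].__getitem__)
    deltas = [0] * n
    for i in range(n - 1, -1, -1):
        deltas[i] = candidates[j]
        j = back[i][j]
    return deltas
```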
In this embodiment, no decision tree is used to calculate the prosody modification cost (Cm). It is based on a concept that the prosody modification should be small for all phonemes equally. If, however, it is expected that the sound quality of some phonemes does not deteriorate even after the prosody modification while the sound quality of other phonemes significantly deteriorates after the prosody modification and it is desirable to perform different prosody modification for them, the use of a decision tree is appropriate for the prosody modification cost, too.
In step 318, the prosody modification value obtained in step 316 is applied to each speech segment to smooth the prosody. Thus, in step 320, the prosody to be finally applied to the synthesized speech is determined.
Referring to FIG. 5, there is shown a flowchart of the processing for determining the weight for the modification value cost, which is used in the modification value search 314 shown in FIG. 3. In FIG. 5, the speech segments are checked one by one in step 502. Then, in step 504, it is determined whether the number of continuous speech segments is greater than the intended threshold value Tc. The term "continuous speech segments" means a sequence of speech segments that were originally continuous in the original speaker's speech and can be used for the synthesized speech directly in the concatenated sequence. If the number of continuous speech segments is smaller than the intended threshold value Tc, the speech segments are immediately determined to be ordinary speech segments in step 510.
If the number of continuous speech segments is greater than the intended threshold value Tc in step 504, the speech segments are considered to be continuous speech segments for the time being in step 506. The Tc value is 10 in one example. The speech segment sequence, however, is not treated specially for this reason alone. Next, in step 508, it is determined whether the slope likelihood Ld of the continuous speech segment portion is greater than the given threshold value Td. If it is not, the control proceeds to step 510 and the segments are considered to be ordinary speech segments after all; only when the slope likelihood Ld is determined to be greater than the given threshold value Td in step 508 is the speech segment sequence considered to be priority continuous speech segments. The frequency slope likelihood cost (Cld) is obtained by assigning a negative weight to the log of the slope likelihood Ld. The treatment as priority continuous speech segments corresponds to step 312 shown in FIG. 3.
If the speech segment sequence is considered to be priority continuous speech segments, a large weight is used, as shown in step 516, in the prosody modification value search 514. The large weight used for the priority continuous speech segments substantially or completely inhibits any prosody modification from being applied to the priority continuous speech segments.
On the other hand, if the speech segment sequence is considered to be ordinary speech segments, a normal weight is used, as shown in step 518, in the prosody modification value search 514.
In this embodiment, a weight of 1.0 or 2.0 is used for the ordinary speech segments, and a weight that is twice to 10 times larger than the weight for the ordinary speech segments is used for the priority continuous speech segments.
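The decision of FIG. 5 and the weight choice just described can be summarized in a few lines. The sketch below uses the Tc value (10) and the roughly 2x to 10x weight ratio quoted in the text; the Td default is only a placeholder, since no concrete Td value is given.

```python
# Sketch of the priority-continuous-segment decision and weight selection.

def modification_weight(run_length, slope_likelihood,
                        tc=10, td=0.0,
                        normal_weight=1.0, priority_factor=10.0):
    # A run is "priority" only if it is longer than Tc AND its slope
    # likelihood Ld exceeds Td (both checks of FIG. 5).
    is_priority = run_length > tc and slope_likelihood > td
    # Ordinary segments get the normal weight; priority continuous segments
    # get a weight several times larger, so the modification value search
    # leaves them essentially unmodified.
    return normal_weight * priority_factor if is_priority else normal_weight
```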
Meanwhile, three equally spaced points of each mora are selected, as described above, as the observation points for the fundamental frequency and the frequency slope in this embodiment. It should be appreciated that this is, to some extent, a consideration peculiar to the Japanese language, because the mora is a unit of speech in Japanese, while the syllable may be the unit of speech in another language. If the above is applied directly in the latter case, three equally spaced points of each syllable are selected, but using them will lead to an unsuccessful result in some cases.
For example, in the case of English, the syllable has a structure of a consonant (onset)+vowel (nucleus=vowel)+consonant (coda). In this case, the onset or coda may be omitted. If the observation points are placed at three equally spaced points of the syllable when the coda includes a voiceless consonant such as /s/ or /t/, the third point comes behind the coda which is the voiceless consonant. Actually, however, the fundamental frequency does not exist in a voiceless consonant and therefore the third point may be meaningless. Moreover, the use of the observation point for the coda may reduce the important observation points for use in modeling the fundamental frequency of a vowel.
On the other hand, in the case of Chinese, the coda includes only voiced consonants, and therefore the same problem as in English does not occur. In Chinese, however, the shapes of the fundamental frequency contours of the four tones are very important, and they have important implications only in vowels. Almost all consonants in Chinese are voiceless consonants or plosives and do not have a fundamental frequency, so modeling of the corresponding portions is unnecessary. Moreover, the ups and downs of the fundamental frequency in Chinese are very significant, and therefore the frequency slope cannot be modeled successfully by observation at three points.
In Japanese, there is no coda, but there are many voiced consonants each having a fundamental frequency such as /m/, /n/, /r/, /w/, and /y/. Therefore, the method of placing observation points at three equally spaced points of each mora is effective.
Thus, it should be appreciated that it is necessary to appropriately change the positions and number of observation points for calculating the absolute frequency likelihood cost (Cla) and frequency slope likelihood cost (Cld) described above according to the phonetic characteristics of a language.
Referring to FIG. 6, there is shown a diagram illustrating the state of modifying the speech segment prosody. In FIG. 6, the ordinate represents frequency and the abscissa represents time. A graph 602 shows the concatenated state of the speech segments determined by the speech segment search in step 306 of the flowchart in FIG. 3; the vertical lines represent boundaries between the speech segments. At this point, the prosody of the original speech segments is shown as it is.
A graph 604 shows the prosody modification values for the respective speech segments, which are determined in the prosody modification value search in step 314 of the flowchart in FIG. 3. Moreover, a graph 606 illustrates the modified speech segment prosody resulting from application of the modification values in the graph 604.
Referring to FIG. 7, there is shown the processing performed in the case where the speech segment sequence includes priority continuous speech segments. A graph 702 of FIG. 7 shows the speech segment prosody before modification. In FIG. 7, a speech segment before modification is indicated by a dashed line and a speech segment after modification is indicated by a solid line. In particular, the speech segment sequence includes continuous speech segments 705. The continuous speech segments can be recognized by the absence of a level difference in the prosody at the joins between the speech segments. As shown in the flowchart of FIG. 5, however, continuous speech segments are not immediately considered to be priority continuous speech segments; only when the likelihood Ld of the slope of the continuous speech segments is greater than the threshold value Td are they considered to be priority continuous speech segments. If the continuous speech segments are consequently not considered to be priority continuous speech segments, they are treated as ordinary speech segments, and therefore the continuous speech segments 705 are also modified into the speech segments 705′, as shown in a graph 704.
On the other hand, if the continuous speech segments are considered to be priority continuous speech segments, a large weight is used for them in the prosody modification value search as shown in FIG. 5, and therefore the prosody modification values are not substantially applied to the continuous speech segments, as shown by the waveform 707 of a graph 706. The prosody modification values, however, need to be applied so as to maximize the likelihood of the slope as a whole, and therefore the graph 706 shows that larger prosody modification values than in the graph 704 are applied to the portions other than the priority continuous speech segments.
In order to verify the effectiveness of the present invention, a subjective evaluation has been performed on the accuracy of accents in synthesized speech. Three objects have been evaluated: the present invention, "application of speech segment prosody," which is a conventional approach, and "application of target prosody," which is another of the conventional technologies. The samples used for the evaluation are synthesized speeches each composed of 75 sentences (approx. 200 breath groups), and the number of subjects is three. As a result, a significant improvement has been observed, as shown in the accent precision columns of the table below. Additionally, the result of an objective evaluation of the sound quality is shown in the rightmost column of the same table. This value is the root mean square of the prosody modification values applied to the speech segments; the greater this value, the more the sound quality is considered to be degraded by the prosody modification. In the experiment, the prosody modification value of the present invention is more than 10 Hz smaller than with the application of target prosody, though slightly greater than with the application of speech segment prosody, which shows that the present invention achieves high accent precision with high sound quality.
TABLE 1

                                         Accent precision
Method                                   Natural   Unnatural though        Incorrect     Prosody modification
                                                   accent type is correct  accent type   value [Hz]
Application of speech segment prosody    57.6%     16.7%                   25.7%         11.3 Hz
Application of target prosody            74.2%     13.9%                   12.0%         30.5 Hz
Present invention                        91.2%     5.88%                   2.94%         17.7 Hz
Subsequently, the same subjective evaluation of accent precision has been performed with different comparison objects in order to verify the effectiveness of the components of the present invention. The comparison objects are as follows: the present invention; a case where the prosody modification of the present invention is not performed; and a case where all continuous speech segments are treated as priority continuous speech segments by setting Td of the present invention to an extremely small value. The samples used for the evaluation are synthesized speeches each composed of 75 sentences (approx. 200 breath groups), and the number of subjects is one. As a result, it has been shown that both the prosody modification and the Td threshold contribute to the improvement of accent precision, as shown in the following table:
TABLE 2

Method              Natural   Unnatural though        Incorrect
                              accent type is correct  accent type
No modification     78.8%     11.6%                   9.53%
Low Td value        85.7%     7.41%                   6.88%
Present invention   91.0%     4.76%                   2.35%
Finally, a model using the fundamental frequency slope of the present invention has been compared with a model [1] using a fundamental frequency difference under the same conditions without prosody modification in order to verify the superiority of the model using the fundamental frequency slope to the model [1] using the fundamental frequency difference. This evaluation has been performed simultaneously with the above evaluation. Therefore, the number of subjects and the number of samples are the same as those of the above. In consequence, it has been proved that the model using the fundamental frequency slope of the present invention is superior in accent precision as shown below.
TABLE 3

Method                                           Natural   Unnatural though        Incorrect
                                                           accent type is correct  accent type
Delta pitch without prosody modification         65.8%     10.7%                   23.5%
Present invention without prosody modification   78.8%     11.6%                   9.53%
Although the prosody modification value has been used in the frequency as an example in the above embodiment, the same method is also applicable to the duration. If so, the first path for the speech segment search is shared with the case of the frequency and the second path for the modification value search is used to perform the modification value search only for the duration separately from the pitch.
Furthermore, while the combination of GMM and the decision tree has been used as a statistical model in the above embodiment, it is also possible to apply the multiple regression analysis by Quantification Theory Type I, instead of the decision tree.

Claims (15)

The invention claimed is:
1. At least one computer-readable storage device encoded with a speech synthesis program which causes a system for synthesizing speech from text to perform:
determining a first speech segment sequence corresponding to an input text, by selecting speech segments from a speech segment database according to a first cost calculated based at least in part on a statistical model stochastically representing frequency slope variations, wherein each segment in the first speech segment sequence is to be used in generating speech corresponding to the input text;
determining prosody modification values for the first speech segment sequence, after the first speech segment sequence is selected, by using a second cost calculated based at least in part on the statistical model stochastically representing frequency slope variations, wherein the first cost is different from the second cost; and
applying the determined prosody modification values to the first speech segment sequence to produce a second speech segment sequence having a same number of speech segments as the first speech segment sequence and whose prosodic characteristics are different from prosodic characteristics of the first speech segment sequence,
wherein the second cost for determining the prosody modification values includes a sum of an absolute frequency likelihood cost, a frequency slope likelihood cost, a frequency linear approximation error cost, and a prosody modification cost.
2. The at least one computer-readable storage device of claim 1, wherein the first cost for determining the first speech segment sequence includes a spectrum continuity cost, a duration error cost, a volume error cost, an absolute frequency likelihood cost, a frequency slope likelihood cost, and a frequency linear approximation error cost.
3. The at least one computer-readable storage device of claim 1, wherein the statistical model uses a decision tree and a Gaussian mixture model.
4. The at least one computer-readable storage device of claim 3, wherein the Gaussian mixture model associates features of speech segments with respective frequency slope values.
5. The at least one computer-readable storage device of claim 1, wherein the program further causes the system to increase the prosody modification cost of at least one continuous speech segment in the first speech segment sequence having a slope likelihood greater than a given value.
6. A speech synthesis method for synthesizing speech from text by computer processing, the method comprising:
determining a first speech segment sequence corresponding to an input text, by selecting speech segments from a speech segment database according to a first cost calculated based at least in part on a statistical model stochastically representing frequency slope variations, wherein each segment in the first speech segment sequence is to be used in generating speech corresponding to the input text;
determining prosody modification values for the first speech segment sequence, after the first speech segment sequence is selected, by using a second cost calculated based at least in part on the statistical model stochastically representing frequency slope variations, wherein the first cost is different from the second cost; and
applying the determined prosody modification values to the first speech segment sequence to produce a second speech segment sequence having a same number of speech segments as the first speech segment sequence and whose prosodic characteristics are different from prosodic characteristics of the first speech segment sequence,
wherein the second cost for determining the prosody modification values includes a sum of an absolute frequency likelihood cost, a frequency slope likelihood cost, a frequency linear approximation error cost, and a prosody modification cost.
7. The method of claim 6, wherein the first cost for determining the first speech segment sequence includes a spectrum continuity cost, a duration error cost, a volume error cost, an absolute frequency likelihood cost, a frequency slope likelihood cost, and a frequency linear approximation error cost.
8. The method of claim 6, wherein the statistical model uses a decision tree and a Gaussian mixture model.
9. The method of claim 8, wherein the Gaussian mixture model associates features of speech segments with respective frequency slope values.
10. The method of claim 6, wherein the method further comprises increasing the prosody modification cost of at least one continuous speech segment in the first speech segment sequence having a slope likelihood greater than a given value.
11. A speech synthesis system for synthesizing speech from text, the system comprising:
at least one processor configured to:
determine a first speech segment sequence corresponding to an input text, by selecting speech segments from a speech segment database according to a first cost calculated based at least in part on a statistical model stochastically representing frequency slope variations, wherein each segment in the first speech segment sequence is to be used in generating speech corresponding to the input text;
determine prosody modification values for the first speech segment sequence, after the first speech segment sequence is selected, by using a second cost calculated based at least in part on the statistical model stochastically representing frequency slope variations, wherein the first cost is different from the second cost; and
apply the determined prosody modification values to the first speech segment sequence to produce a second speech segment sequence having a same number of speech segments as the first speech segment sequence and whose prosodic characteristics are different from prosodic characteristics of the first speech segment sequence,
wherein the second cost for determining the prosody modification values includes a sum of an absolute frequency likelihood cost, a frequency slope likelihood cost, a frequency linear approximation error cost, and a prosody modification cost.
12. The system of claim 11, wherein the first cost for determining the first speech segment sequence includes a spectrum continuity cost, a duration error cost, a volume error cost, an absolute frequency likelihood cost, a frequency slope likelihood cost, and a frequency linear approximation error cost.
13. The system of claim 11, wherein the statistical model uses a decision tree and a Gaussian mixture model.
14. The system of claim 13, wherein the Gaussian mixture model associates features of speech segments with respective frequency slope values.
15. The system of claim 11, wherein the at least one processor is further configured to increase the prosody modification cost of at least one continuous speech segment in the first speech segment sequence having a slope likelihood greater than a given value.
US13/731,268 | Priority date: 2007-09-07 | Filing date: 2012-12-31 | Speech synthesis system, speech synthesis program product, and speech synthesis method | Active, expires 2028-09-28 | US9275631B2 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US13/731,268 (US9275631B2) | 2007-09-07 | 2012-12-31 | Speech synthesis system, speech synthesis program product, and speech synthesis method

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
JP2007-232395 | 2007-09-07
JP2007232395A (JP5238205B2) | 2007-09-07 | 2007-09-07 | Speech synthesis system, program and method
US12/192,510 (US8370149B2) | 2007-09-07 | 2008-08-15 | Speech synthesis system, speech synthesis program product, and speech synthesis method
US13/731,268 (US9275631B2) | 2007-09-07 | 2012-12-31 | Speech synthesis system, speech synthesis program product, and speech synthesis method

Related Parent Applications (1)

Application Number | Relation | Priority Date | Filing Date | Title
US12/192,510 (US8370149B2) | Continuation | 2007-09-07 | 2008-08-15 | Speech synthesis system, speech synthesis program product, and speech synthesis method

Publications (2)

Publication Number | Publication Date
US20130268275A1 (en) | 2013-10-10
US9275631B2 (en) | 2016-03-01

Family

ID=40432832

Family Applications (2)

Application Number | Status | Priority Date | Filing Date | Title
US12/192,510 (US8370149B2) | Active, expires 2030-09-23 | 2007-09-07 | 2008-08-15 | Speech synthesis system, speech synthesis program product, and speech synthesis method
US13/731,268 (US9275631B2) | Active, expires 2028-09-28 | 2007-09-07 | 2012-12-31 | Speech synthesis system, speech synthesis program product, and speech synthesis method

Family Applications Before (1)

Application Number | Status | Priority Date | Filing Date | Title
US12/192,510 (US8370149B2) | Active, expires 2030-09-23 | 2007-09-07 | 2008-08-15 | Speech synthesis system, speech synthesis program product, and speech synthesis method

Country Status (2)

Country | Link
US (2) | US8370149B2 (en)
JP (1) | JP5238205B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20160365085A1 (en)* | 2015-06-11 | 2016-12-15 | Interactive Intelligence Group, Inc. | System and method for outlier identification to remove poor alignments in speech synthesis

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101617359B (en)*2007-02-202012-01-18日本电气株式会社Speech synthesizing device, and method
JP5238205B2 (en)2007-09-072013-07-17ニュアンス コミュニケーションズ,インコーポレイテッド Speech synthesis system, program and method
US8583438B2 (en)*2007-09-202013-11-12Microsoft CorporationUnnatural prosody detection in speech synthesis
WO2010119534A1 (en)*2009-04-152010-10-21株式会社東芝Speech synthesizing device, method, and program
EP2357646B1 (en)*2009-05-282013-08-07International Business Machines CorporationApparatus, method and program for generating a synthesised voice based on a speaker-adaptive technique.
US8332225B2 (en)*2009-06-042012-12-11Microsoft CorporationTechniques to create a custom voice font
RU2421827C2 (en)*2009-08-072011-06-20Общество с ограниченной ответственностью "Центр речевых технологий"Speech synthesis method
US8965768B2 (en)2010-08-062015-02-24At&T Intellectual Property I, L.P.System and method for automatic detection of abnormal stress patterns in unit selection synthesis
JP5717097B2 (en)*2011-09-072015-05-13独立行政法人情報通信研究機構 Hidden Markov model learning device and speech synthesizer for speech synthesis
US20140074465A1 (en)*2012-09-112014-03-13Delphi Technologies, Inc.System and method to generate a narrator specific acoustic database without a predefined script
US20140236602A1 (en)*2013-02-212014-08-21Utah State UniversitySynthesizing Vowels and Consonants of Speech
JP5807921B2 (en)*2013-08-232015-11-10国立研究開発法人情報通信研究機構 Quantitative F0 pattern generation device and method, model learning device for F0 pattern generation, and computer program
JP2015125681A (en)*2013-12-272015-07-06パイオニア株式会社Information providing device
GB2524505B (en)*2014-03-242017-11-08Toshiba Res Europe LtdVoice conversion
US9997154B2 (en)2014-05-122018-06-12At&T Intellectual Property I, L.P.System and method for prosodically modified unit selection databases
US9990916B2 (en)*2016-04-262018-06-05Adobe Systems IncorporatedMethod to synthesize personalized phonetic transcription
CN106356052B (en)*2016-10-172019-03-15腾讯科技(深圳)有限公司Phoneme synthesizing method and device
US10347238B2 (en)*2017-10-272019-07-09Adobe Inc.Text-based insertion and replacement in audio narration
CN108364632B (en)*2017-12-222021-09-10东南大学Emotional Chinese text voice synthesis method
US10770063B2 (en)2018-04-132020-09-08Adobe Inc.Real-time speaker-dependent neural vocoder
JP6698789B2 (en)*2018-11-052020-05-27パイオニア株式会社 Information provision device
WO2020101263A1 (en)*2018-11-142020-05-22Samsung Electronics Co., Ltd.Electronic apparatus and method for controlling thereof
CN109841216B (en)*2018-12-262020-12-15珠海格力电器股份有限公司Voice data processing method and device and intelligent terminal
US11062691B2 (en)*2019-05-132021-07-13International Business Machines CorporationVoice transformation allowance determination and representation
JP2020144890A (en)*2020-04-272020-09-10パイオニア株式会社Information provision device
US11335324B2 (en)*2020-08-312022-05-17Google LlcSynthesized data augmentation using voice conversion and speech recognition models

Patent Citations (109)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US3828132A (en)*1970-10-301974-08-06Bell Telephone Labor IncSpeech synthesis by concatenation of formant encoded words
US5664050A (en)*1993-06-021997-09-02Telia AbProcess for evaluating speech quality in speech synthesis
US5999900A (en)*1993-06-211999-12-07British Telecommunications Public Limited CompanyReduced redundancy test signal similar to natural speech for supporting data manipulation functions in testing telecommunications equipment
US6839670B1 (en)*1995-09-112005-01-04Harman Becker Automotive Systems GmbhProcess for automatic control of one or more devices by voice commands or by real-time voice dialog and apparatus for carrying out this process
US6240384B1 (en)*1995-12-042001-05-29Kabushiki Kaisha ToshibaSpeech synthesis method
US5913193A (en)*1996-04-301999-06-15Microsoft CorporationMethod and system of runtime acoustic unit selection for speech synthesis
US6366883B1 (en)*1996-05-152002-04-02Atr Interpreting TelecommunicationsConcatenation of speech segments by use of a speech synthesizer
US6233544B1 (en)*1996-06-142001-05-15At&T CorpMethod and apparatus for language translation
US6377917B1 (en)*1997-01-272002-04-23Microsoft CorporationSystem and methodology for prosody modification
US6173263B1 (en)*1998-08-312001-01-09At&T Corp.Method and system for performing concatenative speech synthesis using half-phonemes
US6266637B1 (en)*1998-09-112001-07-24International Business Machines CorporationPhrase splicing and variable substitution using a trainable speech synthesizer
US6665641B1 (en)*1998-11-132003-12-16Scansoft, Inc.Speech synthesis using concatenation of speech waveforms
US7219060B2 (en)*1998-11-132007-05-15Nuance Communications, Inc.Speech synthesis using concatenation of speech waveforms
US6253182B1 (en)*1998-11-242001-06-26Microsoft CorporationMethod and apparatus for speech synthesis with efficient spectral smoothing
US6823309B1 (en)*1999-03-252004-11-23Matsushita Electric Industrial Co., Ltd.Speech synthesizing system and method for modifying prosody based on match to database
US7761296B1 (en)*1999-04-022010-07-20International Business Machines CorporationSystem and method for rescoring N-best hypotheses of an automatic speech recognition system
US7369994B1 (en)*1999-04-302008-05-06At&T Corp.Methods and apparatus for rapid acoustic unit selection from a large speech corpus
US6701295B2 (en)*1999-04-302004-03-02At&T Corp.Methods and apparatus for rapid acoustic unit selection from a large speech corpus
US7447635B1 (en)*1999-10-192008-11-04Sony CorporationNatural language interface control system
US20010056347A1 (en)*1999-11-022001-12-27International Business Machines CorporationFeature-domain concatenative speech synthesis
US20010021906A1 (en)*2000-03-032001-09-13Keiichi ChiharaIntonation control method for text-to-speech conversion
US6980955B2 (en)*2000-03-312005-12-27Canon Kabushiki KaishaSynthesis unit selection apparatus and method, and storage medium
US7039588B2 (en)*2000-03-312006-05-02Canon Kabushiki KaishaSynthesis unit selection apparatus and method, and storage medium
US20060085194A1 (en)*2000-03-312006-04-20Canon Kabushiki KaishaSpeech synthesis apparatus and method, and storage medium
JP2001282282A (en)2000-03-312001-10-12Canon Inc Voice information processing method and apparatus, and storage medium
US7155390B2 (en)*2000-03-312006-12-26Canon Kabushiki KaishaSpeech information processing method and apparatus and storage medium using a segment pitch pattern model
US20010039492A1 (en)*2000-05-022001-11-08International Business Machines CorporationMethod, system, and apparatus for speech recognition
US20030208355A1 (en)*2000-05-312003-11-06Stylianou Ioannis G.Stochastic modeling of spectral adjustment for high quality pitch modification
US7124083B2 (en)*2000-06-302006-10-17At&T Corp.Method and system for preselection of suitable units for concatenative speech
US7069216B2 (en)*2000-09-292006-06-27Nuance Communications, Inc.Corpus-based prosody translation system
US20020152073A1 (en)*2000-09-292002-10-17Demoortel JanCorpus-based prosody translation system
US20040148171A1 (en)*2000-12-042004-07-29Microsoft CorporationMethod and apparatus for speech synthesis without prosody modification
US7280969B2 (en)*2000-12-072007-10-09International Business Machines CorporationMethod and apparatus for producing natural sounding pitch contours in a speech synthesizer
US20030158721A1 (en)*2001-03-082003-08-21Yumiko KatoProsody generating device, prosody generating method, and program
US20040172249A1 (en)*2001-05-252004-09-02Taylor Paul AlexanderSpeech synthesis
US6829581B2 (en)*2001-07-312004-12-07Matsushita Electric Industrial Co., Ltd.Method for prosody generation by unit selection from an imitation speech database
US20030046079A1 (en)*2001-09-032003-03-06Yasuo YoshiokaVoice synthesizing apparatus capable of adding vibrato effect to synthesized voice
US7165030B2 (en)*2001-09-172007-01-16Massachusetts Institute Of TechnologyConcatenative speech synthesis using a finite-state transducer
US20030088417A1 (en)*2001-09-192003-05-08Takahiro KamaiSpeech analysis method and speech synthesis system
US20030112987A1 (en)*2001-12-182003-06-19Gn Resound A/SHearing prosthesis with automatic classification of the listening environment
US7136816B1 (en)*2002-04-052006-11-14At&T Corp.System and method for predicting prosodic parameters
US20030195743A1 (en)*2002-04-102003-10-16Industrial Technology Research InstituteMethod of speech segment selection for concatenative synthesis based on prosody-aligned distance measure
US20120321016A1 (en)*2002-07-122012-12-20Alcatel-Lucent Usa IncCommunicating Over Single- or Multiple- Antenna Channels Having Both Temporal and Spectral Fluctuations
US7286986B2 (en)*2002-08-022007-10-23Rhetorical Systems LimitedMethod and apparatus for smoothing fundamental frequency discontinuities across synthesized speech segments
US20040059568A1 (en)*2002-08-022004-03-25David TalkinMethod and apparatus for smoothing fundamental frequency discontinuities across synthesized speech segments
US20040030555A1 (en)*2002-08-122004-02-12Oregon Health & Science UniversitySystem and method for concatenating acoustic contours for speech synthesis
JP2004109535A (en)2002-09-192004-04-08Nippon Hoso Kyokai <Nhk> Speech synthesis method, speech synthesis device, and speech synthesis program
JP2004139033A (en)2002-09-252004-05-13Nippon Hoso Kyokai <Nhk> Speech synthesis method, speech synthesis device, and speech synthesis program
US6988069B2 (en)*2003-01-312006-01-17Speechworks International, Inc.Reduced unit database generation based on cost information
US20040220813A1 (en)*2003-04-302004-11-04Fuliang WengMethod for statistical language modeling in speech recognition
US7280967B2 (en)*2003-07-302007-10-09International Business Machines CorporationMethod for detecting misaligned phonetic units for a concatenative text-to-speech voice
US7643990B1 (en)*2003-10-232010-01-05Apple Inc.Global boundary-centric feature extraction and associated discontinuity metrics
JP2005164749A (en)2003-11-282005-06-23Toshiba Corp Speech synthesis method, speech synthesizer, and speech synthesis program
US20050137870A1 (en)2003-11-282005-06-23Tatsuya MizutaniSpeech synthesis method, speech synthesis system, and speech synthesis program
US7668717B2 (en)*2003-11-282010-02-23Kabushiki Kaisha ToshibaSpeech synthesis method, speech synthesis system, and speech synthesis program
US20050119890A1 (en)*2003-11-282005-06-02Yoshifumi HiroseSpeech synthesis apparatus and speech synthesis method
US7856357B2 (en)*2003-11-282010-12-21Kabushiki Kaisha ToshibaSpeech synthesis method, speech synthesis system, and speech synthesis program
US7567896B2 (en)*2004-01-162009-07-28Nuance Communications, Inc.Corpus-based speech synthesis based on segment recombination
US20050182629A1 (en)*2004-01-162005-08-18Geert CoormanCorpus-based speech synthesis based on segment recombination
JP2005292433A (en)2004-03-312005-10-20Toshiba Corp Speech synthesis apparatus, speech synthesis method, and speech synthesis program
US7912719B2 (en)*2004-05-112011-03-22Panasonic CorporationSpeech synthesis device and speech synthesis method for changing a voice characteristic
US7617105B2 (en)*2004-05-312009-11-10Nuance Communications, Inc.Converting text-to-speech and adjusting corpus
US20060020473A1 (en)*2004-07-262006-01-26Atsuo HiroeMethod, apparatus, and program for dialogue, and storage medium including a program stored therein
US7869999B2 (en)*2004-08-112011-01-11Nuance Communications, Inc.Systems and methods for selecting from multiple phonectic transcriptions for text-to-speech synthesis
US20060041429A1 (en)2004-08-112006-02-23International Business Machines CorporationText-to-speech system and method
US20070276666A1 (en)*2004-09-162007-11-29France TelecomMethod and Device for Selecting Acoustic Units and a Voice Synthesis Method and Device
US20060074678A1 (en)*2004-09-292006-04-06Matsushita Electric Industrial Co., Ltd.Prosody generation for text-to-speech synthesis based on micro-prosodic data
US20060074674A1 (en)2004-09-302006-04-06International Business Machines CorporationMethod and system for statistic-based distance definition in text-to-speech conversion
US7590540B2 (en)*2004-09-302009-09-15Nuance Communications, Inc.Method and system for statistic-based distance definition in text-to-speech conversion
US7349847B2 (en)*2004-10-132008-03-25Matsushita Electric Industrial Co., Ltd.Speech synthesis apparatus and speech synthesis method
US20080195391A1 (en)*2005-03-282008-08-14Lessac Technologies, Inc.Hybrid Speech Synthesizer, Method and Use
US7630896B2 (en)*2005-03-292009-12-08Kabushiki Kaisha ToshibaSpeech synthesis system and method
US20060229877A1 (en)*2005-04-062006-10-12Jilei TianMemory usage in a text-to-speech system
US7716052B2 (en)*2005-04-072010-05-11Nuance Communications, Inc.Method, apparatus and computer program providing a multi-speaker database for concatenative text-to-speech synthesis
US20060259303A1 (en)*2005-05-122006-11-16Raimo BakisSystems and methods for pitch smoothing for text-to-speech synthesis
US20090234652A1 (en)*2005-05-182009-09-17Yumiko KatoVoice synthesis device
US20080177548A1 (en)*2005-05-312008-07-24Canon Kabushiki KaishaSpeech Synthesis Method and Apparatus
US7454343B2 (en)*2005-06-162008-11-18Panasonic CorporationSpeech synthesizer, speech synthesizing method, and program
US20090204405A1 (en)*2005-09-062009-08-13Nec CorporationMethod, apparatus and program for speech synthesis
US20070073542A1 (en)*2005-09-232007-03-29International Business Machines CorporationMethod and system for configurable allocation of sound segments for use in concatenative text-to-speech voice synthesis
US7801725B2 (en)*2006-03-302010-09-21Industrial Technology Research InstituteMethod for speech quality degradation estimation and method for degradation measures calculation and apparatuses thereof
US7916799B2 (en)*2006-04-032011-03-29Realtek Semiconductor Corp.Frequency offset correction for an ultrawideband communication system
US20070264010A1 (en)*2006-05-092007-11-15Aegis Lightwave, Inc.Self Calibrated Optical Spectrum Monitor
US20090254349A1 (en)*2006-06-052009-10-08Yoshifumi HiroseSpeech synthesizer
US20080027727A1 (en)*2006-07-312008-01-31Kabushiki Kaisha ToshibaSpeech synthesis apparatus and method
US20080046247A1 (en)*2006-08-212008-02-21Gakuto KurataSystem And Method For Supporting Text-To-Speech
US7921014B2 (en)*2006-08-212011-04-05Nuance Communications, Inc.System and method for supporting text-to-speech
US20080059190A1 (en)*2006-08-222008-03-06Microsoft CorporationSpeech unit selection using HMM acoustic models
US20100004931A1 (en)*2006-09-152010-01-07Bin MaApparatus and method for speech utterance verification
US20080132178A1 (en)*2006-09-222008-06-05Shouri ChatterjeePerforming automatic frequency control
US8024193B2 (en)*2006-10-102011-09-20Apple Inc.Methods and apparatus related to pruning for concatenative text-to-speech synthesis
US20080243511A1 (en)*2006-10-242008-10-02Yusuke FujitaSpeech synthesizer
US20080177543A1 (en)*2006-11-282008-07-24International Business Machines CorporationStochastic Syllable Accent Recognition
JP2008134475A (en)2006-11-282008-06-12Internatl Business Mach Corp <Ibm>Technique for recognizing accent of input voice
US7702510B2 (en)*2007-01-122010-04-20Nuance Communications, Inc.System and method for dynamically selecting among TTS systems
US8015011B2 (en)*2007-01-302011-09-06Nuance Communications, Inc.Generating objectively evaluated sufficiently natural synthetic speech from text by using selective paraphrases
US20100076768A1 (en)2007-02-202010-03-25Nec CorporationSpeech synthesizing apparatus, method, and program
US8249874B2 (en)*2007-03-072012-08-21Nuance Communications, Inc.Synthesizing speech from text
US8041569B2 (en)*2007-03-142011-10-18Canon Kabushiki KaishaSpeech synthesis method and apparatus using pre-recorded speech and rule-based synthesized speech
US20080288256A1 (en)*2007-05-142008-11-20International Business Machines CorporationReducing recording time when constructing a concatenative tts voice using a reduced script and pre-recorded speech assets
US8155964B2 (en)*2007-06-062012-04-10Panasonic CorporationVoice quality edit device and voice quality edit method
US8055501B2 (en)*2007-06-232011-11-08Industrial Technology Research InstituteSpeech synthesizer generating system and method thereof
US8255222B2 (en)*2007-08-102012-08-28Panasonic CorporationSpeech separating apparatus, speech synthesizing apparatus, and voice quality conversion apparatus
US8175881B2 (en)*2007-08-172012-05-08Kabushiki Kaisha ToshibaMethod and apparatus using fused formant parameters to generate synthesized speech
US20090055188A1 (en)*2007-08-212009-02-26Kabushiki Kaisha ToshibaPitch pattern generation method and apparatus thereof
US8370149B2 (en)2007-09-072013-02-05Nuance Communications, Inc.Speech synthesis system, speech synthesis program product, and speech synthesis method
US20090083036A1 (en)*2007-09-202009-03-26Microsoft CorporationUnnatural prosody detection in speech synthesis
US20090112596A1 (en)*2007-10-302009-04-30At&T Lab, Inc.System and method for improving synthesized speech interactions of a spoken dialog system
US20120059654A1 (en)*2009-05-282012-03-08International Business Machines CorporationSpeaker-adaptive synthesized voice

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Black, A. W., Taylor, P., "Automatically clustering similar units for unit selection in speech synthesis," Proc. Eurospeech '97, Rhodes, pp. 601-604, 1997.
Donovan, R. E., et al., "Current status of the IBM trainable speech synthesis system," Proc. 4th ISCA Tutorial and Research Workshop on Speech Synthesis. Atholl Palace Hotel, Scotland, 2001.
E. Eide, A. Aaron, R. Bakis, R. Cohen, R. Donovan, W. Hamza, T. Mathes, M. Picheny, M. Polkosky, M. Smith, and M. Viswanathan, "Recent improvements to the IBM trainable speech synthesis system," in Proc. of ICASSP, 2003, pp. I-708-I-711.
Office Action mailed Feb. 28, 2012 in corresponding Japanese Application No. 2007-232395.
Xi Jun Ma, Wei Zhang, Weibin Zhu, Qin Shi and Ling Jin, "Probability based prosody model for unit selection," Proc. ICASSP, Montreal, 2004.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20160365085A1 (en)* | 2015-06-11 | 2016-12-15 | Interactive Intelligence Group, Inc. | System and method for outlier identification to remove poor alignments in speech synthesis
US9972300B2 (en)* | 2015-06-11 | 2018-05-15 | Genesys Telecommunications Laboratories, Inc. | System and method for outlier identification to remove poor alignments in speech synthesis
US10497362B2 (en) | 2015-06-11 | 2019-12-03 | Interactive Intelligence Group, Inc. | System and method for outlier identification to remove poor alignments in speech synthesis

Also Published As

Publication number | Publication date
US8370149B2 (en) | 2013-02-05
US20090070115A1 (en) | 2009-03-12
JP2009063869A (en) | 2009-03-26
JP5238205B2 (en) | 2013-07-17
US20130268275A1 (en) | 2013-10-10

Similar Documents

Publication | Title
US9275631B2 (en) | Speech synthesis system, speech synthesis program product, and speech synthesis method
JP4054507B2 (en) | Voice information processing method and apparatus, and storage medium
JP4080989B2 (en) | Speech synthesis method, speech synthesizer, and speech synthesis program
US6778962B1 (en) | Speech synthesis with prosodic model data and accent type
US10692484B1 (en) | Text-to-speech (TTS) processing
US9484012B2 (en) | Speech synthesis dictionary generation apparatus, speech synthesis dictionary generation method and computer program product
US20060259303A1 (en) | Systems and methods for pitch smoothing for text-to-speech synthesis
US20040215459A1 (en) | Speech information processing method and apparatus and storage medium
JP4406440B2 (en) | Speech synthesis apparatus, speech synthesis method and program
Gutkin et al. | TTS for low resource languages: A Bangla synthesizer
US20160189705A1 (en) | Quantitative f0 contour generating device and method, and model learning device and method for f0 contour generation
JP4648878B2 (en) | Style designation type speech synthesis method, style designation type speech synthesis apparatus, program thereof, and storage medium thereof
Gutkin et al. | Building statistical parametric multi-speaker synthesis for bangladeshi bangla
JP4532862B2 (en) | Speech synthesis method, speech synthesizer, and speech synthesis program
JP5007401B2 (en) | Pronunciation rating device and program
JP4247289B1 (en) | Speech synthesis apparatus, speech synthesis method and program thereof
US20130117026A1 (en) | Speech synthesizer, speech synthesis method, and speech synthesis program
JP3854593B2 (en) | Speech synthesis apparatus, cost calculation apparatus therefor, and computer program
JP2006084854A (en) | Speech synthesis apparatus, speech synthesis method, and speech synthesis program
EP1589524B1 (en) | Method and device for speech synthesis
EP1640968A1 (en) | Method and device for speech synthesis
Dzibela et al. | Hidden-Markov-Model Based Speech Enhancement
JP4603290B2 (en) | Speech synthesis apparatus and speech synthesis program
Janicki et al. | Taking advantage of pronunciation variation in unit selection speech synthesis for Polish
CN115798452A (en) | End-to-end voice splicing synthesis method

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TACHIBANA, RYUKI;NISHIMURA, MASAFUMI;REEL/FRAME:029666/0218

Effective date:20080630

AS | Assignment

Owner name:NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:029683/0432

Effective date:20090331

STCF | Information on status: patent grant

Free format text:PATENTED CASE

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:4

AS | Assignment

Owner name:CERENCE INC., MASSACHUSETTS

Free format text:INTELLECTUAL PROPERTY AGREEMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:050836/0191

Effective date:20190930

AS | Assignment

Owner name:CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text:CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED AT REEL: 050836 FRAME: 0191. ASSIGNOR(S) HEREBY CONFIRMS THE INTELLECTUAL PROPERTY AGREEMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:050871/0001

Effective date:20190930

AS | Assignment

Owner name:BARCLAYS BANK PLC, NEW YORK

Free format text:SECURITY AGREEMENT;ASSIGNOR:CERENCE OPERATING COMPANY;REEL/FRAME:050953/0133

Effective date:20191001

AS | Assignment

Owner name:CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text:RELEASE BY SECURED PARTY;ASSIGNOR:BARCLAYS BANK PLC;REEL/FRAME:052927/0335

Effective date:20200612

AS | Assignment

Owner name:WELLS FARGO BANK, N.A., NORTH CAROLINA

Free format text:SECURITY AGREEMENT;ASSIGNOR:CERENCE OPERATING COMPANY;REEL/FRAME:052935/0584

Effective date:20200612

AS | Assignment

Owner name:CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text:CORRECTIVE ASSIGNMENT TO CORRECT THE REPLACE THE CONVEYANCE DOCUMENT WITH THE NEW ASSIGNMENT PREVIOUSLY RECORDED AT REEL: 050836 FRAME: 0191. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:059804/0186

Effective date:20190930

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:8

AS | Assignment

Owner name:CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text:RELEASE (REEL 052935 / FRAME 0584);ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:069797/0818

Effective date:20241231

