US7487093B2 - Text structure for voice synthesis, voice synthesis method, voice synthesis apparatus, and computer program thereof - Google Patents


Info

Publication number
US7487093B2
US7487093B2 (application US10/914,169; US91416904A)
Authority
US
United States
Prior art keywords
voice
text
feature
synthetic voice
change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/914,169
Other versions
US20050065795A1 (en)
Inventor
Masahiro Mutsuno
Toshiaki Fukada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Assigned to CANON KABUSHIKI KAISHA. Assignment of assignors interest (see document for details). Assignors: FUKADA, TOSHIAKI; MUTSUNO, MASAHIRO
Publication of US20050065795A1
Application granted
Publication of US7487093B2
Adjusted expiration
Status: Expired - Fee Related

Abstract

In a voice synthesis apparatus, by bounding a desired range of input text to be output with, e.g., a start tag <morphing type="emotion" start="happy" end="angry"> and an end tag </morphing>, a feature of the synthetic voice is changed continuously upon outputting synthetic voice, for example by gradually changing the voice from a happy voice to an angry voice.

Description

TECHNICAL FIELD
The present invention relates to a voice synthesis apparatus which outputs an input sentence (text) as synthetic voice from a loudspeaker.
BACKGROUND ART
Conventionally, a voice synthesis apparatus which outputs an input sentence (text) as synthetic voice (synthetic sound, synthetic speech) from a loudspeaker has been proposed.
In order to generate richly expressive synthetic voice from text using such an apparatus, control information such as strength, speed, and pitch must be given so that the user, as a listener, hears the output as natural voice.
For this purpose, even when synthetic voice is output on the basis of predetermined rules applied to a character string of text, attempts have been made to add desired language information into that text.
In this case, the additional information given to the text uses a format that bounds it with tags expressed by "< >", like those used in HTML (HyperText Markup Language), and a method of controlling synthetic voice tones for input text using these tags has been proposed.
However, in such conventional tagging methods, tags are assigned to discrete units such as sentences, words, and the like, each setting a predetermined fixed value. Although these methods aim at outputting synthetic voice that matches the various characters and words in the input text while continuously changing an appropriate prosody, the synthetic voice actually output undergoes only discrete changes, which sounds unnatural to a listener.
As a technique for continuously changing a certain prosody of voice, a voice morphing method is proposed by Japanese Patent Laid-Open No. 9-244693. However, with this method, only the pitch pattern can be interpolated.
Furthermore, with these methods, when synthetic voice of a portion bounded by tags in input text is to be continuously changed, tags must be adequately assigned to change points of the synthetic voice. Hence, the tagging operation is troublesome, and only a discrete change can be consequently obtained.
DISCLOSURE OF INVENTION
The present invention has been proposed to solve the conventional problems, and has as its object to continuously and easily change a feature of synthetic voice of a desired range.
In order to achieve the above object, a voice synthesis method according to the present invention is characterized by the following arrangement.
That is, there is provided a voice synthesis method for synthesizing a voice waveform to continuously change a feature of synthetic voice of a range assigned a predetermined identifier included in input text upon outputting synthetic voice corresponding to the text, comprising:
    • a setting step of setting a desired range of text to be output, in which the feature of synthetic voice is to be continuously changed, using a predetermined identifier including attribute information that represents a change mode of the feature of synthetic voice;
    • a recognition step of recognizing the predetermined identifier and a type of attribute information contained in the predetermined identifier from the text with the identifier, which is set in the setting step; and
    • a voice synthesis step of synthesizing a voice waveform, whose feature of synthetic voice continuously changes, in accordance with the attribute information contained in the predetermined identifier, by interpolating synthetic voice corresponding to text within the desired range of the text with the identifier in accordance with a recognition result in the recognition step.
In a preferred embodiment, the attribute information contained in the predetermined identifier represents a change mode of the feature of synthetic voice at a start position of the range set by the identifier, and a change mode of the feature of synthetic voice at an end position.
For example, the change mode of the feature of synthetic voice represented by the attribute information is at least one of a change in volume, a change in speaker, a change in output device, a change in number of speakers, a change in emotion, a change in uttering speed, and a change in fundamental frequency.
For example, the voice synthesis step includes a step of: generating synthetic voice corresponding to the text within the desired range on the basis of attribute information associated with start and end positions of the range set by identifiers contained in the predetermined identifier, and a mode of the feature of synthetic voice before the start position.
More specifically, the voice synthesis step preferably comprises a step of:
    • generating synthetic voice corresponding to the text within the desired range on the basis of a ratio between values that represent uttering speeds set as the attribute information associated with the start and end positions, and a value that represents an uttering speed before the start position, or
    • generating synthetic voice corresponding to the text within the desired range on the basis of a ratio between values that represent volumes set as the attribute information associated with the start and end positions, and a value that represents a volume before the start position.
Alternatively, in order to achieve the above object, there is provided a text structure for voice synthesis, in which a predetermined identifier is assigned to change a feature of synthetic voice of a desired range of text to be output by voice synthesis,
wherein the predetermined identifier contains attribute information that represents a change mode upon continuously changing the feature of synthetic voice.
Note that the above object is also achieved by a voice synthesis apparatus corresponding to the voice synthesis method with the above arrangements.
Also, the above object is also achieved by a program code which makes a computer implement the voice synthesis method or apparatus with the above arrangements, and a computer readable storage medium that stores the program code.
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram of a voice synthesis apparatus according to the first embodiment;
FIG. 2 shows an example of tags assigned to text;
FIGS. 3A and 3B are flow charts showing the control process of the voice synthesis apparatus of the first embodiment;
FIG. 4 is a graph for explaining an example of interpolation of an uttering speed upon outputting synthetic voice;
FIG. 5 is a graph for explaining an example of interpolation of a volume upon outputting synthetic voice;
FIG. 6 is a graph for explaining an example of interpolation of the number of speakers upon outputting synthetic voice;
FIG. 7 shows an example of tags assigned to text in the second embodiment;
FIG. 8 shows an example of tags assigned to text in the third embodiment;
FIG. 9 is a flow chart showing the control process of a voice synthesis apparatus according to the third embodiment;
FIG. 10 shows an example of tags assigned to text in the fourth embodiment;
FIG. 11 shows an example of tags assigned to text in the fifth embodiment;
FIG. 12 is a graph for explaining a change in feature of synthetic voice upon outputting synthetic voice in the fifth embodiment; and
FIG. 13 shows an example of tags assigned to text in the sixth embodiment.
BEST MODE OF CARRYING OUT THE INVENTION
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
First Embodiment
The arrangement of a voice synthesis apparatus according to this embodiment will be briefly explained first with reference to FIG. 1.
FIG. 1 is a block diagram of a voice synthesis apparatus of the first embodiment. As hardware, a general information processing apparatus such as a personal computer can be adopted.
Referring to FIG. 1, the apparatus comprises a text generation module 101 for generating a text body, and a tag generation module 102 for generating tagged text 103 by inserting predetermined tags, and attributes within those tags, at desired positions in the text to be output as voice. The text generation module 101 generates text on the basis of various information sources such as mail messages, news articles, magazines, printed books, and the like. The editor software used to write the tags and text is not particularly limited.
Note that a module indicates a functional unit of a software program executed by hardware of the voice synthesis apparatus according to this embodiment.
Note that the text generation module 101 and tag generation module 102 can be either external modules or internal modules of the voice synthesis apparatus itself.
The tagged text 103 is input to a text input module 104 via a communication line or a portable storage medium (CD-R or the like). The text part of the tagged text 103 input to the text input module 104 is analyzed by a text analysis module 105, and its tag part is analyzed by a tag analysis module 106. Furthermore, in this embodiment, attribute information contained in a tag is analyzed by a tag attribute analysis module 107 (details will be explained later).
A language processing module 108 processes language information (e.g., accent and the like) required upon outputting voice with reference to a language dictionary 110 that pre-stores language information. A voice synthesis module 109 generates a synthetic waveform that expresses the voice to be actually output with reference to a prosody model/waveform dictionary 111 that pre-stores prosodic phonemes and the like, and outputs synthetic voice from a loudspeaker (not shown) on the basis of that synthetic waveform.
Arrangements as a characteristic feature of this embodiment will be explained below.
The tag generation module 102 inserts predetermined tags and attributes into the text generated by the text generation module 101. Tags can be inserted at positions of the user's choice, and can be assigned to a portion where a feature of synthetic voice is to be smoothly changed, like in so-called morphing in image processing. In each tag, additional information called an attribute (attribute information) can be written. More specifically, the predetermined tags "<morphing . . . >" and "</morphing>" are assigned to the start and end points of the portion of text where a feature of synthetic voice is to be smoothly changed, and attribute information that represents the object whose feature of synthetic voice is to be continuously changed, in other words, the change pattern used upon continuously changing the feature of synthetic voice, is written in each tag.
In this embodiment, the feature of synthetic voice to be changed includes not only so-called prosody but also, e.g., the speaker, the number of speakers, emotion, and the like.
Note that the user writes the tags, the various attributes in the tags, and their attribute information upon generation of the text; alternatively, the tags and attribute values may be set automatically or semi-automatically by a multi-function editor or the like.
Attribute information embedded in each tag represents the feature of synthetic voice and is associated with, e.g., a volume, speaker, output device, number of speakers, emotion, uttering speed, fundamental frequency, and the like. In addition, other events which can be continuously changed upon outputting synthetic voice (a change to be referred to as "morphing" in this embodiment) may be used.
Start and end point tags set in text may have the same or different kinds of attribute information. When the start and end points have the same attribute information, voice according to the attribute information set by the start point tag is output without changing any feature of synthetic voice in association with that attribute information upon actually outputting synthetic voice.
A value corresponding to attribute information embedded in each tag is a numerical value if an attribute is a volume. If an attribute is a speaker, a male or female, or an identification number (ID) of the speaker can be designated.
FIG. 2 shows an example of tags assigned to text. In this example, a range where a feature of synthetic voice is to be continuously changed corresponds to a range bounded by a start tag “<morphing . . . >” and end tag “</morphing>”. Attributes in the start tag “<morphing . . . >” describe an emotion (emotion) as an object whose feature of synthetic voice is to be continuously changed, an emotion (happy) at the start point (start), and an emotion (angry) at the end point (end). Hence, when synthetic voice of this text is actually output, a sentence bounded by the tags is uttered while its voice gradually changes from a happy voice to an angry voice.
The text input module 104 of the voice synthesis apparatus according to this embodiment receives the tagged text 103 assigned with tags, as described above, and the text analysis module 105 acquires information associated with the type, contents, and the like of the text on the basis of the format of the input tagged text 103 and information in the header field of the text.
The tag analysis module 106 determines the types of tags embedded in the input tagged text 103. The tag attribute analysis module 107 analyzes the attributes and attribute values described in the tags.
The language processing module 108 and voice synthesis module 109 generate a voice waveform to be output by processing data, which is read out from the prosody model/waveform dictionary 111, as phonemes corresponding to the text analyzed by the text analysis module 105 on the basis of the attribute values acquired by the tag attribute analysis module 107, and output synthetic voice according to that voice waveform (the processing based on the attribute values will be explained later).
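The patent leaves the analysis modules abstract. As an illustrative sketch (not the patented implementation), the tag and attribute extraction for markup in the FIG. 2 style could be done with regular expressions; the sample sentence and the parser below are assumptions:

```python
import re

# Hypothetical tagged text in the style of FIG. 2; the tag name and the
# attribute names (type, start, end) follow the patent's example, but the
# sentence content is an illustrative assumption.
TAGGED_TEXT = ('Plain text before. '
               '<morphing type="emotion" start="happy" end="angry">'
               'Text read with a changing voice.</morphing>')

MORPHING_RE = re.compile(r'<morphing\s+([^>]*)>(.*?)</morphing>', re.DOTALL)
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_morphing(text):
    """Return (attributes, inner_text) pairs for every
    <morphing ...>...</morphing> range found in the tagged text."""
    return [(dict(ATTR_RE.findall(m.group(1))), m.group(2))
            for m in MORPHING_RE.finditer(text)]

attrs, inner = parse_morphing(TAGGED_TEXT)[0]
print(attrs)  # {'type': 'emotion', 'start': 'happy', 'end': 'angry'}
```

A real tag analysis module would also have to handle nesting (second embodiment) and malformed markup, which a single regular expression does not cover.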
A method of extracting attribute values in "<morphing> . . . </morphing>" tags by the tag analysis module 106 will be explained below using FIGS. 3A and 3B.
FIGS. 3A and 3B are flow charts showing the control process of the voice synthesis apparatus of the first embodiment, i.e., the sequence of processes to be executed by a CPU (not shown) of the apparatus.
Referring to FIGS. 3A and 3B, the tagged text 103 input by the text input module 104 undergoes text analysis, tag analysis, and tag attribute analysis by the text analysis module 105, tag analysis module 106, and tag attribute analysis module 107 (steps S301 to S303).
It is checked if the start tag “<morphing . . . >” includes objects and start and end points (step S304). It is checked first if an attribute value to be morphed is included. If no attribute value to be morphed is found, characters and words bounded by the start and end tags are read aloud in accordance with voice that has been read aloud in a sentence before that tag (step S305). On the other hand, if an attribute value to be morphed is found, it is checked if either one of attributes of start and end points is found (step S306).
If neither the start nor the end point has an attribute, characters and words bounded by the start and end tags are read aloud using a synthetic tone according to a default attribute value for the object to be morphed, which is set in advance (step S307). On the other hand, if either the start or end point has an attribute value, it is checked whether it is an attribute value of the start point (step S308). If it is not an attribute value of the start point, it is checked whether the attribute value of the end point matches the attribute to be morphed, i.e., whether it is valid (step S309). If the two match, the attribute value of the end point is used (step S311). In step S309, for example, if the object to be morphed is a volume, it is checked whether the attribute value of the end point is a volume value; if it is, characters and words bounded by the start and end tags are read aloud based on the information of the end point; if not, they are read aloud using a default synthetic tone prepared in advance for the attribute value of the object (step S310).
If it is determined in step S308 that the start point has an attribute value, and if the end point does not have an attribute value, text is read aloud according to the attribute value of the start point (step S312, step S315). In this case, the validity with an object is similarly checked, and if the two values match, text is read aloud according to the attribute value of the start point (step S313, step S314).
If both the start and end points have attribute values, and their values for the object are valid (match), a synthetic tone is output after interpolation based on the attribute values (step S316, S320). That is, if the object is a volume, it is determined that the attribute values of the start and end points are valid only when both the start and end points assume volume values. For example, if the attribute values of the start and end points are different (e.g., the start point is a volume value and the end point is an emotion), the attribute value which matches the object is used (step S317, step S319). If the attribute values of the start and end points are different and are also different from the object to be morphed, characters and words bounded by the start and end tags are read aloud using a default synthetic tone corresponding to the attribute value of the object (step S318). When tags to be checked have different attribute values, the priority of a voice output is “object”>“start point”>“end point”.
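The FIGS. 3A/3B flow reduces to a small priority rule. A minimal sketch of that resolution logic, assuming hypothetical default values and a simple type-compatibility check as the validity test (both are illustrative assumptions, not taken from the patent), might look like:

```python
# Assumed default attribute values; the patent only says defaults are
# "prepared in advance" per object.
DEFAULTS = {"volume": 50, "emotion": "neutral"}

def is_valid(obj, value):
    """Illustrative validity test: a 'volume' object expects a number,
    other objects expect a label string."""
    if obj == "volume":
        return isinstance(value, (int, float))
    return isinstance(value, str)

def resolve(obj=None, start=None, end=None, previous="previous-voice"):
    """Return the (start_value, end_value) pair actually used for output,
    following the priority object > start point > end point."""
    if obj is None:                      # no object to morph: keep prior voice
        return (previous, previous)
    if start is None and end is None:    # no endpoints: use the default
        return (DEFAULTS[obj], DEFAULTS[obj])
    if start is None:                    # only an end point is given
        v = end if is_valid(obj, end) else DEFAULTS[obj]
        return (v, v)
    if end is None:                      # only a start point is given
        v = start if is_valid(obj, start) else DEFAULTS[obj]
        return (v, v)
    # both present: keep only the values that match the object
    s = start if is_valid(obj, start) else DEFAULTS[obj]
    e = end if is_valid(obj, end) else DEFAULTS[obj]
    return (s, e)

print(resolve("emotion", "happy", "angry"))  # ('happy', 'angry')
print(resolve("volume", 20, "angry"))        # (20, 50)
```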
Interpolation performed based on an attribute value during voice generation will be described below with reference to FIG. 4.
FIG. 4 is a graph for explaining an example of interpolation of an uttering speed upon outputting synthetic voice.
As an example of an interpolation method, when the uttering speed is to be interpolated, the time required to output the waveform of the full text (the phonemes "a", "i", "u", "e" in FIG. 4) is calculated in accordance with the text to be output, and the time durations t of the respective phonemes which form that text are also calculated. In this embodiment, since standard prosodic models and voice waveforms are registered in advance in the prosody model/waveform dictionary 111, the time required to output the waveform of the full text can be calculated by summing up the time durations t of the respective phonemes ("a", "i", "u", "e" in FIG. 4) read out from the prosody model/waveform dictionary 111.
Then, ratio r between the values set as the attribute values of the start and end points, and the current uttering speed is calculated. In this case, if values set as the attribute values of the start and end points are equal to the current speed, since r=1, this interpolation process is not required.
Based on the calculated ratio, an interpolation function of each phoneme is calculated by (interpolation value)=t×r. By reducing or extending the period of a waveform in accordance with the calculated interpolation value, the uttering speed can be changed. Alternatively, a process for changing the time duration in correspondence with a certain feature of each phoneme may be made.
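The patent gives the per-phoneme rule (interpolation value) = t × r but leaves the shape of the ramp open. A sketch assuming a linear ramp of the ratio from the start-point value to the end-point value across the phonemes of the range:

```python
def interpolate_durations(durations, r_start, r_end):
    """Scale each phoneme duration t by a ratio r that moves linearly
    from r_start to r_end across the tagged range (interpolation
    value = t * r). r > 1 lengthens a phoneme (slower speech);
    r < 1 shortens it (faster speech). The linear ramp is an
    illustrative assumption."""
    n = len(durations)
    scaled = []
    for i, t in enumerate(durations):
        # position of this phoneme within the range, 0.0 .. 1.0
        pos = i / (n - 1) if n > 1 else 0.0
        r = r_start + (r_end - r_start) * pos
        scaled.append(t * r)
    return scaled

# Four phonemes ("a", "i", "u", "e") of 100 ms each, morphing from
# normal speed (r = 1.0) to half speed (r = 2.0):
print(interpolate_durations([100, 100, 100, 100], 1.0, 2.0))
```

If r_start equals r_end equals 1, every phoneme keeps its dictionary duration, matching the case above in which no interpolation is required.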
Upon interpolation of a volume, the time durations t of the respective phonemes which form the text to be output ("a", "i", "u", "e" in FIG. 5) are used in accordance with that text, as in interpolation of the uttering speed. Then, ratio r′ between the values set as the attribute values of the start and end points and the current volume is calculated.
FIG. 5 is a graph for explaining interpolation of a volume upon outputting synthetic voice. In FIG. 5, an interpolation function is calculated by (interpolation value)=f×r′. Note that f is the amplitude of a synthetic voice waveform obtained from the prosody model/waveform dictionary 111.
Amplitude f is reduced or extended in accordance with the calculated interpolation value. In place of changing the amplitude, a method of directly changing the volume of output hardware may be adopted. The same method applies to the fundamental frequency.
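Applying the same idea to amplitude, (interpolation value) = f × r′, a sketch with an assumed linear gain ramp over the waveform samples:

```python
def interpolate_volume(samples, r_start, r_end):
    """Scale each waveform amplitude f by a gain r' that ramps linearly
    from r_start to r_end over the range (interpolation value = f * r').
    The linear ramp is an illustrative assumption."""
    n = len(samples)
    return [f * (r_start + (r_end - r_start) * (i / (n - 1) if n > 1 else 0.0))
            for i, f in enumerate(samples)]

# Fade a constant-amplitude waveform from full volume to silence:
print(interpolate_volume([1.0, 1.0, 1.0, 1.0, 1.0], 1.0, 0.0))
# [1.0, 0.75, 0.5, 0.25, 0.0]
```

The same scheme applies to the fundamental frequency, with r′ computed from F0 targets instead of volumes.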
Furthermore, upon interpolating an emotion or uttering style, voice synthesis data corresponding to values set as the attribute values of the start and end points of text to be output are interpolated, thereby generating synthetic voice.
For example, in a voice synthesis method based on a waveform edit method such as PSOLA, the voice segment in the voice waveform dictionary corresponding to the emotion set at the start position of the text to be output, and the segment corresponding to the emotion set at the end position, undergo a PSOLA process with respect to a desired continuation time duration and fundamental frequency, and the voice waveform segments or synthetic voice waveforms are interpolated in accordance with an interpolation function obtained in the same manner as for a volume.
In addition, in a voice synthesis method based on a parameter analysis synthesis method such as cepstrum or the like, a parameter sequence obtained from a voice parameter dictionary corresponding to an emotion set at the start position in text to be output, and that obtained from the voice parameter dictionary corresponding to an emotion set at the end position are interpolated to generate a parameter, and synthetic voice corresponding to a desired continuation time duration and fundamental frequency is generated using this parameter.
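As an illustration of such parameter-domain interpolation, a sketch that blends two equal-length parameter sequences frame by frame with a linearly moving weight (the linear weight is an assumption; any interpolation function obtained as for a volume would do):

```python
def blend_parameters(params_start, params_end):
    """Frame-by-frame interpolation of two equal-length parameter
    sequences (e.g. cepstral vectors for the start-point emotion and
    the end-point emotion), with a blend weight moving linearly from
    0 (all start) to 1 (all end)."""
    n = len(params_start)
    blended = []
    for i, (a, b) in enumerate(zip(params_start, params_end)):
        w = i / (n - 1) if n > 1 else 0.0
        blended.append([(1 - w) * x + w * y for x, y in zip(a, b)])
    return blended

# Three frames of 2-dimensional parameters for two assumed emotion models:
happy = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]
angry = [[3.0, 0.0], [3.0, 0.0], [3.0, 0.0]]
print(blend_parameters(happy, angry))
# [[1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]
```

A synthesizer would then generate the waveform from the blended parameter sequence at the desired duration and fundamental frequency.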
Furthermore, as in a change from a male voice to a female voice, interpolation between speakers can be made by similar methods. Moreover, when the output device comprises stereophonic loudspeakers, the output may be continuously changed from the left loudspeaker to the right loudspeaker; when the output device comprises a headphone and an external loudspeaker, the output may be continuously changed from the headphone to the external loudspeaker.
Upon interpolation of the number of speakers (the number of persons who speak), an interpolation function shown in FIG. 6 is calculated.
FIG. 6 is a graph for explaining an example of interpolation of the number of speakers upon outputting synthetic voice. In the example shown in FIG. 6, morphing from one speaker to five speakers is implemented. In this case, the time duration of the waveform obtained from the text to be output is divided into five periods. Every time a divided period elapses, the number of speakers is increased by one, and the volume of the synthetic tone is changed on the basis of an interpolation function (a function that changes between 0 and 1) shown in FIG. 6. Also, the waveform level is normalized to prevent the amplitude from exceeding a predetermined value.
Note that speakers may be added in a predetermined order or randomly.
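The period-splitting scheme above can be sketched as follows; the per-period normalization factor 1/(active speakers) is an illustrative choice, since the patent only requires that the summed level stay within a predetermined value:

```python
def speaker_morph(total_ms, n_speakers):
    """One speaker morphs into n_speakers: the duration is split into
    n_speakers equal periods, one speaker joining per period, and the
    mix is normalized by the active count so the summed amplitude
    stays within bounds."""
    period = total_ms / n_speakers
    plan = []
    for k in range(n_speakers):
        active = k + 1                       # speakers active in period k
        plan.append({"t_start": round(k * period),
                     "speakers": active,
                     "norm": 1.0 / active})  # normalization factor
    return plan

# Morph a 1-second utterance from one speaker to five:
for step in speaker_morph(1000, 5):
    print(step)
```

Each new speaker's voice would additionally be faded in with a function changing between 0 and 1, as in FIG. 6; that fade is omitted here for brevity.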
In this embodiment, synthetic voice is output in accordance with a voice waveform generated by executing the aforementioned various interpolation processes. In this manner, natural synthetic voice, whose feature of synthetic voice changes continuously, can be implemented compared to a conventional voice synthesis apparatus with which a feature of synthetic voice changes discretely.
Second Embodiment
The second embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In this embodiment, the predetermined tags contained in the tagged text 103 adopt a nested structure, as shown in FIG. 7, in addition to the two tags "<morphing . . . >" and "</morphing>" used in the first embodiment, thereby setting a plurality of objects to be changed. With this nested structure, voice synthesis morphing that changes a plurality of objects can be implemented. That is, in the example shown in FIG. 7, the feature of synthetic voice upon uttering the text initially expresses a happy tone with a large volume, and then changes to express an angry tone while the volume becomes smaller than the initial volume.
Since other arrangements are the same as those in the first embodiment, a repetitive description will be omitted.
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Third Embodiment
The third embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In the first and second embodiments described above, attribute information contained in the start tag “<morphing . . . >” describes an object whose feature of synthetic voice is to be continuously changed, and attribute values of the start and end points of the object. By contrast, in the third embodiment, the start tag “<morphing . . . >” describes labels of an object to be changed at the start and end points.
FIG. 8 shows an example of tags assigned to text in the third embodiment; the text itself bounded by tags is the same as that in the second embodiment shown in FIG. 7. In this embodiment, the object to be changed is an emotion (emotion). Hence, the start and end points describe labels "emotionstart" and "emotionend" of the object to be changed. Since the arrangement of the voice synthesis apparatus in the third embodiment is the same as that in the first embodiment, a repetitive description thereof will be omitted. A difference between the first and third embodiments will be described below.
As in the first embodiment, the text analysis module 105 analyzes the type, contents, and the like of the input tagged text 103 on the basis of the format and header information of that text, thus acquiring information associated with them. The tag analysis module 106 determines the types of tags embedded in the text. The tag attribute analysis module 107 analyzes the attributes and attribute values described in the tags. In this embodiment, only the start and end points are to be analyzed, and the tag attribute analysis module 107 examines the objects of the start and end points. The voice synthesis module 109 performs interpolation on the basis of the attribute values obtained by the tag attribute analysis module 107, and generates synthetic voice corresponding to the contents of the text in accordance with a voice waveform obtained as a result of the interpolation.
Since the attribute information embedded in each tag has the same configuration as in the first embodiment, a repetitive description thereof will be omitted. The difference between the first and third embodiments is as follows. That is, upon describing an emotion (emotion) as the object whose feature of synthetic voice is to be continuously changed, together with an emotion at the start point (start) and an emotion at the end point (end), the start point is assigned a label "emotionstart" of the object to be changed, and the end point is assigned a label "emotionend" of the object to be changed. In this embodiment, since the exception process is partially different in correspondence with this change in tag format, the difference will be explained with reference to FIG. 9.
FIG. 9 is a flow chart showing the control process of the voice synthesis apparatus in the third embodiment, i.e., the sequence of processes to be executed by a CPU (not shown) of the apparatus.
Referring to FIG. 9, the tagged text 103 input by the text input module 104 undergoes text analysis, tag analysis, and tag attribute analysis by the text analysis module 105, tag analysis module 106, and tag attribute analysis module 107 (steps S901 to S903).
It is checked if the start tag "<morphing . . . >" includes start and end points, i.e., whether either of the start and end points has an attribute (step S904). If neither the start nor the end point has an attribute value, the text is read aloud according to the voice which was read aloud in the sentence before that tag (step S905). It is then checked if the start point has an attribute value; if it does not, the attribute value of the end point is used (steps S906, S907). Conversely, if the start point has an attribute value but the end point does not, the text is read aloud according to the attribute value of the start point (steps S908, S909). If both the start and end points have attribute values of the same type, interpolation is made based on these attribute values, and synthetic voice is output (steps S910, S912).
As the attribute values of the start and end points, if the object whose feature of synthetic voice is to be continuously changed is a volume, both the start and end points must assume volume values. If the types of the attribute values of the start and end points are different (e.g., the start point has a volume value and the end point has an emotion), the attribute value of the start point is used (step S911). When the tag has wrong attribute values, the priority of the voice output is (start point) > (end point).
Since other arrangements are the same as those in the first embodiment, a repetitive description thereof will be omitted.
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Fourth Embodiment
The fourth embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In the first to third embodiments, a change of morphing is constant, i.e., it depends on the rate of change of the morphing algorithm itself. However, the fourth embodiment is characterized in that an attribute for a morphing change can also be added. FIG. 10 shows such an example.
FIG. 10 shows an example of tags assigned to text in the fourth embodiment. In this embodiment, attribute information for the rate of change of morphing is also set in attributes in the start tag “<morphing . . . >”. As an attribute value that expresses the rate of change of morphing, a type of function used in a change such as linear, non-linear, logarithm, or the like is set in “function”.
In this embodiment, upon analyzing tags, the tag attribute analysis module 107 analyzes not only an object and start and end points, but also an attribute of a morphing change in accordance with the attribute value which represents the rate of change of morphing. As a result of the analysis, if an attribute value such as linear, nonlinear, logarithm, or the like is described in the "function" field, interpolation is made in accordance with the rate of change given by that attribute value, and synthetic voice is output in accordance with a synthetic waveform obtained by the interpolation. On the other hand, if this attribute value is not described, interpolation is made in accordance with a change method determined in advance by the morphing algorithm.
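The "function" attribute can be pictured as selecting an interpolation curve. The sketch below is a minimal Python illustration under the assumption that feature values are numeric; the particular formulas chosen for "nonlinear" and "logarithm" are examples made up here, not curves specified by the patent:

```python
import math

# Hypothetical mapping from the "function" attribute value to a rate curve.
# Each curve maps progress t in [0, 1] to an interpolation weight in [0, 1].
RATE_CURVES = {
    "linear": lambda t: t,
    "logarithm": lambda t: math.log1p(9 * t) / math.log(10),  # log curve; 0 -> 0, 1 -> 1
    "nonlinear": lambda t: t * t * (3 - 2 * t),               # smoothstep, as one example
}

def interpolate(start_value, end_value, t, function="linear"):
    """Blend two numeric feature values at progress t using the named curve."""
    w = RATE_CURVES[function](t)
    return (1 - w) * start_value + w * end_value
```

With "linear", the feature changes at a uniform rate over the tagged range; the other curves front-load or smooth the change while still matching the start and end values exactly.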
Since other arrangements are the same as those in the first embodiment, a repetitive description will be omitted.
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Fifth Embodiment
The fifth embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In the first to third embodiments, a change of morphing is constant, i.e., it depends on the rate of change of the morphing algorithm itself. However, the fifth embodiment is characterized in that attributes for a morphing change can be individually added within the tagged text. FIG. 11 shows such an example.
FIG. 11 shows an example of tags assigned to text in the fifth embodiment. In this embodiment, intermediate tags for a morphing change are further inserted in text bounded by “<morphing . . . > . . . </morphing>” tags.
In this embodiment, upon analyzing tags, the tag analysis module 106 analyzes not only the "<morphing>" tags but also intermediate tags that control morphing changes. An intermediate tag takes the form "<rate value="*.*"/>", where a rate of change ranging from 0 to 1 is described in the "value" attribute field. Such intermediate tags are individually embedded at desired positions in the text whose feature of synthetic voice is to be continuously changed. In this way, upon actually outputting synthetic voice after interpolation, a more complex change in the feature of synthetic voice can take place, as shown in FIG. 12.
It is noted that the positions at which tags of the form "<rate value="*.*"/>" are inserted were rearranged, when translating the original Japanese application into the present PCT application in English, as shown in FIG. 11, because of the difference in word order between Japanese and English. Accordingly, the line graph shown in FIG. 12 is also arranged in accordance with the arrangement of FIG. 11, so as to explain the present invention clearly and appropriately.
When the "function" attribute for a morphing change used in the fourth embodiment is also designated, the function designated earlier is used as the interpolation function from a given "<rate/>" tag to the next "<rate/>" tag.
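One way to picture the effect of the intermediate tags is as a piecewise weight over the tagged range. The Python sketch below assumes rate points are given as (position, value) pairs, with implicit anchors of 0 at the start tag and 1 at the end tag, and linear change between points; these conventions are illustrative assumptions, not details fixed by the patent:

```python
def piecewise_weight(position, rate_points):
    """Interpolation weight at a text position, given <rate value="..."/> points.

    rate_points: sorted list of (position, value) pairs, where position is a
    fraction of the tagged text (0 = start tag, 1 = end tag) and value is the
    rate of change from the "value" attribute (0 to 1). Between consecutive
    points the weight changes linearly, standing in for whatever default
    change method the morphing algorithm would apply.
    """
    # Implicit anchors at the start and end of the tagged range.
    points = [(0.0, 0.0)] + list(rate_points) + [(1.0, 1.0)]
    for (p0, v0), (p1, v1) in zip(points, points[1:]):
        if p0 <= position <= p1:
            if p1 == p0:
                return v1
            t = (position - p0) / (p1 - p0)
            return v0 + t * (v1 - v0)
    return 1.0
```

A single tag like `<rate value="0.2"/>` placed mid-text thus holds the morphing back in the first half and accelerates it in the second, which is the kind of non-uniform progression FIG. 12 depicts.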
Since other arrangements are the same as those in the first embodiment, a repetitive description will be omitted.
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Sixth Embodiment
The sixth embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In the aforementioned embodiments, the attribute values of the start and end points are both set in the start tag "<morphing . . . >". In this embodiment, by contrast, the attribute value of the end point is set in the end tag, as shown in FIG. 13.
FIG. 13 shows an example of tags assigned to text in the sixth embodiment.
In the tag configuration of the first embodiment, “<morphing type=“emotion” start=“happy”>” is described as the attribute of the start point and object in the start tag “<morphing . . . >”, and the attribute of the end point is described in the end tag like “</morphing end=“angry”>”. By contrast, in this embodiment, “<morphing emotionstart=“happy”>” is described in the start tag, and “</morphing emotionend=“angry”>” is described in the end tag. When an interpolation function of the fourth embodiment is designated in this embodiment, it is described in the start tag.
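A parser accepting both tag styles might look like the following sketch. The regular expressions, function name, and return shape are illustrative assumptions and do not reflect the actual implementation of the tag analysis modules described above:

```python
import re

def parse_morphing(text):
    """Extract (object, start, end, inner_text) from either tag style.

    Handles both:
      <morphing type="emotion" start="happy" end="angry">...</morphing>
    and the end-tag variant:
      <morphing emotionstart="happy">...</morphing emotionend="angry">
    Returns None when no morphing tag is found.
    """
    m = re.search(r'<morphing\s+([^>]*)>(.*?)</morphing\s*([^>]*)>', text, re.S)
    if not m:
        return None
    # Collect key="value" attributes from both the start and end tags.
    attrs = dict(re.findall(r'(\w+)="([^"]*)"', m.group(1) + " " + m.group(3)))
    if "type" in attrs:
        # Style with an explicit object type and start/end attributes.
        obj, start, end = attrs["type"], attrs.get("start"), attrs.get("end")
    else:
        # Style with labels such as emotionstart/emotionend.
        key = next(k[:-5] for k in attrs if k.endswith("start"))
        obj, start, end = key, attrs.get(key + "start"), attrs.get(key + "end")
    return obj, start, end, m.group(2)
```

Either way, the downstream interpolation only needs the object type, the two endpoint values, and the text span they bound.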
Since other arrangements are the same as those in the first embodiment, a repetitive description will be omitted.
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Seventh Embodiment
The seventh embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In this embodiment, unlike in the above embodiments, if the attributes of the start and end points in the tag are different from each other, an error is determined and the subsequent processes are inhibited.
The tag configuration of the first embodiment will be taken as an example. That is, if the attributes of "start" and "end" differ from each other as in "<morphing type="emotion" start="happy" end="10">", an error is determined and no process is done. If neither of the start and end points has an attribute, or if only one of them has an attribute, the same processes as in the first embodiment are executed. Likewise, for the tag configuration of the third embodiment, the same processes as in the third embodiment are executed in those cases. Since other arrangements are the same as those in the first to fifth embodiments, a repetitive description thereof will be omitted.
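The check above can be sketched as a simple validation step; the Python function name and the (type, value) tuple representation are illustrative assumptions:

```python
def check_morphing_attributes(start, end):
    """Seventh-embodiment behavior: abort when start/end attribute types differ.

    start, end: (type, value) tuples, or None when the attribute is absent.
    A missing attribute falls back to the first-embodiment handling, but a
    type mismatch (e.g. an emotion vs. a bare number) is treated as an error.
    """
    if start is not None and end is not None and start[0] != end[0]:
        raise ValueError("morphing start/end attribute types differ")
    return start, end
```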
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Eighth Embodiment
The eighth embodiment based on the voice synthesis apparatus according to the first embodiment mentioned above will be explained below. In the following description, a repetitive description of the same building components as those in the first embodiment will be omitted, and a characteristic feature of this embodiment will be mainly explained.
In the aforementioned embodiments, synthetic voice is output even when at least one of the plurality of pieces of attribute information to be set in the tag is not found. In this embodiment, however, when the attributes of the start and end points are different from each other, or when the attributes of the start and end points are different from that of the object, an error is determined and no process is done.
Since other arrangements are the same as those in the first to seventh embodiments, a repetitive description thereof will be omitted.
According to this embodiment with the above arrangement, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
Therefore, according to the aforementioned embodiments, by bounding a desired range of input text to be output by tags, a feature of synthetic voice can be continuously changed, like in morphing, upon outputting synthetic voice, and a text-to-voice function that sounds natural to a listener can be implemented, unlike the prior art, which produces discrete changes in voice.
Another Embodiment
The preferred embodiments of the present invention have been explained above. The present invention may be applied either to a system constituted by a plurality of devices, or to an apparatus consisting of a single device.
Note that the present invention includes a case wherein the invention is achieved by directly or remotely supplying a software program that implements the functions of the aforementioned embodiments to a system or apparatus, and reading out and executing the supplied program code by a computer of that system or apparatus. In this case, the supplied code need not take the form of a program as long as it provides the functions of the program.
Therefore, the program code itself installed in a computer to implement the functional process of the present invention using the computer implements the present invention. That is, the claims of the present invention include the computer program itself for implementing the functional process of the present invention.
In this case, the form of the program is not particularly limited, and an object code, a program to be executed by an interpreter, script data to be supplied to an OS, and the like may be used as long as they have the program function.
As a recording medium for supplying the program, for example, a floppy disk, hard disk, optical disk, magnetooptical disk, MO, CD-ROM, CD-R, CD-RW, magnetic tape, nonvolatile memory card, ROM, DVD (DVD-ROM, DVD-R) and the like may be used.
As another program supply method, the program may be supplied by establishing a connection to a home page on the Internet using a browser on a client computer, and downloading the computer program itself of the present invention, or a compressed file containing an automatic installation function, from the home page onto a recording medium such as a hard disk. Also, the program code that forms the program of the present invention may be segmented into a plurality of files, which may be downloaded from different home pages. That is, the claims of the present invention include a WWW (World Wide Web) server which allows a plurality of users to download a program file required to implement the functional process of the present invention by computer.
Also, a storage medium such as a CD-ROM or the like, which stores the encrypted program of the present invention, may be delivered to the user, the user who has cleared a predetermined condition may be allowed to download key information that is used to decrypt the program from a home page via the Internet, and the encrypted program may be executed using that key information to be installed on a computer, thus implementing the present invention.
The functions of the aforementioned embodiments may be implemented not only by executing the readout program code by the computer but also by some or all of actual processing operations executed by an OS or the like running on the computer on the basis of an instruction of that program.
Furthermore, the functions of the aforementioned embodiments may be implemented by some or all of actual processes executed by a CPU or the like arranged in a function extension board or a function extension unit, which is inserted in or connected to the computer, after the program read out from the recording medium is written in a memory of the extension board or unit.
As described above, according to the above embodiments, a feature of synthetic voice of a desired range of text to be output can be continuously and easily changed.
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the claims.

Claims (4)

1. A voice synthesis method for synthesizing a voice waveform to continuously change a feature of synthetic voice of a range assigned a predetermined identifier included in input text upon outputting synthetic voice corresponding to the text, the method comprising:
a setting step, via a setting module, of setting a desired range of text to be output, in which the feature of synthetic voice is to be continuously changed, using a predetermined identifier including attribute information that represents a change mode of the feature of synthetic voice both at a start position and at an end position of the range set by the identifier;
a recognition step of recognizing the predetermined identifier and a type of attribute information contained in the predetermined identifier from the text with the identifier, which is set in said setting step; and
a voice synthesis step of synthesizing a voice waveform, whose feature of synthetic voice continuously changes, in accordance with the attribute information contained in the predetermined identifier, by interpolating synthetic voice corresponding to text within the desired range of the text with the identifier in accordance with a recognition result in said recognition step,
wherein the change mode of the feature of synthetic voice includes at least one of a change in output device, a change in a number of speakers and a change in emotion.
2. A voice synthesis apparatus for synthesizing a voice waveform to continuously change a feature of synthetic voice of a range assigned a predetermined identifier included in input text upon outputting synthetic voice corresponding to the text, the apparatus comprising:
recognition means for recognizing, from text with an identifier, in which a predetermined identifier that represents a desired range, in which the feature of synthetic voice is to be continuously changed, and which contains attribute information representing a change mode of the feature of synthetic voice both at a start position and at an end position of the range set by the identifier, the predetermined identifier and a type of attribute information contained in the predetermined identifier from the text with the identifier; and
voice synthesis means for synthesizing a voice waveform, whose feature of synthetic voice continuously changes, in accordance with the attribute information contained in the predetermined identifier, by interpolating synthetic voice corresponding to text within the desired range of the text with the identifier in accordance with a recognition result of said recognition means,
wherein the change mode of the feature of synthetic voice includes at least one of a change in output device, a change in a number of speakers and a change in emotion.
3. A computer-readable storage medium storing a computer program comprising program code for implementing the voice synthesis method according toclaim 1.
4. A computer-readable storage medium storing a computer program comprising program code for causing a computer to serve as a voice synthesis apparatus for synthesizing a voice waveform to change a feature of synthetic voice of a range assigned a predetermined identifier included in input text upon outputting synthetic voice corresponding to the text, the program code comprising:
program code for a recognition function of recognizing, from text with an identifier, in which a predetermined identifier that represents a desired range, in which the feature of synthetic voice is to be continuously changed, and which contains attribute information representing a change mode of the feature of synthetic voice both at a start position and at an end position of the range set by the identifier, the predetermined identifier and a type of attribute information contained in the predetermined identifier from the text with the identifier; and
program code for a voice synthesis function of synthesizing a voice waveform, whose feature of synthetic voice continuously changes, in accordance with the attribute information contained in the predetermined identifier, by interpolating synthetic voice corresponding to text within the desired range of the text with the identifier in accordance with a recognition result of the recognition function,
wherein the change mode of the feature of synthetic voice includes at least one of a change in output device, a change in a number of speakers and a change in emotion.
US10/914,169 | Priority date: 2002-04-02 | Filing date: 2004-08-10 | Text structure for voice synthesis, voice synthesis method, voice synthesis apparatus, and computer program thereof | Expired - Fee Related | US7487093B2 (en)

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
JP2002100467A (publication JP2003295882A) | 2002-04-02 | 2002-04-02 | Text structure for speech synthesis, speech synthesis method, speech synthesis apparatus, and computer program therefor
JP2002-100467 | 2002-04-02
PCT/JP2003/004231 (publication WO2003088208A1) | 2002-04-02 | 2003-04-02 | Text structure for voice synthesis, voice synthesis method, voice synthesis apparatus, and computer program thereof

Related Parent Applications (1)

Application Number | Priority Date | Filing Date | Title
PCT/JP2003/004231 (Continuation; publication WO2003088208A1) | 2002-04-02 | 2003-04-02 | Text structure for voice synthesis, voice synthesis method, voice synthesis apparatus, and computer program thereof

Publications (2)

Publication Number | Publication Date
US20050065795A1 | 2005-03-24
US7487093B2 | 2009-02-03

Family

ID=29241389

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
US10/914,169 (Expired - Fee Related; US7487093B2) | 2002-04-02 | 2004-08-10 | Text structure for voice synthesis, voice synthesis method, voice synthesis apparatus, and computer program thereof

Country Status (9)

Country | Link
US (1) | US7487093B2 (en)
EP (1) | EP1490861B1 (en)
JP (1) | JP2003295882A (en)
KR (1) | KR100591655B1 (en)
CN (1) | CN1269104C (en)
AU (1) | AU2003226446A1 (en)
DE (1) | DE60325191D1 (en)
ES (1) | ES2316786T3 (en)
WO (1) | WO2003088208A1 (en)

US10791216B2 (en)2013-08-062020-09-29Apple Inc.Auto-activating smart responses based on activities from remote devices
US10789959B2 (en)2018-03-022020-09-29Apple Inc.Training speaker recognition models for digital assistants
US10810274B2 (en)2017-05-152020-10-20Apple Inc.Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10818288B2 (en)2018-03-262020-10-27Apple Inc.Natural assistant interaction
US10839159B2 (en)2018-09-282020-11-17Apple Inc.Named entity normalization in a spoken dialog system
US10892996B2 (en)2018-06-012021-01-12Apple Inc.Variable latency device coordination
US10909331B2 (en)2018-03-302021-02-02Apple Inc.Implicit identification of translation payload with neural machine translation
US10928918B2 (en)2018-05-072021-02-23Apple Inc.Raise to speak
US10984780B2 (en)2018-05-212021-04-20Apple Inc.Global semantic word embeddings using bi-directional recurrent neural networks
US11010550B2 (en)2015-09-292021-05-18Apple Inc.Unified language modeling framework for word prediction, auto-completion and auto-correction
US11010127B2 (en)2015-06-292021-05-18Apple Inc.Virtual assistant for media playback
US11010561B2 (en)2018-09-272021-05-18Apple Inc.Sentiment prediction from textual data
US11023513B2 (en)2007-12-202021-06-01Apple Inc.Method and apparatus for searching using an active ontology
US11025565B2 (en)2015-06-072021-06-01Apple Inc.Personalized prediction of responses for instant messaging
US11140099B2 (en)2019-05-212021-10-05Apple Inc.Providing message response suggestions
US11145294B2 (en)2018-05-072021-10-12Apple Inc.Intelligent automated assistant for delivering content from user experiences
US11170166B2 (en)2018-09-282021-11-09Apple Inc.Neural typographical error modeling via generative adversarial networks
US11204787B2 (en)2017-01-092021-12-21Apple Inc.Application integration with a digital assistant
US11217251B2 (en)2019-05-062022-01-04Apple Inc.Spoken notifications
US11227589B2 (en)2016-06-062022-01-18Apple Inc.Intelligent list reading
US11231904B2 (en)2015-03-062022-01-25Apple Inc.Reducing response latency of intelligent automated assistants
US11237797B2 (en)2019-05-312022-02-01Apple Inc.User activity shortcut suggestions
US11269678B2 (en)2012-05-152022-03-08Apple Inc.Systems and methods for integrating third party services with a digital assistant
US11281993B2 (en)2016-12-052022-03-22Apple Inc.Model and ensemble compression for metric learning
US11289073B2 (en)2019-05-312022-03-29Apple Inc.Device text to speech
US11301477B2 (en)2017-05-122022-04-12Apple Inc.Feedback analysis of a digital assistant
US11307752B2 (en)2019-05-062022-04-19Apple Inc.User configurable task triggers
US11314370B2 (en)2013-12-062022-04-26Apple Inc.Method for extracting salient dialog usage from live data
US11348573B2 (en)2019-03-182022-05-31Apple Inc.Multimodality in digital assistant systems
US11360641B2 (en)2019-06-012022-06-14Apple Inc.Increasing the relevance of new available information
US11386266B2 (en)2018-06-012022-07-12Apple Inc.Text correction
US11423908B2 (en)2019-05-062022-08-23Apple Inc.Interpreting spoken requests
US11462215B2 (en)2018-09-282022-10-04Apple Inc.Multi-modal inputs for voice commands
US11468282B2 (en)2015-05-152022-10-11Apple Inc.Virtual assistant in a communication session
US11475898B2 (en)2018-10-262022-10-18Apple Inc.Low-latency multi-speaker speech recognition
US11475884B2 (en)2019-05-062022-10-18Apple Inc.Reducing digital assistant latency when a language is incorrectly determined
US11488406B2 (en)2019-09-252022-11-01Apple Inc.Text detection using global geometry estimators
US11495218B2 (en)2018-06-012022-11-08Apple Inc.Virtual assistant operation in multi-device environments
US11496600B2 (en)2019-05-312022-11-08Apple Inc.Remote execution of machine-learned models
US11587559B2 (en)2015-09-302023-02-21Apple Inc.Intelligent device identification
US11638059B2 (en)2019-01-042023-04-25Apple Inc.Content playback on multiple devices

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1260704C (en)*2003-09-292006-06-21Motorola Inc.Method for voice synthesizing
JP2005234337A (en)*2004-02-202005-09-02Yamaha CorpDevice, method, and program for speech synthesis
JP4587160B2 (en)*2004-03-262010-11-24Canon Kabushiki Kaisha Signal processing apparatus and method
JP4720974B2 (en)*2004-12-212011-07-13ATR Advanced Telecommunications Research Institute International Audio generator and computer program therefor
US7983910B2 (en)*2006-03-032011-07-19International Business Machines CorporationCommunicating across voice and text channels with emotion preservation
JP5321058B2 (en)*2006-05-262013-10-23NEC Corp Information grant system, information grant method, information grant program, and information grant program recording medium
CN101295504B (en)*2007-04-282013-03-27Nokia Corp Entertainment audio for text-only apps
US20090157407A1 (en)*2007-12-122009-06-18Nokia CorporationMethods, Apparatuses, and Computer Program Products for Semantic Media Conversion From Source Files to Audio/Video Files
US8374873B2 (en)*2008-08-122013-02-12Morphism, LlcTraining and applying prosody models
JP5275102B2 (en)*2009-03-252013-08-28株式会社東芝 Speech synthesis apparatus and speech synthesis method
GB0906470D0 (en)2009-04-152009-05-20Astex Therapeutics LtdNew compounds
US8996384B2 (en)*2009-10-302015-03-31Vocollect, Inc.Transforming components of a web page to voice prompts
US8965768B2 (en)2010-08-062015-02-24At&T Intellectual Property I, L.P.System and method for automatic detection of abnormal stress patterns in unit selection synthesis
US8731932B2 (en)*2010-08-062014-05-20At&T Intellectual Property I, L.P.System and method for synthetic voice generation and modification
US20130030789A1 (en)*2011-07-292013-01-31Reginald DalceUniversal Language Translator
CN102426838A (en)*2011-08-242012-04-25Huawei Device Co., Ltd.Voice signal processing method and user equipment
KR20180055189A (en)2016-11-162018-05-25Samsung Electronics Co., Ltd.Method and apparatus for processing natural languages, method and apparatus for training natural language processing model
US11393451B1 (en)*2017-03-292022-07-19Amazon Technologies, Inc.Linked content in voice user interface
CN108305611B (en)*2017-06-272022-02-11Tencent Technology (Shenzhen) Co., Ltd.Text-to-speech method, device, storage medium and computer equipment
US10600404B2 (en)*2017-11-292020-03-24Intel CorporationAutomatic speech imitation
US10706347B2 (en)2018-09-172020-07-07Intel CorporationApparatus and methods for generating context-aware artificial intelligence characters
CN110138654B (en)*2019-06-062022-02-11Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for processing speech
CN112349271B (en)*2020-11-062024-07-16Beijing Lexuebang Network Technology Co., Ltd.Voice information processing method and device, electronic equipment and storage medium
CN118280342B (en)*2024-05-312024-08-09Guiyang Longmaster Information & Technology Co., Ltd.Method for reading streaming MarkDown text and tracking and displaying reading progress

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPS63253996A (en)1987-04-101988-10-20Fujitsu Ltd Sentence-speech conversion device
JPH06236197A (en)1992-07-301994-08-23Ricoh Co LtdPitch pattern generation device
JPH07191695A (en)1993-11-171995-07-28Sanyo Electric Co LtdSpeaking speed conversion device
US5845047A (en)1994-03-221998-12-01Canon Kabushiki KaishaMethod and apparatus for processing speech information using a phoneme environment
US5745651A (en)1994-05-301998-04-28Canon Kabushiki KaishaSpeech synthesis apparatus and method for causing a computer to perform speech synthesis by calculating product of parameters for a speech waveform and a read waveform generation matrix
US5745650A (en)1994-05-301998-04-28Canon Kabushiki KaishaSpeech synthesis apparatus and method for synthesizing speech from a character series comprising a text and pitch information
JPH09152892A (en)1995-09-261997-06-10Nippon Telegr & Teleph Corp <Ntt> Audio signal transformation connection method
JPH09160582A (en)1995-12-061997-06-20Fujitsu Ltd Speech synthesizer
JPH09244693A (en)1996-03-071997-09-19N T T Data Tsushin KkMethod and device for speech synthesis
US5983184A (en)1996-07-291999-11-09International Business Machines CorporationHyper text control through voice synthesis
JPH1078952A (en)1996-07-291998-03-24Internatl Business Mach Corp <Ibm>Voice synthesizing method and device therefor and hypertext control method and controller
US6334106B1 (en)1997-05-212001-12-25Nippon Telegraph And Telephone CorporationMethod for editing non-verbal information by adding mental state information to a speech message
JPH11202884A (en)1997-05-211999-07-30Nippon Telegr & Teleph Corp <Ntt> Synthetic voice message editing / creating method, apparatus and recording medium recording the method
US6226614B1 (en)1997-05-212001-05-01Nippon Telegraph And Telephone CorporationMethod and apparatus for editing/creating synthetic speech message and recording medium with the method recorded thereon
EP0880127A2 (en)1997-05-211998-11-25Nippon Telegraph and Telephone CorporationMethod and apparatus for editing/creating synthetic speech message and recording medium with the method recorded thereon
US20010032078A1 (en)2000-03-312001-10-18Toshiaki FukadaSpeech information processing method and apparatus and storage medium
US20020051955A1 (en)2000-03-312002-05-02Yasuo OkutaniSpeech signal processing apparatus and method, and storage medium
US6778960B2 (en)2000-03-312004-08-17Canon Kabushiki KaishaSpeech information processing method and apparatus and storage medium
EP1160764A1 (en)2000-06-022001-12-05Sony France S.A.Morphological categories for voice synthesis
JP2002023775A (en)2000-06-022002-01-25Sony France SaImprovement of expressive power for voice synthesis
US20020026315A1 (en)2000-06-022002-02-28Miranda Eduardo ReckExpressivity of voice synthesis
US20020049590A1 (en)2000-10-202002-04-25Hiroaki YoshinoSpeech data recording apparatus and method for speech recognition learning
US20030158735A1 (en)2002-02-152003-08-21Canon Kabushiki KaishaInformation processing apparatus and method with speech synthesis function
US20030229496A1 (en)2002-06-052003-12-11Canon Kabushiki KaishaSpeech synthesis method and apparatus, and dictionary generation method and apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Japanese Office Action dated Feb. 9, 2007, issued in corresponding Japanese patent application No. 2002-100467, with partial English-language translation.
Masanobu Abe, "Speech Morphing by Gradually Changing Spectrum Parameter and Fundamental Frequency," IEICE Technical Report (Jul. 1996), pp. 25-32, with English-language translation.
Note: English-language counterpart document(s) also cited (see text of IDS).
Office Action dated Jun. 15, 2007, issued in Japanese patent application No. 2002-100467, with English-language translation.

Cited By (279)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9646614B2 (en)2000-03-162017-05-09Apple Inc.Fast, language-independent method for user authentication by voice
US20070156408A1 (en)*2004-01-272007-07-05Natsuki SaitoVoice synthesis device
US7571099B2 (en)*2004-01-272009-08-04Panasonic CorporationVoice synthesis device
US10318871B2 (en)2005-09-082019-06-11Apple Inc.Method and apparatus for building an intelligent automated assistant
US11928604B2 (en)2005-09-082024-03-12Apple Inc.Method and apparatus for building an intelligent automated assistant
US8930191B2 (en)2006-09-082015-01-06Apple Inc.Paraphrasing of user requests and results by automated digital assistant
US8942986B2 (en)2006-09-082015-01-27Apple Inc.Determining user intent based on ontologies of domains
US9117447B2 (en)2006-09-082015-08-25Apple Inc.Using event alert text as input to an automated assistant
US10568032B2 (en)2007-04-032020-02-18Apple Inc.Method and system for operating a multi-function portable electronic device using voice-activation
US11023513B2 (en)2007-12-202021-06-01Apple Inc.Method and apparatus for searching using an active ontology
US10381016B2 (en)2008-01-032019-08-13Apple Inc.Methods and apparatus for altering audio output signals
US9330720B2 (en)*2008-01-032016-05-03Apple Inc.Methods and apparatus for altering audio output signals
US20090177300A1 (en)*2008-01-032009-07-09Apple Inc.Methods and apparatus for altering audio output signals
US9865248B2 (en)2008-04-052018-01-09Apple Inc.Intelligent text-to-speech conversion
US9626955B2 (en)2008-04-052017-04-18Apple Inc.Intelligent text-to-speech conversion
US10108612B2 (en)2008-07-312018-10-23Apple Inc.Mobile device having human language translation capability with positional feedback
US9535906B2 (en)2008-07-312017-01-03Apple Inc.Mobile device having human language translation capability with positional feedback
US8990087B1 (en)*2008-09-302015-03-24Amazon Technologies, Inc.Providing text to speech from digital content on an electronic device
US10643611B2 (en)2008-10-022020-05-05Apple Inc.Electronic devices with voice command and contextual data processing capabilities
US11348582B2 (en)2008-10-022022-05-31Apple Inc.Electronic devices with voice command and contextual data processing capabilities
US20100114556A1 (en)*2008-10-312010-05-06International Business Machines CorporationSpeech translation method and apparatus
US9342509B2 (en)*2008-10-312016-05-17Nuance Communications, Inc.Speech translation method and apparatus utilizing prosodic information
US9501470B2 (en)2008-11-262016-11-22At&T Intellectual Property I, L.P.System and method for enriching spoken language translation with dialog acts
US8374881B2 (en)*2008-11-262013-02-12At&T Intellectual Property I, L.P.System and method for enriching spoken language translation with dialog acts
US20100131260A1 (en)*2008-11-262010-05-27At&T Intellectual Property I, L.P.System and method for enriching spoken language translation with dialog acts
US9959870B2 (en)2008-12-112018-05-01Apple Inc.Speech recognition involving a mobile device
US11080012B2 (en)2009-06-052021-08-03Apple Inc.Interface for a virtual digital assistant
US10795541B2 (en)2009-06-052020-10-06Apple Inc.Intelligent organization of tasks items
US9858925B2 (en)2009-06-052018-01-02Apple Inc.Using context information to facilitate processing of commands in a virtual assistant
US10475446B2 (en)2009-06-052019-11-12Apple Inc.Using context information to facilitate processing of commands in a virtual assistant
US10283110B2 (en)2009-07-022019-05-07Apple Inc.Methods and apparatuses for automatic speech recognition
US9548050B2 (en)2010-01-182017-01-17Apple Inc.Intelligent automated assistant
US10706841B2 (en)2010-01-182020-07-07Apple Inc.Task flow identification based on user intent
US12087308B2 (en)2010-01-182024-09-10Apple Inc.Intelligent automated assistant
US10553209B2 (en)2010-01-182020-02-04Apple Inc.Systems and methods for hands-free notification summaries
US10496753B2 (en)2010-01-182019-12-03Apple Inc.Automatically adapting user interfaces for hands-free interaction
US10276170B2 (en)2010-01-182019-04-30Apple Inc.Intelligent automated assistant
US9318108B2 (en)2010-01-182016-04-19Apple Inc.Intelligent automated assistant
US10741185B2 (en)2010-01-182020-08-11Apple Inc.Intelligent automated assistant
US11423886B2 (en)2010-01-182022-08-23Apple Inc.Task flow identification based on user intent
US8892446B2 (en)2010-01-182014-11-18Apple Inc.Service orchestration for intelligent automated assistant
US10679605B2 (en)2010-01-182020-06-09Apple Inc.Hands-free list-reading by intelligent automated assistant
US10705794B2 (en)2010-01-182020-07-07Apple Inc.Automatically adapting user interfaces for hands-free interaction
US8903716B2 (en)2010-01-182014-12-02Apple Inc.Personalized vocabulary for digital assistant
US11410053B2 (en)2010-01-252022-08-09Newvaluexchange Ltd.Apparatuses, methods and systems for a digital conversation management platform
US10607140B2 (en)2010-01-252020-03-31Newvaluexchange Ltd.Apparatuses, methods and systems for a digital conversation management platform
US12307383B2 (en)2010-01-252025-05-20Newvaluexchange Global Ai LlpApparatuses, methods and systems for a digital conversation management platform
US10607141B2 (en)2010-01-252020-03-31Newvaluexchange Ltd.Apparatuses, methods and systems for a digital conversation management platform
US10984327B2 (en)2010-01-252021-04-20New Valuexchange Ltd.Apparatuses, methods and systems for a digital conversation management platform
US10984326B2 (en)2010-01-252021-04-20Newvaluexchange Ltd.Apparatuses, methods and systems for a digital conversation management platform
US9633660B2 (en)2010-02-252017-04-25Apple Inc.User profiling for voice input processing
US10049675B2 (en)2010-02-252018-08-14Apple Inc.User profiling for voice input processing
US10692504B2 (en)2010-02-252020-06-23Apple Inc.User profiling for voice input processing
US10762293B2 (en)2010-12-222020-09-01Apple Inc.Using parts-of-speech tagging and named entity recognition for spelling correction
US10417405B2 (en)2011-03-212019-09-17Apple Inc.Device access using voice authentication
US9262612B2 (en)2011-03-212016-02-16Apple Inc.Device access using voice authentication
US10102359B2 (en)2011-03-212018-10-16Apple Inc.Device access using voice authentication
US11350253B2 (en)2011-06-032022-05-31Apple Inc.Active transport based notifications
US10241644B2 (en)2011-06-032019-03-26Apple Inc.Actionable reminder entries
US10057736B2 (en)2011-06-032018-08-21Apple Inc.Active transport based notifications
US11120372B2 (en)2011-06-032021-09-14Apple Inc.Performing actions associated with task items that represent tasks to perform
US10706373B2 (en)2011-06-032020-07-07Apple Inc.Performing actions associated with task items that represent tasks to perform
US9798393B2 (en)2011-08-292017-10-24Apple Inc.Text correction processing
US8965769B2 (en)*2011-09-262015-02-24Kabushiki Kaisha ToshibaMarkup assistance apparatus, method and program
US9626338B2 (en)2011-09-262017-04-18Kabushiki Kaisha ToshibaMarkup assistance apparatus, method and program
US20130080175A1 (en)*2011-09-262013-03-28Kabushiki Kaisha ToshibaMarkup assistance apparatus, method and program
US10241752B2 (en)2011-09-302019-03-26Apple Inc.Interface for a virtual digital assistant
US11069336B2 (en)2012-03-022021-07-20Apple Inc.Systems and methods for name pronunciation
US10134385B2 (en)2012-03-022018-11-20Apple Inc.Systems and methods for name pronunciation
US9483461B2 (en)2012-03-062016-11-01Apple Inc.Handling speech synthesis of content for multiple languages
US9953088B2 (en)2012-05-142018-04-24Apple Inc.Crowd sourcing information to fulfill user requests
US11269678B2 (en)2012-05-152022-03-08Apple Inc.Systems and methods for integrating third party services with a digital assistant
US10079014B2 (en)2012-06-082018-09-18Apple Inc.Name recognition system
US9495129B2 (en)2012-06-292016-11-15Apple Inc.Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en)2012-09-102017-02-21Apple Inc.Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en)2012-09-192018-05-15Apple Inc.Voice-based media searching
US10978090B2 (en)2013-02-072021-04-13Apple Inc.Voice trigger for a digital assistant
US10714117B2 (en)2013-02-072020-07-14Apple Inc.Voice trigger for a digital assistant
US10199051B2 (en)2013-02-072019-02-05Apple Inc.Voice trigger for a digital assistant
US9368114B2 (en)2013-03-142016-06-14Apple Inc.Context-sensitive handling of interruptions
US9697822B1 (en)2013-03-152017-07-04Apple Inc.System and method for updating an adaptive speech recognition model
US9922642B2 (en)2013-03-152018-03-20Apple Inc.Training an at least partial voice command system
US9582608B2 (en)2013-06-072017-02-28Apple Inc.Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9633674B2 (en)2013-06-072017-04-25Apple Inc.System and method for detecting errors in interactions with a voice-based digital assistant
US9620104B2 (en)2013-06-072017-04-11Apple Inc.System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966060B2 (en)2013-06-072018-05-08Apple Inc.System and method for user-specified pronunciation of words for speech synthesis and recognition
US10657961B2 (en)2013-06-082020-05-19Apple Inc.Interpreting and acting upon commands that involve sharing information with remote devices
US9966068B2 (en)2013-06-082018-05-08Apple Inc.Interpreting and acting upon commands that involve sharing information with remote devices
US10185542B2 (en)2013-06-092019-01-22Apple Inc.Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10769385B2 (en)2013-06-092020-09-08Apple Inc.System and method for inferring user intent from speech inputs
US10176167B2 (en)2013-06-092019-01-08Apple Inc.System and method for inferring user intent from speech inputs
US11048473B2 (en)2013-06-092021-06-29Apple Inc.Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9300784B2 (en)2013-06-132016-03-29Apple Inc.System and method for emergency calls initiated by voice command
US10791216B2 (en)2013-08-062020-09-29Apple Inc.Auto-activating smart responses based on activities from remote devices
US11314370B2 (en)2013-12-062022-04-26Apple Inc.Method for extracting salient dialog usage from live data
US20190074018A1 (en)*2014-03-192019-03-07Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using individual replacement LPC representations for individual codebook information
US20170004834A1 (en)*2014-03-192017-01-05Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using an adaptive noise estimation
US10614818B2 (en)*2014-03-192020-04-07Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using individual replacement LPC representations for individual codebook information
US11393479B2 (en)*2014-03-192022-07-19Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using individual replacement LPC representations for individual codebook information
US20190066700A1 (en)*2014-03-192019-02-28Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using an adaptive noise estimation
US10621993B2 (en)*2014-03-192020-04-14Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using an adaptive noise estimation
US10224041B2 (en)2014-03-192019-03-05Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus, method and corresponding computer program for generating an error concealment signal using power compensation
US11367453B2 (en)2014-03-192022-06-21Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using power compensation
US10733997B2 (en)2014-03-192020-08-04Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using power compensation
US10163444B2 (en)*2014-03-192018-12-25Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using an adaptive noise estimation
US11423913B2 (en)*2014-03-192022-08-23Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using an adaptive noise estimation
US10140993B2 (en)2014-03-192018-11-27Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus and method for generating an error concealment signal using individual replacement LPC representations for individual codebook information
US9620105B2 (en)2014-05-152017-04-11Apple Inc.Analyzing audio input for efficient speech and music recognition
US10592095B2 (en)2014-05-232020-03-17Apple Inc.Instantaneous speaking of content on touch devices
US9502031B2 (en)2014-05-272016-11-22Apple Inc.Method for supporting dynamic grammars in WFST-based ASR
US9760559B2 (en)2014-05-302017-09-12Apple Inc.Predictive text input
US10699717B2 (en)2014-05-302020-06-30Apple Inc.Intelligent assistant for home automation
US10497365B2 (en)2014-05-302019-12-03Apple Inc.Multi-command single utterance input method
US10878809B2 (en)2014-05-302020-12-29Apple Inc.Multi-command single utterance input method
US9430463B2 (en)2014-05-302016-08-30Apple Inc.Exemplar-based natural language processing
US10289433B2 (en)2014-05-302019-05-14Apple Inc.Domain specific language for encoding assistant dialog
US10170123B2 (en)2014-05-302019-01-01Apple Inc.Intelligent assistant for home automation
US9633004B2 (en)2014-05-302017-04-25Apple Inc.Better resolution when referencing to concepts
US11257504B2 (en)2014-05-302022-02-22Apple Inc.Intelligent assistant for home automation
US9715875B2 (en)2014-05-302017-07-25Apple Inc.Reducing the need for manual start/end-pointing and trigger phrases
US10714095B2 (en)2014-05-302020-07-14Apple Inc.Intelligent assistant for home automation
US9734193B2 (en)2014-05-302017-08-15Apple Inc.Determining domain salience ranking from ambiguous words in natural speech
US9785630B2 (en)2014-05-302017-10-10Apple Inc.Text prediction using combined word N-gram and unigram language models
US10083690B2 (en)2014-05-302018-09-25Apple Inc.Better resolution when referencing to concepts
US9842101B2 (en)2014-05-302017-12-12Apple Inc.Predictive conversion of language input
US11133008B2 (en)2014-05-302021-09-28Apple Inc.Reducing the need for manual start/end-pointing and trigger phrases
US10657966B2 (en)2014-05-302020-05-19Apple Inc.Better resolution when referencing to concepts
US10417344B2 (en)2014-05-302019-09-17Apple Inc.Exemplar-based natural language processing
US9966065B2 (en)2014-05-302018-05-08Apple Inc.Multi-command single utterance input method
US10169329B2 (en)2014-05-302019-01-01Apple Inc.Exemplar-based natural language processing
US10078631B2 (en)2014-05-302018-09-18Apple Inc.Entropy-guided text prediction using combined word and character n-gram language models
US10659851B2 (en)2014-06-302020-05-19Apple Inc.Real-time digital assistant knowledge updates
US9668024B2 (en)2014-06-302017-05-30Apple Inc.Intelligent automated assistant for TV user interactions
US10904611B2 (en)2014-06-302021-01-26Apple Inc.Intelligent automated assistant for TV user interactions
US9338493B2 (en)2014-06-302016-05-10Apple Inc.Intelligent automated assistant for TV user interactions
US10446141B2 (en)2014-08-282019-10-15Apple Inc.Automatic speech recognition based on user feedback
US10431204B2 (en)2014-09-112019-10-01Apple Inc.Method and apparatus for discovering trending terms in speech requests
US9818400B2 (en)2014-09-112017-11-14Apple Inc.Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en)2014-09-122020-09-29Apple Inc.Dynamic thresholds for always listening speech trigger
US9606986B2 (en)2014-09-292017-03-28Apple Inc.Integrated word N-gram and class M-gram language models
US10438595B2 (en)2014-09-302019-10-08Apple Inc.Speaker identification and unsupervised speaker adaptation techniques
US10453443B2 (en)2014-09-302019-10-22Apple Inc.Providing an indication of the suitability of speech recognition
US9886432B2 (en)2014-09-302018-02-06Apple Inc.Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9668121B2 (en)2014-09-302017-05-30Apple Inc.Social reminders
US10074360B2 (en)2014-09-302018-09-11Apple Inc.Providing an indication of the suitability of speech recognition
US9646609B2 (en)2014-09-302017-05-09Apple Inc.Caching apparatus for serving phonetic pronunciations
US9986419B2 (en)2014-09-302018-05-29Apple Inc.Social reminders
US10127911B2 (en)2014-09-302018-11-13Apple Inc.Speaker identification and unsupervised speaker adaptation techniques
US10390213B2 (en)2014-09-302019-08-20Apple Inc.Social reminders
US10217454B2 (en)2014-10-302019-02-26Kabushiki Kaisha ToshibaVoice synthesizer, voice synthesis method, and computer program product
US11556230B2 (en)2014-12-022023-01-17Apple Inc.Data detection
US10552013B2 (en)2014-12-022020-02-04Apple Inc.Data detection
US9711141B2 (en)2014-12-092017-07-18Apple Inc.Disambiguating heteronyms in speech synthesis
US9865280B2 (en)2015-03-062018-01-09Apple Inc.Structured dictation using intelligent automated assistants
US11231904B2 (en)2015-03-062022-01-25Apple Inc.Reducing response latency of intelligent automated assistants
US9886953B2 (en)2015-03-082018-02-06Apple Inc.Virtual assistant activation
US10930282B2 (en)2015-03-082021-02-23Apple Inc.Competing devices responding to voice triggers
US9721566B2 (en)2015-03-082017-08-01Apple Inc.Competing devices responding to voice triggers
US10529332B2 (en)2015-03-082020-01-07Apple Inc.Virtual assistant activation
US11087759B2 (en)2015-03-082021-08-10Apple Inc.Virtual assistant activation
US10567477B2 (en)2015-03-082020-02-18Apple Inc.Virtual assistant continuity
US10311871B2 (en)2015-03-082019-06-04Apple Inc.Competing devices responding to voice triggers
US9899019B2 (en)2015-03-182018-02-20Apple Inc.Systems and methods for structured stem and suffix language models
US9842105B2 (en)2015-04-162017-12-12Apple Inc.Parsimonious continuous-space phrase representations for natural language processing
US11468282B2 (en)2015-05-152022-10-11Apple Inc.Virtual assistant in a communication session
US11127397B2 (en)2015-05-272021-09-21Apple Inc.Device voice control
US10083688B2 (en)2015-05-272018-09-25Apple Inc.Device voice control for selecting a displayed affordance
US10127220B2 (en)2015-06-042018-11-13Apple Inc.Language identification from short strings
US10681212B2 (en)2015-06-052020-06-09Apple Inc.Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en)2015-06-052018-10-16Apple Inc.Language input correction
US10356243B2 (en)2015-06-052019-07-16Apple Inc.Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en)2015-06-072021-06-01Apple Inc.Personalized prediction of responses for instant messaging
US10186254B2 (en)2015-06-072019-01-22Apple Inc.Context-based endpoint detection
US10255907B2 (en)2015-06-072019-04-09Apple Inc.Automatic accent detection using acoustic models
US11010127B2 (en)2015-06-292021-05-18Apple Inc.Virtual assistant for media playback
US10671428B2 (en)2015-09-082020-06-02Apple Inc.Distributed personal assistant
US10747498B2 (en)2015-09-082020-08-18Apple Inc.Zero latency digital assistant
US11500672B2 (en)2015-09-082022-11-15Apple Inc.Distributed personal assistant
US9697820B2 (en)2015-09-242017-07-04Apple Inc.Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en)2015-09-292021-05-18Apple Inc.Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en)2015-09-292019-07-30Apple Inc.Efficient word encoding for recurrent neural network language models
US11587559B2 (en)2015-09-302023-02-21Apple Inc.Intelligent device identification
US11526368B2 (en)2015-11-062022-12-13Apple Inc.Intelligent automated assistant in a messaging environment
US10691473B2 (en)2015-11-062020-06-23Apple Inc.Intelligent automated assistant in a messaging environment
US10354652B2 (en)2015-12-022019-07-16Apple Inc.Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049668B2 (en)2015-12-022018-08-14Apple Inc.Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en)2015-12-232019-03-05Apple Inc.Proactive assistance based on dialog communication between devices
US10942703B2 (en)2015-12-232021-03-09Apple Inc.Proactive assistance based on dialog communication between devices
US10446143B2 (en)2016-03-142019-10-15Apple Inc.Identification of voice inputs providing credentials
US9934775B2 (en)2016-05-262018-04-03Apple Inc.Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en)2016-06-032018-05-15Apple Inc.Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en)2016-06-062019-04-02Apple Inc.Intelligent list reading
US11227589B2 (en)2016-06-062022-01-18Apple Inc.Intelligent list reading
US10049663B2 (en)2016-06-082018-08-14Apple, Inc.Intelligent automated assistant for media exploration
US11069347B2 (en)2016-06-082021-07-20Apple Inc.Intelligent automated assistant for media exploration
US10354011B2 (en)2016-06-092019-07-16Apple Inc.Intelligent automated assistant in a home environment
US10490187B2 (en)2016-06-102019-11-26Apple Inc.Digital assistant providing automated status report
US10509862B2 (en)2016-06-102019-12-17Apple Inc.Dynamic phrase expansion of language input
US10067938B2 (en)2016-06-102018-09-04Apple Inc.Multilingual word prediction
US11037565B2 (en)2016-06-102021-06-15Apple Inc.Intelligent digital assistant in a multi-tasking environment
US10733993B2 (en)2016-06-102020-08-04Apple Inc.Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en)2016-06-102019-01-29Apple Inc.Digital assistant providing whispered speech
US10269345B2 (en)2016-06-112019-04-23Apple Inc.Intelligent task discovery
US11152002B2 (en)2016-06-112021-10-19Apple Inc.Application integration with a digital assistant
US10521466B2 (en)2016-06-112019-12-31Apple Inc.Data driven natural language event detection and classification
US10580409B2 (en)2016-06-112020-03-03Apple Inc.Application integration with a digital assistant
US10089072B2 (en)2016-06-112018-10-02Apple Inc.Intelligent device arbitration and control
US10297253B2 (en)2016-06-112019-05-21Apple Inc.Application integration with a digital assistant
US10942702B2 (en)2016-06-112021-03-09Apple Inc.Intelligent device arbitration and control
US10579742B1 (en)*2016-08-302020-03-03United Services Automobile Association (Usaa)Biometric signal analysis for communication enhancement and transformation
US10474753B2 (en)2016-09-072019-11-12Apple Inc.Language identification using recurrent neural networks
US10043516B2 (en)2016-09-232018-08-07Apple Inc.Intelligent automated assistant
US10553215B2 (en)2016-09-232020-02-04Apple Inc.Intelligent automated assistant
US11281993B2 (en)2016-12-052022-03-22Apple Inc.Model and ensemble compression for metric learning
US10593346B2 (en)2016-12-222020-03-17Apple Inc.Rank-reduced token representation for automatic speech recognition
US11204787B2 (en)2017-01-092021-12-21Apple Inc.Application integration with a digital assistant
US11656884B2 (en)2017-01-092023-05-23Apple Inc.Application integration with a digital assistant
US10741181B2 (en)2017-05-092020-08-11Apple Inc.User interface for correcting recognition errors
US10417266B2 (en)2017-05-092019-09-17Apple Inc.Context-aware ranking of intelligent response suggestions
US10332518B2 (en)2017-05-092019-06-25Apple Inc.User interface for correcting recognition errors
US10726832B2 (en)2017-05-112020-07-28Apple Inc.Maintaining privacy of personal information
US10395654B2 (en)2017-05-112019-08-27Apple Inc.Text normalization based on a data-driven learning network
US10755703B2 (en)2017-05-112020-08-25Apple Inc.Offline personal assistant
US10847142B2 (en)2017-05-112020-11-24Apple Inc.Maintaining privacy of personal information
US10789945B2 (en)2017-05-122020-09-29Apple Inc.Low-latency intelligent automated assistant
US11405466B2 (en)2017-05-122022-08-02Apple Inc.Synchronization and task delegation of a digital assistant
US11301477B2 (en)2017-05-122022-04-12Apple Inc.Feedback analysis of a digital assistant
US10410637B2 (en)2017-05-122019-09-10Apple Inc.User-specific acoustic models
US10791176B2 (en)2017-05-122020-09-29Apple Inc.Synchronization and task delegation of a digital assistant
US10482874B2 (en)2017-05-152019-11-19Apple Inc.Hierarchical belief states for digital assistants
US10810274B2 (en)2017-05-152020-10-20Apple Inc.Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10403278B2 (en)2017-05-162019-09-03Apple Inc.Methods and systems for phonetic matching in digital assistant services
US10311144B2 (en)2017-05-162019-06-04Apple Inc.Emoji word sense disambiguation
US11217255B2 (en)2017-05-162022-01-04Apple Inc.Far-field extension for digital assistant services
US10748546B2 (en)2017-05-162020-08-18Apple Inc.Digital assistant services based on device capabilities
US10303715B2 (en)2017-05-162019-05-28Apple Inc.Intelligent automated assistant for media exploration
US10909171B2 (en)2017-05-162021-02-02Apple Inc.Intelligent automated assistant for media exploration
US10657328B2 (en)2017-06-022020-05-19Apple Inc.Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en)2017-09-212019-10-15Apple Inc.Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en)2017-09-292020-08-25Apple Inc.Rule-based natural language processing
US10636424B2 (en)2017-11-302020-04-28Apple Inc.Multi-turn canned dialog
US10733982B2 (en)2018-01-082020-08-04Apple Inc.Multi-directional dialog
US10733375B2 (en)2018-01-312020-08-04Apple Inc.Knowledge-based framework for improving natural language understanding
US10789959B2 (en)2018-03-022020-09-29Apple Inc.Training speaker recognition models for digital assistants
US10592604B2 (en)2018-03-122020-03-17Apple Inc.Inverse text normalization for automatic speech recognition
US10818288B2 (en)2018-03-262020-10-27Apple Inc.Natural assistant interaction
US10909331B2 (en)2018-03-302021-02-02Apple Inc.Implicit identification of translation payload with neural machine translation
US11145294B2 (en)2018-05-072021-10-12Apple Inc.Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en)2018-05-072021-02-23Apple Inc.Raise to speak
US10984780B2 (en)2018-05-212021-04-20Apple Inc.Global semantic word embeddings using bi-directional recurrent neural networks
US11495218B2 (en)2018-06-012022-11-08Apple Inc.Virtual assistant operation in multi-device environments
US10984798B2 (en)2018-06-012021-04-20Apple Inc.Voice interaction at a primary device to access call functionality of a companion device
US10892996B2 (en)2018-06-012021-01-12Apple Inc.Variable latency device coordination
US11386266B2 (en)2018-06-012022-07-12Apple Inc.Text correction
US10684703B2 (en)2018-06-012020-06-16Apple Inc.Attention aware virtual assistant dismissal
US10720160B2 (en)2018-06-012020-07-21Apple Inc.Voice interaction at a primary device to access call functionality of a companion device
US10403283B1 (en)2018-06-012019-09-03Apple Inc.Voice interaction at a primary device to access call functionality of a companion device
US11009970B2 (en)2018-06-012021-05-18Apple Inc.Attention aware virtual assistant dismissal
US10944859B2 (en)2018-06-032021-03-09Apple Inc.Accelerated task performance
US10496705B1 (en)2018-06-032019-12-03Apple Inc.Accelerated task performance
US10504518B1 (en)2018-06-032019-12-10Apple Inc.Accelerated task performance
US11010561B2 (en)2018-09-272021-05-18Apple Inc.Sentiment prediction from textual data
US11462215B2 (en)2018-09-282022-10-04Apple Inc.Multi-modal inputs for voice commands
US10839159B2 (en)2018-09-282020-11-17Apple Inc.Named entity normalization in a spoken dialog system
US11170166B2 (en)2018-09-282021-11-09Apple Inc.Neural typographical error modeling via generative adversarial networks
US11475898B2 (en)2018-10-262022-10-18Apple Inc.Low-latency multi-speaker speech recognition
US11638059B2 (en)2019-01-042023-04-25Apple Inc.Content playback on multiple devices
US11348573B2 (en)2019-03-182022-05-31Apple Inc.Multimodality in digital assistant systems
US11475884B2 (en)2019-05-062022-10-18Apple Inc.Reducing digital assistant latency when a language is incorrectly determined
US11307752B2 (en)2019-05-062022-04-19Apple Inc.User configurable task triggers
US11423908B2 (en)2019-05-062022-08-23Apple Inc.Interpreting spoken requests
US11217251B2 (en)2019-05-062022-01-04Apple Inc.Spoken notifications
US11140099B2 (en)2019-05-212021-10-05Apple Inc.Providing message response suggestions
US11496600B2 (en)2019-05-312022-11-08Apple Inc.Remote execution of machine-learned models
US11360739B2 (en)2019-05-312022-06-14Apple Inc.User activity shortcut suggestions
US11289073B2 (en)2019-05-312022-03-29Apple Inc.Device text to speech
US11237797B2 (en)2019-05-312022-02-01Apple Inc.User activity shortcut suggestions
US11360641B2 (en)2019-06-012022-06-14Apple Inc.Increasing the relevance of new available information
US11488406B2 (en)2019-09-252022-11-01Apple Inc.Text detection using global geometry estimators

Also Published As

Publication number | Publication date
EP1490861A4 (en)2007-04-18
EP1490861B1 (en)2008-12-10
ES2316786T3 (en)2009-04-16
CN1643572A (en)2005-07-20
CN1269104C (en)2006-08-09
DE60325191D1 (en)2009-01-22
KR100591655B1 (en)2006-06-20
US20050065795A1 (en)2005-03-24
EP1490861A1 (en)2004-12-29
KR20040086432A (en)2004-10-08
JP2003295882A (en)2003-10-15
WO2003088208A1 (en)2003-10-23
AU2003226446A1 (en)2003-10-27

Similar Documents

Publication | Publication Date | Title
US7487093B2 (en)Text structure for voice synthesis, voice synthesis method, voice synthesis apparatus, and computer program thereof
US6175820B1 (en)Capture and application of sender voice dynamics to enhance communication in a speech-to-text environment
US9318100B2 (en)Supplementing audio recorded in a media file
US8886538B2 (en)Systems and methods for text-to-speech synthesis using spoken example
US9196241B2 (en)Asynchronous communications using messages recorded on handheld devices
US7792673B2 (en)Method of generating a prosodic model for adjusting speech style and apparatus and method of synthesizing conversational speech using the same
US7454345B2 (en)Word or collocation emphasizing voice synthesizer
CN100547654C (en) speech synthesis device
CN112185341A (en)Dubbing method, apparatus, device and storage medium based on speech synthesis
US8265936B2 (en)Methods and system for creating and editing an XML-based speech synthesis document
US20080162559A1 (en)Asynchronous communications regarding the subject matter of a media file stored on a handheld recording device
US6546369B1 (en)Text-based speech synthesis method containing synthetic speech comparisons and updates
JP6289950B2 (en) Reading apparatus, reading method and program
KR100806287B1 (en) Speech intonation prediction method and speech synthesis method and system based on the same
JPH06337876A (en)Sentence reader
JP4409279B2 (en) Speech synthesis apparatus and speech synthesis program
CN116978381A (en)Audio data processing method, device, computer equipment and storage medium
JP2006139162A (en)Language learning system
JPS6073589A (en) speech synthesizer
US20080162130A1 (en)Asynchronous receipt of information from a user
KR102747987B1 (en)Voice synthesizer learning method using synthesized sounds for disentangling language, pronunciation/prosody, and speaker information
JP2001013982A (en)Voice synthesizer
JP2001350490A (en) Text-to-speech converter and method
JP2000231396A (en) Dialogue data creation device, dialogue playback device, voice analysis / synthesis device, and voice information transfer device
JP3292218B2 (en) Voice message composer

Legal Events

Date | Code | Title | Description

AS: Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MUTSUNO, MASAHIRO;FUKADA, TOSHIAKI;REEL/FRAME:015674/0526;SIGNING DATES FROM 20040802 TO 20040803

FPAY: Fee payment

Year of fee payment: 4

REMI: Maintenance fee reminder mailed

LAPS: Lapse for failure to pay maintenance fees

STCH: Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP: Lapsed due to failure to pay maintenance fee

Effective date: 20170203

