US20230012984A1 - Generation of automated message responses - Google Patents

Generation of automated message responses

Info

Publication number
US20230012984A1
US20230012984A1 (Application US17/946,748)
Authority
US
United States
Prior art keywords
recipient
audio data
response
speech
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/946,748
Inventor
Ariya Rastrow
Tony Hardie
Rohit Prasad
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amazon Technologies Inc
Original Assignee
Amazon Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amazon Technologies Inc
Priority to US17/946,748 (US20230012984A1/en)
Assigned to AMAZON TECHNOLOGIES, INC. Assignment of assignors interest (see document for details). Assignors: RASTROW, Ariya; PRASAD, ROHIT; HARDIE, TONY
Publication of US20230012984A1 (US20230012984A1/en)
Legal status: Abandoned

Abstract

Systems, methods, and devices for computer-generating responses to communications, and sending those responses, when the recipient of the communication is unavailable are disclosed. An individual may send a message (either audio or text) to a recipient. The recipient may be unavailable to contemporaneously respond to the message (e.g., the recipient may be performing an action that makes it difficult or impractical for the recipient to contemporaneously respond). When the recipient is unavailable, a response to the message is generated and sent without receiving an instruction from the recipient to do so. The response may be sent to the message-originating individual, and the content of the response may thereafter be sent to the recipient to receive feedback regarding the correctness of the response. Alternatively, the response content may first be sent to the recipient to receive the feedback, and the response may thereafter be sent to the message-originating individual.

Description

Claims (20)

What is claimed is:
1. A computer-implemented method comprising:
receiving, from a first speech-controlled device, input audio data including a spoken utterance;
performing speech processing on the input audio data;
determining the input audio data comprises a message intended for a second device associated with a user profile;
generating output text data responding to the input audio data, the output text data being based on the input audio data being received from the first speech-controlled device;
identifying a plurality of stored audio segments associated with the user profile, the stored audio segments comprising previous speech spoken by a user;
performing text-to-speech (TTS) processing on the output text data to generate output audio data, wherein the TTS processing uses the plurality of stored audio segments;
determining that the user profile includes an unavailable indicator associated with the second device, the unavailable indicator being based on the second device being in operation; and
sending, to the first speech-controlled device in response to the user profile including the unavailable indicator, the output audio data.
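Claim 1's TTS step reuses stored audio segments of the user's previous speech. A toy unit-selection sketch of that idea, under loose assumptions (word-level units, byte strings standing in for audio, and silent skipping of words missing from the bank, where a real system would fall back to parametric synthesis):

```python
def synthesize_from_units(text: str, unit_bank: dict[str, bytes]) -> bytes:
    """Naive unit-selection sketch: look up each word of the output text
    in a bank of stored audio segments (previous speech spoken by the
    user) and concatenate the matching segments into output audio."""
    pcm = b""
    for word in text.lower().split():
        pcm += unit_bank.get(word, b"")  # missing units are skipped here
    return pcm
```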
2. The computer-implemented method of claim 1, further comprising:
receiving, from a third device, input text data including a text message;
performing natural language processing on the input text data;
determining the input text data is intended for the second device;
generating second output text data responding to the input text data;
sending, to the third device in response to the user profile including the unavailable indicator, the second output text data;
determining a fourth device associated with the user profile;
sending, to the fourth device, a first signal causing the fourth device to output content corresponding to the second output text;
receiving, from the fourth device, a second signal including an indication of correctness of the second output text; and
using the indication to inform further generation of output text data.
3. The computer-implemented method of claim 1, further comprising:
associating, by a first machine learning model, the unavailable indicator with the second device in the user profile based on at least one of determining the second device is outputting multimedia content, determining a calendar application associated with the user profile indicates the second device is presently busy, message content, content of the input audio data, a time of day, determining a do not disturb setting of the second device is activated, or an idle time since a last communication of the second device,
wherein generating the output text data is performed using a second machine learning model.
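Claim 3 enumerates the context signals the first machine-learning model may use to associate the unavailable indicator with a device. A sketch of how those signals could be encoded as features and scored; the dictionary keys, weights, and the linear scorer are all illustrative assumptions standing in for a trained model:

```python
def unavailability_signals(ctx: dict) -> list[float]:
    """Encode the context signals named in claim 3 as a feature vector.
    Keys are hypothetical; a real system would extract these from device
    state, a calendar application, and message content."""
    return [
        1.0 if ctx.get("outputting_multimedia") else 0.0,   # playing multimedia
        1.0 if ctx.get("calendar_busy") else 0.0,           # calendar says busy
        1.0 if ctx.get("do_not_disturb") else 0.0,          # DND activated
        min(ctx.get("idle_seconds", 0) / 3600.0, 1.0),      # normalized idle time
        ctx.get("hour_of_day", 12) / 23.0,                  # time of day
    ]

def is_unavailable(ctx: dict, weights=None, threshold=1.0) -> bool:
    """Stand-in linear scorer; the patent's first model would be trained
    rather than hand-weighted."""
    w = weights or [1.0, 1.0, 1.5, 0.5, 0.0]
    score = sum(f * wi for f, wi in zip(unavailability_signals(ctx), w))
    return score >= threshold
```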
4. The computer-implemented method of claim 1, further comprising:
generating output text data corresponding to a plurality of responses to the input audio data;
sending, to a third device associated with the user profile, a first signal causing the third device to output text corresponding to the plurality of responses; and
receiving, from the third device, a second signal including a selection of one of the plurality of responses,
wherein sending, to the first speech-controlled device, the output audio data occurs in response to receiving the second signal.
5. A system comprising:
at least one processor; and
memory including instructions operable to be executed by the at least one processor to perform a set of actions to configure the at least one processor to:
receive, from a first device, input audio data;
perform speech processing on the input audio data;
determine the input audio data represents a message intended for a recipient device associated with a recipient profile;
generate response text data;
perform text-to-speech processing on the response text data to generate output audio data, the output audio data including prosodic characteristics corresponding to voice data associated with the recipient profile;
determine the recipient profile includes an unavailable indicator; and
send, to the first device in response to the recipient profile including the unavailable indicator, the output audio data.
6. The system of claim 5, wherein the instructions further configure the at least one processor to:
receive, from a third device, input text data;
determine the input text data represents a second message intended for the recipient device;
generate second response text data; and
send, to the third device in response to the recipient profile including the unavailable indicator, the second response text data.
7. The system of claim 5, wherein the output audio data comprises stored speech units corresponding to utterances previously spoken by a user.
8. The system of claim 5, wherein the instructions configuring the at least one processor to determine the recipient profile including the unavailable indicator are implemented by a first machine learning model, the first machine learning model determining to send the output audio data based on the recipient device outputting multimedia content, a calendar application indicating the recipient device is presently busy, past message exchange content, content of the input audio data, an identity of a user that sent the message, a time of day, a do not disturb setting of the recipient device being activated, or idle time since a last communication of the recipient device.
9. The system of claim 8, wherein the instructions configuring the at least one processor to generate response text data are implemented by a second machine learning model, the second machine learning model generating the response text data based on the recipient device outputting multimedia content, a calendar application indicating the recipient device is presently busy, past message exchange content, content of the input audio data, an identity of a user that sent the message, a time of day, a do not disturb setting of the recipient device being activated, or idle time since a last communication of the recipient device.
10. The system of claim 5, wherein the instructions further configure the at least one processor to:
cause a second device associated with the recipient profile to output content corresponding to the output audio data;
receive, from the second device, an indication of correctness of the output audio data; and
use the indication to inform future response generation.
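Claim 10 closes a feedback loop: the recipient's correctness indication informs future response generation. The claim does not specify the mechanism, so the per-template scoring scheme below is purely an assumed illustration of "using the indication":

```python
class ResponseGenerator:
    """Toy generator that biases template selection by recipient feedback.
    A counting scheme is assumed here; the patent only requires that the
    correctness indication inform future response generation."""

    def __init__(self):
        self.scores = {}  # template -> cumulative feedback score

    def generate(self, templates: list[str]) -> str:
        # Prefer the candidate response with the best feedback so far.
        return max(templates, key=lambda t: self.scores.get(t, 0))

    def record_feedback(self, template: str, correct: bool) -> None:
        # The recipient's correctness indication adjusts the score.
        self.scores[template] = self.scores.get(template, 0) + (1 if correct else -1)
```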
11. The system of claim 5, wherein the instructions further configure the at least one processor to:
receive, from a third device, second input audio data;
perform speech processing on the second input audio data;
determine the second input audio data represents a second message intended for the recipient device; and
cause, in response to the recipient profile including the unavailable indicator, the third device to obtain audio corresponding to a reason for the second message.
12. The system of claim 11, wherein the instructions further configure the at least one processor to:
determine the third device is associated with a first user profile; and
determine the first user profile is represented in a current time slot of a calendar application associated with the recipient profile,
wherein causing the third device to obtain the audio is in response to determining the first user profile is represented in the current time slot.
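Claim 12 conditions behavior on whether the sender's profile appears in the current time slot of the recipient's calendar. A minimal sketch of that check, assuming a simple entry format (`start`/`end` times plus an attendee list) that is not specified by the claims:

```python
from datetime import datetime, time

def sender_in_current_slot(calendar: list[dict], sender_profile: str,
                           now: datetime) -> bool:
    """Return True if sender_profile is represented in a calendar entry
    covering `now`. Entry shape is an assumed format:
    {"start": time, "end": time, "attendees": [profile_id, ...]}."""
    t = now.time()
    for entry in calendar:
        if entry["start"] <= t < entry["end"] and sender_profile in entry["attendees"]:
            return True
    return False
```

Per the claim, a positive result here is what triggers prompting the sender's device to capture audio stating the reason for the message.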
13. A computer-implemented method comprising:
receiving, from a first device, input audio data;
performing speech processing on the input audio data;
determining the input audio data represents a message intended for a recipient device associated with a recipient profile;
generating response text data;
performing text-to-speech processing on the response text data to generate output audio data, the output audio data including prosodic characteristics corresponding to voice data associated with the recipient profile;
determining the recipient profile includes an unavailable indicator; and
sending, to the first device in response to the recipient profile including the unavailable indicator, the output audio data.
14. The computer-implemented method of claim 13, further comprising:
receiving, from a third device, input text data;
determining the input text data represents a second message intended for the recipient device;
generating second response text data; and
sending, to the third device in response to the recipient profile including the unavailable indicator, the second response text data.
15. The computer-implemented method of claim 13, wherein the output audio data comprises stored speech units corresponding to utterances previously spoken by a user.
16. The computer-implemented method of claim 13, wherein determining the recipient profile includes the unavailable indicator is performed by a first machine learning model, the first machine learning model determining to send the output audio data based on the recipient device outputting multimedia content, a calendar application indicating the recipient device is presently busy, past message exchange content, content of the input audio data, an identity of a user that sent the message, a time of day, a do not disturb setting of the recipient device being activated, or idle time since a last communication of the recipient device.
17. The computer-implemented method of claim 16, wherein generating response text data is performed by a second machine learning model, the second machine learning model generating the response text data based on at least the recipient device outputting multimedia content, a calendar application indicating the recipient device is presently busy, past message exchange content, content of the input audio data, an identity of a user that sent the message, a time of day, a do not disturb setting of the recipient device being activated, or idle time since a last communication of the recipient device.
18. The computer-implemented method of claim 13, further comprising:
causing a second device associated with the recipient profile to output content corresponding to the output audio data;
receiving, from the second device, an indication of correctness of the output audio data; and
using the indication to inform future response generation.
19. The computer-implemented method of claim 13, further comprising:
receiving, from a third device, second input audio data;
performing speech processing on the second input audio data;
determining the second input audio data represents a second message intended for the recipient device; and
causing, in response to the recipient profile including the unavailable indicator, the third device to obtain audio corresponding to a reason for the second message.
20. The computer-implemented method of claim 19, further comprising:
determining the third device is associated with a first user profile; and
determining the first user profile is represented in a current time slot of a calendar application associated with the recipient profile,
wherein causing the third device to obtain the audio is in response to determining the first user profile is represented in the current time slot.
US17/946,748 | Priority 2016-09-26 | Filed 2022-09-16 | Generation of automated message responses | Abandoned | US20230012984A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US17/946,748 (US20230012984A1, en) | 2016-09-26 | 2022-09-16 | Generation of automated message responses

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US15/276,316 (US10339925B1, en) | 2016-09-26 | 2016-09-26 | Generation of automated message responses
US16/455,604 (US11496582B2, en) | 2016-09-26 | 2019-06-27 | Generation of automated message responses
US17/946,748 (US20230012984A1, en) | 2016-09-26 | 2022-09-16 | Generation of automated message responses

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US16/455,604 (Continuation, US11496582B2, en) | Generation of automated message responses | 2016-09-26 | 2019-06-27

Publications (1)

Publication Number | Publication Date
US20230012984A1 | 2023-01-19

Family

ID=67069387

Family Applications (3)

Application NumberTitlePriority DateFiling Date
US15/276,316ActiveUS10339925B1 (en)2016-09-262016-09-26Generation of automated message responses
US16/455,604Active2037-02-20US11496582B2 (en)2016-09-262019-06-27Generation of automated message responses
US17/946,748AbandonedUS20230012984A1 (en)2016-09-262022-09-16Generation of automated message responses

Family Applications Before (2)

Application NumberTitlePriority DateFiling Date
US15/276,316ActiveUS10339925B1 (en)2016-09-262016-09-26Generation of automated message responses
US16/455,604Active2037-02-20US11496582B2 (en)2016-09-262019-06-27Generation of automated message responses

Country Status (1)

Country | Link
US (3) | US10339925B1 (en)

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities
DE212014000045U1 (en) | 2013-02-07 | 2015-09-24 | Apple Inc. | Voice trigger for a digital assistant
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant
US12197817B2 (en) | 2016-06-11 | 2025-01-14 | Apple Inc. | Intelligent device arbitration and control
US10339925B1 (en)* | 2016-09-26 | 2019-07-02 | Amazon Technologies, Inc. | Generation of automated message responses
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant
US11553089B2 (en)* | 2017-01-20 | 2023-01-10 | Virtual Hold Technology Solutions, Llc | System and method for mobile device active callback prioritization
US20230109840A1 (en)* | 2017-01-20 | 2023-04-13 | Virtual Hold Technology Solutions, Llc | System and method for mobile device multitenant active and ambient callback management
US10735589B1 (en)* | 2019-03-18 | 2020-08-04 | Virtual Hold Technology Solutions, Llc | System and method for mobile device active callback integration
DK201770427A1 (en) | 2017-05-12 | 2018-12-20 | Apple Inc. | Low-latency intelligent automated assistant
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | User-specific acoustic models
DK201770411A1 (en)* | 2017-05-15 | 2018-12-20 | Apple Inc. | Multi-modal interfaces
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration
US10943583B1 (en)* | 2017-07-20 | 2021-03-09 | Amazon Technologies, Inc. | Creation of language models for speech recognition
KR102441067B1 (en)* | 2017-10-12 | 2022-09-06 | 현대자동차주식회사 | Vehicle user input processing device and user input processing method
US10645035B2 (en)* | 2017-11-02 | 2020-05-05 | Google Llc | Automated assistants with conference capabilities
KR101891489B1 (en)* | 2017-11-03 | 2018-08-24 | 주식회사 머니브레인 | Method, computer device and computer readable recording medium for providing natural language conversation by timely providing an interjection response
US10600408B1 (en)* | 2018-03-23 | 2020-03-24 | Amazon Technologies, Inc. | Content output management based on speech quality
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak
CN108877765A (en)* | 2018-05-31 | 2018-11-23 | 百度在线网络技术(北京)有限公司 | Processing method and processing device, computer equipment and readable medium for voice joint synthesis
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | Disability of attention-attentive virtual assistant
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments
CN108737872A (en)* | 2018-06-08 | 2018-11-02 | 百度在线网络技术(北京)有限公司 | Method and apparatus for output information
WO2019246314A1 (en)* | 2018-06-20 | 2019-12-26 | Knowles Electronics, Llc | Acoustic aware voice user interface
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands
US20200288204A1 (en)* | 2019-03-05 | 2020-09-10 | Adobe Inc. | Generating and providing personalized digital content in real time based on live user context
US11010596B2 (en)* | 2019-03-07 | 2021-05-18 | 15 Seconds of Fame, Inc. | Apparatus and methods for facial recognition systems to identify proximity-based connections
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers
US11227599B2 (en) | 2019-06-01 | 2022-01-18 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices
US11282500B2 (en)* | 2019-07-19 | 2022-03-22 | Cisco Technology, Inc. | Generating and training new wake words
US11455984B1 (en)* | 2019-10-29 | 2022-09-27 | United Services Automobile Association (USAA) | Noise reduction in shared workspaces
US11328721B2 (en)* | 2020-02-04 | 2022-05-10 | Soundhound, Inc. | Wake suppression for audio playing and listening devices
US11263527B2 (en) | 2020-03-04 | 2022-03-01 | Kyndryl, Inc. | Cognitive switching logic for multiple knowledge domains
US11393471B1 (en)* | 2020-03-30 | 2022-07-19 | Amazon Technologies, Inc. | Multi-device output management based on speech characteristics
US12301635B2 (en) | 2020-05-11 | 2025-05-13 | Apple Inc. | Digital assistant hardware abstraction
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones
US10997369B1 (en)* | 2020-09-15 | 2021-05-04 | Cognism Limited | Systems and methods to generate sequential communication action templates by modelling communication chains and optimizing for a quantified objective
CN116018639B (en)* | 2020-10-27 | 2024-11-29 | 谷歌有限责任公司 | Method and system for text-to-speech synthesis of streaming text
GB2601543B (en)* | 2020-12-04 | 2023-07-26 | Rolls Royce Plc | Method of training a neural network
US20220229999A1 (en)* | 2021-01-19 | 2022-07-21 | Palo Alto Research Center Incorporated | Service platform for generating contextual, style-controlled response suggestions for an incoming message
US11996087B2 (en) | 2021-04-30 | 2024-05-28 | Comcast Cable Communications, Llc | Method and apparatus for intelligent voice recognition
JP2024518170A (en)* | 2021-05-07 | 2024-04-25 | グーグル エルエルシー | Message-based navigation assistance
US12380475B2 (en) | 2021-09-17 | 2025-08-05 | The Toronto-Dominion Bank | Systems and methods for automated response to online reviews

Citations (28)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US2106653A (en)* | 1936-09-03 | 1938-01-25 | Podgurski Walter | Truing tool for watch balance wheels
US5860064A (en)* | 1993-05-13 | 1999-01-12 | Apple Computer, Inc. | Method and apparatus for automatic generation of vocal emotion in a synthetic text-to-speech system
US6029195A (en)* | 1994-11-29 | 2000-02-22 | Herz; Frederick S. M. | System for customized electronic identification of desirable objects
US20040148172A1 (en)* | 2003-01-24 | 2004-07-29 | Voice Signal Technologies, Inc. | Prosodic mimic method and apparatus
US20040193421A1 (en)* | 2003-03-25 | 2004-09-30 | International Business Machines Corporation | Synthetically generated speech responses including prosodic characteristics of speech inputs
US6810378B2 (en)* | 2001-08-22 | 2004-10-26 | Lucent Technologies Inc. | Method and apparatus for controlling a speech synthesis system to provide multiple styles of speech
US20050096909A1 (en)* | 2003-10-29 | 2005-05-05 | Raimo Bakis | Systems and methods for expressive text-to-speech
US20050256716A1 (en)* | 2004-05-13 | 2005-11-17 | At&T Corp. | System and method for generating customized text-to-speech voices
US20070192418A1 (en)* | 2006-02-13 | 2007-08-16 | Research In Motion Limited | System and method of sharing auto-reply information
US20080235024A1 (en)* | 2007-03-20 | 2008-09-25 | Itzhack Goldberg | Method and system for text-to-speech synthesis with personalized voice
US20110184721A1 (en)* | 2006-03-03 | 2011-07-28 | International Business Machines Corporation | Communicating Across Voice and Text Channels with Emotion Preservation
US20110320960A1 (en)* | 2010-06-29 | 2011-12-29 | Yigang Cai | Flexible automatic reply features for text messaging
US20120134480A1 (en)* | 2008-02-28 | 2012-05-31 | Richard Leeds | Contextual conversation processing in telecommunication applications
US20120221336A1 (en)* | 2008-06-17 | 2012-08-30 | Voicesense Ltd. | Speaker characterization through speech analysis
US20120253816A1 (en)* | 2005-10-03 | 2012-10-04 | Nuance Communications, Inc. | Text-to-speech user's voice cooperative server for instant messaging clients
US20120265533A1 (en)* | 2011-04-18 | 2012-10-18 | Apple Inc. | Voice assignment for text-to-speech output
US20130080177A1 (en)* | 2011-09-28 | 2013-03-28 | Lik Harry Chen | Speech recognition repair using contextual information
US20130203393A1 (en)* | 2012-02-02 | 2013-08-08 | Samsung Electronics Co., Ltd. | Apparatus and method for generating smart reply in a mobile device
US20130275875A1 (en)* | 2010-01-18 | 2013-10-17 | Apple Inc. | Automatically Adapting User Interfaces for Hands-Free Interaction
US8645841B2 (en)* | 2009-08-21 | 2014-02-04 | Avaya Inc. | Unified greetings for social media
US8918322B1 (en)* | 2000-06-30 | 2014-12-23 | At&T Intellectual Property Ii, L.P. | Personalized text-to-speech services
US20150072739A1 (en)* | 2008-04-14 | 2015-03-12 | At&T Intellectual Property I, L.P. | System and Method for Answering a Communication Notification
US20150287410A1 (en)* | 2013-03-15 | 2015-10-08 | Google Inc. | Speech and semantic parsing for content selection
US20160093289A1 (en)* | 2014-09-29 | 2016-03-31 | Nuance Communications, Inc. | Systems and methods for multi-style speech synthesis
US9886953B2 (en)* | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation
US10339925B1 (en)* | 2016-09-26 | 2019-07-02 | Amazon Technologies, Inc. | Generation of automated message responses
US10448115B1 (en)* | 2016-09-28 | 2019-10-15 | Amazon Technologies, Inc. | Speech recognition for localized content
US20190324527A1 (en)* | 2018-04-20 | 2019-10-24 | Facebook Technologies, Llc | Auto-completion for Gesture-input in Assistant Systems

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US2106653A (en)* | 1936-09-03 | 1938-01-25 | Podgurski Walter | Truing tool for watch balance wheels
US5860064A (en)* | 1993-05-13 | 1999-01-12 | Apple Computer, Inc. | Method and apparatus for automatic generation of vocal emotion in a synthetic text-to-speech system
US6029195A (en)* | 1994-11-29 | 2000-02-22 | Herz; Frederick S. M. | System for customized electronic identification of desirable objects
US8918322B1 (en)* | 2000-06-30 | 2014-12-23 | At&T Intellectual Property Ii, L.P. | Personalized text-to-speech services
US6810378B2 (en)* | 2001-08-22 | 2004-10-26 | Lucent Technologies Inc. | Method and apparatus for controlling a speech synthesis system to provide multiple styles of speech
US20040148172A1 (en)* | 2003-01-24 | 2004-07-29 | Voice Signal Technologies, Inc. | Prosodic mimic method and apparatus
US20040193421A1 (en)* | 2003-03-25 | 2004-09-30 | International Business Machines Corporation | Synthetically generated speech responses including prosodic characteristics of speech inputs
US20050096909A1 (en)* | 2003-10-29 | 2005-05-05 | Raimo Bakis | Systems and methods for expressive text-to-speech
US20050256716A1 (en)* | 2004-05-13 | 2005-11-17 | At&T Corp. | System and method for generating customized text-to-speech voices
US8666746B2 (en)* | 2004-05-13 | 2014-03-04 | At&T Intellectual Property Ii, L.P. | System and method for generating customized text-to-speech voices
US20120253816A1 (en)* | 2005-10-03 | 2012-10-04 | Nuance Communications, Inc. | Text-to-speech user's voice cooperative server for instant messaging clients
US8428952B2 (en)* | 2005-10-03 | 2013-04-23 | Nuance Communications, Inc. | Text-to-speech user's voice cooperative server for instant messaging clients
US20070192418A1 (en)* | 2006-02-13 | 2007-08-16 | Research In Motion Limited | System and method of sharing auto-reply information
US20110184721A1 (en)* | 2006-03-03 | 2011-07-28 | International Business Machines Corporation | Communicating Across Voice and Text Channels with Emotion Preservation
US20080235024A1 (en)* | 2007-03-20 | 2008-09-25 | Itzhack Goldberg | Method and system for text-to-speech synthesis with personalized voice
US8886537B2 (en)* | 2007-03-20 | 2014-11-11 | Nuance Communications, Inc. | Method and system for text-to-speech synthesis with personalized voice
US8638908B2 (en)* | 2008-02-28 | 2014-01-28 | Computer Products Introductions, Corp | Contextual conversation processing in telecommunication applications
US20120134480A1 (en)* | 2008-02-28 | 2012-05-31 | Richard Leeds | Contextual conversation processing in telecommunication applications
US9319504B2 (en)* | 2008-04-14 | 2016-04-19 | At&T Intellectual Property I, Lp | System and method for answering a communication notification
US20150072739A1 (en)* | 2008-04-14 | 2015-03-12 | At&T Intellectual Property I, L.P. | System and Method for Answering a Communication Notification
US20120221336A1 (en)* | 2008-06-17 | 2012-08-30 | Voicesense Ltd. | Speaker characterization through speech analysis
US8645841B2 (en)* | 2009-08-21 | 2014-02-04 | Avaya Inc. | Unified greetings for social media
US20130275875A1 (en)* | 2010-01-18 | 2013-10-17 | Apple Inc. | Automatically Adapting User Interfaces for Hands-Free Interaction
US20110320960A1 (en)* | 2010-06-29 | 2011-12-29 | Yigang Cai | Flexible automatic reply features for text messaging
US20120265533A1 (en)* | 2011-04-18 | 2012-10-18 | Apple Inc. | Voice assignment for text-to-speech output
US20130080177A1 (en)* | 2011-09-28 | 2013-03-28 | Lik Harry Chen | Speech recognition repair using contextual information
US20130203393A1 (en)* | 2012-02-02 | 2013-08-08 | Samsung Electronics Co., Ltd. | Apparatus and method for generating smart reply in a mobile device
US20150287410A1 (en)* | 2013-03-15 | 2015-10-08 | Google Inc. | Speech and semantic parsing for content selection
US20160093289A1 (en)* | 2014-09-29 | 2016-03-31 | Nuance Communications, Inc. | Systems and methods for multi-style speech synthesis
US9886953B2 (en)* | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation
US10339925B1 (en)* | 2016-09-26 | 2019-07-02 | Amazon Technologies, Inc. | Generation of automated message responses
US10448115B1 (en)* | 2016-09-28 | 2019-10-15 | Amazon Technologies, Inc. | Speech recognition for localized content
US20190324527A1 (en)* | 2018-04-20 | 2019-10-24 | Facebook Technologies, Llc | Auto-completion for Gesture-input in Assistant Systems

Also Published As

Publication number | Publication date
US20200045130A1 (en) | 2020-02-06
US10339925B1 (en) | 2019-07-02
US11496582B2 (en) | 2022-11-08

Similar Documents

Publication | Title
US20230012984A1 | Generation of automated message responses
US11062694B2 | Text-to-speech processing with emphasized output audio
US12100396B2 | Indicator for voice-based communications
US12243532B2 | Privacy mode based on speaker identifier
US12230268B2 | Contextual voice user interface
US10140973B1 | Text-to-speech processing using previously speech processed data
US12190885B2 | Configurable output data formats
US10276149B1 | Dynamic text-to-speech output
US10074369B2 | Voice-based communications
US10453449B2 | Indicator for voice-based communications
US10176809B1 | Customized compression and decompression of audio data
US10163436B1 | Training a speech processing system using spoken utterances
US20160379638A1 | Input speech quality matching
US11837225B1 | Multi-portion spoken command framework
US12424223B2 | Voice-controlled communication requests and responses
US11393451B1 | Linked content in voice user interface

Legal Events

Code | Title / Description

AS: Assignment
Owner name: AMAZON TECHNOLOGIES, INC., WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: RASTROW, ARIYA; HARDIE, TONY; PRASAD, ROHIT; SIGNING DATES FROM 20170922 TO 20170925; REEL/FRAME: 061124/0780

STPP: Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP: Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED

STCB: Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

