US11700134B2 - Bot permissions - Google Patents

Bot permissions

Info

Publication number
US11700134B2
Authority
US
United States
Prior art keywords
user
bot
action
messages
conversation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/732,778
Other versions
US20220255760A1 (en)
Inventor
Shelbian Fung
Richard Dunn
Anton Volkov
Adam Rodriguez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to US17/732,778
Assigned to GOOGLE LLC. Assignment of assignors interest (see document for details). Assignors: DUNN, RICHARD; FUNG, SHELBIAN; VOLKOV, ANTON; RODRIGUEZ, ADAM
Publication of US20220255760A1
Priority to US18/327,459
Application granted
Publication of US11700134B2
Priority to US18/921,622
Legal status: Active
Anticipated expiration

Abstract

Permission control and management for messaging application bots is described. A method can include providing a messaging application, on a first computing device associated with a first user, to enable communication between the first user and another user, and detecting, at the messaging application, a user request. The method can also include programmatically determining that an action in response to the user request requires access to data associated with the first user, and causing a permission interface to be rendered in the messaging application, the permission interface enabling the first user to approve or prohibit access to the data associated with the first user. The method can include accessing the data associated with the first user and performing the action in response to the user request, upon receiving user input from the first user indicating approval of the access to the data associated with the first user.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 16/695,967, filed Nov. 26, 2019 and titled BOT PERMISSIONS, which is a continuation of U.S. patent application Ser. No. 15/709,440, filed Sep. 19, 2017 and titled BOT PERMISSIONS (now U.S. Pat. No. 10,511,450), which claims the benefit of U.S. Provisional Application No. 62/397,047, filed Sep. 20, 2016 and titled BOT PERMISSIONS, all of which are incorporated herein by reference in their entirety.
BACKGROUND
Users conduct messaging conversations, e.g., chat, instant message, etc. using messaging services. Messaging conversations may be conducted using any user device, e.g., a computer, a mobile device, a wearable device, etc. As users conduct more conversations and perform more tasks using messaging applications, automated assistance with messaging conversations or tasks (e.g., via a bot or other automated assistant application) may be useful to improve efficiency. While automation may help make messaging communications more efficient for users, there may be a need to manage permissions relating to when and how a messaging bot accesses user information and what user information the messaging bot is permitted to access.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
SUMMARY
Some implementations can include a computer-implemented method comprising providing a messaging application, on a first computing device associated with a first user, to enable communication between the first user and at least one other user, and detecting, at the messaging application, a user request. The method can also include programmatically determining that an action in response to the user request requires access to data associated with the first user, and causing a permission interface to be rendered in the messaging application on the first computing device, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user. The method can further include upon receiving user input from the first user indicating approval of the access to the data associated with the first user, accessing the data associated with the first user and performing the action in response to the user request.
The method can also include upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the task is not performed. In some implementations the first user can include a human user and the at least one other user can include an assistive agent.
In some implementations, the first user is a human user and the at least one other user includes a second human user, different from the first user, associated with a second computing device. The permission interface can be rendered in the messaging application on the first computing device associated with the first user and the permission interface is not displayed on the second computing device associated with the second human user.
The method can further include, upon receiving user input from the first user prohibiting access of the data associated with the first user, providing a first indication for rendering on the first computing device associated with the first user. The method can also include providing a second indication for rendering on a second computing device associated with the at least one other user, the first and second indications indicating failure to serve the user request, wherein the first and second indications are different.
In some implementations, the first and second indications can include one or more of: different textual content, different style, and different format. In some implementations, the first user includes a human user and the at least one other user includes a second human user, different from the first user, and an assistive agent. The user request can be received from the first computing device associated with the first user. The method can also include initiating, in response to the user request, a separate conversation in the messaging application. The separate conversation can include the first user and the assistive agent, and may not include the second human user.
In some implementations, detecting the user request comprises analyzing one or more messages received in the messaging application from one or more of the first user and the at least one other user. The one or more messages can include one or more of a text message, a multimedia message, and a command to an assistive agent. Performing the action in response to the user request can include providing one or more suggestions to the first messaging application.
The method can also include causing the one or more suggestions to be rendered in the messaging application. The one or more suggestions can be rendered as suggestion elements that, when selected by the first user, cause details about the suggestion to be displayed.
Some implementations can include a computer-implemented method. The method can include detecting, at a messaging application, a user request, and programmatically determining that an action in response to the user request requires access to data associated with the first user. The method can also include causing a permission interface to be rendered in the messaging application on the first computing device, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user. The method can further include upon receiving approval from the first user at the permission interface, accessing the data associated with the first user and performing the action in response to the user request.
The method can also include, upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the task is not performed. The method can further include, upon receiving user input from the first user prohibiting access of the data associated with the first user, providing a first indication for rendering in the messaging application. The method can also include providing a second indication for rendering in a second messaging application associated with at least one other user, the first and second indications indicating failure to serve the user request, wherein the first and second indications are different.
Some implementations can include a system comprising one or more processors coupled to a nontransitory computer readable medium having stored thereon instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations can include providing a messaging application, on a first computing device associated with a first user, to enable communication between the first user and at least one other user, and detecting, at the messaging application, a user request. The operations can also include programmatically determining that an action in response to the user request requires access to data associated with the first user, and causing a permission interface to be rendered in the messaging application on the first computing device, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user. The operations can further include, upon receiving user input from the first user indicating approval of the access to the data associated with the first user, accessing the data associated with the first user and performing the action in response to the user request.
The operations can also include, upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the task is not performed. In some implementations, the first user can include a human user and the at least one other user can include an assistive agent. In some implementations, the first user can include a human user and the at least one other user can include a second human user, different from the first user, associated with a second computing device. The permission interface can be rendered in the messaging application on the first computing device associated with the first user and the permission interface is not displayed on the second computing device associated with the second human user.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a block diagram of an example environment in which messages may be exchanged between users and bots in accordance with some implementations.
FIG. 2 is a diagram of an example arrangement of user device and bot communication in accordance with some implementations.
FIG. 3 is a diagram of an example arrangement of user device and bot communication in accordance with some implementations.
FIG. 4 is a diagram of an example arrangement of user device and bot communication in accordance with some implementations.
FIG. 5 is a flow diagram of an example method to manage bot permissions in accordance with some implementations.
FIG. 6 is a diagram of an example user interface with bot messaging in accordance with some implementations.
FIG. 7 is a diagram of an example user interface with bot messaging in accordance with some implementations.
FIG. 8 is a flow diagram of an example method to manage bot permissions in accordance with some implementations.
FIG. 9 is a diagram of an example computing device configured to manage bot permissions in accordance with some implementations.
DETAILED DESCRIPTION
One or more implementations described herein relate to permission control and management for messaging application bots.
FIG. 1 illustrates a block diagram of an example environment 100 for providing messaging services that enable and, in some embodiments, provide automatic assistive agents, e.g., bots. The exemplary environment 100 includes messaging server 101, one or more client devices 115a, 115n, server 135, and network 140. Users 125a-125n may be associated with respective client devices 115a, 115n. Server 135 may be a third-party server, e.g., controlled by a party different from the party that provides messaging services. In various implementations, server 135 may implement bot services, as described in further detail below. In some implementations, environment 100 may not include one or more servers or devices shown in FIG. 1 or may include other servers or devices not shown in FIG. 1. In FIG. 1 and the remaining figures, a letter after a reference number, e.g., "115a," represents a reference to the element having that particular reference number. A reference number in the text without a following letter, e.g., "115," represents a general reference to implementations of the element bearing that reference number.
In the illustrated implementation, messaging server 101, client devices 115, and server 135 are communicatively coupled via a network 140. In various implementations, network 140 may be a conventional type, wired or wireless, and may have numerous different configurations including a star configuration, token ring configuration, or other configurations. Furthermore, network 140 may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or other interconnected data paths across which multiple devices may communicate. In some implementations, network 140 may be a peer-to-peer network. Network 140 may also be coupled to or include portions of a telecommunications network for sending data in a variety of different communication protocols. In some implementations, network 140 includes Bluetooth® communication networks, Wi-Fi®, or a cellular communications network for sending and receiving data including via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, email, etc. Although FIG. 1 illustrates one network 140 coupled to client devices 115, messaging server 101, and server 135, in practice one or more networks 140 may be coupled to these entities.
Messaging server 101 may include a processor, a memory, and network communication capabilities. In some implementations, messaging server 101 is a hardware server. In some implementations, messaging server 101 may be implemented in a virtualized environment, e.g., messaging server 101 may be a virtual machine that is executed on a hardware server that may include one or more other virtual machines. Messaging server 101 is communicatively coupled to the network 140 via signal line 102. Signal line 102 may be a wired connection, such as Ethernet, coaxial cable, fiber-optic cable, etc., or a wireless connection, such as Wi-Fi, Bluetooth, or other wireless technology. In some implementations, messaging server 101 sends and receives data to and from one or more of client devices 115a-115n, server 135, and bot 113 via network 140. In some implementations, messaging server 101 may include messaging application 103a that provides client functionality to enable a user (e.g., any of users 125) to exchange messages with other users and/or with a bot. Messaging application 103a may be a server application, a server module of a client-server application, or a distributed application (e.g., with a corresponding client messaging application 103b on one or more client devices 115).
Messaging server 101 may also include database 199, which may store messages exchanged via messaging server 101, data and/or configuration of one or more bots, and user data associated with one or more users 125, all upon explicit permission from a respective user to store such data. In some embodiments, messaging server 101 may include one or more assistive agents, e.g., bots 107a and 111. In other embodiments, the assistive agents may be implemented on the client devices 115a-n and not on the messaging server 101.
Messaging application 103a may be code and routines operable by the processor to enable exchange of messages among users 125 and one or more bots 105, 107a, 107b, 109a, 109b, 111, and 113. In some implementations, messaging application 103a may be implemented using hardware including a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In some implementations, messaging application 103a may be implemented using a combination of hardware and software.
In various implementations, when respective users associated with client devices 115 provide consent for storage of messages, database 199 may store messages exchanged between one or more client devices 115. In some implementations, when respective users associated with client devices 115 provide consent for storage of messages, database 199 may store messages exchanged between one or more client devices 115 and one or more bots implemented on a different device, e.g., another client device, messaging server 101, server 135, etc. In the implementations where one or more users do not provide consent, messages received and sent by those users are not stored.
In some implementations, messages may be encrypted, e.g., such that only a sender and recipient of a message can view the encrypted messages. In some implementations, messages are stored. In some implementations, database 199 may further store data and/or configuration of one or more bots, e.g., bot 107a, bot 111, etc. In some implementations, when a user 125 provides consent for storage of user data (such as social network data, contact information, images, etc.), database 199 may also store user data associated with the respective user 125 that provided such consent.
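By way of illustration only, the consent-gated storage described above might look like the following Python sketch; the ConsentRegistry and Database classes and the maybe_store_message helper are hypothetical names rather than elements of the patent, and database 199 is reduced here to an in-memory list.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Message:
    sender: str
    recipient: str
    text: str

@dataclass
class Database:
    """Hypothetical stand-in for database 199: an in-memory message store."""
    messages: List[Message] = field(default_factory=list)

    def store(self, message: Message) -> None:
        self.messages.append(message)

class ConsentRegistry:
    """Tracks which users have consented to message storage (assumed structure)."""
    def __init__(self) -> None:
        self._consent: Dict[str, bool] = {}

    def grant(self, user_id: str) -> None:
        self._consent[user_id] = True

    def has_consented(self, user_id: str) -> bool:
        return self._consent.get(user_id, False)

def maybe_store_message(db: Database, consent: ConsentRegistry, message: Message) -> bool:
    # Persist only when both sender and recipient have provided consent;
    # otherwise the message is delivered but never stored.
    if consent.has_consented(message.sender) and consent.has_consented(message.recipient):
        db.store(message)
        return True
    return False

db, consent = Database(), ConsentRegistry()
consent.grant("alice")
consent.grant("bob")
print(maybe_store_message(db, consent, Message("alice", "bob", "hi")))  # True
```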
In some implementations, messaging application 103a/103b may provide a user interface that enables a user 125 to create new bots. In these implementations, messaging application 103a/103b may include functionality that enables user-created bots to be included in conversations between users of messaging application 103a/103b.
Client device 115 may be a computing device that includes a memory and a hardware processor, for example, a camera, a laptop computer, a tablet computer, a mobile telephone, a wearable device, a mobile email device, a portable game player, a portable music player, a reader device, a head-mounted display, or other electronic device capable of wirelessly accessing network 140.
In the illustrated implementation, client device 115a is coupled to the network 140 via signal line 108 and client device 115n is coupled to the network 140 via signal line 110. Signal lines 108 and 110 may be wired connections, e.g., Ethernet, or wireless connections, such as Wi-Fi, Bluetooth, or other wireless technology. Client devices 115a, 115n are accessed by users 125a, 125n, respectively. The client devices 115a, 115n in FIG. 1 are used by way of example. While FIG. 1 illustrates two client devices, 115a and 115n, the disclosure applies to a system architecture having one or more client devices 115.
In some implementations, client device 115 may be a wearable device worn by a user 125. For example, client device 115 may be included as part of a clip (e.g., a wristband), part of jewelry, or part of a pair of glasses. In another example, client device 115 can be a smartwatch. In various implementations, user 125 may view messages from the messaging application 103a/103b on a display of the device, may access the messages via a speaker or other output device of the device, etc. For example, user 125 may view the messages on a display of a smartwatch or a smart wristband. In another example, user 125 may access the messages via headphones (not shown) coupled to or part of client device 115, a speaker of client device 115, a haptic feedback element of client device 115, etc.
In some implementations, messaging application 103b is stored on a client device 115a. In some implementations, messaging application 103b (e.g., a thin-client application, a client module, etc.) may be a client application stored on client device 115a with a corresponding messaging application 103a (e.g., a server application, a server module, etc.) that is stored on messaging server 101. For example, messaging application 103b may transmit messages created by user 125a on client device 115a to messaging application 103a stored on messaging server 101.
In some implementations, messaging application 103a may be a standalone application stored on messaging server 101. A user 125a may access the messaging application 103a via a web page using a browser or other software on client device 115a. In some implementations, messaging application 103b that is implemented on the client device 115a may include the same or similar modules as those included on messaging server 101. In some implementations, messaging application 103b may be implemented as a standalone client application, e.g., in a peer-to-peer or other configuration where one or more client devices 115 include functionality to enable exchange of messages with other client devices 115. In these implementations, messaging server 101 may include limited or no messaging functionality (e.g., client authentication, backup, etc.). In some implementations, messaging server 101 may implement one or more bots, e.g., bot 107a and bot 111.
Server 135 may include a processor, a memory, and network communication capabilities. In some implementations, server 135 is a hardware server. Server 135 is communicatively coupled to the network 140 via signal line 128. Signal line 128 may be a wired connection, such as Ethernet, coaxial cable, fiber-optic cable, etc., or a wireless connection, such as Wi-Fi, Bluetooth, or other wireless technology. In some implementations, server 135 sends and receives data to and from one or more of messaging server 101 and client devices 115 via network 140. Although server 135 is illustrated as being one server, various implementations may include one or more servers 135. Server 135 may implement one or more bots as server applications or server modules, e.g., bot 109a and bot 113.
In various implementations, server 135 may be part of the same entity that manages messaging server 101, e.g., a provider of messaging services. In some implementations, server 135 may be a third-party server, e.g., controlled by an entity different than the entity that provides messaging application 103a/103b. In some implementations, server 135 provides or hosts bots.
A bot is an automated service, implemented on one or more computers, that users interact with primarily through text, e.g., via messaging application 103a/103b. A bot may be implemented by a bot provider such that the bot can interact with users of various messaging applications. In some implementations, a provider of messaging application 103a/103b may also provide one or more bots. In some implementations, bots provided by the provider of messaging application 103a/103b may be configured such that the bots can be included in other messaging applications, e.g., provided by other providers. A bot may provide several advantages over other modes. For example, a bot may permit a user to try a new service (e.g., a taxi booking service, a restaurant reservation service, etc.) without having to install an application on a client device or access a web site. Further, a user may interact with a bot via text, which requires minimal or no learning, compared with that required to use a website, a software application, a telephone call (e.g., to an interactive voice response (IVR) service), or other manners of interacting with a service. Incorporating a bot within a messaging service or application may also permit users to collaborate with other users to accomplish various tasks such as travel planning, shopping, scheduling events, obtaining information, etc. within the messaging service, and eliminate cumbersome operations such as switching between various applications (e.g., a taxi booking application, a restaurant reservation application, a calendar application, etc.) or websites to accomplish the tasks.
A bot may be implemented as a computer program or application (e.g., a software application) that is configured to interact with one or more users (e.g., any of the users 125a-n) via messaging application 103a/103b to provide information or to perform specific actions within the messaging application 103. As one example, an information retrieval bot may search for information on the Internet and present the most relevant search result within the messaging app. As another example, a travel bot may have the ability to make travel arrangements via messaging application 103, e.g., by enabling purchase of travel and hotel tickets within the messaging app, making hotel reservations within the messaging app, making rental car reservations within the messaging app, and the like. As another example, a taxi bot may have the ability to call a taxi, e.g., to the user's location (obtained by the taxi bot from client device 115, when a user 125 permits access to location information) without having to invoke or call a separate taxi reservation app. As another example, a coach/tutor bot may instruct a user in some subject matter within a messaging app, e.g., by asking questions that are likely to appear on an examination and providing feedback on whether the user's responses were correct or incorrect. As another example, a game bot may play a game on the opposite side or the same side as a user within a messaging app. As another example, a commercial bot may provide services from a specific merchant, e.g., by retrieving product information from the merchant's catalog and enabling purchase through a messaging app. As another example, an interface bot may interface with a remote device or vehicle so that a user of a messaging app can chat with, retrieve information from, and/or provide instructions to the remote device or vehicle.
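The following is a minimal, non-authoritative sketch of the bot abstraction this paragraph describes: a named automated service that responds to text messages. The Bot base class and the example subclasses are illustrative assumptions, not an API defined by the disclosure.

```python
from abc import ABC, abstractmethod

class Bot(ABC):
    """Minimal bot abstraction: a named service that reacts to text messages."""
    handle: str  # e.g. "@taxibot"

    @abstractmethod
    def respond(self, user_id: str, text: str) -> str:
        """Return the bot's reply to a message addressed to it."""

class InformationRetrievalBot(Bot):
    handle = "@searchbot"

    def respond(self, user_id: str, text: str) -> str:
        # A real bot would query a search backend; here we return a canned answer.
        return f"Here is the most relevant result I found for: {text!r}"

class TaxiBot(Bot):
    handle = "@taxibot"

    def respond(self, user_id: str, text: str) -> str:
        # A real bot would need the user's location, subject to permission.
        return "I can call you a cab once you share your location."

for bot in (InformationRetrievalBot(), TaxiBot()):
    print(bot.handle, "->", bot.respond("user_a", "nearest pharmacy?"))
```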
A bot's capabilities may include understanding a user's intent and executing on it. The user's intent may be understood by analyzing and understanding the user's conversation and its context. A bot may also understand the changing context of a conversation or the changing sentiments and/or intentions of the users based on a conversation evolving over time. For example, if user A suggests meeting for coffee but user B states that he does not like coffee, then a bot may assign a negative sentiment score for coffee to user B and may not suggest a coffee shop for the meeting.
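A toy sketch of the sentiment-score idea in the coffee example follows; the keyword matching stands in for real natural-language analysis, and the class name, topic list, and scoring rule are all assumptions made for illustration.

```python
from collections import defaultdict

class PreferenceTracker:
    """Keeps per-user sentiment scores for topics mentioned in a conversation."""
    def __init__(self) -> None:
        self.scores = defaultdict(float)  # (user_id, topic) -> score

    def observe(self, user_id: str, text: str) -> None:
        # Extremely simplified stand-in for natural-language analysis:
        # a negative statement about a topic lowers that user's score for it.
        lowered = text.lower()
        for topic in ("coffee", "tea", "pizza"):
            if topic in lowered:
                negative = "don't like" in lowered or "do not like" in lowered
                self.scores[(user_id, topic)] += -1.0 if negative else 1.0

    def suggest_venue(self, participants, venues_by_topic):
        # Skip topics that any participant views negatively (e.g. user B and coffee).
        for topic, venue in venues_by_topic.items():
            if all(self.scores[(p, topic)] >= 0 for p in participants):
                return venue
        return None

tracker = PreferenceTracker()
tracker.observe("userA", "Want to meet for coffee?")
tracker.observe("userB", "I don't like coffee")
print(tracker.suggest_venue(["userA", "userB"],
                            {"coffee": "Cafe Uno", "tea": "Tea House"}))  # -> Tea House
```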
Implementing bots that can communicate with users of messaging application 103a/103b may provide many advantages. Conventionally, a user may utilize a software application or a web site to perform activities such as paying bills, ordering food, booking tickets, etc. A problem with such implementations is that a user is required to install or use multiple software applications and websites in order to perform the multiple activities. For example, a user may have to install different software applications to pay a utility bill (e.g., from the utility company), to buy movie tickets (e.g., a ticket reservation application from a ticketing service provider), to make restaurant reservations (e.g., from respective restaurants), or may need to visit a respective website for each activity. Another problem with such implementations is that the user may need to learn a complex user interface, e.g., a user interface implemented using multiple user interface elements, such as windows, buttons, checkboxes, dialog boxes, etc.
Consequently, an advantage of one or more described implementations is that a single application enables a user to perform activities that involve interaction with any number of parties, without being required to access a separate website or install and run software applications, which has a technical effect of reducing consumption of memory, storage, and processing resources on a client device. An advantage of the described implementations is that the conversational interface makes it easier and faster for the user to complete such activities, e.g., without having to learn a complex user interface, which has a technical effect of reducing consumption of computational resources. Another advantage of the described implementations is that implementing bots may enable various participating entities to provide user interaction at a lower cost, which has a technical effect of reducing the need for computational resources that are deployed to enable user interaction, such as a toll-free number implemented using one or more of a communications server, a web site that is hosted on one or more web servers, a customer support email hosted on an email server, etc. Another technical effect of described features is a reduction in the problem of consumption of system processing and transmission resources required for completing user tasks across communication networks.
While certain examples herein describe interaction between a bot and one or more users, various types of interactions, such as one-to-one interaction between a bot and a user 125, one-to-many interactions between a bot and two or more users (e.g., in a group messaging conversation), many-to-one interactions between multiple bots and a user, and many-to-many interactions between multiple bots and multiple users are possible. Further, in some implementations, a bot may also be configured to interact with another bot (e.g., bots 107a/107b, 109a/109b, 111, 113, etc.) via messaging application 103, via direct communication between bots, or a combination. For example, a restaurant reservation bot may interact with a bot for a particular restaurant in order to reserve a table.
In certain embodiments, a bot may use a conversational interface to interact with a user using natural language. In certain embodiments, a bot may use a template-based format to create sentences with which to interact with a user, e.g., in response to a request for a restaurant address, using a template such as "the location of restaurant R is L." In certain cases, a user may be enabled to select a bot interaction format, e.g., whether the bot is to use natural language to interact with the user, whether the bot is to use template-based interactions, etc.
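A template-based interaction of the kind described above could be sketched as follows; the intent names, templates, and helper function are invented for illustration.

```python
RESPONSE_TEMPLATES = {
    # Intent name -> sentence template (illustrative, not from the patent text).
    "restaurant_address": "The location of restaurant {restaurant} is {location}.",
    "opening_hours": "{restaurant} is open from {open_time} to {close_time}.",
}

def render_response(intent: str, **slots: str) -> str:
    """Fill a fixed sentence template with slot values extracted from the request."""
    template = RESPONSE_TEMPLATES[intent]
    return template.format(**slots)

print(render_response("restaurant_address",
                      restaurant="Trattoria Roma",
                      location="12 Main Street"))
```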
In cases in which a bot interacts conversationally using natural language, the content and/or style of the bot's interactions may dynamically vary based on one or more of: the content of the conversation determined using natural language processing, the identities of the users in the conversation, one or more conversational contexts (e.g., historical information on the user's interactions, connections between the users in the conversation based on a social graph), external conditions (e.g., weather, traffic), the users' schedules, related context associated with the users, and the like. In these cases, the content and style of the bot's interactions is varied based on only such factors for which users participating in the conversation have provided consent.
As one example, if the users of a conversation are determined to be using formal language (e.g., no or minimal slang terms or emojis), then a bot may also interact within that conversation using formal language, and vice versa. As another example, if a user in a conversation is determined (based on the present and/or past conversations) to be a heavy user of emojis, then a bot may also interact with that user using one or more emojis. As another example, if it is determined that two users in a conversation are only remotely connected in a social graph (e.g., having two or more intermediate nodes between them denoting, e.g., that they are friends of friends of friends), then a bot may use more formal language in that conversation. In the cases where users participating in a conversation have not provided consent for the bot to utilize factors such as the users' social graph, schedules, location, or other context associated with the users, the content and style of interaction of the bot may be a default style, e.g., a neutral style, that doesn't require utilization of such factors.
Further, in some implementations, one or more bots may include functionality to engage in a back-and-forth conversation with a user. For example, if the user requests information about movies, e.g., by entering "@moviebot Can you recommend a movie?", the bot "moviebot" may respond with "Are you in the mood for a comedy?" The user may then respond, e.g., "nope", to which the bot may respond with "OK. The sci-fi movie entitled Space and Stars has got great reviews. Should I book you a ticket?" The user may then indicate "Yeah, I can go after 6 pm. Please check if Steve can join." Upon the user's consent to the bot accessing information about their contacts and upon the friend Steve's consent to receiving messages from the bot, the bot may send a message to the user's friend Steve and perform further actions to book movie tickets at a suitable time.
In certain embodiments, a user participating in a conversation may be enabled to invoke a specific bot or a bot performing a specific task, e.g., by typing a bot name or bot handle (e.g., taxi, @taxibot, @movies, etc.), by using a voice command (e.g., "invoke bankbot", etc.), by activation of a user interface element (e.g., a button or other element labeled with the bot name or handle), etc. Once a bot is invoked, a user 125 may send a message to the bot via messaging application 103a/103b in a manner similar to sending messages to other users 125. For example, to order a taxi, a user may type "@taxibot get me a cab"; to make a reservation, a user may type "@hotelbot book a table for 4 at a Chinese restaurant near me."
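A possible (assumed) way to recognize and route a bot invocation such as "@taxibot get me a cab" is sketched below; the regular expression and the route_message helper are illustrative, not part of the patent.

```python
import re
from typing import Optional, Tuple

BOT_HANDLE_PATTERN = re.compile(r"^@(\w+)\s+(.*)$", re.DOTALL)

def parse_invocation(message_text: str) -> Optional[Tuple[str, str]]:
    """Split '@taxibot get me a cab' into ('taxibot', 'get me a cab').

    Returns None when the message does not start with a bot handle, in which
    case it is treated as an ordinary user-to-user message.
    """
    match = BOT_HANDLE_PATTERN.match(message_text.strip())
    if match is None:
        return None
    return match.group(1), match.group(2).strip()

def route_message(message_text: str, bots: dict) -> str:
    invocation = parse_invocation(message_text)
    if invocation is None:
        return "(delivered to the conversation only)"
    handle, command = invocation
    bot = bots.get(handle)
    if bot is None:
        return f"No bot registered under @{handle}."
    return bot(command)

bots = {"taxibot": lambda cmd: f"taxibot received: {cmd}"}
print(route_message("@taxibot get me a cab", bots))
```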
In certain embodiments, a bot may automatically suggest information or actions within a messaging conversation without being specifically invoked. That is, the users may not need to specifically invoke the bot. In these embodiments, the bot may depend on analysis and understanding of the conversation on a continual basis or at discrete points of time. The analysis of the conversation may be used to understand specific user needs and to identify when assistance should be suggested by a bot. As one example, a bot may search for some information and suggest the answer if it is determined that a user needs information (e.g., based on the user asking a question to another user, based on multiple users indicating they don't have some information). As another example, if it is determined that multiple users have expressed interest in eating Chinese food, a bot may automatically suggest a set of Chinese restaurants in proximity to the users, including optional information such as locations, ratings and links to the websites of the restaurants.
In certain embodiments, rather than automatically invoking a bot or waiting for a user to explicitly invoke a bot, an automatic suggestion may be made to one or more users in a messaging conversation to invoke one or more bots. In these embodiments, the conversation may be analyzed on a continual basis or at discrete points of time, and the analysis of the conversation may be used to understand specific user needs and to identify when a bot should be suggested within the conversation.
In the embodiments in which a bot may automatically suggest information or actions within a messaging conversation without being specifically invoked, such functionality is disabled, e.g., if one or more users participating in the messaging conversation do not provide consent to a bot performing analysis of the users' conversation. Further, such functionality may also be disabled temporarily based on user input. For example, when the users indicate that a conversation is private or sensitive, analysis of conversational context is suspended until users provide input for the bot to be activated. Further, indications that analysis functionality is disabled may be provided to participants in the conversation, e.g., with a user interface element.
In various implementations, a bot may be implemented in a variety of configurations. For example, as shown in FIG. 1, bot 105 is implemented on client device 115a. In this example, the bot may be a module in a software application that is local to client device 115a. For example, if a user has installed a taxi hailing application on client device 115a, bot functionality may be incorporated as a module in the taxi hailing application. In this example, a user may invoke a taxi bot, e.g., by sending a message "@taxibot get me a cab." Messaging application 103b may automatically cause the bot module in the taxi hailing application to be launched. In this manner, a bot may be implemented locally on a client device such that the user can engage in conversation with the bot via messaging application 103.
In another example shown in FIG. 1, bot 107b is shown implemented on client device 115a and bot 107a is shown as implemented on messaging server 101. In this example, the bot may be implemented, e.g., as a client-server computer program, with portions of the bot functionality provided by each of bot 107a (server module) and bot 107b (client module). For example, if the bot is a scheduling bot with the handle @calendar, user 125a may schedule a reminder by typing "@calendar remind me to pick up laundry in the evening," which may be handled by bot 107b (client module). Continuing with this example, if user 125a tells the bot "check if Jim is free to meet at 4," bot 107a (server module) may contact user Jim (or Jim's scheduling bot) to exchange messages, and provide a response to user 125a.
In another example, bot 109a (server module) is implemented on server 135 and bot 109b (client module) is implemented on client devices 115. In this example, the bot functionality is provided by modules implemented on client devices 115 and server 135, which is distinct from messaging server 101. In some implementations, a bot may be implemented as a distributed application, e.g., with modules distributed across multiple client devices and servers (e.g., client devices 115, server 135, messaging server 101, etc.). In some implementations, a bot may be implemented as a server application, e.g., bot 111 that is implemented on messaging server 101 and bot 113 that is implemented on server 135.
Different implementations such as client-only, server-only, client-server, distributed, etc. may provide different advantages. For example, client-only implementations permit bot functionality to be provided locally, e.g., without network access, which may be advantageous in certain contexts, e.g., when a user is outside of a network coverage area or in an area with low or limited network bandwidth. Implementations that include one or more servers, such as server-only, client-server, or distributed configurations, may permit certain functionality, e.g., financial transactions, ticket reservations, etc., that may not be possible to provide locally on a client device.
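As an illustration of the client-server split described above (e.g., bot 107b handling purely local requests and bot 107a coordinating with other users), the following sketch uses hypothetical class names; the exact division of labor shown is an assumption.

```python
class CalendarBotServerModule:
    """Server-side portion (e.g. bot 107a): coordinates with other users' calendars."""
    def handle(self, user_id: str, command: str) -> str:
        # A real implementation would message the other user (or their bot).
        return f"Checked availability for: {command!r}"

class CalendarBotClientModule:
    """Client-side portion (e.g. bot 107b): handles requests that only need local data."""
    def __init__(self, server_module: CalendarBotServerModule) -> None:
        self.server_module = server_module
        self.local_reminders = []

    def handle(self, user_id: str, command: str) -> str:
        if command.startswith("remind me"):
            # Purely local: no network round trip required.
            self.local_reminders.append(command)
            return "Reminder saved on this device."
        # Anything involving other users is delegated to the server module.
        return self.server_module.handle(user_id, command)

client = CalendarBotClientModule(CalendarBotServerModule())
print(client.handle("user_a", "remind me to pick up laundry in the evening"))
print(client.handle("user_a", "check if Jim is free to meet at 4"))
```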
While FIG. 1 shows bots as distinct from messaging application 103, in some implementations, one or more bots may be implemented as part of messaging application 103. In the implementations in which bots are implemented as part of messaging application 103, user permission is obtained before implementing bots. For example, where bots are implemented as part of messaging application 103a/103b, messaging application 103a/103b may provide bots that can perform certain activities, e.g., a translation bot that translates incoming and outgoing messages, a scheduling bot that schedules events on a user's calendar, etc. In this example, the translation bot is activated only upon the user's specific permission. If the user does not provide consent, bots within messaging application 103a/103b are not implemented (e.g., disabled, removed, etc.). If the user provides consent, a bot or messaging application 103a/103b may make limited use of messages exchanged between users via messaging application 103a/103b to provide specific functionality, e.g., translation, scheduling, etc.
In some implementations, third parties distinct from a provider of messaging application 103a/103b and users 125 may provide bots that can communicate with users 125 via messaging application 103a/103b for specific purposes. For example, a taxi service provider may provide a taxi bot, a ticketing service may provide a bot that can book event tickets, a bank bot may provide capability to conduct financial transactions, etc.
In implementing bots via messaging application 103, bots are permitted to communicate with users only upon specific user authorization. For example, if a user invokes a bot, the bot can reply, e.g., based on the user's action of invoking the bot. In another example, a user may indicate particular bots or types of bots that may contact the user. For example, a user may permit travel bots to communicate with her, but not provide authorization for shopping bots. In this example, messaging application 103a/103b may permit travel bots to exchange messages with the user, but filter or deny messages from shopping bots.
Further, in order to provide some functionality (e.g., ordering a taxi, making a flight reservation, contacting a friend, etc.), bots may request that the user permit the bot to access user data, such as location, payment information, contact list, etc. In such instances, a user is presented with options to permit or deny access to the bot. If the user denies access, the bot may respond via a message, e.g., "Sorry, I am not able to book a taxi for you." Further, the user may provide access to information on a limited basis, e.g., the user may permit the taxi bot to access a current location only upon specific invocation of the bot, but not otherwise. In different implementations, the user can control the type, quantity, and granularity of information that a bot can access, and is provided with the ability (e.g., via a user interface) to change such permissions at any time. In some implementations, user data may be processed, e.g., to remove personally identifiable information, to limit information to specific data elements, etc., before a bot can access such data. Further, users can control usage of user data by messaging application 103a/103b and one or more bots. For example, a user can specify that a bot that offers capability to make financial transactions require user authorization before a transaction is completed, e.g., the bot may send a message "Tickets for the movie Space and Stars are $12 each. Shall I go ahead and book?" or "The best price for this shirt is $125, including shipping. Shall I charge your credit card ending 1234?" etc.
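One way the permission mediation described here might be sketched is shown below; the PermissionBroker, its grant bookkeeping, and the book_taxi helper are assumptions for illustration, not the disclosed implementation.

```python
from enum import Enum, auto

class Decision(Enum):
    ALLOW = auto()
    DENY = auto()

class PermissionBroker:
    """Mediates bot access to user data; every access requires an explicit grant."""
    def __init__(self, prompt_user) -> None:
        # prompt_user(user_id, bot_name, data_type) -> Decision
        # In the messaging application this would render the permission interface.
        self.prompt_user = prompt_user
        self._grants = set()  # (user_id, bot_name, data_type)

    def request(self, user_id: str, bot_name: str, data_type: str) -> Decision:
        if (user_id, bot_name, data_type) in self._grants:
            return Decision.ALLOW
        decision = self.prompt_user(user_id, bot_name, data_type)
        if decision is Decision.ALLOW:
            self._grants.add((user_id, bot_name, data_type))
        return decision

    def revoke(self, user_id: str, bot_name: str, data_type: str) -> None:
        # Users can withdraw a previously granted permission at any time.
        self._grants.discard((user_id, bot_name, data_type))

def book_taxi(broker: PermissionBroker, user_id: str, get_location) -> str:
    decision = broker.request(user_id, "taxibot", "location")
    if decision is Decision.DENY:
        return "Sorry, I am not able to book a taxi for you."
    return f"Booking a taxi to pick you up at {get_location(user_id)}."

broker = PermissionBroker(prompt_user=lambda *args: Decision.ALLOW)
print(book_taxi(broker, "user_a", get_location=lambda uid: "5th and Main"))
```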
In some implementations, messaging application 103a/103b may also provide one or more suggestions, e.g., suggested responses, to users 125 via a user interface, e.g., as a button or other user interface element. Suggested responses may enable faster interaction, e.g., by reducing or eliminating the need for a user to type a response. Suggested responses may enable users to respond to a message quickly and easily, e.g., when a client device lacks text input functionality (e.g., a smartwatch that does not include a keyboard or microphone). Suggested responses may also enable users to respond quickly to messages, e.g., when the user selects a suggested response (e.g., by selecting a corresponding user interface element on a touchscreen). Suggested responses may be generated using predictive models, e.g., machine learning models, that are trained to generate responses.
For example, messaging application 103a/103b may implement machine learning, e.g., a deep learning model, that can enhance user interaction with messaging application 103. Machine-learning models may be trained using synthetic data, e.g., data that is automatically generated by a computer, with no use of user information. In some implementations, machine-learning models may be trained, e.g., based on sample data, for which permissions to utilize user data for training have been obtained expressly from users. For example, sample data may include received messages and responses that were sent to the received messages. Based on the sample data, the machine-learning model can predict responses to received messages, which may then be provided as suggested responses. User interaction is enhanced, e.g., by reducing burden on the user to compose a response to a received message, by providing a choice of responses that are customized based on the received message and the user's context. For example, when users provide consent, suggested responses may be customized based on the user's prior activity, e.g., earlier messages in a conversation, messages in different conversations, etc. For example, such activity may be used to determine an appropriate suggested response for the user, e.g., a playful response, a formal response, etc., based on the user's interaction style. In another example, when the user specifies one or more preferred languages and/or locales, messaging application 103a/103b may generate suggested responses in the user's preferred language. In various examples, suggested responses may be text responses, images, multimedia, etc.
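For illustration, the sketch below substitutes a trivial nearest-neighbour lookup over sample (message, response) pairs for the trained predictive model described above; the pairs and function names are invented, and a real system would use a learned model trained only on data for which users have consented.

```python
from difflib import SequenceMatcher
from typing import List, Tuple

# Hypothetical training pairs: (received message, response that was sent).
SAMPLE_PAIRS: List[Tuple[str, str]] = [
    ("do you want to grab lunch?", "Sure, what time?"),
    ("are you coming to the meeting?", "Yes, on my way."),
    ("happy birthday!", "Thank you!"),
]

def suggest_responses(incoming: str, k: int = 2) -> List[str]:
    """Return up to k suggested replies for an incoming message.

    This nearest-neighbour lookup is only a stand-in for the predictive
    (e.g. deep learning) model described in the text.
    """
    scored = sorted(
        SAMPLE_PAIRS,
        key=lambda pair: SequenceMatcher(None, incoming.lower(), pair[0]).ratio(),
        reverse=True,
    )
    return [response for _, response in scored[:k]]

print(suggest_responses("want to grab a bite for lunch?"))
```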
In some implementations, machine learning may be implemented on messaging server 101, on client devices 115, or on both messaging server 101 and client devices 115. In some implementations, a simple machine learning model may be implemented on client device 115 (e.g., to permit operation of the model within memory, storage, and processing constraints of client devices) and a complex machine learning model may be implemented on messaging server 101. If a user does not provide consent for use of machine learning techniques, such techniques are not implemented. In some implementations, a user may selectively provide consent for machine learning to be implemented only on a client device 115. In these implementations, machine learning may be implemented on client device 115, such that updates to a machine learning model or user information used by the machine learning model are stored or used locally, and are not shared to other devices such as messaging server 101, server 135, or other client devices 115.
For the users that provide consent to receiving suggestions, e.g., based on machine-learning techniques, suggestions may be provided by messaging application 103. For example, suggestions may include suggestions of content (e.g., movies, books, etc.), schedules (e.g., available time on a user's calendar), events/venues (e.g., restaurants, concerts, etc.), and so on. In some implementations, if users participating in a conversation provide consent to use of conversation data, suggestions may include suggested responses to incoming messages that are based on conversation content. For example, if the first of two users that have consented to suggestions based on conversation content sends a message "do you want to grab a bite? How about Italian?", a response may be suggested to the second user, e.g., "@assistant lunch, Italian, table for 2". In this example, the suggested response includes a bot (identified by the symbol @ and bot handle assistant). If the second user selects this response, the assistant bot is added to the conversation and the message is sent to the bot. A response from the bot may then be displayed in the conversation, and either of the two users may send further messages to the bot. In this example, the assistant bot is not provided access to the content of the conversation, and suggested responses are generated by the messaging application 103.
In certain implementations, the content of a suggested response may be customized based on whether a bot is already present in a conversation or is able to be incorporated into the conversation. For example, if it is determined that a travel bot could be incorporated into the messaging app, a suggested response to a question about the cost of plane tickets to France could be “Let's ask travel bot!”
In different implementations, suggestions, e.g., suggested responses, may include one or more of: text (e.g., “Terrific!”), emoji (e.g., a smiley face, a sleepy face, etc.), images (e.g., photos from a user's photo library), text generated based on templates with user data inserted in a field of the template (e.g., “her number is <Phone Number>” where the field “Phone Number” is filled in based on user data, if the user provides access to user data), links (e.g., Uniform Resource Locators), etc. In some implementations, suggested responses may be formatted and/or styled, e.g., using colors, fonts, layout, etc. For example, a suggested response that includes a movie recommendation may include descriptive text about the movie, an image from the movie, and a link to buy tickets. In different implementations, suggested responses may be presented as different types of user interface elements, e.g., text boxes, information cards, etc.
In different implementations, users are offered control over whether they receive suggestions, what types of suggestions they receive, a frequency of the suggestions, etc. For example, users may decline to receive suggestions altogether, or may choose specific types of suggestions, or to receive suggestions only during certain times of day. In another example, users may choose to receive personalized suggestions. In this example, machine learning may be used to provide suggestions, based on the user's preferences relating to use of their data and use of machine learning techniques.
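The user controls described in this paragraph might be represented by a small preferences record such as the following; the field names, types, and quiet-hours rule are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from datetime import time
from typing import Set

@dataclass
class SuggestionPreferences:
    """Illustrative per-user controls over suggestions (names are assumptions)."""
    enabled: bool = True
    allowed_types: Set[str] = field(default_factory=lambda: {"reply", "content", "event"})
    quiet_start: time = time(22, 0)   # no suggestions from 22:00
    quiet_end: time = time(7, 0)      # until 07:00
    personalized: bool = False        # opt in to ML-personalized suggestions

def should_offer(prefs: SuggestionPreferences, suggestion_type: str, now: time) -> bool:
    if not prefs.enabled or suggestion_type not in prefs.allowed_types:
        return False
    in_quiet_hours = (now >= prefs.quiet_start) or (now < prefs.quiet_end)
    return not in_quiet_hours

prefs = SuggestionPreferences(allowed_types={"reply"})
print(should_offer(prefs, "reply", time(12, 30)))    # True
print(should_offer(prefs, "content", time(12, 30)))  # False: type not allowed
```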
FIG. 2 is a diagram of an example arrangement of one user device and one bot or assistive agent in communication in accordance with some implementations. In the example arrangement shown in FIG. 2, the user device 202 (e.g., 115a-115n from FIG. 1) is in a one-to-one conversation with a bot 204 (e.g., 105, 107a, 107b, 109a, 109b, 111, and/or 113). A user associated with the user device 202 (e.g., user 125a-125n) may invoke the bot 204 and engage in a communication session with the bot 204. Alternatively, the bot 204 may automatically initiate communication with the user associated with the user device 202. It will be appreciated that "user" as described in the examples refers to a human user. However, it will also be appreciated that a user can include a computer or other non-human system and that communications between a user and a bot can include communications between a human user and a bot, between a non-human such as a computer (e.g., a software application executing on a computer, etc.) and a bot, and/or between one or more bots and one or more other bots.
FIG. 3 is a diagram of an example arrangement of two or more user devices and a single bot or assistive agent in communication in accordance with some implementations. In the example arrangement shown in FIG. 3, user devices 302-306 (e.g., 115a-115n from FIG. 1) may be in a group messaging conversation that includes a bot 308 (e.g., 105, 107a, 107b, 109a, 109b, 111, and/or 113). One or more of the users associated with the user devices 302-306 (e.g., one or more of users 125a-125n) may interact with the bot 308 and engage in a communication session with the bot 308. Some or all of the communications from the bot may be placed into the group messaging conversation. Also, some information provided to and from the bot 308 may only be available, e.g., displayed, to the user associated with that information. The bot 308 may automatically initiate communications with one or more of the users associated with the user devices 302-306.
FIG. 4 is a diagram of an example arrangement of two or more user devices and two or more bots or assistive agents in communication in accordance with some implementations. In the example arrangement shown in FIG. 4, user devices 402 and optionally 404 (e.g., 115a-115n from FIG. 1) may be in a group messaging conversation that includes a plurality of bots 406-408 (e.g., 105, 107a, 107b, 109a, 109b, 111, and/or 113 of FIG. 1). One or more of the users associated with the user devices 402-404 (e.g., one or more of users 125a-125n) may interact with one or both of the bots 406-408 and engage in a communication session with the bots 406-408. Some or all of the communications from the bots may be placed into the group messaging conversation. Also, some information provided to and from the bots 406-408 may only be available, e.g., displayed, to the user associated with that information. The bots 406-408 may automatically initiate communications with one or more of the users associated with the user devices 402-404.
FIG. 5 is a flow diagram of an example method to manage bot permissions in accordance with some implementations. Processing begins at 502, where a request from a user is received at a bot. The request may include a requested task for the bot to perform. In some implementations, the request may be a command for the bot. For example, a request that includes a command for a reservation bot may be "@reservationbot find a hotel nearby," a request that includes a command for an assistant bot may be "@assistant send my flight details to Jim," etc. In this example, the bot is identified by a bot handle, e.g., the "@" symbol followed by a name of the bot (e.g., reservationbot, assistant, etc.). In order to perform the task and/or provide a response to the request, the bot may require access to user data. The user and the bot may be in a one-to-one communication arrangement (e.g., FIG. 2). For example, a user may request a car service pick up and the car service bot may need to know the user's location in order to determine which cars may be available for picking up the user. In another example, a user may wish to make a hotel reservation at a nearby hotel and the hotel reservation bot may need to know the user's location. In yet another example, a bot may provide suggested responses for a user that include sharing user information (e.g., photos, calendar entries, flight schedules, etc.) and the suggestion bot may need to obtain the user's permission to access data that may be helpful for a suggested response and to provide such data as an actual response. The request may be a request from a user or may be an automatically generated request (e.g., from a suggested response bot, etc.). Processing continues to 504.
At 504, a permission user interface element is caused to be displayed to the user associated with the request. An example of a permission request user interface element is shown in FIG. 6 and described below. The permission user interface element may also be presented as an audio prompt or using other user interface and/or output methods. Processing continues to 506.

At 506, an indication is received of whether the user grants the bot permission to access or obtain the user data. The indication can be received in the form of a user interface element selection (e.g., touching, tapping, selecting an on-screen user interface button, via typing, audio input, gesture input, etc.) that indicates whether the user grants permission or not. For example, the user could select one of the "NOT NOW" or "ALLOW" options shown in the permission user interface element of FIG. 6. Processing continues to 508.
At 508, the bot permission system determines whether permission was granted or not. Determining whether permission was granted can be accomplished by evaluating the indication received in step 506. If permission was granted, processing continues to 510. If permission was not granted, processing continues to 514.
At 510, an indication of user data being shared with the bot optionally can be provided. For example, the indication "Sharing location data" shown in FIG. 7 can be provided as an indication that user data was shared with the bot according to the permission granted by the user. Processing continues to 512.

At 512, the bot may perform further processing to complete the task associated with the permissions that were granted. For example, a car service bot can continue to determine which cars may be in a location to provide car service to the user. In another example, a lodging bot can use shared user location to determine nearby accommodations that are vacant and available for rental.
At 514, the bot can cause an indication of declining the task to be displayed to the user. For example, the bot could provide an indication such as "Sorry I didn't get your location—I'm unable to schedule a car" or the like. The indication could be displayed on a graphical user interface and/or provided in the form of an audio cue or other output indication.
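As one non-limiting sketch, the flow of blocks 502 through 514 described above might be organized as follows; the function names and the console-based permission prompt below are hypothetical stand-ins for the messaging application's permission interface:

```python
# Illustrative sketch of blocks 502-514 of FIG. 5; not an actual API.
from typing import Callable

def handle_request(requires_user_data: bool,
                   request_permission: Callable[[str], bool],
                   perform_task: Callable[[], str]) -> str:
    """Blocks 504-514: prompt for permission, then perform the task or decline."""
    if requires_user_data:
        granted = request_permission("Location data is needed to complete this request")
        if not granted:
            # Block 514: decline without completing the task.
            return "Sorry, I couldn't get your location, so I can't complete this request."
        # Block 510: optional indication that user data is being shared.
        print("Sharing location data")
    # Block 512: further processing to complete the requested task.
    return perform_task()

if __name__ == "__main__":
    result = handle_request(
        requires_user_data=True,
        request_permission=lambda reason: input(f"{reason}. Allow? [y/N] ").strip().lower() == "y",
        perform_task=lambda: "Found 3 hotels near you.",
    )
    print(result)
```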
FIG. 6 is a diagram of an example user interface 600 with bot messaging in accordance with some implementations. In particular, the user interface 600 includes a message (602) from a user to a bot. Message 602 includes a request ("Find me a hotel nearby") that may require use of the user's personal information, e.g., location information (e.g., finding a nearby hotel). In response to the request from the user, the bot may send a message (604) to the user indicating that the bot needs access to the user's location data to complete the request.
The bot can cause to be displayed a permission allow/disallow interface element 606. The permission element 606 can include a description (608) of what type of permission is needed and input elements for not allowing or allowing the bot permission to access (or receive) user data, 610 and 612 respectively.
FIG. 7 is a diagram of an example user interface 700 which follows from FIG. 6 and in which the user has granted the bot permission to use the user's location data. The user interface 700 includes elements 602-604 described above in connection with FIG. 6. The user interface 700 also includes an indication (702) that user data was shared with the bot in response to the user granting the permission, e.g., by selecting input element 612. The user interface 700 also includes a message (704) from the bot that indicates that the bot is working on the request, and one or more optional suggestions from the bot (706 and 708). If a user selects one of the suggestion elements (706, 708), the bot can cause to be displayed details about the suggestion from the bot, in this example details about suggested nearby hotels identified by the bot.
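As one non-limiting sketch, the permission element and bot messages shown in FIGS. 6-7 could be represented with a simple structure such as the following; the field names are assumptions for illustration rather than an existing schema:

```python
# Illustrative representation of the permission element 606 (description 608,
# inputs 610/612) and a bot message carrying it; field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PermissionElement:
    description: str                 # element 608: what access is needed and why
    deny_label: str = "NOT NOW"      # input element 610
    allow_label: str = "ALLOW"       # input element 612

@dataclass
class BotMessage:
    text: str                                             # e.g. message 604 or 704
    permission: Optional[PermissionElement] = None         # element 606, if needed
    suggestions: List[str] = field(default_factory=list)   # elements 706 and 708

reply = BotMessage(
    text="I need access to your location data to find nearby hotels.",
    permission=PermissionElement(description="Allow the bot to use your location?"),
)
print(reply)
```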
FIG. 8 is a flow diagram of an example method to manage bot permissions within a group messaging context (e.g., within a "group chat") in accordance with some implementations. Processing begins at 802, where a request from a user is received at a bot. In order to perform a requested task and/or provide a response to the request, the bot may require access to user data. The user and the bot may be in a group communication arrangement (e.g., as illustrated in FIG. 3 or FIG. 4) with multiple users and/or bots. For example, in a communication session with multiple users, a user may request a car service pick up, e.g., for the multiple users. The car service bot may need to know the location of each user that is to be included in the pickup in order to determine which cars may be available for picking up the users. In another example, a user may wish to make a hotel reservation at a nearby hotel for the group of users participating in the conversation. In this example, the hotel reservation bot may need to know information about the group of users, e.g., names, payment information, etc. The request may be a request from a user or may be an automatically generated request (e.g., from a suggested response bot, etc.). Processing continues to 804.
At 804, a progress indication is optionally displayed by the bot and may be visible in the group conversation to the group or to the individual user making the request. For example, a car service bot may display a message such as "I'm working on it" in the group conversation. Processing continues to 806.
At 806, a permission user interface element is caused to be displayed to the user associated with the request. An example of a permission request user interface element is shown in FIG. 6 and described above. The permission user interface element may also be presented as an audio prompt or using other user interface and/or output methods. Processing continues to 808.
At 808, an indication is received of whether one or more users grant the bot permission to access or obtain respective user data. The indication can be received in the form of a user interface element selection (e.g., touching, tapping, selecting an on screen user interface button, via typing, audio input, gesture input, etc.) that indicates whether the user grants permission or not. For example, the user could select one of "NOT NOW" or "ALLOW" shown in the permission user interface element of FIG. 6. Processing continues to 810.
At 810, the bot permission system determines whether permission was granted or not. Determining whether permission was granted can be accomplished by evaluating the indication received in step 808. If permission was granted, processing continues to 812. If permission was not granted, processing continues to 816.
At 812, the bot can start a one-to-one chat with the user. The one-to-one chat and the messages exchanged in the one-to-one chat are not visible to the group of users in the group messaging conversation. Processing continues to 814.
At 814, the bot may perform further processing to complete the task associated with the permissions that were granted within the one-to-one user messaging conversation. For example, a car service bot could continue to determine which cars may be in a location to provide car service to the user. In another example, a lodging bot could use shared user location to determine nearby accommodations that are vacant and available for rental.
At 816, the bot can cause a "graceful" indication of declining the task to be displayed to the user within the group messaging conversation. For example, the bot could provide an indication such as "I wasn't able to get your location—I'm unable to schedule a car" or the like. The indication could be displayed on a graphical user interface or provided in the form of an audio cue or other output indication. The decline message is graceful in that it does not explicitly indicate that a user declined to grant the bot permission to use the user's data. In different implementations, the indication may include different textual content, e.g., based on the request or other factors. For example, an indication in response to the user prohibiting access to location in the context of ordering a car may include textual content such as "Sorry, unable to get location," "I'm unable to find cars near you," "Car service not available," etc. In some implementations, different indications may be sent to different participants in a group conversation. In some implementations, indications may use different formats, e.g., text box, graphical indication, animated indication, etc. In some implementations, the indications may use different styles, e.g., boldface text, italicized text, fonts, colors, etc.
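For illustration only, the group-conversation flow of FIG. 8 might be sketched as follows; the Conversation class and its methods are hypothetical stand-ins for the messaging platform described herein, not an existing API:

```python
# Illustrative sketch of blocks 802-816 of FIG. 8, using a toy Conversation type.
from typing import List

class Conversation:
    """Toy stand-in for a messaging conversation; prints posted messages."""
    def __init__(self, participants: List[str]):
        self.participants = participants
    def post(self, sender: str, text: str) -> None:
        print(f"[{' + '.join(self.participants)}] {sender}: {text}")

def handle_group_request(group: Conversation, requester: str,
                         permission_granted: bool) -> None:
    # Block 804: optional progress indication visible in the group conversation.
    group.post("car-bot", "I'm working on it")
    # Blocks 806-810: a permission prompt is shown only to the requesting user;
    # permission_granted stands in for that user's response.
    if permission_granted:
        # Blocks 812-814: continue in a one-to-one chat not visible to the group.
        private = Conversation([requester, "car-bot"])
        private.post("car-bot", "Thanks! Looking for cars near your location.")
    else:
        # Block 816: decline gracefully, without revealing that permission was denied.
        group.post("car-bot", "Sorry, I'm unable to find cars near you right now.")

if __name__ == "__main__":
    chat = Conversation(["Alice", "Bob", "car-bot"])
    handle_group_request(chat, requester="Alice", permission_granted=False)
```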
FIG. 9 is a block diagram of an example computing device 900 which may be used to implement one or more features described herein. In one example, computing device 900 may be used to implement a client (or user) device, e.g., any of client devices 115a-115n shown in FIG. 1. Computing device 900 can be any suitable computer system, server, or other electronic or hardware device as described above.
One or more methods described herein can be run in a standalone program that can be run on any type of computing device, a program run on a web browser, a mobile application (“app”) run on a mobile computing device (e.g., cell phone, smart phone, tablet computer, wearable device (wristwatch, armband, jewelry, headwear, virtual reality goggles or glasses, augmented reality goggles or glasses, etc.), laptop computer, etc.). In one example, a client/server architecture can be used, e.g., a mobile computing device (as a user device) sends user input data to a server device and receives from the server the final output data for output (e.g., for display). In another example, all computations can be performed within the mobile app (and/or other apps) on the mobile computing device. In another example, computations can be split between the mobile computing device and one or more server devices.
In some implementations, computing device 900 includes a processor 902, a memory 904, and input/output (I/O) interface 906. Processor 902 can be one or more processors and/or processing circuits to execute program code and control basic operations of the computing device 900. A "processor" includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit (CPU), multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a particular geographic location, or have temporal limitations. For example, a processor may perform its functions in "real-time," "offline," in a "batch mode," etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems. A computer may be any processor in communication with a memory.
Memory 904 is typically provided in computing device 900 for access by the processor 902, and may be any suitable processor-readable storage medium, such as random access memory (RAM), read-only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory, etc., suitable for storing instructions for execution by the processor, and located separate from processor 902 and/or integrated therewith. Memory 904 can store software operating on the computing device 900 by the processor 902, including an operating system 908 and one or more applications 910 such as a messaging application, a bot application, etc. In some implementations, the applications 910 can include instructions that enable processor 902 to perform functions described herein, e.g., one or more of the methods of FIGS. 5 and/or 8. For example, applications 910 can include messaging and/or bot applications, including a program to manage bot permissions as described herein. One or more of the applications can, for example, provide a displayed user interface responsive to user input to display selectable options or controls, and data based on selected options. One or more methods disclosed herein can operate in several environments and platforms, e.g., as a stand-alone computer program that can run on any type of computing device, as a web application having web pages, as a mobile application ("app") run on a mobile computing device, etc.
Any of the software in memory 904 can alternatively be stored on any other suitable storage location or computer-readable medium. In addition, memory 904 (and/or other connected storage device(s)) can store messages, permission settings, user preferences and related data structures, parameters, audio data, and/or other instructions and data used in the features described herein in a database 912. Memory 904 and any other type of storage (magnetic disk, optical disk, magnetic tape, or other tangible media) can be considered "storage" or "storage devices."
The I/O interface 906 can provide functions to enable interfacing the computing device 900 with other systems and devices. Interfaced devices can be included as part of the computing device 900 or can be separate and communicate with the computing device 900. For example, network communication devices, wireless communication devices, storage devices, and input/output devices can communicate via the I/O interface 906. In some implementations, the I/O interface 906 can connect to interface devices such as input devices (keyboard, pointing device, touch screen, microphone, camera, scanner, sensors, etc.) and/or output devices (display device, speaker devices, printer, motor, etc.).
Some examples of interfaced devices that can connect to I/O interface 906 can include a display device 914 that can be used to display content, e.g., images, video, and/or a user interface of an output application as described herein. Display device 914 can be connected to computing device 900 via local connections (e.g., display bus) and/or via networked connections and can be any suitable display device. The display device 914 can include any suitable display device such as a liquid crystal display (LCD), light emitting diode (LED), or plasma display screen, cathode ray tube (CRT), television, monitor, touch screen, 3-D display screen, or other visual display device. For example, display device 914 can be a flat display screen provided on a mobile device, multiple display screens provided in a goggles device, or a monitor screen for a computer device.
The I/O interface 906 can interface to other input and output devices. Some examples include one or more cameras, which can capture image frames. Orientation sensors, e.g., gyroscopes and/or accelerometers, can provide sensor data indicating device orientation (which can correspond to view orientation in some implementations) and/or camera orientation. Some implementations can provide a microphone for capturing sound (e.g., voice commands, etc.), audio speaker devices for outputting sound, or other input and output devices.
For ease of illustration, FIG. 9 shows one block for the processor 902, the memory 904, the I/O interface 906, the operating system 908, and the bot permissions application 910, respectively. These blocks may represent one or more processors or processing circuitries, operating systems, memories, I/O interfaces, applications, and/or software modules. In other implementations, computing device 900 may not have all of the components shown and/or may have other elements including other types of elements instead of, or in addition to, those shown herein. While user devices (e.g., 115a-115n) are described as performing blocks and operations as described in some implementations herein, any suitable component or combination of components of user devices (e.g., 115a-115n) or similar devices, or any suitable processor or processors associated with such a system, may perform the blocks and operations described.
Methods described herein can be implemented by computer program instructions or code, which can be executed on a computer. For example, the code can be implemented by one or more digital processors (e.g., microprocessors or other processing circuitry) and can be stored on a computer program product including a non-transitory computer readable medium (e.g., storage medium), such as a magnetic, optical, electromagnetic, or semiconductor storage medium, including semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), flash memory, a rigid magnetic disk, an optical disk, a solid-state memory drive, etc. The program instructions can also be contained in, and provided as, an electronic signal, for example in the form of software as a service (SaaS) delivered from a server (e.g., a distributed system and/or a cloud computing system). Alternatively, one or more methods can be implemented in hardware (logic gates, etc.), or in a combination of hardware and software. Example hardware can be programmable processors (e.g., Field-Programmable Gate Array (FPGA), Complex Programmable Logic Device (CPLD), etc.), general purpose processors, graphics processors, Application Specific Integrated Circuits (ASICs), and the like. One or more methods can be performed as a part or component of an application running on the system, or as an application or software running in conjunction with other applications and an operating system.
Although the description has been described with respect to particular implementations thereof, these particular implementations are merely illustrative, and not restrictive. Concepts illustrated in the examples may be applied to other examples and implementations.
In situations in which certain implementations discussed herein may collect or use personal information about users (e.g., user's phone number or partial phone number, user data, information about a user's social network, user's location and time, user's biometric information, user's activities and demographic information), users are provided with one or more opportunities to control whether the personal information is collected, whether the personal information is stored, whether the personal information is used, and how information about the user is collected, stored, and used. That is, the systems and methods discussed herein collect, store and/or use user personal information specifically upon receiving explicit authorization from the relevant users to do so. In addition, certain data may be treated in one or more ways before it is stored or used so that personally identifiable information is removed. As one example, a user's identity may be treated so that no personally identifiable information can be determined. As another example, a user's geographic location may be generalized to a larger region so that the user's particular location cannot be determined.
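For illustration only, such generalization and de-identification might be sketched as follows; the grid size and hashing scheme shown are assumptions rather than requirements of the implementations described above:

```python
# Illustrative sketch: coarsen a location and replace a user identifier with an
# opaque token so that precise location and identity cannot be recovered.
import hashlib

def generalize_location(lat: float, lng: float, grid_deg: float = 0.1):
    """Snap coordinates to a coarse grid so the exact location cannot be recovered."""
    return (round(round(lat / grid_deg) * grid_deg, 4),
            round(round(lng / grid_deg) * grid_deg, 4))

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a user identifier with a salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()[:16]

if __name__ == "__main__":
    print(generalize_location(37.4221, -122.0841))   # approximately (37.4, -122.1)
    print(pseudonymize("user-12345", salt="per-deployment-secret"))
```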
Note that the functional blocks, operations, features, methods, devices, and systems described in the present disclosure may be integrated or divided into different combinations of systems, devices, and functional blocks as would be known to those skilled in the art. Any suitable programming language and programming techniques may be used to implement the routines of particular implementations. Different programming techniques may be employed such as procedural or object-oriented. The routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, the order may be changed in different particular implementations. In some implementations, multiple steps or operations shown as sequential in this specification may be performed at the same time. Further implementations are disclosed below.

Claims (20)

What is claimed is:
1. A computer-implemented method comprising:
causing a permission interface to be rendered, the permission interface enabling a first user to approve access to user data associated with the first user;
receiving user input from the first user indicating approval of the access to the user data associated with the first user;
receiving a first communication from the first user that includes a request for information, wherein the first communication is associated with a bot name or a bot handle and uses the user data;
determining to invoke a particular bot based on the bot name or the bot handle to provide the information in the request for information;
receiving a second communication from the first user that includes a request to perform an action; and
performing, by the particular bot, the action.
2. The method of claim 1, wherein the action is related to a second user and the method further comprises:
obtaining permission from the second user to receive one or more messages from the particular bot; and
sending, by the particular bot, the one or more messages to the second user to obtain information associated with the action; and
wherein performing the action occurs responsive to receiving the information associated with the action from the second user.
3. The method of claim 1, further comprising:
assigning a sentiment score to a topic associated with the first user based on a reaction of the first user to the topic in a conversation; and
providing, by the particular bot, a particular suggestion based on the sentiment score for the topic.
4. The method of claim 1, wherein the action is related to a second user and the method further comprises:
obtaining permission from the second user to receive one or more messages from the particular bot;
determining a proximity between the first user and the second user in a social graph; and
providing, by the particular bot, a message to the first user and the second user with a conversation style that is based on the proximity between the first user and the second user in the social graph.
5. The method of claim 4, wherein the conversation style is formal if there are more than two or more intermediate nodes between the first user and the second user in the social graph.
6. The method of claim 1, further comprising:
obtaining permission from a second user to approve access to user data associated with the second user;
analyzing one or more messages between the first user and the second user; and
providing, by the particular bot, a suggestion to perform a second action based on analyzing the one or more messages.
7. The method of claim 1, further comprising:
obtaining permission from a second user to approve access to user data associated with the second user;
receiving an indication from the first user that a conversation between the first user and the second user is confidential; and
abstaining from analysis, by the bot, of the conversation until the first user or the second user reactivates the bot.
8. A non-transitory computer-readable medium with instructions stored thereon that, when executed by one or more computers, cause the one or more computers to perform operations, the operations comprising:
causing a permission interface to be rendered, the permission interface enabling a first user to approve access to user data associated with the first user;
receiving user input from the first user indicating approval of the access to the user data associated with the first user;
receiving a first communication from the first user that includes a request for information, wherein the first communication is associated with a bot name or a bot handle and uses the user data;
determining to invoke a particular bot based on the bot name or the bot handle to provide the information in the request for information;
receiving a second communication from the first user that includes a request to perform an action; and
performing, by the particular bot, the action.
9. The computer-readable medium of claim 8, wherein the action is related to a second user and the operations further comprises:
obtaining permission from the second user to receive one or more messages from the particular bot; and
sending, by the particular bot, the one or more messages to the second user to obtain information associated with the action; and
wherein performing the action occurs responsive to receiving the information associated with the action from the second user.
10. The computer-readable medium of claim 8, the operations further comprising:
assigning a sentiment score to a topic associated with the first user based on a reaction of the first user to the topic in a conversation; and
providing, by the particular bot, a particular suggestion based on the sentiment score for the topic.
11. The computer-readable medium of claim 8, wherein the action is related to a second user and the operations further comprises:
obtaining permission from the second user to receive one or more messages from the particular bot;
determining a proximity between the first user and the second user in a social graph; and
providing, by the particular bot, a message to the first user and the second user with a conversation style that is based on the proximity between the first user and the second user in the social graph.
12. The computer-readable medium of claim 11, wherein the conversation style is formal if there are more than two or more intermediate nodes between the first user and the second user in the social graph.
13. The computer-readable medium of claim 8, the operations further comprising:
obtaining permission from a second user to approve access to user data associated with the second user;
analyzing one or more messages between the first user and the second user; and
providing, by the particular bot, a suggestion to perform a second action based on analyzing the one or more messages.
14. The computer-readable medium of claim 8, the operations further comprising:
obtaining permission from a second user to approve access to user data associated with the second user;
receiving an indication from the first user that a conversation between the first user and the second user is confidential; and
abstaining from analysis, by the bot, of the conversation until the first user or the second user reactivates the bot.
15. A system comprising:
one or more processors; and
a memory coupled to the one or more processors that stores instructions that, when executed by the one or more processors cause the one or more processors to perform operations comprising:
causing a permission interface to be rendered, the permission interface enabling a first user to approve access to user data associated with the first user;
receiving user input from the first user indicating approval of the access to the user data associated with the first user;
receiving a first communication from the first user that includes a request for information, wherein the first communication is associated with a bot name or a bot handle and uses the user data;
determining to invoke a particular bot based on the bot name or the bot handle to provide the information in the request for information;
receiving a second communication from the first user that includes a request to perform an action; and
performing, by the particular bot, the action.
16. The system of claim 15, wherein the action is related to a second user and the operations further comprises:
obtaining permission from the second user to receive one or more messages from the particular bot; and
sending, by the particular bot, the one or more messages to the second user to obtain information associated with the action; and
wherein performing the action occurs responsive to receiving the information associated with the action from the second user.
17. The system of claim 15, the operations further comprising:
assigning a sentiment score to a topic associated with the first user based on a reaction of the first user to the topic in a conversation; and
providing, by the particular bot, a particular suggestion based on the sentiment score for the topic.
18. The system of claim 15, wherein the action is related to a second user and the operations further comprises:
obtaining permission from the second user to receive one or more messages from the particular bot;
determining a proximity between the first user and the second user in a social graph; and
providing, by the particular bot, a message to the first user and the second user with a conversation style that is based on the proximity between the first user and the second user in the social graph.
19. The system of claim 15, the operations further comprising:
obtaining permission from a second user to approve access to user data associated with the second user;
analyzing one or more messages between the first user and the second user; and
providing, by the particular bot, a suggestion to perform a second action based on analyzing the one or more messages.
20. The system of claim 15, the operations further comprising:
obtaining permission from a second user to approve access to user data associated with the second user;
receiving an indication from the first user that a conversation between the first user and the second user is confidential; and
abstaining from analysis, by the bot, of the conversation until the first user or the second user reactivates the bot.
US17/732,778 | 2016-09-20 | 2022-04-29 | Bot permissions | Active | US11700134B2 (en)

Priority Applications (3)

Application Number | Priority Date | Filing Date | Title
US17/732,778 | 2016-09-20 | 2022-04-29 | Bot permissions
US18/327,459 | 2016-09-20 | 2023-06-01 | Bot permissions
US18/921,622 | 2016-09-20 | 2024-10-21 | Bot Permissions

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
US201662397047P | 2016-09-20 | 2016-09-20 |
US15/709,440 | 2016-09-20 | 2017-09-19 | Bot permissions
US16/695,967 | 2016-09-20 | 2019-11-26 | Bot permissions
US17/732,778 | 2016-09-20 | 2022-04-29 | Bot permissions

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US16/695,967 | Continuation | US11336467B2 (en) | 2016-09-20 | 2019-11-26 | Bot permissions

Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
US18/327,459 | Continuation | US12126739B2 (en) | 2016-09-20 | 2023-06-01 | Bot permissions

Publications (2)

Publication Number | Publication Date
US20220255760A1 (en) | 2022-08-11
US11700134B2 (en) | 2023-07-11

Family

ID=60022179

Family Applications (5)

Application Number | Title | Priority Date | Filing Date
US15/709,440 | Active 2038-02-15 | US10511450B2 (en) | 2016-09-20 | 2017-09-19 | Bot permissions
US16/695,967 | Active 2038-05-10 | US11336467B2 (en) | 2016-09-20 | 2019-11-26 | Bot permissions
US17/732,778 | Active | US11700134B2 (en) | 2016-09-20 | 2022-04-29 | Bot permissions
US18/327,459 | Active | US12126739B2 (en) | 2016-09-20 | 2023-06-01 | Bot permissions
US18/921,622 | Pending | US20250047508A1 (en) | 2016-09-20 | 2024-10-21 | Bot Permissions

Family Applications Before (2)

Application Number | Title | Priority Date | Filing Date
US15/709,440 | Active 2038-02-15 | US10511450B2 (en) | 2016-09-20 | 2017-09-19 | Bot permissions
US16/695,967 | Active 2038-05-10 | US11336467B2 (en) | 2016-09-20 | 2019-11-26 | Bot permissions

Family Applications After (2)

Application Number | Title | Priority Date | Filing Date
US18/327,459 | Active | US12126739B2 (en) | 2016-09-20 | 2023-06-01 | Bot permissions
US18/921,622 | Pending | US20250047508A1 (en) | 2016-09-20 | 2024-10-21 | Bot Permissions

Country Status (5)

Country | Link
US (5) | US10511450B2 (en)
JP (1) | JP6659910B2 (en)
CN (1) | CN109716727B (en)
DE (1) | DE112017003594T5 (en)
WO (1) | WO2018057536A1 (en)

CN105786455A (en)2014-12-172016-07-20深圳市腾讯计算机系统有限公司Method, device and terminal for data processing
US20160210962A1 (en)2015-01-192016-07-21Ncsoft CorporationMethods and systems for analyzing communication situation based on dialogue act information
US20160210279A1 (en)2015-01-192016-07-21Ncsoft CorporationMethods and systems for analyzing communication situation based on emotion information
CN105814519A (en)2013-12-122016-07-27触摸式有限公司System and method for inputting images or labels into electronic devices
CN105830104A (en)2013-08-142016-08-03脸谱公司Methods and systems for facilitating e-commerce payments
US20160224524A1 (en)2015-02-032016-08-04Nuance Communications, Inc.User generated short phrases for auto-filling, automatically collected during normal text use
US20160226804A1 (en)2015-02-032016-08-04Google Inc.Methods, systems, and media for suggesting a link to media content
US20160234553A1 (en)2015-02-112016-08-11Google Inc.Methods, systems, and media for presenting a suggestion to watch videos
WO2016130788A1 (en)2015-02-122016-08-18Google Inc.Determining reply content for a reply to an electronic communication
CN105898627A (en)2016-05-312016-08-24北京奇艺世纪科技有限公司Video playing method and device
CN105940397A (en)2013-12-12 2016-09-14移动熨斗公司Application synchronization
US20160284011A1 (en)2015-03-252016-09-29Facebook, Inc.Techniques for social messaging authorization and customization
US20160283454A1 (en)2014-07-072016-09-29Machine Zone, Inc.Systems and methods for identifying and suggesting emoticons
US20160292217A1 (en)2015-04-022016-10-06Facebook, Inc.Techniques for context sensitive illustrated graphical user interface elements
US9467435B1 (en)2015-09-152016-10-11Mimecast North America, Inc.Electronic message threat protection system for authorized users
US20160308794A1 (en)2015-04-162016-10-20Samsung Electronics Co., Ltd.Method and apparatus for recommending reply message
US20160321052A1 (en)2015-04-282016-11-03Google Inc.Entity action suggestion on a mobile device
EP3091445A1 (en)2015-05-082016-11-09BlackBerry LimitedElectronic device and method of determining suggested responses to text-based communications
US20160342895A1 (en)2015-05-212016-11-24Baidu Usa LlcMultilingual image question answering
US20160352656A1 (en)2015-05-312016-12-01Microsoft Technology Licensing, LlcContext-sensitive generation of conversational responses
US20160350304A1 (en)2015-05-272016-12-01Google Inc.Providing suggested voice-based action queries
WO2016204428A1 (en)2015-06-162016-12-22삼성전자 주식회사Electronic device and control method therefor
US20160378080A1 (en)2015-06-252016-12-29Intel CorporationTechnologies for conversational interfaces for system control
US20170004383A1 (en)2015-06-302017-01-05Adobe Systems IncorporatedSearching untagged images with text-based queries
US20170017648A1 (en)2015-07-152017-01-19Chappy, Inc.Systems and methods for screenshot linking
US9560152B1 (en)2016-01-272017-01-31International Business Machines CorporationPersonalized summary of online communications
US20170031575A1 (en)2015-07-282017-02-02Microsoft Technology Licensing, LlcTailored computing experience based on contextual signals
US20170075878A1 (en)2015-09-152017-03-16Apple Inc.Emoji and canned responses
KR20170032883A (en)2015-08-192017-03-23시아오미 아이엔씨.Method, device and terminal device for playing game in chatting interface
US20170093769A1 (en)2015-09-302017-03-30Apple Inc.Shared content presentation with integrated messaging
US20170098122A1 (en)2010-06-072017-04-06Affectiva, Inc.Analysis of image content with associated manipulation of expression presentation
US20170098152A1 (en)2015-10-022017-04-06Adobe Systems IncorporatedModifying at least one attribute of an image with at least one attribute extracted from another image
US9633048B1 (en)2015-11-162017-04-25Adobe Systems IncorporatedConverting a text sentence to a series of images
US20170118152A1 (en)2015-10-272017-04-27Line CorporationMessage providing methods and apparatuses, display control methods and apparatuses, and computer-readable mediums storing computer programs for executing methods
US20170134316A1 (en)2015-11-102017-05-11Wrinkl, Inc.Integrating actionable objects into an on-line chat communications platform
US20170142046A1 (en)2015-11-172017-05-18International Business Machines CorporationIdentifying relevant content contained in message streams that appear to be irrelevant
US20170149703A1 (en)2014-07-032017-05-25Nuance Communications, Inc.System and method for suggesting actions based upon incoming messages
US20170147202A1 (en)2015-11-242017-05-25Facebook, Inc.Augmenting text messages with emotion information
US20170153792A1 (en)2015-11-302017-06-01Samsung Electronics Co., Ltd.User terminal device and displaying method thereof
US20170171117A1 (en)2015-12-102017-06-15International Business Machines CorporationMessage Suggestion Using Dynamic Information
US20170180276A1 (en)2015-12-212017-06-22Google Inc.Automatic suggestions and other content for messaging applications
US20170180294A1 (en)2015-12-212017-06-22Google Inc.Automatic suggestions for message exchange threads
US20170185236A1 (en)2015-12-282017-06-29Microsoft Technology Licensing, LlcIdentifying image comments from similar images
US20170187654A1 (en)2015-12-292017-06-29Line CorporationNon-transitory computer-readable recording medium, method, system, and apparatus for exchanging message
US9715496B1 (en)2016-07-082017-07-25Asapp, Inc.Automatically responding to a request of a user
US9727584B2 (en)2012-05-302017-08-08Google Inc.Refining image annotations
US20170250930A1 (en)2016-02-292017-08-31Outbrain Inc.Interactive content recommendation personalization assistant
US20170250935A1 (en)2016-02-252017-08-31Facebook, Inc.Techniques for messaging bot app interactions
US20170250936A1 (en)*2016-02-252017-08-31Facebook, Inc.Techniques for messaging bot rich communication
US20170288942A1 (en)2016-03-302017-10-05Microsoft Technology Licensing, LlcPortal for Provisioning Autonomous Software Agents
US20170293834A1 (en)2016-04-112017-10-12Facebook, Inc.Techniques to respond to user requests using natural-language machine learning based on branching example conversations
US20170308589A1 (en)2016-04-262017-10-26Facebook, Inc.Recommendations from Comments on Online Social Networks
US20170324868A1 (en)2016-05-062017-11-09Genesys Telecommunications Laboratories, Inc.System and method for monitoring progress of automated chat conversations
US9817813B2 (en)2014-01-082017-11-14Genesys Telecommunications Laboratories, Inc.Generalized phrases in automatic speech recognition systems
US20170339076A1 (en)2012-02-142017-11-23Salesforce.Com, Inc.Smart messaging for computer-implemented devices
US20170344224A1 (en)2016-05-272017-11-30Nuance Communications, Inc.Suggesting emojis to users for insertion into text-based messages
US20170359701A1 (en)2016-06-122017-12-14Apple Inc.Sticker distribution system for messaging apps
US20170359703A1 (en)2016-06-122017-12-14Apple Inc.Layers in messaging applications
US20170357432A1 (en)2016-06-122017-12-14Apple Inc.Image creation app in messaging app
US20170357442A1 (en)2016-06-122017-12-14Apple Inc.Messaging application interacting with one or more extension applications
US20170359282A1 (en)2016-06-122017-12-14Apple Inc.Conversion of text relating to media content and media extension apps
US20170359279A1 (en)2016-06-122017-12-14Apple Inc.Messaging application interacting with one or more extension applications
US20170359285A1 (en)2016-06-122017-12-14Apple Inc.Conversion of detected url in text message
US20170359281A1 (en)2016-06-122017-12-14Apple Inc.Polling extension application for interacting with a messaging application
US20170359702A1 (en)2016-06-122017-12-14Apple Inc.Message extension app store
US20170359283A1 (en)2016-06-122017-12-14Apple Inc.Music creation app in messaging app
US20170366479A1 (en)2016-06-202017-12-21Microsoft Technology Licensing, LlcCommunication System
US20180004397A1 (en)2016-06-292018-01-04Google Inc.Systems and Methods of Providing Content Selection
US20180005272A1 (en)2016-06-302018-01-04Paypal, Inc.Image data detection for micro-expression analysis and targeted data services
US20180005288A1 (en)*2016-06-302018-01-04Paypal, Inc.Communicating in chat sessions using chat bots to provide real-time recommendations for negotiations
US20180032997A1 (en)2012-10-092018-02-01George A. GordonSystem, method, and computer program product for determining whether to prompt an action by a platform in connection with a mobile device
US20180032499A1 (en)2016-07-282018-02-01Google Inc.Automatically Generating Spelling Suggestions and Corrections Based on User Context
US20180060705A1 (en)2016-08-302018-03-01International Business Machines CorporationImage text analysis for identifying hidden text
US20180083901A1 (en)2016-09-202018-03-22Google LlcAutomatic response suggestions based on images received in messaging applications
US20180083894A1 (en)2016-09-202018-03-22Google Inc.Bot interaction
US20180083898A1 (en)2016-09-202018-03-22Google LlcSuggested responses based on message stickers
US20180089230A1 (en)2016-09-292018-03-29Baidu Online Network Technology (Beijing) Co., Ltd.Search system, method and apparatus
US20180090135A1 (en)2016-09-232018-03-29Microsoft Technology Licensing, LlcConversational Bookmarks
US20180109526A1 (en)2016-09-202018-04-19Google Inc.Bot permissions
WO2018089109A1 (en)2016-11-122018-05-17Google LlcDetermining graphical elements for inclusion in an electronic communication
US20180196854A1 (en)2017-01-112018-07-12Google Inc.Application extension for generating automatic search queries
US20180210874A1 (en)2017-01-252018-07-26Google LlcAutomatic suggested responses to images received in messages using language model
US20180293601A1 (en)2017-04-102018-10-11Wildfire Systems, Inc.Virtual keyboard trackable referral system
US20180309706A1 (en)2015-11-102018-10-25Samsung Electronics Co., Ltd.User terminal device for recommending response message and method therefor
US20180316637A1 (en)2017-05-012018-11-01Microsoft Technology Licensing, LlcConversation lens for context
US20180322403A1 (en)2017-05-052018-11-08Liveperson, Inc.Dynamic response prediction for improved bot task processing
US20180336226A1 (en)2017-05-162018-11-22Google LlcImage archival based on image categories
US10146748B1 (en)2014-09-102018-12-04Google LlcEmbedding location information in a media collaboration using natural language processing
US20180352393A1 (en)2017-06-022018-12-06Apple Inc.Messaging system interacting with dynamic extension app
US20180367483A1 (en)2017-06-152018-12-20Google Inc.Embedded programs and interfaces for chat conversations
US20180367484A1 (en)2017-06-152018-12-20Google Inc.Suggested items for use with embedded applications in chat conversations
US20180373683A1 (en)2014-04-232018-12-27Klickafy, LlcClickable emoji
US20190204868A1 (en)2016-09-052019-07-04Samsung Electronics Co., Ltd.Electronic device and control method therefor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP4485035B2 (en)*2000-08-292010-06-16富士通株式会社 Virtual space security methods
TW200910845A (en)*2007-08-172009-03-01Mobot Technology IncLocation based anonymous instant message exchange method and system

Patent Citations (345)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5963649A (en)1995-12-191999-10-05Nec CorporationMessage authorization system for authorizing message for electronic document
US6092102A (en)1997-10-242000-07-18University Of Pittsburgh Of The Commonwealth System Of Higher EducationSystem and method for notifying users about information or events of an enterprise
US7904187B2 (en)1999-02-012011-03-08Hoffberg Steven MInternet appliance system and method
JP2000298676A (en)1999-04-142000-10-24Bandai Co Ltd Information provision device
US20020040297A1 (en)2000-09-292002-04-04Professorq, Inc.Natural-language voice-activated personal assistant
JP2002132804A (en)2000-10-242002-05-10Sanyo Electric Co LtdUser support system
US20020103837A1 (en)2001-01-312002-08-01International Business Machines CorporationMethod for handling requests for information in a natural language understanding system
US20050146621A1 (en)2001-09-102005-07-07Nikon Technologies, Inc.Digital camera system, image storage apparatus, and digital camera
US20030182374A1 (en)2001-10-242003-09-25Debashis HaldarMethod and system for controlling scope of user participation in a communication session
US20030105589A1 (en)2001-11-302003-06-05Wen-Yin LiuMedia agent
EP1376392A2 (en)2002-06-272004-01-02Microsoft CorporationMethod and system for associating actions with semantic labels in electronic documents
CN1475908A (en)2002-06-272004-02-18Method and system used in making action relate to semantic marker in electronic file
EP1394713A1 (en)2002-08-282004-03-03Microsoft CorporationSystem and method for shared integrated online social interaction
US20110107223A1 (en)2003-01-062011-05-05Eric TiltonUser Interface For Presenting Presentations
US20060156209A1 (en)2003-02-252006-07-13Satoshi MatsuuraApplication program prediction method and mobile terminal
WO2004104758A2 (en)2003-05-162004-12-02Picasa, Inc.Networked chat and media sharing systems and methods
CN102158431A (en)2003-05-162011-08-17谷歌公司Method of providing and performing immediate message, machine readable media and graphical user interface
US8645697B1 (en)2003-08-082014-02-04Radix Holdings, LlcMessage authorization
US20080153526A1 (en)2003-09-122008-06-26Core Mobility, Inc.Interface for message authorizing
US20120041941A1 (en)2004-02-152012-02-16Google Inc.Search Engines and Systems with Handheld Document Data Capture Devices
JP2012027950A (en)2004-04-192012-02-09Yahoo IncTechnique for inline searching in instant messenger environment
US20060029106A1 (en)2004-06-142006-02-09Semandex Networks, Inc.System and method for providing content-based instant messaging
US20060004685A1 (en)2004-06-302006-01-05Nokia CorporationAutomated grouping of image and other user data
US20060021023A1 (en)2004-07-212006-01-26International Business Machines CorporationReal-time voting based authorization in an autonomic workflow process using an electronic messaging system
CN1989497A (en)2004-07-272007-06-27西门子通讯公司Method and apparatus for autocorrelation of instant messages
US20120322428A1 (en)2004-09-302012-12-20Motedata Inc.Network of tags
US20060150119A1 (en)2004-12-312006-07-06France TelecomMethod for interacting with automated information agents using conversational queries
US20060172749A1 (en)2005-01-312006-08-03Sweeney Robert JPermission based text messaging
US7603413B1 (en)2005-04-072009-10-13Aol LlcUsing automated agents to facilitate chat communications
US20070030364A1 (en)2005-05-112007-02-08Pere ObradorImage management
US20070094217A1 (en)2005-08-042007-04-26Christopher RonnewinkelConfidence indicators for automated suggestions
CN1988461A (en)2005-12-23 2007-06-27腾讯科技(深圳)有限公司Chat scene music playing method and system for instant communication tool
US20070162942A1 (en)2006-01-092007-07-12Kimmo HamynenDisplaying network objects in mobile devices based on geolocation
US20070244980A1 (en)2006-04-142007-10-18Microsoft CorporationInstant Messaging Plug-Ins
US20080086522A1 (en)*2006-10-052008-04-10Microsoft CorporationBot Identification and Control
WO2008045811A2 (en)2006-10-102008-04-17Orgoo, Inc.Integrated electronic mail and instant messaging system
US20080114837A1 (en)*2006-11-102008-05-15Microsoft CorporationOn-Line Virtual Robot (Bot) Security Agent
US20080120371A1 (en)2006-11-162008-05-22Rajat GopalRelational framework for non-real-time audio/video collaboration
US20080189367A1 (en)2007-02-012008-08-07Oki Electric Industry Co., Ltd.User-to-user communication method, program, and apparatus
US20100118115A1 (en)2007-06-142010-05-13Masafumi TakahashiImage data receiving device, operation device, operation system, data structure of image data set, control method, operation method, program, and storage medium
US20090007019A1 (en)2007-06-272009-01-01Ricoh Company, LimitedImage processing device, image processing method, and computer program product
CN101159576A (en)2007-08-302008-04-09腾讯科技(深圳)有限公司Chatting method, chatting room client terminal, system management background and server
US20110145068A1 (en)2007-09-172011-06-16King Martin TAssociating rendered advertisements with digital content
US20090076795A1 (en)2007-09-182009-03-19Srinivas BangaloreSystem And Method Of Generating Responses To Text-Based Messages
US20090119584A1 (en)2007-11-022009-05-07Steve HerbstSoftware Tool for Creating Outlines and Mind Maps that Generates Subtopics Automatically
KR20110003462A (en)2007-12-172011-01-12플레이 메가폰 Systems and methods for managing interactions between users and interactive systems
CN101983396A (en)2008-03-312011-03-02皇家飞利浦电子股份有限公司Method for modifying a representation based upon a user instruction
US20090282114A1 (en)2008-05-082009-11-12Junlan FengSystem and method for generating suggested responses to an email
US20090327436A1 (en)2008-06-302009-12-31Chen Shihn-ChengInstant messaging network control module
US8423577B1 (en)2008-07-212013-04-16Sprint Communications Company L.P.Providing suggested actions in response to textual communications
JP2010044495A (en)2008-08-112010-02-25Sharp CorpInformation processor, information processing method and information processing program
US20110212717A1 (en)2008-08-192011-09-01Rhoads Geoffrey BMethods and Systems for Content Processing
US20110230174A1 (en)2008-08-252011-09-22France TelecomSystem and method to identify and transfer to a wireless device actionable items based on user selected content
US8391618B1 (en)2008-09-192013-03-05Adobe Systems IncorporatedSemantic image classification and search
US20100077029A1 (en)2008-09-242010-03-25International Business Machines CorporationSystem and method for intelligent multi-person chat history injection
USD611053S1 (en)2008-11-242010-03-02Microsoft CorporationTransitional user interface for a portion of a display screen
USD599363S1 (en)2008-11-242009-09-01Microsoft CorporationTransitional cursor user interface for a portion of a display screen
US20130036162A1 (en)2009-02-102013-02-07Mikekoenigs.Com, Inc.Automated Communication Techniques
US20140232889A1 (en)2009-02-182014-08-21Google Inc.Automatically capturing information such as capturing information using a document-aware device
US20100228590A1 (en)2009-03-032010-09-09International Business Machines CorporationContext-aware electronic social networking
US20120096097A1 (en)2009-03-262012-04-19Ntt Docomo, Inc.Communication terminal and mail return method
US20100260426A1 (en)2009-04-142010-10-14Huang Joseph Jyh-HueiSystems and methods for image recognition using mobile devices
CN102395966A (en)2009-04-142012-03-28高通股份有限公司 Systems and methods for image recognition using mobile devices
US20120131520A1 (en)2009-05-142012-05-24Tang ding-yuanGesture-based Text Identification and Selection in Images
US20150250936A1 (en)2009-05-272015-09-10Thoratec CorporationMonitoring of redundant conductors
US9043407B1 (en)2009-06-122015-05-26Avaya Inc.Interactive user interface to communication-enabled business process platforms method and apparatus
USD651609S1 (en)2009-06-262012-01-03Microsoft CorporationDisplay screen with an animated image
CN102667754A (en)2009-07-022012-09-12乐宅睦有限公司System and method for enhancing digital content
WO2011002989A1 (en)2009-07-022011-01-06Livechime, Inc.System and method for enhancing digital content
US8515958B2 (en)2009-07-282013-08-20Fti Consulting, Inc.System and method for providing a classification suggestion for concepts
US20110074685A1 (en)2009-09-302011-03-31At&T Mobility Ii LlcVirtual Predictive Keypad
US20110098056A1 (en)2009-10-282011-04-28Rhoads Geoffrey BIntuitive computing methods and systems
US20110164163A1 (en)2010-01-052011-07-07Apple Inc.Synchronized, interactive augmented reality displays for multifunction devices
US20120245944A1 (en)2010-01-182012-09-27Apple Inc.Intelligent Automated Assistant
USD624927S1 (en)2010-01-192010-10-05Microsoft CorporationUser interface for a portion of a display screen
US8650210B1 (en)2010-02-092014-02-11Google Inc.Identifying non-search actions based on a search query
US20110202836A1 (en)2010-02-122011-08-18Microsoft CorporationTyping assistance for editing
KR20130008036A (en)2010-03-052013-01-21퀄컴 인코포레이티드Automated messaging response in wireless communication systems
US8266109B1 (en)2010-03-092012-09-11Symantec CorporationPerformance of scanning containers for archiving
US20110221912A1 (en)2010-03-102011-09-15Nikon CorporationImage data processing system
CN102222079A (en)2010-04-072011-10-19佳能株式会社Image processing device and image processing method
US20110252207A1 (en)2010-04-082011-10-13Oracle International CorporationDynamic content archiving
US20110252108A1 (en)2010-04-082011-10-13Microsoft CorporationDesignating automated agents as friends in a social network service
US20170098122A1 (en)2010-06-072017-04-06Affectiva, Inc.Analysis of image content with associated manipulation of expression presentation
USD648343S1 (en)2010-06-242011-11-08Microsoft CorporationDisplay screen with user interface
USD648735S1 (en)2010-06-252011-11-15Microsoft CorporationDisplay screen with animated user interface
US20120030289A1 (en)2010-07-302012-02-02Avaya Inc.System and method for multi-model, context-sensitive, real-time collaboration
US20120033876A1 (en)2010-08-052012-02-09Qualcomm IncorporatedIdentifying visual media content captured by camera-enabled mobile device
US20120041973A1 (en)2010-08-102012-02-16Samsung Electronics Co., Ltd.Method and apparatus for providing information about an identified object
US20120042036A1 (en)2010-08-102012-02-16Microsoft CorporationLocation and contextual-based mobile application promotion and delivery
US20140150068A1 (en)2010-08-172014-05-29Facebook, Inc.Managing social network accessibility based on age
US9262517B2 (en)2010-08-182016-02-16At&T Intellectual Property I, L.P.Systems and methods for social media data mining
US20120089847A1 (en)2010-10-062012-04-12Research In Motion LimitedMethod of obtaining authorization for accessing a service
CN102467574A (en)2010-11-152012-05-23Lg电子株式会社Mobile terminal and metadata setting method thereof
CN103548025A (en)2011-01-042014-01-29英特尔公司 Method, terminal device, and computer-readable recording medium for supporting acquisition of an object contained in an input image
US20120179717A1 (en)2011-01-112012-07-12Sony CorporationSystem and method for effectively providing entertainment recommendations to device users
US8688698B1 (en)2011-02-112014-04-01Google Inc.Automatic text suggestion
US20120278164A1 (en)2011-02-232012-11-01Nova SpivackSystems and methods for recommending advertisement placement based on in network and cross network online activity analysis
US20120224743A1 (en)2011-03-042012-09-06Rodriguez Tony FSmartphone-based methods and systems
CN103493035A (en)2011-03-152014-01-01谷歌公司Inline user addressing in chat and document editing sessions
US8938669B1 (en)2011-03-152015-01-20Google Inc.Inline user addressing in chat and document editing sessions
CA2828011A1 (en)2011-03-152012-09-20Google, Inc.Method, product and system for managing invitations to a chat session
US20120239761A1 (en)2011-03-152012-09-20HDmessaging Inc.Linking context-based information to text messages
US20130262574A1 (en)2011-03-152013-10-03Gabriel CohenInline User Addressing in Chat Sessions
US8554701B1 (en)2011-03-182013-10-08Amazon Technologies, Inc.Determining sentiment of sentences from customer reviews
JP2012221480A (en)2011-04-062012-11-12L Is B CorpMessage processing system
US20140129942A1 (en)2011-05-032014-05-08Yogesh Chunilal RathodSystem and method for dynamically providing visual action or activity news feed
EP2523436A1 (en)2011-05-112012-11-14Alcatel LucentMobile device and method of managing applications for a mobile device
USD658201S1 (en)2011-05-272012-04-24Microsoft CorporationDisplay screen with animated user interface
USD658678S1 (en)2011-05-272012-05-01Microsoft CorporationDisplay screen with animated user interface
USD658677S1 (en)2011-05-272012-05-01Microsoft CorporationDisplay screen with animated user interface
US9230241B1 (en)2011-06-162016-01-05Google Inc.Initiating a communication session based on an associated content item
WO2012173681A1 (en)2011-06-172012-12-20Ebay, Inc.Passporting credentials between a mobile app and a web browser
US8589407B2 (en)2011-06-172013-11-19Google Inc.Automated generation of suggestions for personalized reactions in a social network
US8700480B1 (en)2011-06-202014-04-15Amazon Technologies, Inc.Extracting quotes from customer reviews regarding collections of items
US20130021266A1 (en)2011-07-212013-01-24Imerj LLCMethods of displaying a second view
EP2560104A2 (en)2011-08-192013-02-20Disney Enterprises, Inc.Phrase prediction for chat messages
US20130050507A1 (en)2011-08-292013-02-28Panasonic CorporationRecipe Based Real-time Assistance for Digital Image Capture and Other Consumer Electronics Devices
US20130061148A1 (en)2011-09-012013-03-07Qualcomm IncorporatedSystems and methods involving augmented menu using mobile device
US20130073366A1 (en)2011-09-152013-03-21Stephan HEATHSystem and method for tracking, utilizing predicting, and implementing online consumer browsing behavior, buying patterns, social networking communications, advertisements and communications, for online coupons, products, goods & services, auctions, and service providers using geospatial mapping technology, and social networking
CN103226949A (en)2011-09-302013-07-31苹果公司Using context information to facilitate processing of commands in a virtual assistant
KR20130050871A (en)2011-11-08 2013-05-16(주)카카오Method of providing a lot of services extended from an instant messaging service and the instant messaging service
KR20140093949A (en)2011-11-152014-07-29마이크로소프트 코포레이션Search augmented menu and configuration for computer applications
USD673172S1 (en)2011-11-212012-12-25Microsoft CorporationDisplay screen with animated graphical user interface
KR20130061387A (en)2011-12-012013-06-11엔에이치엔(주)System and method for providing information interactively by instant messaging application
USD699744S1 (en)2012-01-062014-02-18Microsoft CorporationDisplay screen with an animated graphical user interface
USD701228S1 (en)2012-01-062014-03-18Samsung Electronics Co., Ltd.Display screen or portion thereof with transitional graphical user interface
USD716338S1 (en)2012-01-092014-10-28Samsung Electronics Co., Ltd.Display screen or portion thereof for a transitional graphical user interface
USD705802S1 (en)2012-02-072014-05-27Microsoft CorporationDisplay screen with animated graphical user interface
USD705251S1 (en)2012-02-092014-05-20Microsoft CorporationDisplay screen with animated graphical user interface
US20170339076A1 (en)2012-02-142017-11-23Salesforce.Com, Inc.Smart messaging for computer-implemented devices
US20140372349A1 (en)2012-02-142014-12-18British Sky Broadcasting LimitedMethod of machine learning classes of search queries
USD699739S1 (en)2012-02-222014-02-18Microsoft CorporationDisplay screen with animated graphical user interface
US20130218877A1 (en)2012-02-222013-08-22Salesforce.Com, Inc.Systems and methods for context-aware message tagging
USD701527S1 (en)2012-02-232014-03-25Htc CorporationDisplay screen with transitional graphical user interface
USD701528S1 (en)2012-02-242014-03-25Htc CorporationDisplay screen with transitional graphical user interface
US20130260727A1 (en)2012-03-292013-10-03Digimarc Corp.Image-related methods and arrangements
US9595059B2 (en)2012-03-292017-03-14Digimarc CorporationImage-related methods and arrangements
US9727584B2 (en)2012-05-302017-08-08Google Inc.Refining image annotations
USD705244S1 (en)2012-06-202014-05-20Microsoft CorporationDisplay screen with animated graphical user interface
US20130346235A1 (en)2012-06-202013-12-26Ebay, Inc.Systems, Methods, and Computer Program Products for Caching of Shopping Items
US20140004889A1 (en)2012-06-272014-01-02Braxton K. DavisMethod and apparatus for generating a suggested message to be sent over a network
US9674120B2 (en)2012-06-272017-06-06At&T Intellectual Property I, L.P.Method and apparatus for generating a suggested message to be sent over a network
US9191786B2 (en)2012-06-272015-11-17At&T Intellectual Property I, L.P.Method and apparatus for generating a suggested message to be sent over a network
US20140012927A1 (en)2012-07-092014-01-09Ben GertzfieldCreation of real-time conversations based on social location information
EP2688014A1 (en)2012-07-172014-01-22Samsung Electronics Co., LtdMethod and Apparatus for Recommending Texts
US9019415B2 (en)2012-07-262015-04-28Qualcomm IncorporatedMethod and apparatus for dual camera shutter
KR20150037935A (en)2012-07-302015-04-08마이크로소프트 코포레이션Generating string predictions using contexts
US20140035846A1 (en)2012-08-012014-02-06Yeonhwa LeeMobile terminal and controlling method thereof
USD695755S1 (en)2012-08-062013-12-17Samsung Electronics Co., Ltd.TV monitor with graphical user interface
US20140047413A1 (en)2012-08-092014-02-13Modit, Inc.Developing, Modifying, and Using Applications
JP2015531136A (en)2012-08-202015-10-29フェイスブック,インク. Provision of content using inferred topics extracted from communications in social networking systems
USD706802S1 (en)2012-08-282014-06-10Samsung Electronics Co., Ltd.Portable electronic device displaying transitional graphical user interface
EP2703980A2 (en)2012-08-282014-03-05Samsung Electronics Co., Ltd.Text recognition apparatus and method for a terminal
US20140067371A1 (en)2012-08-312014-03-06Microsoft CorporationContext sensitive auto-correction
US20140071324A1 (en)2012-09-122014-03-13Panasonic CorporationImaging apparatus
US20140088954A1 (en)2012-09-272014-03-27Research In Motion LimitedApparatus and method pertaining to automatically-suggested emoticons
US20180032997A1 (en)2012-10-092018-02-01George A. GordonSystem, method, and computer program product for determining whether to prompt an action by a platform in connection with a mobile device
US20140108562A1 (en)2012-10-122014-04-17John PanzerAutomatically Suggesting Groups Based on Past User Interaction
JP2014086088A (en)2012-10-192014-05-12Samsung Electronics Co LtdDisplay device, display device control method, and information processor for controlling display device
USD714821S1 (en)2012-10-242014-10-07Microsoft CorporationDisplay screen with animated graphical user interface
US20150286371A1 (en)2012-10-312015-10-08Aniways Advertising Solutions Ltd.Custom emoticon generation
US20140156801A1 (en)2012-12-042014-06-05Mobitv, Inc.Cowatching and connected platforms using a push architecture
US20140163954A1 (en)2012-12-062014-06-12Microsoft CorporationCommunication context based predictive-text suggestion
US20140164506A1 (en)2012-12-102014-06-12Rawllin International Inc.Multimedia message having portions of networked media content
US20140171133A1 (en)2012-12-182014-06-19Google Inc.Query response
US20140228009A1 (en)2012-12-262014-08-14Tencent Technology (Shenzhen) Company LimitedMethod and system for notification between mobile terminals during communication
US20140189027A1 (en)2012-12-312014-07-03Huawei Technologies Co., Ltd.Message Processing Method, Terminal and System
US20140189538A1 (en)2012-12-312014-07-03Motorola Mobility LlcRecommendations for Applications Based on Device Context
US9020956B1 (en)2012-12-312015-04-28Google Inc.Sentiment and topic based content determination methods and systems
US20140195621A1 (en)2013-01-082014-07-10Vmware, Inc.Intelligent chat system
US20140201675A1 (en)2013-01-112014-07-17Samsung Electronics Co., Ltd.Method and mobile device for providing recommended items based on context awareness
JP2014142919A (en)2013-01-222014-08-07Nhn Business Platform CorpMethod and system for providing multi-user messenger service
US20140237057A1 (en)2013-02-212014-08-21Genesys Telecommunications Laboratories, Inc.System and method for processing private messages in a contact center
JP2014170397A (en)2013-03-042014-09-18L Is B Corp Message system
USD704726S1 (en)2013-03-042014-05-13Roger Leslie MaxwellDisplay screen or portion thereof with animated graphical user interface
US20140344058A1 (en)2013-03-152014-11-20Fision Holdings, IncSystems and methods for distributed marketing automation
US8825474B1 (en)2013-04-162014-09-02Google Inc.Text suggestion output using past interaction data
US20140317030A1 (en)2013-04-222014-10-23Palo Alto Research Center IncorporatedMethod and apparatus for customizing conversation agents based on user characteristics
US20140337438A1 (en)2013-05-092014-11-13Ebay Inc.System and method for suggesting a phrase based on a context
US20140372540A1 (en)2013-06-132014-12-18Evernote CorporationInitializing chat sessions by pointing to content
US20150006143A1 (en)2013-06-272015-01-01Avaya Inc.Semantic translation model training
US20150026642A1 (en)2013-07-162015-01-22Pinterest, Inc.Object based contextual menu controls
US20150026101A1 (en)2013-07-172015-01-22Xerox CorporationImage search system and method for personalized photo applications using semantic networks
US9330110B2 (en)2013-07-172016-05-03Xerox CorporationImage search system and method for personalized photo applications using semantic networks
US20150244653A1 (en)2013-07-192015-08-27Tencent Technology (Shenzhen) Company LimitedMethods and systems for creating auto-reply messages
US20150032724A1 (en)2013-07-232015-01-29Xerox CorporationSystem and method for auto-suggesting responses based on social conversational contents in customer care services
CN105830104A (en)2013-08-142016-08-03脸谱公司Methods and systems for facilitating e-commerce payments
US20150058720A1 (en)2013-08-222015-02-26Yahoo! Inc.System and method for automatically suggesting diverse and personalized message completions
CN104035947A (en)2013-09-162014-09-10腾讯科技(深圳)有限公司Method and device for recommending interest points, and method and device for obtaining recommended interest points
EP2852105A1 (en)2013-09-202015-03-25Ignazio Di ToccoComputer system and related process supporting the communication of users located in the same geographical area, in order to establish a starting contact leading to a personal communication
US20150088998A1 (en)2013-09-262015-03-26International Business Machines CorporationAutomatic Question Generation and Answering Based on Monitored Messaging Sessions
US20150095855A1 (en)2013-09-272015-04-02Microsoft CorporationActionable content displayed on a touch screen
US20150100537A1 (en)2013-10-032015-04-09Microsoft CorporationEmoji for Text Predictions
CN105683874A (en)2013-10-032016-06-15微软技术许可有限责任公司 Emoji for Text Prediction
US8996639B1 (en)2013-10-152015-03-31Google Inc.Predictive responses to incoming communications
US20150127453A1 (en)2013-11-042015-05-07Meemo, LlcWord recognition and ideograph or in-app advertising system
CN105940397A (en)2013-12-12 2016-09-14移动熨斗公司Application synchronization
CN105814519A (en)2013-12-122016-07-27触摸式有限公司System and method for inputting images or labels into electronic devices
US20150171133A1 (en)2013-12-182015-06-18SK Hynix Inc.Image sensor and method for fabricating the same
US20150178388A1 (en)2013-12-192015-06-25Adobe Systems IncorporatedInteractive communication augmented with contextual information
US20150178371A1 (en)2013-12-232015-06-2524/7 Customer, Inc.Systems and methods for facilitating dialogue mining
US20150185995A1 (en)2013-12-312015-07-02Google Inc.Systems and methods for guided user actions
US9817813B2 (en)2014-01-082017-11-14Genesys Telecommunications Laboratories, Inc.Generalized phrases in automatic speech recognition systems
US20150207765A1 (en)2014-01-172015-07-23Nathaniel BrantinghamMessaging Service with Conversation Suggestions
US20150220806A1 (en)2014-01-312015-08-06WiffleDan Inc. DBA Vhoto, Inc.Intelligent determination of aesthetic preferences based on user history and properties
US20150222617A1 (en)2014-02-052015-08-06Facebook, Inc.Controlling Access to Ideograms
US9973705B2 (en)2014-02-102018-05-15Google LlcSmart camera user interface
US20150227797A1 (en)2014-02-102015-08-13Google Inc.Smart camera user interface
US9600724B2 (en)2014-02-102017-03-21Google Inc.Smart camera user interface
US10440279B2 (en)2014-02-102019-10-08Google LlcSmart camera user interface
US20180227498A1 (en)2014-02-102018-08-09Google LlcSmart camera user interface
CN104836720A (en)2014-02-122015-08-12北京三星通信技术研究有限公司Method for performing information recommendation in interactive communication, and device
US20150248411A1 (en)2014-03-032015-09-03Microsoft CorporationPersonalized information query suggestions
CN103841007A (en)2014-03-042014-06-04腾讯科技(深圳)有限公司Data processing method, device and system in online game system
KR20150108096A (en)2014-03-172015-09-25에스케이플래닛 주식회사Method for coupling application with instant messenger, apparatus and system for the same
CN104951428A (en)2014-03-262015-09-30阿里巴巴集团控股有限公司User intention recognition method and device
US20150288633A1 (en)2014-04-042015-10-08Blackberry LimitedSystem and Method for Conducting Private Messaging
US20150302301A1 (en)2014-04-222015-10-22Google Inc.Automatic actions based on contextual replies
US20160162791A1 (en)2014-04-222016-06-09Google Inc.Automatic actions based on contextual replies
US9213941B2 (en)2014-04-222015-12-15Google Inc.Automatic actions based on contextual replies
US20180373683A1 (en)2014-04-232018-12-27Klickafy, LlcClickable emoji
CN103995872A (en)2014-05-212014-08-20王青Method and system for discussions and chat on basis of scene in application
US20150347769A1 (en)2014-05-302015-12-03Apple Inc.Permission request
WO2015183493A1 (en)2014-05-302015-12-03Apple Inc.Permission request
US20150347617A1 (en)2014-05-312015-12-03Apple Inc.Device, method, and graphical user interface for extending functionality of a host application to another application
US20150350117A1 (en)2014-06-032015-12-03International Business Machines CorporationConversation branching for more efficient resolution
US20150370830A1 (en)2014-06-242015-12-24Google Inc.Ranking and selecting images for display from a set of images
US20170149703A1 (en)2014-07-032017-05-25Nuance Communications, Inc.System and method for suggesting actions based upon incoming messages
US20160283454A1 (en)2014-07-072016-09-29Machine Zone, Inc.Systems and methods for identifying and suggesting emoticons
US20160092044A1 (en)2014-07-072016-03-31Google Inc.Method and System for Editing Event Categories
US20160011725A1 (en)2014-07-082016-01-14Verizon Patent And Licensing Inc.Accessible contextual controls within a graphical user interface
US20160043817A1 (en)2014-07-182016-02-11RSS Technologies, LLCMethods and apparatus for locality based broadcasting
US20160037311A1 (en)2014-07-312016-02-04Samsung Electronics Co., Ltd.Message service providing device and method of providing content via the same
CN104202718A (en)2014-08-052014-12-10百度在线网络技术(北京)有限公司Method and device for providing information for user
US20160042252A1 (en)2014-08-052016-02-11Sri InternationalMulti-Dimensional Realization of Visual Content of an Image Collection
US20160043974A1 (en)2014-08-082016-02-11Mastercard International IncorporatedSystems and methods for integrating a chat function into an e-reader application
US20160055246A1 (en)2014-08-212016-02-25Google Inc.Providing automatic actions for mobile onscreen content
US20160065519A1 (en)2014-08-272016-03-03Lenovo (Singapore) Pte, Ltd.Context-aware aggregation of text-based messages
US20160072737A1 (en)2014-09-042016-03-10Microsoft CorporationApp powered extensibility of messages on an existing messaging service
US10146748B1 (en)2014-09-102018-12-04Google LlcEmbedding location information in a media collaboration using natural language processing
WO2016072117A1 (en)2014-11-072016-05-12ソニー株式会社Information processing device, control method, and storage medium
US20160140477A1 (en)2014-11-132016-05-19Xerox CorporationMethods and systems for assigning tasks to workers
US20160140447A1 (en)2014-11-142016-05-19Bublup Technologies, Inc.Deriving Semantic Relationships Based on Empirical Organization of Content by Users
CN105786455A (en)2014-12-172016-07-20深圳市腾讯计算机系统有限公司Method, device and terminal for data processing
US20160179816A1 (en)2014-12-222016-06-23Quixey, Inc.Near Real Time Auto-Suggest Search Results
US20160196040A1 (en)2015-01-022016-07-07Microsoft Technology Licensing, LlcContextual Browser Frame and Entry Box Placement
US20160210279A1 (en)2015-01-192016-07-21Ncsoft CorporationMethods and systems for analyzing communication situation based on emotion information
US20160210962A1 (en)2015-01-192016-07-21Ncsoft CorporationMethods and systems for analyzing communication situation based on dialogue act information
US20160226804A1 (en)2015-02-032016-08-04Google Inc.Methods, systems, and media for suggesting a link to media content
US20160224524A1 (en)2015-02-032016-08-04Nuance Communications, Inc.User generated short phrases for auto-filling, automatically collected during normal text use
US20160234553A1 (en)2015-02-112016-08-11Google Inc.Methods, systems, and media for presenting a suggestion to watch videos
WO2016130788A1 (en)2015-02-122016-08-18Google Inc.Determining reply content for a reply to an electronic communication
US20160284011A1 (en)2015-03-252016-09-29Facebook, Inc.Techniques for social messaging authorization and customization
US20160292217A1 (en)2015-04-022016-10-06Facebook, Inc.Techniques for context sensitive illustrated graphical user interface elements
US20160308794A1 (en)2015-04-162016-10-20Samsung Electronics Co., Ltd.Method and apparatus for recommending reply message
US20160321052A1 (en)2015-04-282016-11-03Google Inc.Entity action suggestion on a mobile device
EP3091445A1 (en)2015-05-082016-11-09BlackBerry LimitedElectronic device and method of determining suggested responses to text-based communications
US20160342895A1 (en)2015-05-212016-11-24Baidu Usa LlcMultilingual image question answering
US20160350304A1 (en)2015-05-272016-12-01Google Inc.Providing suggested voice-based action queries
US20160352656A1 (en)2015-05-312016-12-01Microsoft Technology Licensing, LlcContext-sensitive generation of conversational responses
WO2016204428A1 (en)2015-06-162016-12-22삼성전자 주식회사Electronic device and control method therefor
US20180137097A1 (en)2015-06-162018-05-17Samsung Electronics Co., Ltd.Electronic device and control method therefor
US20160378080A1 (en)2015-06-252016-12-29Intel CorporationTechnologies for conversational interfaces for system control
US20170004383A1 (en)2015-06-302017-01-05Adobe Systems IncorporatedSearching untagged images with text-based queries
US20170017648A1 (en)2015-07-152017-01-19Chappy, Inc.Systems and methods for screenshot linking
US20170031575A1 (en)2015-07-282017-02-02Microsoft Technology Licensing, LlcTailored computing experience based on contextual signals
CN105141503A (en)2015-08-132015-12-09北京北信源软件股份有限公司Novel instant messaging intelligent robot
KR20170032883A (en)2015-08-192017-03-23시아오미 아이엔씨.Method, device and terminal device for playing game in chatting interface
CN105068661A (en)2015-09-072015-11-18百度在线网络技术(北京)有限公司Man-machine interaction method and system based on artificial intelligence
US20170075878A1 (en)2015-09-152017-03-16Apple Inc.Emoji and canned responses
US9467435B1 (en)2015-09-152016-10-11Mimecast North America, Inc.Electronic message threat protection system for authorized users
US20170093769A1 (en)2015-09-302017-03-30Apple Inc.Shared content presentation with integrated messaging
US20170098152A1 (en)2015-10-022017-04-06Adobe Systems IncorporatedModifying at least one attribute of an image with at least one attribute extracted from another image
US20170118152A1 (en)2015-10-272017-04-27Line CorporationMessage providing methods and apparatuses, display control methods and apparatuses, and computer-readable mediums storing computer programs for executing methods
CN105262675A (en)2015-10-292016-01-20北京奇虎科技有限公司Method and apparatus for controlling chat based on electronic book
US20170134316A1 (en)2015-11-102017-05-11Wrinkl, Inc.Integrating actionable objects into an on-line chat communications platform
US20180309706A1 (en)2015-11-102018-10-25Samsung Electronics Co., Ltd.User terminal device for recommending response message and method therefor
US9633048B1 (en)2015-11-162017-04-25Adobe Systems IncorporatedConverting a text sentence to a series of images
US10129193B2 (en)2015-11-172018-11-13International Business Machines CorporationIdentifying relevant content contained in message streams that appear to be irrelevant
US20170142046A1 (en)2015-11-172017-05-18International Business Machines CorporationIdentifying relevant content contained in message streams that appear to be irrelevant
US20170147202A1 (en)2015-11-242017-05-25Facebook, Inc.Augmenting text messages with emotion information
US20170153792A1 (en)2015-11-302017-06-01Samsung Electronics Co., Ltd.User terminal device and displaying method thereof
CN105306281A (en)2015-12-032016-02-03腾讯科技(深圳)有限公司Information processing method and client
US20170171117A1 (en)2015-12-102017-06-15International Business Machines CorporationMessage Suggestion Using Dynamic Information
US20170180294A1 (en)2015-12-212017-06-22Google Inc.Automatic suggestions for message exchange threads
US20170180276A1 (en)2015-12-212017-06-22Google Inc.Automatic suggestions and other content for messaging applications
US20170185236A1 (en)2015-12-282017-06-29Microsoft Technology Licensing, LlcIdentifying image comments from similar images
US20170187654A1 (en)2015-12-292017-06-29Line CorporationNon-transitory computer-readable recording medium, method, system, and apparatus for exchanging message
US9560152B1 (en)2016-01-272017-01-31International Business Machines CorporationPersonalized summary of online communications
US20170250936A1 (en)*2016-02-252017-08-31Facebook, Inc.Techniques for messaging bot rich communication
US20170250935A1 (en)2016-02-252017-08-31Facebook, Inc.Techniques for messaging bot app interactions
US20170250930A1 (en)2016-02-292017-08-31Outbrain Inc.Interactive content recommendation personalization assistant
US20170288942A1 (en)2016-03-302017-10-05Microsoft Technology Licensing, LlcPortal for Provisioning Autonomous Software Agents
US20170293834A1 (en)2016-04-112017-10-12Facebook, Inc.Techniques to respond to user requests using natural-language machine learning based on branching example conversations
US20170308589A1 (en)2016-04-262017-10-26Facebook, Inc.Recommendations from Comments on Online Social Networks
US20170324868A1 (en)2016-05-062017-11-09Genesys Telecommunications Laboratories, Inc.System and method for monitoring progress of automated chat conversations
US20170344224A1 (en)2016-05-272017-11-30Nuance Communications, Inc.Suggesting emojis to users for insertion into text-based messages
CN105898627A (en)2016-05-312016-08-24北京奇艺世纪科技有限公司Video playing method and device
US20170359282A1 (en)2016-06-122017-12-14Apple Inc.Conversion of text relating to media content and media extension apps
US20170357432A1 (en)2016-06-122017-12-14Apple Inc.Image creation app in messaging app
US20170359702A1 (en)2016-06-122017-12-14Apple Inc.Message extension app store
US20170359283A1 (en)2016-06-122017-12-14Apple Inc.Music creation app in messaging app
US20170359281A1 (en)2016-06-122017-12-14Apple Inc.Polling extension application for interacting with a messaging application
US20170359285A1 (en)2016-06-122017-12-14Apple Inc.Conversion of detected url in text message
US20170359701A1 (en)2016-06-122017-12-14Apple Inc.Sticker distribution system for messaging apps
US20170359703A1 (en)2016-06-122017-12-14Apple Inc.Layers in messaging applications
US20170359279A1 (en)2016-06-122017-12-14Apple Inc.Messaging application interacting with one or more extension applications
US20170357442A1 (en)2016-06-122017-12-14Apple Inc.Messaging application interacting with one or more extension applications
US20170366479A1 (en)2016-06-202017-12-21Microsoft Technology Licensing, LlcCommunication System
US20180004397A1 (en)2016-06-292018-01-04Google Inc.Systems and Methods of Providing Content Selection
US20180005288A1 (en)*2016-06-302018-01-04Paypal, Inc.Communicating in chat sessions using chat bots to provide real-time recommendations for negotiations
US20180005272A1 (en)2016-06-302018-01-04Paypal, Inc.Image data detection for micro-expression analysis and targeted data services
US20180012231A1 (en)2016-07-082018-01-11Asapp, IncAutomatically suggesting resources for responding to a request
US9805371B1 (en)2016-07-082017-10-31Asapp, Inc.Automatically suggesting responses to a received message
US9807037B1 (en)2016-07-082017-10-31Asapp, Inc.Automatically suggesting completions of text
US9715496B1 (en)2016-07-082017-07-25Asapp, Inc.Automatically responding to a request of a user
US20180013699A1 (en)2016-07-082018-01-11Asapp, IncAssisting entities in responding to a request of a user
US20180032499A1 (en)2016-07-282018-02-01Google Inc.Automatically Generating Spelling Suggestions and Corrections Based on User Context
US20180060705A1 (en)2016-08-302018-03-01International Business Machines CorporationImage text analysis for identifying hidden text
US20190204868A1 (en)2016-09-052019-07-04Samsung Electronics Co., Ltd.Electronic device and control method therefor
US20210243143A1 (en)2016-09-202021-08-05Google LlcSuggested responses based on message stickers
US20180083898A1 (en)2016-09-202018-03-22Google LlcSuggested responses based on message stickers
US20180083901A1 (en)2016-09-202018-03-22Google LlcAutomatic response suggestions based on images received in messaging applications
US10979373B2 (en)2016-09-202021-04-13Google LlcSuggested responses based on message stickers
US20200106726A1 (en)2016-09-202020-04-02Google LlcSuggested responses based on message stickers
US10547574B2 (en)2016-09-202020-01-28Google LlcSuggested responses based on message stickers
US20180083894A1 (en)2016-09-202018-03-22Google Inc.Bot interaction
US20180109526A1 (en)2016-09-202018-04-19Google Inc.Bot permissions
US10412030B2 (en)2016-09-202019-09-10Google LlcAutomatic response suggestions based on images received in messaging applications
US20180090135A1 (en)2016-09-232018-03-29Microsoft Technology Licensing, LlcConversational Bookmarks
US20180089230A1 (en)2016-09-292018-03-29Baidu Online Network Technology (Beijing) Co., Ltd.Search system, method and apparatus
WO2018089109A1 (en)2016-11-122018-05-17Google LlcDetermining graphical elements for inclusion in an electronic communication
US20180196854A1 (en)2017-01-112018-07-12Google Inc.Application extension for generating automatic search queries
US10146768B2 (en)2017-01-252018-12-04Google LlcAutomatic suggested responses to images received in messages using language model
US20180210874A1 (en)2017-01-252018-07-26Google LlcAutomatic suggested responses to images received in messages using language model
US20180293601A1 (en)2017-04-102018-10-11Wildfire Systems, Inc.Virtual keyboard trackable referral system
US20180316637A1 (en)2017-05-012018-11-01Microsoft Technology Licensing, LlcConversation lens for context
US20180322403A1 (en)2017-05-052018-11-08Liveperson, Inc.Dynamic response prediction for improved bot task processing
US20180336415A1 (en)2017-05-162018-11-22Google LlcSuggested actions for images
US20180336226A1 (en)2017-05-162018-11-22Google LlcImage archival based on image categories
US20180352393A1 (en)2017-06-022018-12-06Apple Inc.Messaging system interacting with dynamic extension app
US20180367483A1 (en)2017-06-152018-12-20Google Inc.Embedded programs and interfaces for chat conversations
US20180367484A1 (en)2017-06-152018-12-20Google Inc.Suggested items for use with embedded applications in chat conversations
US10404636B2 (en)2017-06-152019-09-03Google LlcEmbedded programs and interfaces for chat conversations

Non-Patent Citations (176)

* Cited by examiner, † Cited by third party
Title
Blippar, "Computer Vision API", www.web.blippar.com/computer-vision-api, 4 pages.
Chen, et al., "A Survey of Document Image Classification: problem statement, classifier architecture and performance evaluation", International Journal of Document Analysis and Recognition (IJDAR), vol. 10, No. 1, Aug. 3, 2006, pp. 1-16.
Chen, et al., "Bezel Copy: An Efficient Cross-0 Application Copy-Paste Technique for Touchscreen Smartphones.", Advanced Visual Interfaces. ACM, New York, New York, May 27, 2014, pp. 185-192.
CNIPA, First Office Action (with English translation) for Chinese Patent Application No. 201680082643.2, dated Aug. 5, 2020, 24 pages.
CNIPA, First Office Action (with English translation) for Chinese Patent Application No. 201780069884.8, dated Dec. 17, 2021, 49 pages.
CNIPA, First Office Action (with English translation) for Chinese Patent Application No. 201880019712.4, dated Jun. 22, 2021, 10 pages.
CNIPA, First Office Action for Chinese Patent Application No. 201580016692.1, dated Nov. 2, 2018, 7 pages.
CNIPA, First Office Action for Chinese Patent Application No. 201680070359.3, dated Jun. 3, 2020, 9 pages.
CNIPA, First Office Action for Chinese Patent Application No. 201780056982.8, dated Nov. 19, 2020, 10 pages.
CNIPA, Notice of Proceeding with Registration Formalities and Notice of Granting a Patent Right for Invention (with English translation) for Chinese Patent Application No. 201680070359.3, dated Jul. 5, 2021, 4 pages.
CNIPA, Notice of Proceeding with Registration Formalities and Notice of Granting a Patent Right for Invention (with English translation) for Chinese Patent Application No. 201780056982.8, dated Jul. 13, 2021, 4 pages.
CNIPA, Notice of Proceeding with Registration Formalities and Notice of Granting a Patent Right for Invention (with English translation) for Chinese Patent Application No. 201880019712.4, dated Jan. 6, 2022, 7 pages.
CNIPA, Notification for Patent Registration Formalities and Notification on the Grant of Patent Right for Invention (with English translation) for Chinese Patent Application No. 201680082643.2, dated Jun. 28, 2021, 4 pages.
CNIPA, Second Office Action (with English translation) for Chinese Patent Application No. 201680082643.2, dated Apr. 2, 2021, 13 pages.
CNIPA, Second Office Action for Chinese Patent Application No. 201680070359.3, dated Jan. 6, 2021, 6 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 16825663.4, dated Apr. 16, 2019, 5 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 16825663.4, dated May 7, 2020, 5 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 16825666.7, dated Apr. 23, 2019, 6 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 16825666.7, dated Jun. 18, 2020, 6 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 17780938.1, dated Dec. 9, 2020, 6 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 17780938.1, dated May 18, 2021, 7 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 17794825.4, dated Aug. 4, 2020, 8 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 18716399.3, dated Jul. 3, 2020, 6 pages.
EPO, Communication Pursuant to Article 94(3) EPC for European Patent Application No. 18716400.9, dated Mar. 11, 2021, 9 pages.
EPO, Communication Under Rule 71(3) EPC for European Patent Application No. 16825666.7, dated Oct. 18, 2021, 7 pages.
EPO, Extended European Search Report for European Patent Application No. 15746410.8, dated Sep. 5, 2017, 7 pages.
EPO, Summons to Attend Oral Proceedings for European Patent Application No. 17780938.1, dated Nov. 29, 2021, 11 pages.
EPO, Summons to Attend Oral Proceedings for European Patent Application No. 17794825.4, dated Feb. 12, 2021, 10 pages.
EPO, Summons to Attend Oral Proceedings for European Patent Application No. 18716399.3, dated Jan. 13, 2022, 10 pages.
Examination Report No. 1 for Australian Patent Application No. 2015214298, dated Apr. 24, 2017, 3 pages.
Examination Report No. 2 for Australian Patent Application No. 2015214298, dated Nov. 2, 2017, 3 pages.
Fuxman, Ariel, ""Aw, so cute!": Allo helps you respond to shared photos", Google Research Blog, https://research.googleblog.com/2016/05/aw-so-cute-allo-helps-you-respond-to.html, May 18, 2016, 6 pages.
IPO, First Examination Report for Indian Patent Application No. 201847014172, dated Jun. 17, 2020, 7 pages.
IPO, First Examination Report for Indian Patent Application No. 201847024288, dated Jan. 22, 2021, 7 pages.
IPO, First Examination Report for Indian Patent Application No. 201947014236, dated Mar. 29, 2021, 7 pages.
IPO, First Examination Report for Indian Patent Application No. 201947015830, dated Mar. 17, 2021, 6 pages.
IPO, First Examination Report for Indian Patent Application No. 201947035964, dated Jul. 15, 2021, 7 pages.
JPO, Decision of Rejection for Japanese Patent Application No. 2019-518995, dated Sep. 1, 2021, 5 pages.
JPO, Notice of Allowance (including English translation) for Japanese Patent Application No. 2019-547462, dated May 28, 2020, 2 pages.
JPO, Notice of Allowance (with English translation) for Japanese Patent Application No. 2018-532399, dated Sep. 23, 2020, 2 pages.
JPO, Notice of Allowance (with English translation) for Japanese Patent Application No. 2018-551908, dated Dec. 3, 2019, 2 pages.
JPO, Notice of Allowance (with English translation) for Japanese Patent Application No. 2019-505058, dated Jan. 21, 2020, 2 pages.
JPO, Notice of Allowance (with English translation) for Japanese Patent Application No. 2019-520680, dated Nov. 12, 2019, 2 pages.
JPO, Office Action for Japanese Patent Application No. 2018-532399, dated Jul. 23, 2019, 6 pages.
JPO, Office Action for Japanese Patent Application No. 2018-532399, dated Jun. 16, 2020, 3 pages.
JPO, Office Action for Japanese Patent Application No. 2018-532399, dated Mar. 10, 2020, 4 pages.
JPO, Office Action for Japanese Patent Application No. 2018-551908, dated Aug. 20, 2019, 4 pages.
JPO, Office Action for Japanese Patent Application No. 2019-518995, dated Jan. 19, 2021, 5 pages.
JPO, Office Action for Japanese Patent Application No. 2019-518995, dated Jul. 28, 2020, 5 pages.
JPO, Office Action for Japanese Patent Application No. 2019-547462, dated Feb. 18, 2020, 6 pages.
Kannan, Anjuli et al., "Smart reply: Automated response suggestion for email", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 13, 2016, 10 pages.
Kannan, et al., "Smart Reply: Automated Response Suggestions for Email", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, ACM Press, New York, New York, Aug. 13, 2016, pp. 955-965.
Khandelwal, "Hey Allo! Meet Google's AI-powered Smart Messaging App", The Hacker News, http://web.archive.org/web/20160522155700/https://thehackernews.com/2016/05/google-allo-messenger.html, May 19, 2016, 3 pages.
KIPO, Notice of Allowance (with English translation) for Korean Patent Application No. 10-2019-7020465, dated Aug. 5, 2020, 5 pages.
KIPO, Notice of Allowance (with English translation) for Korean Patent Application No. 10-2019-7024479, dated Jan. 17, 2020, 4 pages.
KIPO, Notice of Allowance for Korean Patent Application No. 10-2019-7011687, dated Sep. 26, 2019, 4 pages.
KIPO, Notice of Final Rejection for Korean Patent Application No. 10-2018-7013953, dated Jun. 13, 2019, 4 pages.
KIPO, Notice of Final Rejection for Korean Patent Application No. 10-2018-7013953, dated May 8, 2019, 4 pages.
KIPO, Notice of Final Rejection for Korean Patent Application No. 10-2018-7019756, dated Jan. 17, 2020, 4 pages.
KIPO, Notice of Final Rejection for Korean Patent Application No. 10-2018-7019756, dated Nov. 25, 2019, 3 pages.
KIPO, Notice of Final Rejection for Korean Patent Application No. 10-2019-7020465, dated Jun. 29, 2020, 4 pages.
KIPO, Notice of Preliminary Rejection (with English translation) for Korean Patent Application No. 10-2019-7020465, dated Jan. 10, 2020, 9 pages.
KIPO, Notice of Preliminary Rejection for Korean Patent Application No. 10-2018-7019756, dated May 13, 2019, 9 pages.
KIPO, Notice of Preliminary Rejection for Korean Patent Application No. 10-2019-7011687, dated May 7, 2019, 3 pages.
KIPO, Notice of Preliminary Rejection for Korean Patent Application No. 10-2019-7024479, dated Sep. 18, 2019, 9 pages.
KIPO, Preliminary Rejection for Korean Patent Application No. 10-2018-7013953, dated Oct. 29, 2018, 5 pages.
Lardinois, F. "Allo brings Google's smarts to messaging", https://techcrunch.com/2016/09/20/allo-brings-googles-smarts-to-messaging/, Sep. 2016, 14 pages.
Lee, Jang Ho et al., "Supporting multi-user, multi-applet workspaces in CBE", Proceedings of the 1996 ACM conference on Computer supported cooperative work, ACM, Nov. 16, 1996, 10 pages.
Mathur, Vishal, "How Google Allo stands out from WhatsApp, WeChat, Facebook Messenger", Retrieved from Internet: https://www.livemint.com/Leisure/6BcwmziLgEueyaL8VIgvHP/GoogleAllo-Machine-learning-smart-features-could-stumble-o.html, Sep. 21, 2016, 8 pages.
Microsoft Corporation, "Windows Messenger for Windows XP", Retrieved from Internet: http://web.archive.org/web/20030606220012/messenger.msn.com/support/features.asp?client=0 on Sep. 22, 2005, Jun. 6, 2003, 3 pages.
Notice of Acceptance for Australian Patent Application No. 2015214298, dated Apr. 20, 2018, 3 pages.
Pieterse et al., "Android botnets on the rise: trends and characteristics", 2012 Information Security for South Africa, Aug. 15-17, 2012, 5 pages.
Pinterest, "Pinterest Lens", www.help.pinterest.com/en/articles/pinterest-lens, 2 pages.
Russell, "Google Allo is the Hangouts Killer We've Been Waiting For", Retrieved from the Internet: http://web.archive.org/web/20160519115534/https://www.technobuffalo.com/2016/05/18/google-allo-hangouts-replacement/, May 18, 2016, 3 pages.
USPTO, Final Office Action for U.S. Appl. No. 15/238,304, dated Nov. 23, 2018, 14 pages.
USPTO, Final Office Action for U.S. Appl. No. 15/386,162, dated Jun. 5, 2019, 13 pages.
USPTO, Final Office Action for U.S. Appl. No. 15/386,760, dated Jan. 10, 2020, 12 pages.
USPTO, Final Office Action for U.S. Appl. No. 15/386,760, dated May 30, 2019, 13 pages.
USPTO, Final Office Action for U.S. Appl. No. 15/912,809, dated Jun. 24, 2020, 8 pages.
USPTO, Final Office Action for U.S. Appl. No. 16/692,821, dated Dec. 22, 2021, 17 pages.
USPTO, Final Office Action for U.S. Appl. No. 16/999,702, dated Feb. 1, 2022, 17 pages.
USPTO, Final Office Action for U.S. Appl. No. 17/129,010, dated Feb. 7, 2022, 10 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 15/350,040, dated Oct. 30, 2018, 4 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 15/386,760, dated Jan. 30, 2019, 8 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 15/624,637, dated Jan. 25, 2019, 5 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 15/912,796, dated Mar. 13, 2020, 5 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 15/912,809, dated Feb. 18, 2020, 18 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 16/003,661, dated Dec. 14, 2018, 16 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 16/436,632, dated Nov. 6, 2020, 5 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 16/999,702, dated Sep. 28, 2021, 5 pages.
USPTO, First Action Interview, Office Action Summary for U.S. Appl. No. 17/129,010, dated Dec. 2, 2021, 4 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/350,040, dated Jul. 16, 2018, 5 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/386,760, dated Nov. 6, 2018, 5 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/415,506, dated Apr. 5, 2018, 5 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/624,637, dated Oct. 19, 2018, 4 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/709,440, dated May 16, 2019, 4 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/912,796, dated Jan. 8, 2020, 5 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 15/912,809, dated Nov. 22, 2019, 5 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 16/003,661, dated Aug. 29, 2018, 6 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 16/436,632, dated Aug. 14, 2020, 5 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 16/695,967, dated Sep. 30, 2021, 6 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 16/999,702, dated Jul. 26, 2021, 6 pages.
USPTO, First Action Interview, Pre-Interview Communication for U.S. Appl. No. 17/129,010, dated Oct. 20, 2021, 4 pages.
USPTO, Non-final Office Action for Design U.S. Appl. No. 29/503,386, dated Feb. 1, 2016, 9 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 14/618,962, dated Feb. 26, 2016, 25 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/238,304, dated Jun. 7, 2018, 17 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/386,162, dated Nov. 27, 2018, 11 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/386,162, dated Nov. 27, 2018, 12 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/386,760, dated Oct. 11, 2019, 12 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/428,821, dated May 18, 2017, 30 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/709,418, dated Nov. 21, 2017, 15 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/709,423, dated May 2, 2019, 21 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 15/946,342, dated Jul. 26, 2018, 40 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 16/560,815, dated May 18, 2020, 16 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 16/569,273, dated Oct. 18, 2019, 21 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 16/692,821, dated Aug. 18, 2021, 16 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 16/881,816, dated Nov. 27, 2020, 15 pages.
USPTO, Non-final Office Action for U.S. Appl. No. 17/224,949, dated Oct. 21, 2021, 13 pages.
USPTO, Notice of Allowance for Design U.S. Appl. No. 29/503,386, dated Jul. 13, 2016, 8 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 14/618,962, dated Nov. 8, 2016, 14 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/238,304, dated Apr. 5, 2019, 7 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/350,040, dated Apr. 24, 2019, 16 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/386,162, dated Aug. 9, 2019, 5 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/386,760, dated Apr. 24, 2020, 9 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/415,506, dated Jul. 23, 2018, 25 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/428,821, dated Jan. 10, 2018, 20 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/624,637, dated Apr. 19, 2019, 6 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/624,638, dated Feb. 28, 2019, 21 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/709,418, dated Mar. 1, 2018, 11 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/709,423, dated Oct. 9, 2019, 19 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/709,440, dated Aug. 6, 2019, 21 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/912,796, dated Aug. 20, 2020, 9 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 15/912,809, dated Sep. 11, 2020, 12 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/003,661, dated May 1, 2019, 11 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/436,632, dated Mar. 3, 2021, 7 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/552,902, dated Aug. 27, 2020, 10 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/560,815, dated Aug. 31, 2020, 11 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/569,273, dated Feb. 20, 2020, 17 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/692,821, dated Mar. 17, 2022, 9 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/695,967, dated Jan. 24, 2022, 21 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/703,699, dated Dec. 11, 2020, 9 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 16/881,816, dated Feb. 4, 2021, 15 pages.
USPTO, Notice of Allowance for U.S. Appl. No. 17/224,949, dated Dec. 6, 2021, 12 pages.
Vinyals, O. et al., "Show and Tell: A Neural Image Caption Generator", arXiv:1411.4555v2 [cs.CV], Apr. 20, 2015, pp. 1-9.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2016/068083, dated Jul. 5, 2018, 9 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2017/046858, dated Feb. 19, 2019, 7 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2017/052349, dated Mar. 26, 2019, 7 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2017/057044, dated Jul. 30, 2019, 9 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2017/52333, dated Dec. 4, 2018, 15 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2018/021028, dated Nov. 28, 2019, 10 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2018/022501, dated Dec. 17, 2019, 7 pages.
WIPO, International Preliminary Report on Patentability for International Patent Application No. PCT/US2018/022503, dated Dec. 17, 2019, 9 pages.
WIPO, International Search Report and Written Opinion for International Application No. PCT/US2018/021028, dated Jun. 15, 2018, 11 pages.
WIPO, International Search Report and Written Opinion for International Patent Application No. PCT/US2015/014414, dated May 11, 2015, 8 pages.
WIPO, International Search Report and Written Opinion for International Patent Application No. PCT/US2016/068083, dated Mar. 9, 2017, 13 pages.
WIPO, International Search Report and Written Opinion for PCT Application No. PCT/US2017/046858, dated Oct. 11, 2017, 10 pages.
WIPO, International Search Report and Written Opinion for PCT Application No. PCT/US2017/052333, dated Nov. 30, 2017, 15 pages.
WIPO, International Search Report for International Patent Application No. PCT/US2016/068083, dated Mar. 9, 2017, 4 pages.
WIPO, International Search Report for International Patent Application No. PCT/US2017/052349, dated Dec. 13, 2017, 5 pages.
WIPO, International Search Report for International Patent Application No. PCT/US2017/052713, dated Dec. 5, 2017, 4 pages.
WIPO, International Search Report for International Patent Application No. PCT/US2017/057044, dated Jan. 18, 2018, 5 pages.
WIPO, International Search Report for International Patent Application No. PCT/US2018/022501, dated May 14, 2018, 4 pages.
WIPO, International Search Report for International Patent Application No. PCT/US2018/022503, dated Aug. 16, 2018, 6 pages.
WIPO, Written Opinion for International Patent Application No. PCT/US2016/068083, dated Mar. 9, 2017, 9 pages.
WIPO, Written Opinion for International Patent Application No. PCT/US2017/052349, dated Dec. 13, 2017, 6 pages.
WIPO, Written Opinion for International Patent Application No. PCT/US2017/052713, dated Dec. 5, 2017, 6 pages.
WIPO, Written Opinion for International Patent Application No. PCT/US2017/057044, dated Jan. 18, 2018, 8 pages.
WIPO, Written Opinion for International Patent Application No. PCT/US2018/022501, dated May 14, 2018, 6 pages.
WIPO, Written Opinion for International Patent Application No. PCT/US2018/022503, dated Aug. 16, 2018, 8 pages.
WIPO, Written Opinion of the International Preliminary Examination Authority for International Patent Application No. PCT/US2017/057044, dated Dec. 20, 2018, 8 pages.
WIPO, Written Opinion of the International Preliminary Examining Authority for International Patent Application No. PCT/US2017/052349, dated Aug. 6, 2018, 9 pages.
WIPO, Written Opinion of the International Preliminary Examining Authority for International Patent Application No. PCT/US2017/052713, dated Oct. 15, 2018, 6 pages.
WIPO, Written Opinion of the International Preliminary Examining Authority for International Patent Application No. PCT/US2017/52333, dated Aug. 17, 2018, 5 pages.
WIPO, Written Opinion of the International Preliminary Examining Authority for International Patent Application No. PCT/US2018/021028, dated Jun. 14, 2019, 11 pages.
Yeh, et al., "Searching the web with mobile images for location recognition", Proceedings of the 2004 IEEE Computer Society Conference on Pattern Recognition, vol. 2, Jun.-Jul. 2004, pp. 1-6.
Zhao, et al., "Cloud-based push-styled mobile botnets: a case study of exploiting the cloud to device messaging service", Proceedings ACSAC '12, Proceedings of the 28th Annual Computer Security Applications Conference, ACM Digital Library, Dec. 3, 2012, pp. 119-128.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US12126739B2 (en) | 2016-09-20 | 2024-10-22 | Google Llc | Bot permissions

Also Published As

Publication number | Publication date
US20220255760A1 (en) | 2022-08-11
US20200099538A1 (en) | 2020-03-26
WO2018057536A1 (en) | 2018-03-29
US10511450B2 (en) | 2019-12-17
CN109716727A (en) | 2019-05-03
US20250047508A1 (en) | 2025-02-06
US20180109526A1 (en) | 2018-04-19
CN109716727B (en) | 2021-10-15
US11336467B2 (en) | 2022-05-17
US12126739B2 (en) | 2024-10-22
JP6659910B2 (en) | 2020-03-04
US20230379173A1 (en) | 2023-11-23
JP2019530050A (en) | 2019-10-17
DE112017003594T5 (en) | 2019-04-25

Similar Documents

Publication | Title
US12126739B2 (en) | Bot permissions
US10798028B2 (en) | Bot interaction
US10146768B2 (en) | Automatic suggested responses to images received in messages using language model
US11303590B2 (en) | Suggested responses based on message stickers
US10862836B2 (en) | Automatic response suggestions based on images received in messaging applications
EP3846393A1 (en) | Identifier for message thread

Legal Events

Date | Code | Title | Description

AS | Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FUNG, SHELBIAN;RODRIGUEZ, ADAM;VOLKOV, ANTON;AND OTHERS;SIGNING DATES FROM 20180222 TO 20180313;REEL/FRAME:059773/0189

FEPP | Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STCF | Information on status: patent grant

Free format text: PATENTED CASE

