CN119003108A - Method, apparatus, device and storage medium for task processing - Google Patents

Method, apparatus, device and storage medium for task processing

Info

Publication number
CN119003108A
Authority
CN
China
Prior art keywords
digital assistant
user
task
message
target task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311569312.XA
Other languages
Chinese (zh)
Inventor
齐俊元
夏勤娴
沈博文
钟信
刘杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202311569312.XA
Priority to PCT/CN2024/131404 (WO2025108132A1)
Publication of CN119003108A
Legal status: Pending


Abstract

According to embodiments of the present disclosure, methods, apparatuses, devices, and storage media for task processing are provided. In a method for task processing, a first message sent by a first digital assistant corresponding to at least a first user is detected, the first message indicating a target task; the relevance of the target task indicated by the first message to a second user is determined; and in response to determining that the target task is associated with the second user, at least a portion of the target task is performed, based on the first message, with a second digital assistant corresponding to the second user. In this way, the intelligence of the digital assistant, the flexibility of its use, and the efficiency of digital-assistant-based task processing can all be improved.

Description

Method, apparatus, device and storage medium for task processing
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, more particularly, relate to methods, apparatuses, devices, and computer-readable storage media for task processing.
Background
With the rapid development of internet technology, the internet has become an important platform for people to acquire and share content; users can access the internet through terminal devices to enjoy various internet services. A terminal device presents content through the user interface of a service component, interacts with the user, and provides services to the user, so a rich interactive interface is an important means of improving user experience. With the development of information technology, terminal devices can provide various services for people's work, life, and so on. For example, a business component that provides services may be deployed on a terminal device, and the terminal device or business component may offer digital-assistant functionality to help the user use the terminal device or business component. How to improve the flexibility of interaction between users and digital assistants is a technical problem currently being explored.
Disclosure of Invention
In a first aspect of the present disclosure, a method of task processing is provided. The method comprises: detecting a first message sent by a first digital assistant corresponding to at least a first user, the first message indicating a target task; determining the relevance of the target task indicated by the first message to a second user; and in response to determining that the target task is associated with the second user, performing at least a portion of the target task with a second digital assistant corresponding to the second user based on the first message.
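The three steps of this first aspect can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the `Message` structure, the membership-based relevance check, and the lambda standing in for the second digital assistant are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str               # digital assistant that sent the message
    task: str                 # target task the message indicates
    participants: set = field(default_factory=set)  # users the task involves

def is_relevant(message: Message, user: str) -> bool:
    # Simplified relevance check: the target task concerns the user if
    # the user appears among the message's participants.
    return user in message.participants

def handle_first_message(message: Message, second_user: str, second_assistant):
    # Step 1: the first message has been detected (we receive it here).
    # Step 2: determine the relevance of the target task to the second user.
    # Step 3: if relevant, execute (part of) the task with the second
    # user's own digital assistant, based on the message.
    if is_relevant(message, second_user):
        return second_assistant(message.task)
    return None

msg = Message("first_assistant", "book a conference room", {"second_user"})
result = handle_first_message(msg, "second_user", lambda t: f"executed: {t}")
```

The key design point this mirrors is that the task originates from one user's assistant but is executed by another user's dedicated assistant, without an immediate instruction from the second user.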
In a second aspect of the present disclosure, a method of task processing is provided. The method comprises: a second digital assistant acquiring first event information triggered by a first digital assistant; and the second digital assistant executing a first task based on the first event information and outputting first information related to the first task.
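The second aspect is event-driven: one assistant triggers event information, another consumes it. A minimal sketch with a hypothetical event bus (the bus, the event dictionary shape, and the handler are illustrative assumptions, not the patent's mechanism):

```python
class EventBus:
    """Toy event channel between digital assistants."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def trigger(self, event):
        # Deliver the event information to every subscribed assistant
        # and collect each assistant's output.
        return [handler(event) for handler in self.subscribers]

def second_assistant(event):
    # Execute the first task derived from the event information, then
    # output first information related to that task.
    task = event.get("task", "noop")
    return f"second assistant completed: {task}"

bus = EventBus()
bus.subscribe(second_assistant)
# The first digital assistant triggers first event information.
outputs = bus.trigger({"source": "first_assistant", "task": "summarize meeting"})
```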
In a third aspect of the present disclosure, an apparatus for task processing is provided. The apparatus comprises: a message detection module configured to detect a first message sent by a first digital assistant corresponding to at least a first user, the first message indicating a target task; an association determination module configured to determine an association of the target task indicated by the first message with a second user; and a task execution module configured to execute at least a portion of the target task with a second digital assistant corresponding to the second user based on the first message, in response to determining that the target task is associated with the second user.
In a fourth aspect of the present disclosure, an apparatus for task processing is provided. The device comprises: an information acquisition module configured to cause the second digital assistant to acquire first event information triggered by the first digital assistant; and an execution processing module configured to cause the second digital assistant to execute the first task based on the first event information and output first information related to the first task.
In a fifth aspect of the present disclosure, an electronic device is provided. The device comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the device to perform the method of the first aspect and/or the method of the second aspect.
In a sixth aspect of the present disclosure, a computer readable storage medium is provided. The computer readable storage medium has stored thereon a computer program executable by a processor to implement the method of the first aspect and/or the method of the second aspect.
It should be understood that what is described in this section is not intended to limit the key features or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals denote like or similar elements, in which:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented;
FIG. 2 illustrates a flow chart of a process for task processing according to some embodiments of the present disclosure;
FIG. 3 illustrates a flow chart of a process for task processing according to further embodiments of the present disclosure;
FIG. 4 illustrates a block diagram of an apparatus for task processing, according to some embodiments of the present disclosure;
FIG. 5 illustrates a block diagram of an apparatus for task processing according to further embodiments of the present disclosure; and
Fig. 6 illustrates a block diagram of an electronic device in which one or more embodiments of the disclosure may be implemented.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and the like should be taken as open-ended, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also appear below.
In this context, unless explicitly stated otherwise, performing a step "in response to a" does not mean that the step is performed immediately after "a", but may include one or more intermediate steps.
It will be appreciated that the data (including but not limited to the data itself, the acquisition, use, storage or deletion of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
It will be appreciated that prior to using the technical solutions disclosed in the embodiments of the present disclosure, the relevant users, which may include any type of rights subjects, such as individuals, enterprises, groups, etc., should be informed and authorized by appropriate means of the types of information, usage ranges, usage scenarios, etc. involved in the present disclosure according to relevant legal regulations.
For example, in response to receiving an active request from a user, prompt information is sent to the relevant user to explicitly prompt the relevant user that the operation requested to be performed will need to obtain and use information to the relevant user, so that the relevant user may autonomously select whether to provide information to software or hardware such as an electronic device, an application program, a server, or a storage medium that performs the operation of the technical solution of the present disclosure according to the prompt information.
As an alternative but non-limiting implementation, in response to receiving an active request from a relevant user, the prompt information may be sent to the relevant user, for example, in a popup window, where the prompt information may be presented as text. In addition, the popup window may carry a selection control allowing the user to choose "agree" or "disagree" to providing information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure. The enabling of the digital assistant's related functions, the data acquired, and the manner in which that data is processed and stored in the embodiments of the present disclosure should all obtain the prior authorization of the user and the other rights subjects associated with the user, and should comply with relevant legal regulations and the agreements among the rights subjects.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. In this example environment 100, a digital assistant 120 and a business component 125 are installed in a terminal device 110. The user 140 may interact with the digital assistant 120 and the business component 125 via the terminal device 110 and/or an attachment device of the terminal device 110.
In some embodiments, the digital assistant 120 and the business component 125 may be downloaded and installed at the terminal device 110. In some embodiments, the digital assistant 120 and the business component 125 may also be accessed by other means, such as by web page access, etc. In environment 100 of fig. 1, terminal device 110 may present an interface 150 of digital assistant 120 and business component 125 in response to business component 125 being launched.
Business component 125 includes, but is not limited to, one or more of the following: chat service components (also known as instant messaging service components), document service components, audio/video conferencing service components, mail service components, task service components, calendar service components, objectives and key results (OKR) service components, and so forth. It will be appreciated that although a single service component is shown in FIG. 1, in practice multiple service components may be installed on terminal device 110. Business components may be integrated on a multi-functional collaboration platform; in the case where multiple business components are installed in terminal device 110, they may be integrated on one or more such platforms. In the multi-functional collaboration platform, people can launch different business components as required to complete corresponding information processing, sharing, communication, and the like. The business component 125 can provide a content entity 126. Content entity 126 may be an instance of content created by user 140 or other users on business component 125. For example, depending on the type of business component 125, the content entity 126 may be a document (e.g., Word document, PDF document, presentation, form document, etc.), mail, message (e.g., a conversation message on an instant messaging business component), calendar, task, audio, video, image, etc.
In some embodiments, the digital assistant 120 may be provided by a separate business component or may be integrated into a business component capable of providing content entities. The business component used to provide the client interface of the digital assistant may correspond to a single-function business component or a multi-functional collaboration platform, such as an office suite or other collaboration platform capable of integrating multiple components. It will be appreciated that, similar to the business component, although a single digital assistant is shown in FIG. 1, there may actually be multiple digital assistants.
In some embodiments, the digital assistant 120 supports the use of plug-ins. Each plug-in is capable of providing one or more functions of a business component. Such plug-ins include, but are not limited to, one or more of the following: search plug-ins, contact plug-ins, message plug-ins, document plug-ins, form plug-ins, mail plug-ins, calendar plug-ins, task plug-ins, and the like.
The digital assistant 120 is a user's intelligent assistant with intelligent dialog and information processing capabilities. In an embodiment of the present disclosure, the digital assistant 120 is used to interact with the user 140 to assist the user 140 in using a terminal device or business component. An interactive window with the digital assistant 120 may be presented in the client interface. In the interactive window, the user 140 is able to speak with the digital assistant 120 by entering natural language, pictures, audio files, video files, web page files, etc., to instruct the digital assistant to assist in accomplishing various tasks, including operations on the content entities 126.
In some embodiments, the digital assistant 120 may be included as a contact for the user 140, in a contact list of the current user 140 in an office suite, or in an information stream of a chat component. In some embodiments, the user 140 has a correspondence with the digital assistant 120. For example, a first digital assistant corresponds to a first user, a second digital assistant corresponds to a second user, and so on. In some embodiments, the first digital assistant may uniquely correspond to the first user, the second digital assistant may uniquely correspond to the second user, and so on. That is, the first digital assistant of the first user may be specific or dedicated to the first user. For example, in providing assistance or services to the first user, the first digital assistant may utilize its historical interaction information with the first user, the first-user-authorized data it has access to, its current interaction context with the first user, and so on. If the first user is an individual, the first digital assistant may be considered a personal digital assistant. It will be appreciated that in the disclosed embodiments the first digital assistant accesses only the data it has been granted based on the first user's authorization. It should also be appreciated that "unique correspondence" and similar expressions in this disclosure do not preclude the first digital assistant from being updated based on the ongoing interaction between the first user and the first digital assistant. Of course, depending on actual needs, the digital assistant 120 need not be specific to the current user 140, but may be a general-purpose digital assistant.
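The one-to-one correspondence between users and their personal digital assistants can be sketched as a lazily populated registry. The record fields (`history`, `granted_data`) are illustrative assumptions standing in for the per-user interaction history and authorized data the text describes:

```python
_assistants = {}

def assistant_for(user: str) -> dict:
    # Each user gets a dedicated assistant record, created on first use.
    # The record accumulates per-user state: interaction history and the
    # data the user has authorized the assistant to access.
    if user not in _assistants:
        _assistants[user] = {"user": user, "history": [], "granted_data": set()}
    return _assistants[user]

first = assistant_for("first_user")
second = assistant_for("second_user")
first["history"].append("hello")  # updates only this user's assistant
```

Because the mapping is stable, repeated lookups for the same user return the same assistant, so the assistant's state can evolve with that user's interactions while remaining isolated from other users.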
In some embodiments, multiple modes of interaction of the user 140 with the digital assistant 120 may be provided and flexible switching between the multiple modes of interaction may be provided. In the event that a certain interaction pattern is triggered, a corresponding interaction zone is presented to facilitate interaction of the user 140 with the digital assistant 120. The interaction mode of the user 140 and the digital assistant 120 in different interaction modes is different, so that the interaction mode can be flexibly adapted to the interaction requirements in different scenes.
In some embodiments, information processing services specific to the user 140 can be provided based on historical interaction information of the user 140 with the digital assistant 120 and/or a range of data specific to the user 140. In some embodiments, historical interaction information from the user 140 interacting with the digital assistant 120 in each of the multiple interaction modes may be stored in association with the user 140. In this way, in any one (or a designated one) of the plurality of interaction modes, the digital assistant 120 may provide services to the user 140 based on historical interaction information stored in association with the user 140.
The digital assistant 120 may be invoked or awakened in an appropriate manner (e.g., a shortcut, button, or voice) to present an interactive window with the user 140. By selecting the digital assistant 120, an interactive window with the digital assistant 120 may be opened. The interactive window may include interface elements for information interaction, such as input boxes, message lists, message bubbles, and the like. In other embodiments, the digital assistant 120 may be invoked by entry controls or menus provided in the page, or by entering preset instructions.
The interactive window of the digital assistant 120 with the user 140 may include a conversation window, such as in an instant messaging module of an instant messaging service component or a target service component. In the session window, interactions between the digital assistant 120 and the user 140 may be presented in the form of session messages. Alternatively or additionally, the interactive window of the digital assistant 120 with the user 140 may also include other types of windows, such as a floating window, wherein the user 140 may trigger the digital assistant 120 to perform a corresponding operation by entering an instruction, selecting a shortcut instruction, or the like.
In some embodiments, the digital assistant 120 may support an interactive mode of the conversation window, also referred to as a conversation mode. In this interaction mode, a conversation window of the user 140 with the digital assistant 120 is presented, in which the user 140 interacts with the digital assistant 120 through conversation messages. In the conversation mode, the digital assistant 120 can perform tasks based on conversation messages in the conversation window. In the interactive window, the user 140 enters an interactive message and the digital assistant 120 provides a reply message in response to the user input.
In some embodiments, the user's 140 session mode with the digital assistant 120 may be invoked or awakened in an appropriate manner (e.g., a shortcut, button, or voice) to present a session window. By selecting the digital assistant 120, a session window with the digital assistant 120 may be opened. The session window may include interface elements for information interaction, such as input boxes, message lists, message bubbles, and the like.
In some embodiments, the digital assistant 120 may support a floating window interaction mode, also referred to as a floating window mode. In the case where the floating window mode is triggered, an operation panel (also referred to as a floating window) corresponding to the digital assistant 120 is presented, and the user 140 may issue an instruction to the digital assistant 120 based on the operation panel. In some embodiments, the operation panel may include at least one candidate shortcut. Alternatively or additionally, the operation panel may comprise input controls for receiving instructions. In the floating window mode, the digital assistant 120 may perform tasks according to instructions issued by the user 140 through the operation panel.
In some embodiments, the floating window mode of the user 140 and the digital assistant 120 may also be invoked or awakened in an appropriate manner (e.g., a shortcut key, button, or voice) to present a corresponding operation panel. In some embodiments, wake-up of digital assistant 120 may be supported in a particular business component, such as in a document business component, to provide for floating window mode interaction. In some embodiments, to trigger the floating window mode to present the corresponding operation panel of the digital assistant 120, an entry control for the digital assistant 120 may be presented in the business component interface. In response to detecting a triggering operation for the portal control, it may be determined that the floating window mode is triggered and an operation panel corresponding to the digital assistant 120 is presented in the target interface area.
In some embodiments described below, for ease of discussion, the user's interaction window with the digital assistant is primarily described as a conversation window.
In some embodiments, terminal device 110 communicates with server 130 to enable provisioning of services for digital assistant 120 and business component 125. The terminal device 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, television receiver, radio broadcast receiver, electronic book device, game device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, terminal device 110 is also capable of supporting any type of user interface (such as "wearable" circuitry, etc.). Server 130 may be any type of computing system/server capable of providing computing power, including, but not limited to, a mainframe, an edge computing node, a computing device in a cloud environment, and so forth.
It should be understood that the structure and function of the various elements in environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure.
As briefly mentioned above, the digital assistant may assist the user in using the terminal device or business component. Some business components can provide integrated functionality of different plug-ins. In addition to free conversation with the digital assistant, the user may also use different plug-ins, via instructions such as natural language, to perform more complex business-related operations of the business component, such as creating documents, requesting schedules, creating tasks, etc. Traditionally, users are often required to actively trigger interactions with digital assistants, which rely on the user's instructions to perform tasks. Such task execution always requires immediate instructions from the user; for tasks that the user has not yet discovered or noticed in time, the digital assistant cannot be triggered promptly to help execute them. In some cases, a digital assistant that is tied to a particular business component, business scenario, or user is typically only able to process information based on that business component, business scenario, or the user's immediate instructions. In some special scenarios, the digital assistant cannot be effectively utilized to improve information processing efficiency; for example, efficiency is low when multiple service components need to cooperate, when the business scenario is complex, or when the user has difficulty giving immediate instructions in real time.
The service components may be different service components integrated in a collaboration platform, a first service component connected to the collaboration platform, or a second service component associated with a user, where the first service component is generally allowed to exchange a certain type or range of information with the service components in the collaboration platform or with users of the platform, and the second service component is a service component registered or used by the user. In other cases, a unified and standardized interactive experience is typically provided, which often fails to meet the personalized needs of all users. All of these problems can affect the efficiency and experience of interaction between the user and the digital assistant, and the interaction function of the digital assistant is not flexible enough.
According to some embodiments of the present disclosure, improvements for task processing are presented, which aim to solve at least one of the above technical problems. In an embodiment of the present disclosure, a first message sent by a first digital assistant corresponding to at least a first user is detected, the first message indicating a target task; the relevance of the target task indicated by the first message to a second user is determined; and in response to determining that the target task is associated with the second user, at least a portion of the target task is performed, based on the first message, with a second digital assistant corresponding to the second user. In this way, the intelligence and autonomy of the digital assistant, the flexibility of its use, and the efficiency of digital-assistant-based work can be improved. In addition, since the digital assistant can acquire only information to which the user has granted access, data security can also be ensured.
According to some embodiments of the present disclosure, improvements for task processing are presented, which aim to solve at least one of the above technical problems. In an embodiment of the present disclosure, a second digital assistant obtains first event information triggered by a first digital assistant; and the second digital assistant executing the first task based on the first event information and outputting first information related to the first task. In this way, the intelligence and autonomy of the digital assistant can be improved, the flexibility of the use of the digital assistant can be improved, and the working efficiency based on the digital assistant can be improved.
Some example embodiments of the present disclosure will be described in detail below with reference to examples of the accompanying drawings.
Fig. 2 illustrates a flow chart of a process 200 for task processing according to some embodiments of the present disclosure. For ease of discussion, the process 200 will be described with reference to the environment 100 of FIG. 1. Process 200 may be implemented at terminal device 110 and/or server 130. For ease of description, the process 200 is described below as being implemented at the terminal device 110. It should be appreciated that some of the operations described with reference to terminal device 110 may require assistance from server 130 to complete. It should be noted that the operations performed by the terminal device 110 may be performed by related service components and/or digital assistants installed on the terminal device 110.
At block 210, the terminal device 110 or a second digital assistant on the terminal device 110 detects a first message sent by a first digital assistant corresponding to at least a first user, the first message indicating a target task.
The first digital assistant may be a digital assistant corresponding to only one first user, or may be a digital assistant corresponding to a plurality of users including the first user. That is, the first digital assistant may be a personal assistant, a team assistant/organization assistant, or the like. The second digital assistant may be, for example, a digital assistant corresponding only to the second user (i.e., the second user's personal digital assistant). The second digital assistant and the first digital assistant may both be installed in the terminal device 110.
Regarding the determination of the target task and the sending of the first message: the target task here may be, for example, a task that the terminal device 110 or the first digital assistant determines by itself based on at least one session message in a group chat session. The terminal device 110 or the first digital assistant may analyze the content of at least one session message in the group chat session to determine a target task corresponding to the at least one session message. For example, the terminal device 110 or the first digital assistant may determine that the target task corresponding to the at least one session message is a "conference room" task in response to the text "conference room" appearing in the content of the at least one session message. It will be appreciated that the at least one session message in the group chat session is merely an example; the target task may also be a task determined by the terminal device 110 or the first digital assistant based on other information, such as the content of a document, mail, or meeting. For example, the terminal device 110 or the first digital assistant may summarize a document's content and determine a target task based on the summary result.
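The rule-based variant of this determination, following the "conference room" example above, can be sketched as a keyword scan. The rule table and task names are illustrative assumptions, not the patent's actual matching rules:

```python
from typing import List, Optional

# Hypothetical keyword -> task rules.
RULES = {
    "conference room": "reserve a conference room",
    "schedule": "create a schedule",
}

def target_task(session_messages: List[str]) -> Optional[str]:
    # Scan the session messages; the first rule whose keyword appears
    # in a message's content determines the target task.
    for content in session_messages:
        for keyword, task in RULES.items():
            if keyword in content:
                return task
    return None  # no rule matched: no target task detected
```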
The terminal device 110 or the first digital assistant may determine the target task based on, for example, the at least one session message and a predetermined matching rule. The terminal device 110 or the first digital assistant may also determine the target task indicated by the at least one session message, for example, by means of a model. In some embodiments, the terminal device 110 or the first digital assistant may construct a prompt input based on the at least one session message, provide the prompt input to the model, and obtain the target task determined by the model. In some embodiments, the model may be a machine learning model, a deep learning model, a neural network, or the like. In some embodiments, the model may be based on a language model (LM). A language model can acquire question-answering capability by learning from a large corpus. The model may also be based on other suitable models.
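The two determination paths above (a predetermined matching rule, or a prompt provided to a model) can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the function names and the keyword rule are invented, and the model call is represented by a generic callable standing in for a language-model client.

```python
# Hypothetical sketch: determining a target task from group chat session
# messages, via a prompt to a model or a fallback matching rule.

def build_prompt(session_messages):
    """Concatenate session messages into a prompt asking the model for a task."""
    joined = "\n".join(f"- {m}" for m in session_messages)
    return (
        "Given the following group chat messages, identify the task "
        "they imply, or answer 'none':\n" + joined
    )

def extract_target_task(session_messages, model=None):
    """Return the target task indicated by the messages.

    `model` stands in for a language-model client; when absent we fall
    back to a trivial keyword rule, mirroring the 'predetermined matching
    rule' path described above.
    """
    prompt = build_prompt(session_messages)
    if model is not None:
        return model(prompt)                    # model-determined task
    if any("conference room" in m for m in session_messages):
        return "book a conference room"         # rule-determined task
    return None
```

In practice the rule path would cover many task patterns; a single keyword is shown only to make the control flow concrete.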
The target task here may also be, for example, a task determined by the terminal device 110 or the first digital assistant by itself based on a workflow associated with the group chat session. In some embodiments, the terminal device 110 or the first digital assistant may obtain at least one workflow in advance. The at least one workflow may include a workflow preset by at least one user corresponding to the first digital assistant, the at least one user including the first user. The at least one workflow may also include a workflow determined by the terminal device 110 or the first digital assistant itself based on historical tasks of the at least one user corresponding to the first digital assistant. The terminal device 110 or the first digital assistant may determine, for example, the workflow associated with the group chat session and a target work node in the workflow based on at least one session message in the group chat session, where the target work node is the work node currently executing or about to execute in the workflow. The terminal device 110 and/or the first digital assistant may in turn determine the target task based on the target work node. Similarly, the workflow associated with the group chat session and the target work node here may be determined by the terminal device 110 or the first digital assistant based on predetermined rules, or may be determined by means of a model.
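The notion of a "target work node" above can be illustrated with a small sketch. The data layout is an assumption for illustration only: a workflow is modeled as an ordered list of work nodes, and the target node is the first node not yet completed, i.e., the node "currently executing or about to execute".

```python
# Illustrative sketch (not from the disclosure): a workflow as an ordered
# list of work nodes; the target node is the first unfinished node.
from dataclasses import dataclass, field

@dataclass
class WorkNode:
    name: str
    task: str
    done: bool = False

@dataclass
class Workflow:
    nodes: list = field(default_factory=list)

    def target_node(self):
        """Return the first unfinished node, or None if the workflow is done."""
        for node in self.nodes:
            if not node.done:
                return node
        return None

wf = Workflow([
    WorkNode("n1", "book a conference room", done=True),
    WorkNode("n2", "draft the agenda"),
    WorkNode("n3", "notify attendees"),
])
```

Given `wf`, the target task would be the task of node `n2`, since `n1` is already complete.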
The target task here may also be, for example, a task set by at least one user corresponding to the first digital assistant (i.e., at least one user having setting authority for the first digital assistant). Illustratively, in a group chat session, a user (e.g., the first user) participating in the group chat session and corresponding to the first digital assistant may configure the first digital assistant. The first user may, for example, send the first digital assistant a session message indicating the target task, and the first digital assistant may determine the target task in response to receiving the session message.
In some embodiments, after determining the target task, the terminal device 110 or the first digital assistant may also, in response to the first digital assistant and the second digital assistant having an association with the target task, set the content or an attribute of the first message to indicate that the target task is associated with the second user. The association here may be, for example, that the first digital assistant and the second digital assistant belong to a workflow that includes the target task. Illustratively, if the first digital assistant and the second digital assistant belong to a workflow that includes the target task, the terminal device 110 or the first digital assistant sets the content or an attribute of the first message to indicate that the target task is associated with the second user. For example, the terminal device 110 or the first digital assistant may set the first message as an announcement message of a group chat session that includes the second user or the second digital assistant, to indicate that the target task is associated with the second user.
The association here may also be, for example, that the second digital assistant has issued a trigger request to the first digital assistant for one or more tasks, where the one or more tasks include the target task. For example, the terminal device 110 or the first digital assistant may determine that the second digital assistant has issued a trigger request to the first digital assistant for one or more tasks in response to detecting that the second digital assistant follows the first digital assistant, that the second digital assistant subscribes to the first digital assistant, and so on. In response to the second digital assistant issuing a trigger request to the first digital assistant for the one or more tasks, the terminal device 110 or the first digital assistant may set the content or an attribute of the first message to indicate that the target task among the one or more tasks is associated with the second user.
Terminal device 110 may send a first message indicating the target task in the group chat session using the first digital assistant in response to determining the target task and an association between the target task and the second user. The first digital assistant may send the first message, for example, by sending a session message in a group chat session. The first digital assistant may also send the first message, for example, by sending an announcement/notification in a group chat session, setting a pop-up window, etc. The present disclosure is not limited to the particular manner in which the first digital assistant sends the first message.
The first message sent by the first digital assistant may be, for example, an instant messaging (IM) message. The first message may also be summary/analysis content included in a document, a mail, or any other suitable information carrier. The first message may be any suitable type of message and may include, for example, text, images, audio, video, links, etc., which is not limited by the present disclosure. Illustratively, in a group chat session, the terminal device 110 or the second digital assistant on the terminal device 110 may receive a session message from the first digital assistant. The group chat session includes at least the first digital assistant, the first user, and the second user. It will be appreciated that the first digital assistant may be the digital assistant corresponding to the first user, or may be the digital assistant corresponding to all users participating in the group chat session (including the first user and the second user). The second digital assistant may or may not be in the group chat session. If the second digital assistant is in the group chat session, it may directly receive the first message sent by the first digital assistant in the group chat session. If the second digital assistant is not in the group chat session, it may instead detect all messages in a business component (e.g., a chat business component), including messages in individual chat sessions in the business component.
It should be noted that the first digital assistant and the second digital assistant should be digital assistants that have obtained the user's authorization. For example, for the second user, only a second digital assistant that has obtained the second user's authorization may detect the first message.
At block 220, the terminal device 110 or a second digital assistant on the terminal device 110 determines an association of the target task indicated by the first message with the second user.
In some embodiments, the terminal device 110 or the second digital assistant may determine the content of the first message and/or the attribute of the first message based on the first message. The terminal device 110 or the second digital assistant may in turn determine the relevance of the target task to the second user based on the content and/or the properties of the first message.
For example, the terminal device 110 or the second digital assistant may determine that the target task is associated with the second user in response to the first message including the name of the second user or of the second digital assistant. For example, the terminal device 110 or the second digital assistant may determine that the target task indicated by the first message is associated with the second user in response to the first message including the text "second user", the text "second digital assistant", and so on. In some embodiments, to ensure the accuracy of the relevance determination, the terminal device 110 or the second digital assistant may also determine that the target task is associated with the second user in response to the first message including a reference symbol for the second user or the second digital assistant. A reference to the second user or the second digital assistant here may include a reference symbol together with the name of the second user or the second digital assistant. For example, the terminal device 110 or the second digital assistant may determine that the target task indicated by the first message is associated with the second user in response to the first message including "@second user", "@second digital assistant", and the like.
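The name-based and mention-based checks above can be sketched as a simple text scan. This is a minimal illustration assuming plain-text messages; a real IM system would typically carry mentions as structured message attributes rather than raw "@name" text, and the function name is invented.

```python
# Hypothetical sketch: relevance check of a message to a user, based on the
# user's name, the assistant's name, or an explicit @-mention of either.

def is_relevant_to(message, user_name, assistant_name):
    """True if the message names or @-mentions the user or their assistant."""
    for name in (user_name, assistant_name):
        if f"@{name}" in message:    # explicit mention: strongest signal
            return True
        if name in message:          # bare name: weaker but still counted
            return True
    return False
```

The @-mention branch reflects the accuracy point made above: a reference symbol is a more deliberate signal than a bare occurrence of the name.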
The terminal device 110 or the second digital assistant may also determine that the target task is associated with the second user in response to the first message being an announcement message of a group chat session that includes the second user or the second digital assistant. In this case, since an announcement message is directed to all users and all digital assistants in the group chat session, the terminal device 110 or the second digital assistant may determine that the target task indicated by the announcement message (i.e., the first message) is associated with the second user in response to the second user and/or the second digital assistant being in the group chat session.
The terminal device 110 or the second digital assistant may also determine that the target task is associated with the second user in response to the first message being directed to a group that includes the second user. It will be appreciated that the second user may be a user in a certain user group. The terminal device 110 or the second digital assistant may determine that the target task indicated by the first message is associated with the second user in response to the first message being a message directed to the group containing the second user. For example, the terminal device or the second digital assistant may determine that the target task is associated with the second user in response to the first message including a reference symbol for the group in which the second user is located. For example, if the second user is in a "project group", the terminal device 110 or the second digital assistant may determine that the target task indicated by the first message is associated with the second user in response to the first message being a message directed to all users in the project group, or in response to the first message including a reference symbol for the "project group".
In some embodiments, the terminal device 110 or the second digital assistant may also determine that the target task is associated with the second user in response to determining that the first digital assistant and the second digital assistant have an association with the target task. That is, the terminal device 110 may also determine that the target task is associated with the second user in response to determining that the first digital assistant and the second digital assistant belong to a workflow that includes the target task, and/or that the second digital assistant issues a trigger request to the first digital assistant for one or more tasks that include the target task.
At block 230, in response to determining that the target task is associated with the second user, terminal device 110 performs, based on the first message, at least a portion of the target task with the second digital assistant corresponding to the second user.
It should be noted that, in embodiments of the present disclosure, the enabling of the task execution function, the data acquired, and the manner in which the data is processed and stored should all be subject to the advance authorization of the user and of the other rights holders associated with the user, and should comply with relevant laws and regulations and the agreements between the rights holders. The digital assistant actively performs the relevant tasks only within the scope of the user's authorization.
After the terminal device 110 determines that the target task is associated with the second user, it may instruct the second digital assistant to perform at least a portion of the target task based on the content of the first message. It will be appreciated that if the second digital assistant determines that the target task is associated with the second user, it may itself perform at least a portion of the target task based on the content of the first message. Specifically, the terminal device 110 or the second digital assistant may determine, based on the first message, whether the second user needs to perform all or only a portion of the target task. For example, if the target task indicates booking a conference room, the terminal device 110 or the second digital assistant may perform the target task based on the first message. If the target task indicates holding a meeting, then, since the task of holding a meeting may include a plurality of sub-tasks (e.g., booking a conference room, determining the content of the meeting, determining the attendees, etc.), the terminal device 110 or the second digital assistant may further determine, based at least on the first message, which sub-task of the meeting is to be performed by the second user, and perform only that sub-task.
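The whole-task vs. sub-task decision above can be sketched as follows. The assignment table and names are invented for illustration: a composite task decomposes into sub-tasks with assignees, and the second user's assistant executes only the sub-tasks assigned to that user, while an atomic task is executed in full.

```python
# Hypothetical sketch: selecting which (sub)tasks a user's digital
# assistant should execute for a given target task.

def tasks_for_user(target_task, user, subtask_assignments):
    """Return the (sub)tasks the user's assistant should execute.

    `subtask_assignments` maps a composite task to {subtask: assignee};
    an atomic task (no entry) is performed in full.
    """
    subtasks = subtask_assignments.get(target_task)
    if subtasks is None:
        return [target_task]                        # atomic: perform it all
    return [s for s, who in subtasks.items() if who == user]

assignments = {
    "hold a meeting": {
        "book a conference room": "second user",
        "draft the agenda": "first user",
        "notify attendees": "second user",
    }
}
```

In the disclosure, the decomposition itself would be derived at least from the first message rather than from a static table; the table merely makes the filtering step concrete.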
In some embodiments, the terminal device 110 or the second digital assistant may also provide the second user with a confirmation message associated with the target task before performing at least a portion of the target task. The terminal device 110 or the second digital assistant may determine to perform at least a portion of the target task in response to receiving a confirmation operation of the confirmation message from the second user.
An example in which the first digital assistant and the second digital assistant have an association relationship is described in detail below in conjunction with fig. 3. Fig. 3 illustrates a flow chart of a process 300 for task processing according to some embodiments of the present disclosure. For ease of discussion, the process 300 will be described with reference to the environment 100 of FIG. 1. Process 300 may be implemented at terminal device 110 and/or server 130. For ease of description, the process 300 is illustrated below as being implemented at the terminal device 110. It should be appreciated that some of the operations described with reference to terminal device 110 may require assistance from server 130 to complete. It should be noted that the operations performed by the terminal device 110 may be specifically performed by related service components and/or digital assistants installed on the terminal device 110.
At block 310, terminal device 110 or a second digital assistant on terminal device 110 obtains first event information triggered by the first digital assistant. The first digital assistant and the second digital assistant have a certain association, which may be, for example, that the second digital assistant and the first digital assistant are in the same business entity. Business entities here may include, for example, but are not limited to, groups, documents, meetings, projects, tasks, and the like. The association may also be, for example, that the second digital assistant follows the first digital assistant and/or subscribes to at least part of the functionality of the first digital assistant.
The first digital assistant may be a digital assistant associated with the first user. The first digital assistant may also be a digital assistant of the first business component. The first business component may be, for example, a meeting, a document, a calendar, and the like. Accordingly, the digital assistant of the first business component may include, for example, a conference assistant, a document assistant, a calendar assistant, and the like. It will be appreciated that the digital assistant of the first business component is associated with all users corresponding to the first business component. For example, a conference assistant is associated with all users participating in the conference. The second digital assistant is a digital assistant associated with the second user, and the second digital assistant may output first information related to the first task to the second user.
The second digital assistant acquires the first event information triggered by the first digital assistant with which it has an association relationship, and determines the first task indicated by the first event information. For example, if the first event information triggered by the first digital assistant indicates booking a conference room, the second digital assistant may, after acquiring the first event information, determine that the first task indicated by the first event information is to book a conference room.
At block 320, the terminal device 110 or a second digital assistant on the terminal device 110 performs a first task based on the first event information and outputs first information related to the first task.
The first information related to the first task output by the terminal device 110 or the second digital assistant may be, for example, an execution result of the first task, an execution state of the first task, an execution progress of the first task, or the like. The terminal device 110 or the second digital assistant may output the first information related to the first task, for example, by sending a session message to the second user. The conversation message may include text, pictures, audio, and so forth. The second user may determine first information related to the first task based on the session message.
In summary, according to embodiments of the present disclosure, a digital assistant may obtain information indicating a task sent by another digital assistant and perform the task based on that information. In this way, the intelligence and autonomy of the digital assistant can be improved, the flexibility of the digital assistant can be increased, and the working efficiency of the digital assistant can be improved.
In some embodiments, the terminal device 110 or a first digital assistant on the terminal device 110 detects whether a task trigger condition associated with the first digital assistant is met.
The terminal device 110 may obtain a task trigger condition associated with a first digital assistant, which may be, for example, the digital assistant 120. It will be appreciated that the first digital assistant corresponds to a first user, which may include one or more users; i.e., the first digital assistant may be a personal assistant, a team assistant/organization assistant, or the like. In some embodiments, the task trigger condition may be configured through settings made for the first digital assistant by the first user (i.e., user 140 corresponding to terminal device 110, who may also be referred to as the current user).
Specifically, terminal device 110 may receive at least one user input from the first user to the first digital assistant. Terminal device 110 may receive the at least one user input, for example, in a session window between the first user and the first digital assistant. The user input may be any type of input, such as text, audio, images, and the like. Terminal device 110 may determine a task trigger condition based on the at least one user input. Terminal device 110 may, for example, analyze the user input to determine the task trigger condition indicated by the user input. Illustratively, the first user may send a session message "whenever I receive a customer complaint email, remind me and prepare a reply email in advance" in the interactive window between the first user and the first digital assistant. Terminal device 110 may determine, in response to receiving the session message, that the task trigger condition is "a customer complaint email is received". Alternatively or additionally, the task trigger condition may also be determined by the terminal device 110 itself. For example, terminal device 110 may determine the task trigger condition based on historical data associated with the first user.
In some embodiments, the terminal device 110 may determine the task trigger condition by means of a target model. Terminal device 110 may, for example, construct a prompt input based on the at least one user input and/or the historical data. Terminal device 110 may provide the prompt input to the model and obtain the task trigger condition determined by the model. In some embodiments, the model may be a machine learning model, a deep learning model, a neural network, or the like. In some embodiments, the model may be based on a language model (LM). A language model can acquire question-answering capability by learning from a large corpus. The model may also be based on other suitable models.
In some embodiments, the terminal device 110 or a first digital assistant on the terminal device 110 initiates task execution associated with the first digital assistant in response to the task trigger condition being met. The task execution includes: acquiring first task-related information; and, based on the acquired first task-related information, performing at least one of the following by interacting with the target model: acquiring attribute information of a first task; and executing the first task based on the attribute information of the first task. In some embodiments, interaction with the target model may occur during the attribute-information acquisition phase of the first task. In some embodiments, interaction with the target model may occur during the execution phase of the first task. In some embodiments, both the attribute-information acquisition phase and the execution phase of the first task involve interactions with the target model. The specific use of the target model in the various phases will be discussed in the following examples.
In this way, the task-related information can be automatically acquired under specific trigger conditions without active triggering by the user, helping the user to execute the corresponding task. The first digital assistant may automatically initiate execution of the task without requiring a real-time instruction from the user when the task trigger condition is met. During task execution, completion of the task may also be aided by interactions with the target model.
In some embodiments, the active execution of the first task may be based on interactive or non-interactive instructions with the user. In some embodiments, the first digital assistant may also determine whether the execution of the first task requires additional input from the first user. If the first task can be executed without additional input from the first user, the first digital assistant can actively execute the first task directly based on the acquired task-related information, without an interaction instruction from the user. In some embodiments, if the execution of the first task requires additional input from the first user, the first digital assistant may send a message to the first user indicating that the first user should provide additional information about the first task. The first digital assistant may then receive the additional input from the first user, determine the additional information corresponding to the additional input, and perform the first task based on the acquired task-related information and the additional information. In this process, the first digital assistant may complete the first task through an active dialogue with the user 140 and the further acquired interaction information. For example, the first digital assistant may first draft a project assessment report and send it to the user 140 for confirmation and modification by the user, ultimately completing the project assessment report.
It should be noted that both the message indicating that the first user should provide additional information about the first task and the additional input may be sent and received in the interactive window between the first user and the first digital assistant. For example, terminal device 110 may send, in the interactive window between the first user and the first digital assistant, a message instructing the first user to provide additional information about the first task. The terminal device 110 may in turn obtain the additional information based on multiple rounds of conversation between the first user and the first digital assistant. For another example, the terminal device 110 may also present operation controls, shortcut instructions, etc. associated with obtaining additional input in the interactive window between the first user and the first digital assistant. The terminal device 110 may determine that additional input has been acquired in response to detecting a triggering operation on the corresponding operation control/shortcut instruction, and further determine the additional information indicated by the additional input.
In some embodiments, the terminal device 110 notifies the first user of the result of the execution of the first task using the first digital assistant. The terminal device 110 may notify the first user of the execution result of the first task by, for example, transmitting a notification message indicating the execution result of the first task in an interactive window of the first user with the first digital assistant.
In some embodiments, the task trigger condition for triggering the first digital assistant to actively perform a task may include the execution condition of one or more work nodes in a first workflow being satisfied. The first workflow here may include a plurality of work nodes. The plurality of work nodes each have a corresponding task, and a dependency relationship exists between the tasks corresponding to two or more of the work nodes. For example, if the first node and the second node are adjacent nodes and the first node is located before the second node, the second node may perform its corresponding task only after the first node has completed its corresponding task. In this case, the terminal device 110 may determine that the task trigger condition associated with the first digital assistant is satisfied, for example, in response to detecting that the execution condition of a certain work node is satisfied.
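The node-dependency trigger above can be sketched with a linear workflow, under the assumption (invented for illustration) that each node records a completion flag: a node's execution condition holds only once every preceding node is complete, at which point the assistant's task trigger fires for that node.

```python
# Hypothetical sketch: checking whether a work node's execution condition
# is satisfied, i.e., whether all preceding nodes have completed.

def execution_condition_met(workflow, node_index):
    """A node may run only when every preceding node is complete."""
    return all(workflow[i]["done"] for i in range(node_index))

flow = [
    {"task": "collect requirements", "done": True},
    {"task": "draft report", "done": False},
    {"task": "review report", "done": False},
]
```

With `flow` as above, the "draft report" node is ready to run while "review report" is not, until its predecessor completes.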
In addition to actively performing tasks according to a workflow, in some embodiments, the task trigger condition for the first digital assistant may include, for example, the first user receiving a notification message from a first business component. The first business component here may be, for example, a business component integrated with the first digital assistant. In this case, the terminal device 110 may determine that the task trigger condition associated with the first digital assistant is satisfied in response to detecting a notification message from the first business component. The notification message may be a notification message from the business component itself, or may be a notification message sent by a contact in the business component. In some embodiments, the first user may, when setting the task, instruct the first digital assistant to detect notification messages of the first business component. In some embodiments, the user may also define the type of notification message when setting the task. In this case, the terminal device 110 may determine that the task trigger condition associated with the first digital assistant is satisfied in response to detecting a notification message of the predetermined type from the first business component.
In some embodiments, the task trigger condition may include, for example, the occurrence of an event associated with the first user at a second business component. The second business component may be the same business component as the first business component, or may be a different business component. The event associated with the first user here may likewise be any suitable event, such as a calendar event, a schedule change event, a messaging event, a document change event, a mail event, a task update event, etc. associated with the first user. When setting the task, the first user may indicate to the first digital assistant the second business component that needs to be detected and the event to be detected in the second business component. In some embodiments, if the second business component is a different business component from the first business component, then in order to be able to learn that an event related to the first user has occurred at the second business component, the terminal device 110 may provide an application program interface (API) and a development interface so that the second business component can be registered with the first digital assistant. The first digital assistant may, for example, acquire only the data of a registered second business component.
In some embodiments, the task trigger condition may include, for example, a preset time being reached. A task related to the arrival of a preset time may be referred to as a timed task. In this case, the terminal device 110 may determine that the task trigger condition associated with the first digital assistant is satisfied in response to the current time reaching the preset time.
It will be appreciated that some example task trigger conditions are discussed above; in some embodiments, the task trigger condition for a certain task may further include a combination of one or more of: the first user receiving a notification message from the first business component, an event associated with the first user occurring at the second business component, a preset time being reached, the execution condition of one or more work nodes in a preset workflow being satisfied, and the like. For example, the task trigger condition may include a preset time being reached and the first user receiving a notification message from the first business component. It will be appreciated that the task trigger condition may also include any other suitable condition, which is not limited by the present disclosure. The number of task trigger conditions associated with the first digital assistant may depend on the first user's settings for the first digital assistant, and different task trigger conditions may trigger the execution of one or more tasks.
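Combining trigger conditions, as in the "preset time reached AND notification received" example above, can be sketched with condition predicates over a shared state. The predicate names and the state dictionary are invented for illustration.

```python
# Hypothetical sketch: a combined task trigger that fires only when every
# configured condition holds.

def make_combined_trigger(conditions):
    """Return a trigger satisfied only when every condition is satisfied."""
    def trigger(state):
        return all(cond(state) for cond in conditions)
    return trigger

# Two example conditions from the text: a preset time being reached, and a
# notification message having been received from the first business component.
preset_time_reached = lambda s: s["now"] >= s["preset_time"]
notification_received = lambda s: s["notifications"] > 0

trigger = make_combined_trigger([preset_time_reached, notification_received])
```

An OR-combination would use `any` instead of `all`; which combinator applies would depend on how the first user configures the trigger.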
In some embodiments, terminal device 110 may also determine whether the active task execution function of the first digital assistant is enabled before detecting whether the task trigger condition is met. If it is determined that the active task execution function is enabled, terminal device 110 may determine that the first digital assistant may perform detection of the task trigger condition, and detect whether a task trigger condition associated with the first digital assistant is satisfied. If it is determined that the active task execution function is not enabled, the terminal device 110 may determine that detection of the task trigger condition need not be performed, i.e., the terminal device 110 may not detect whether the task trigger condition associated with the first digital assistant is satisfied. In some embodiments, if it is determined that the active task execution function is not enabled, terminal device 110 may also provide a prompt message to the first user to indicate that the active task execution function of the first digital assistant is not currently enabled. Illustratively, the terminal device 110 may present the prompt message in a client interface of the digital assistant (e.g., the client interface presented by the terminal device 110). Regarding the manner in which the active task execution function of the first digital assistant is enabled, in some embodiments, terminal device 110 may present a configuration page for configuring the first digital assistant. The configuration page may include, for example, an operation control for enabling the active task execution function. The terminal device 110 may determine to enable the active task execution function of the first digital assistant in response to detecting a triggering operation on the operation control. The server 130 may determine that the active task execution function of the first digital assistant is enabled through communication with the terminal device 110.
In some embodiments, the terminal device 110 may continually detect whether one or more task trigger conditions associated with the first digital assistant are met. In some embodiments, terminal device 110 may detect whether the task trigger condition is met by means of a target model. In some embodiments, taking into account the resource and time overhead of model calls, terminal device 110 may utilize different detection modes to detect whether the task trigger condition is met.
In some embodiments, terminal device 110 may detect whether the task trigger condition is met based on a first detection mode, where the first detection mode includes detection manners other than interaction with the target model. In some examples, the first detection mode may include performing a key information search on first information, the first information including information for detecting whether a task trigger condition is satisfied. For example, at least one keyword associated with a task trigger condition may be obtained. The at least one keyword may be determined by the first user, or may be determined by the terminal device 110 itself based on the task trigger condition. The terminal device 110 may then search whether the first information includes one or more keywords associated with the task trigger condition, so as to detect whether the task trigger condition related to the first digital assistant is satisfied. In some embodiments, alternatively or additionally, the first detection mode further comprises detecting whether the first information satisfies a preset rule. If the first information satisfies the preset rule, it may be determined that the task trigger condition associated with the first digital assistant is satisfied. That is, in the first detection mode, whether the task trigger condition is satisfied may be determined in a simple, low-cost manner by key information search and/or rule matching.
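As a sketch, the first detection mode can be implemented with plain keyword search plus rule matching and no model call at all. The function name, the regex-based rule format, and the use of `None` for "cannot judge" are assumptions made for illustration.

```python
import re

def first_mode_detect(first_information: str, keywords, rules):
    """First detection mode: keyword search and/or rule matching.

    Returns True when a keyword or preset rule matches, or None when
    this low-cost mode cannot judge (so a model-based second mode may
    be tried next).
    """
    if any(kw in first_information for kw in keywords):
        return True
    for pattern in rules:  # preset rules expressed here as regexes
        if re.search(pattern, first_information):
            return True
    # Nothing matched: treat this as "cannot judge" rather than a
    # definite negative, leaving room for the second detection mode.
    return None

result = first_mode_detect(
    "New mail: customer complaint about delayed delivery",
    keywords=["complaint"],
    rules=[r"refund request #\d+"],
)
print(result)  # True
```

Returning a three-valued result (`True` / `None`) is one way to express that the cheap mode may be inconclusive, matching the tiered detection described next.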
In some embodiments, terminal device 110 may detect whether a task trigger condition associated with the first digital assistant is satisfied through interaction with the target model, based on a second detection mode. In some embodiments, because the first detection mode is simple and low-cost, the terminal device 110 may first detect whether the task trigger condition is met based on the first detection mode. If it cannot be judged based on the first detection mode whether the task trigger condition is satisfied, the terminal device 110 may detect, by means of the model based on the second detection mode, whether the task trigger condition associated with the first digital assistant is satisfied. This can help reduce the amount of computation by the terminal device 110 and improve detection efficiency.
In some embodiments in which whether the task trigger condition is satisfied is detected based on the second detection mode, the terminal device 110 may obtain a second prompt word based on the first information. The first information here includes information for detecting whether the task trigger condition is satisfied. Whether the task trigger condition is met can then be detected based on an interaction result between the second prompt word and the target model. In such embodiments, the determination of whether the task trigger condition is met may be aided by the information processing capabilities of the model.
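As a sketch of how the two modes can be tiered, the cheap keyword check runs first, and the second prompt word is only built and sent to the target model when the first mode finds nothing. `call_target_model` is a stand-in stub, not a real model interface; a real implementation would call whatever model service is in use.

```python
def call_target_model(prompt: str) -> str:
    """Stand-in for the real target model: 'answers' yes when the
    prompt mentions a complaint, purely for demonstration."""
    return "yes" if "complaint" in prompt else "no"

def detect_trigger(first_information: str, keywords) -> bool:
    # First detection mode: cheap keyword search over the first information.
    if any(kw in first_information for kw in keywords):
        return True
    # Second detection mode: build a second prompt word from the first
    # information and let the target model judge.
    second_prompt = (
        "Does the following information satisfy the task trigger condition? "
        f"Information: {first_information}"
    )
    return call_target_model(second_prompt).strip().lower().startswith("yes")

print(detect_trigger("customer complaint received", ["urgent"]))  # True (via model)
print(detect_trigger("weekly digest", ["urgent"]))                # False
```

The division of labor mirrors the text: the model is consulted only when the simple mode is inconclusive, which keeps model-call overhead low.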
When a task is actively executed, for example, a first task is actively executed, the acquired first task related information refers to information that can be used to determine what task is to be executed and how to execute it. In some embodiments, the first task related information may be obtained based on the task trigger condition itself. For example, if the task trigger condition is that the first user receives a notification message from the first business component, the first task related information may be obtained based on the notification message from the first business component. The first task related information may include the notification message itself, information mentioned or referenced in the notification message, and so forth. The first task related information may further include information required to perform the first task obtained from a business object associated with the first user in the first business component. If the task trigger condition is that an event associated with the first user occurs in the second business component, the first task related information may further include information required to perform the first task obtained from a business object associated with the first user in the second business component. In some embodiments, if the task trigger condition includes an event associated with the first user occurring at the second service component, the terminal device 110 may determine the first task based on the event and obtain task related information of the first task.
Alternatively or additionally, the first task related information may further comprise information determined from at least one third business component other than the first business component or the second business component related to the task trigger condition; for example, information needed for performing the first task may be obtained from business objects related to the first user in the at least one third business component. Depending on the type of business component, the business objects associated with the first user may include, for example, documents, meetings, calendars, meeting descriptions, etc. associated with the first user. It should be noted that the business objects related to the first user in each involved business component are business objects that the first user has authority to acquire. For example, if the business object is a document, the document is one that the first user has permission to acquire. This helps to secure the user's data.
In some embodiments, the terminal device 110 may further obtain task related information of the first task from the historical interaction information of the first user. That is, the terminal device 110 may determine task related information for the first task based on historical interaction information between the first user and the first digital assistant.
In some embodiments, the first task related information may alternatively or additionally be obtained based on input information that results in the task trigger condition being met. The input information here may include user input, the aforementioned first information, and the like.
In some embodiments, the first task related information may alternatively or additionally be obtained based on a knowledge base associated with the task trigger condition. For example, different task trigger conditions may be associated with different services, which may have corresponding knowledge bases. Some or all of the information in the knowledge base may be determined as first task related information for use in determining what tasks to perform and how to perform the tasks.
In some embodiments, as mentioned previously, the first digital assistant may also be associated with a preset workflow and determine whether the execution condition of the one or more worker nodes is met based on the preset workflow. The first digital assistant may also obtain the first task related information based on known execution information of one or more work nodes in the preset workflow when the first digital assistant is associated with the preset workflow. In general, in a workflow, it may be necessary to determine whether and/or how tasks at a certain or some of the work nodes are to be performed based on the execution of previous work nodes. In particular if a certain working node has a plurality of task branches, it is necessary to determine which task branch to execute subsequently based on the execution situation of the previous working node.
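Branch selection of this kind amounts to choosing the next work node from the recorded execution results of earlier nodes. The two-branch workflow and all node and field names below are invented for illustration; they are not part of the disclosure.

```python
def next_branch(node_results: dict) -> str:
    """Pick the next task branch of a work node based on known
    execution information of the previous work node."""
    review = node_results.get("review_node", {})
    if review.get("status") == "approved":
        return "publish_branch"   # branch taken when the review passed
    return "revise_branch"        # fallback branch otherwise

print(next_branch({"review_node": {"status": "approved"}}))  # publish_branch
print(next_branch({"review_node": {"status": "rejected"}}))  # revise_branch
```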
Note that in various embodiments, the acquired task related information is information that the first user corresponding to the first digital assistant has authority to acquire, and that the first user authorizes the terminal device 110 or the first digital assistant to acquire. Such information acquisition should meet the requirements of the respective legal regulations and related regulations and obtain a valid authorization of the respective rights body.
After the first task related information is acquired, attribute information of the first task is determined based on the acquired first task related information. The attribute information of the first task may include the type of the first task, i.e., what kind of task is to be performed. The attribute information of the first task may further include parameter information related to execution of the first task, plug-in information for executing the first task, or application programming interface (API) information. For example, the type of the first task may indicate that a schedule is to be created, and the parameter information may indicate the time, participants, place, and the like of the schedule to be created. In some embodiments, the plug-in information or the API information in the attribute information may indicate a plug-in or an interface that needs to be invoked when executing the first task. Based on the attribute information of the first task, it is possible to determine what task is to be performed and the manner in which the task is performed. The attribute information of the first task may also include other types of information, as long as such information can be used to specify a specific manner of execution of the first task to be executed.
In some embodiments, to determine attribute information of the first task, the terminal device 110 or a first digital assistant on the terminal device 110 may obtain a first prompt based on the first task related information and send the first prompt to the model. After obtaining feedback of the model, attribute information of the first task is determined based on the feedback. That is, the model may be used to analyze the prompt word obtained based on the first task related information, so as to determine the type of task, parameter information, plug-in information, API information, etc. that the first digital assistant needs to actively perform.
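One plausible shape for this step: the first prompt word is rendered from the first task related information, sent to the model, and the feedback is parsed into structured attribute information (task type, parameters, plug-in or API to invoke). The JSON contract and all field names below are assumptions made for illustration, not the disclosed format.

```python
import json

def parse_attribute_info(model_feedback: str) -> dict:
    """Parse model feedback into attribute information of the first task.

    Assumes the model was prompted to answer with a JSON object holding
    the task type, execution parameters, and plug-in/API to invoke.
    """
    info = json.loads(model_feedback)
    return {
        "task_type": info["task_type"],
        "parameters": info.get("parameters", {}),
        "plugin": info.get("plugin"),  # plug-in to call, if any
        "api": info.get("api"),        # or an API endpoint to invoke
    }

feedback = ('{"task_type": "create_schedule", '
            '"parameters": {"time": "10:00", "participants": ["A", "B"]}, '
            '"plugin": "calendar"}')
attrs = parse_attribute_info(feedback)
print(attrs["task_type"])  # create_schedule
```

A schema like this gives the assistant everything it needs to dispatch the task: what to do (`task_type`), with what (`parameters`), and through which tool (`plugin`/`api`).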
In some embodiments, the terminal device 110 may acquire a task corresponding to the task trigger condition in advance. Terminal device 110 may store the task trigger condition in association with the corresponding task. The terminal device 110 may further obtain task related information based on the corresponding task in response to the task trigger condition being satisfied. For example, in response to receiving, in the session window of the first user and the first digital assistant, a session message from the first user such as "remind me whenever I receive a customer complaint mail and prepare a reply mail," terminal device 110 may determine that the task trigger condition is "receive customer complaint mail" and that the task corresponding to the task trigger condition is "remind the first user and prepare a reply mail." The terminal device 110 may in turn obtain information related to preparing the reply mail. That is, in this case, the terminal device 110 may determine the matching task to be executed directly based on the task trigger condition. The task to be performed may also be referred to as the first task.
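Storing a trigger condition in association with its task can be as simple as a keyed mapping that is consulted when the condition fires. The class and all strings below are illustrative assumptions.

```python
class TriggerTaskStore:
    """Stores task trigger conditions in association with their tasks."""

    def __init__(self):
        self._tasks = {}

    def register(self, trigger_condition: str, task: str) -> None:
        self._tasks[trigger_condition] = task

    def task_for(self, trigger_condition: str):
        """Look up the task to execute once the condition is satisfied."""
        return self._tasks.get(trigger_condition)

store = TriggerTaskStore()
# Parsed from a session message like "remind me whenever I receive a
# customer complaint mail and prepare a reply mail":
store.register("receive customer complaint mail",
               "remind the first user and prepare a reply mail")
print(store.task_for("receive customer complaint mail"))
```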
In some embodiments, terminal device 110 may also obtain only the task trigger condition. The terminal device 110 may determine the task to be performed based on the task trigger condition in response to the task trigger condition being satisfied. For example, if the task trigger condition includes that the first user receives a notification message from the first service component, the terminal device 110 may determine the first task based on the notification message and acquire attribute information of the first task (e.g., information required to execute the first task). That is, satisfaction of the specific task trigger condition triggers the terminal device 110 to acquire attribute information in order to determine whether a task needs to be executed and what kind of task is specifically to be executed. For example, if the first business component is a chat business component, terminal device 110 may determine that a task trigger condition associated with the first digital assistant is satisfied in response to receiving a notification message (e.g., the message "send me the project assessment") from another user to the first user at the first business component. Terminal device 110 may determine, based on the notification message, that the first task is to formulate a project assessment report. The terminal device 110 may in turn obtain information related to formulating the project assessment report (e.g., project data for different users).
Regarding the specific manner of determining the first task based on the notification message, in some embodiments, similar to determining the task trigger condition, the terminal device 110 may determine a prompt word based on information related to the notification message, send the prompt word to the target model, and receive a result returned by the target model. The returned result contains information of the first task to be executed. Alternatively or additionally, in some embodiments, terminal device 110 may also determine a first task that matches the notification message based on a predetermined rule.
As described above, after the task execution is completed, the first user may be notified of the execution result of the first task by the first digital assistant. For example, the terminal device 110 may notify the first user of the execution result of the first task by transmitting a notification message indicating the execution result of the first task in an interactive window of the first user with the first digital assistant.
Taking as an example that the task trigger condition is the first user receiving a notification message from the first service component, and that the terminal device 110 determines that the execution process of the first task requires additional input from the first user: if the notification message indicates that the first user needs to complete a project evaluation of a certain object, the terminal device 110 may determine that the task trigger condition is met in response to receiving the notification message. Terminal device 110 may determine, based on the notification message, that the first task is to formulate a project report for the object. The terminal device 110 may then actively create a project report draft based on the work output of the object in the last half year. The project report for the last half year of the object here may be determined by terminal device 110 based on multiple rounds of conversation between the first digital assistant and the first user. Terminal device 110 may send a notification message prompting completion of the task, with the project report attached, in the first user's interactive window with the first digital assistant.
Taking as an example that the task trigger condition is the first user receiving a notification message from the first service component, and that the terminal device 110 determines that the execution process of the first task does not require additional input from the first user: if the notification message indicates that a new client intention has been found and requests the first user to follow up, terminal device 110 may determine that the task trigger condition is satisfied in response to receiving the notification message. The terminal device 110 may determine, based on the notification message, that the first task is to formulate an email to send to the new client. The terminal device 110 may sort the new client's materials and draft an email according to a predetermined flow. The materials of the new client may be obtained by the terminal device 110 from a business object associated with the first user in at least one third business component. The terminal device 110 may send a notification message prompting completion of the task in the first user's interactive window with the first digital assistant, with the new client's materials and the email draft attached.
In summary, according to embodiments of the present disclosure, the terminal device 110 may, in response to a task trigger condition related to the digital assistant being satisfied, actively execute the task indicated by the task trigger condition and notify the user of the execution result. In this way, the intelligence and autonomy of the digital assistant can be improved, and the flexibility and working efficiency of the digital assistant can be improved.
It should be appreciated that some embodiments of the above-described process 200 and process 300 may be implemented together or separately at a user's terminal device 110.
Fig. 4 illustrates a block diagram of an apparatus 400 for task processing, according to some embodiments of the present disclosure. The apparatus 400 may be implemented in or included in the terminal device 110, for example. The various modules/components in apparatus 400 may be implemented in hardware, software, firmware, or any combination thereof.
As shown, the apparatus 400 includes a message detection module 410 configured to detect a first message sent by a first digital assistant corresponding to at least a first user, the first message indicating a target task. The apparatus 400 further comprises an association determination module 420 configured to determine an association of the target task indicated by the first message with the second user. The apparatus 400 further includes a task execution module 430 configured to execute at least a portion of the target task with a second digital assistant corresponding to the second user based on the first message in response to determining that the target task is associated with the second user.
In some embodiments, the message detection module 410 is further configured to: in a group chat session, a session message is received from a first digital assistant, wherein the group chat session includes at least the first digital assistant, a first user, and a second user.
In some embodiments, the first digital assistant corresponds to a plurality of users including the first user, and the second user is one of the plurality of users.
In some embodiments, the relevance determination module 420 is further configured to: the relevance of the target task to the second user is determined based on the content or attributes of the first message.
In some embodiments, the relevance determination module 420 is further configured to: the method may further include determining that the target task is associated with the second user in response to the first message including a reference symbol to the second user or the second digital assistant, determining that the target task is associated with the second user in response to the first message being an announcement message including a group chat session of the second user or the second digital assistant, or determining that the target task is associated with the second user in response to the first message being sent directed to a group including the second user.
In some embodiments, the relevance determination module 420 is further configured to: in response to determining that the first digital assistant and the second digital assistant have an association with the target task, the target task is determined to be associated with the second user.
In some embodiments, the apparatus 400 further comprises: a task determination module configured to determine a target task using the first digital assistant based on at least one of: at least one session message in the group chat session, or a workflow associated with the group chat session; a messaging module is configured to send a first message indicating a target task in a group chat session using a first digital assistant.
In some embodiments, the apparatus 400 further comprises: an association setting module configured to set content or attributes of the first message to indicate that the target task is associated with the second user in response to the first digital assistant and the second digital assistant having an association relationship with the target task.
In some embodiments, the association includes at least one of: the first digital assistant and the second digital assistant belong to a workflow comprising a target task, or the second digital assistant issues a trigger request to the first digital assistant for one or more tasks, including the target task.
Fig. 5 illustrates a block diagram of an apparatus 500 for task processing according to some embodiments of the present disclosure. The apparatus 500 may be implemented or included in the terminal device 110, for example. The various modules/components in apparatus 500 may be implemented in hardware, software, firmware, or any combination thereof.
As shown, the apparatus 500 includes an information acquisition module 510 configured to cause a second digital assistant to acquire first event information triggered by a first digital assistant. The apparatus 500 further comprises an execution processing module 520 configured to cause the second digital assistant to execute the first task based on the first event information and to output first information related to the first task.
In some embodiments, one or more of the following are satisfied: the second digital assistant and the first digital assistant are within the same business entity; the second digital assistant focuses on the first digital assistant; the second digital assistant subscribes to at least some of the functionality of the first digital assistant.
In some embodiments, the second digital assistant is a digital assistant associated with the second user, the second digital assistant outputting first information related to the first task to the second user.
In some embodiments, the first digital assistant is a digital assistant associated with the first user; or the first digital assistant is a digital assistant of the first business component.
It should be appreciated that the above apparatus 400 and/or apparatus 500 may be implemented in the same apparatus or device. One or more steps of the processes discussed above may be performed by an appropriate electronic device or combination of electronic devices. Such an electronic device or combination of electronic devices may include, for example, server 130, terminal device 110, and/or a combination of server 130 and terminal device 110 in fig. 1.
Fig. 6 illustrates a block diagram of an electronic device 600 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 600 illustrated in fig. 6 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The electronic device 600 shown in fig. 6 may be used to implement the terminal device 110 of fig. 1, the apparatus 400 shown in fig. 4 and/or the apparatus 500 shown in fig. 5.
As shown in fig. 6, the electronic device 600 is in the form of a general-purpose electronic device. The components of electronic device 600 may include, but are not limited to, one or more processors or processing units 610, memory 620, storage 630, one or more communication units 640, one or more input devices 650, and one or more output devices 660. The processing unit 610 may be an actual or virtual processor and is capable of performing various processes according to programs stored in the memory 620. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of electronic device 600.
The electronic device 600 typically includes a number of computer storage media. Such media may be any available media accessible by electronic device 600, including but not limited to volatile and non-volatile media, removable and non-removable media. The memory 620 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. Storage device 630 may be removable or non-removable media, and may include machine-readable media such as flash drives, magnetic disks, or any other media capable of storing information and/or data and accessible within electronic device 600.
The electronic device 600 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 6, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 620 may include a computer program product 625 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 640 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 600 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communication connection. Thus, the electronic device 600 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 650 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 660 may be one or more output devices such as a display, speakers, printer, etc. The electronic device 600 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., with one or more devices that enable a user to interact with the electronic device 600, or with any device (e.g., network card, modem, etc.) that enables the electronic device 600 to communicate with one or more other electronic devices, as desired, via the communication unit 640. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions are executed by a processor to implement the method described above is provided. According to an exemplary implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (17)

CN202311569312.XA | 2023-11-22 | 2023-11-22 | Method, apparatus, device and storage medium for task processing | Pending | CN119003108A (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202311569312.XA (CN119003108A) | 2023-11-22 | 2023-11-22 | Method, apparatus, device and storage medium for task processing
PCT/CN2024/131404 (WO2025108132A1) | 2023-11-22 | 2024-11-11 | Method and apparatus for task processing, device, and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202311569312.XA (CN119003108A) | 2023-11-22 | 2023-11-22 | Method, apparatus, device and storage medium for task processing

Publications (1)

Publication Number | Publication Date
CN119003108A | 2024-11-22

Family

ID=93487736

Family Applications (1)

Application Number | Priority Date | Filing Date
CN202311569312.XA (CN119003108A, pending) | 2023-11-22 | 2023-11-22

Country Status (2)

Country | Link
CN | CN119003108A (en)
WO | WO2025108132A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107636710A (en)* | 2015-05-15 | 2018-01-26 | Apple Inc. | Virtual assistant in a communication session
CN110622126A (en)* | 2017-05-15 | 2019-12-27 | Google LLC | Providing access to user-controlled resources through automated assistant
CN110710170A (en)* | 2017-06-29 | 2020-01-17 | Google LLC | Proactive provision of new content to group chat participants
CN110720090A (en)* | 2017-07-07 | 2020-01-21 | Google LLC | Invoking an automated assistant to perform multiple tasks through individual commands
US20210249009A1 (en)* | 2020-02-12 | 2021-08-12 | Apple Inc. | Digital assistant interaction in a video communication session environment
US20230058929A1 (en)* | 2021-08-13 | 2023-02-23 | Apple Inc. | Digital assistant interaction in a communication session
CN116775344A (en)* | 2023-07-31 | 2023-09-19 | Beijing Zitiao Network Technology Co., Ltd. | Method, apparatus, device and storage medium for interaction

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10956666B2 (en)* | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions
US10824932B2 (en)* | 2016-04-29 | 2020-11-03 | Microsoft Technology Licensing, LLC | Context-aware digital personal assistant supporting multiple accounts
US11538478B2 (en)* | 2020-12-07 | 2022-12-27 | Amazon Technologies, Inc. | Multiple virtual assistants

Also Published As

Publication number | Publication date
WO2025108132A1 (en) | 2025-05-30

Similar Documents

Publication | Title
US9824335B1 (en) | Integrated calendar and conference application for document management
WO2021076335A1 (en) | Contextual meeting participant suggestion platform
US20240385860A1 (en) | Method, device, and storage medium for information interaction
CN118520954A (en) | Method, apparatus, device and storage medium for information interaction
CN118780392A (en) | Method, device, equipment and storage medium for information interaction
CN119011519A (en) | Method, apparatus, device and storage medium for message processing
CN119003108A (en) | Method, apparatus, device and storage medium for task processing
US9628629B1 (en) | Providing conference call aid based on upcoming deadline
CN119003107A (en) | Method, apparatus, device and storage medium for task processing
CN119003109A (en) | Method, apparatus, device and storage medium for task processing
US20250165127A1 (en) | Information interaction
CN119003106A (en) | Method, apparatus, device and storage medium for task processing
CN119002980B (en) | Method, device, equipment and storage medium for model configuration
US20250166634A1 (en) | Method, apparatus, device, and storage medium for information interaction
US20250168133A1 (en) | Method, apparatus, device and storage medium for conversation interaction
CN119002743A (en) | Method, apparatus, device and storage medium for information interaction
US20250013478A1 (en) | Method, apparatus, device, and storage medium for processing information
US20250165273A1 (en) | Information interaction
CN119002736B (en) | Method, apparatus, device, and storage medium for application operation
CN119916981A (en) | Method, device, equipment and storage medium for conversation interaction
CN119916980A (en) | Method, device, equipment and storage medium for information interaction
US20250307555A1 (en) | Information extraction
US11831694B2 (en) | Method and system for content management for a virtual meeting
CN120579656A (en) | Method, apparatus, device and storage medium for transaction processing
US20250165264A1 (en) | Method, apparatus, device and storage medium for information interaction

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
