Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
First, key technical term definitions related to the embodiments of the present application are explained.
Behavior tree.
A behavior tree is a decision tree representing a hierarchy of tasks and subtasks, in which task execution flow and logic judgments are managed through behavior nodes and node information (also called condition nodes). As a task decomposition and decision management tool, the behavior tree provides clear, intuitive task execution logic and dynamic adjustment capability through its hierarchical behavior nodes and condition nodes.
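As an illustrative sketch of the structure just defined (all class and field names here are hypothetical, not part of the application), a minimal behavior tree with hierarchical behavior nodes and condition checks might look like:

```python
# Minimal behavior-tree sketch: a sequence node runs child subtasks in order,
# and each task node carries node information (an execution condition).
# All names here are illustrative, not taken from the application itself.

class TaskNode:
    def __init__(self, name, condition=None):
        self.name = name                  # subtask represented by this tree node
        self.condition = condition        # node information: execution condition

    def tick(self, world):
        # Run the subtask only if its execution condition holds.
        if self.condition and not self.condition(world):
            return "FAILURE"
        world.setdefault("done", []).append(self.name)
        return "SUCCESS"

class SequenceNode:
    def __init__(self, children):
        self.children = children

    def tick(self, world):
        # Children execute in order; the sequence fails on the first failure.
        for child in self.children:
            if child.tick(world) != "SUCCESS":
                return "FAILURE"
        return "SUCCESS"

world = {"door_open": True}
tree = SequenceNode([
    TaskNode("navigate_to_door"),
    TaskNode("pass_through_door", condition=lambda w: w["door_open"]),
])
print(tree.tick(world))  # SUCCESS; world["done"] lists both subtasks
```

Ticking the root walks the hierarchy, which is what gives the tree its visual task logic and a natural place to hook in dynamic adjustment.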
Intent large model.
The intent large model is a deep learning model that uses natural language processing techniques to extract and understand a user's intent from the user's instructions. The intent large model can extract complex intent information from user input, thereby making human-computer interaction more natural and effective.
Modular control architecture.
The modular control architecture is a design method that divides a robot control system into a plurality of independent functional modules, simplifying system design and maintenance by decomposing the system into independent functional modules such as navigation, operation, and perception. Each module in the modular control architecture can be independently developed, tested, and optimized, and the modules communicate and cooperate through standard interfaces.
Next, an overall concept of a control method of a robot provided by an embodiment of the present application will be described.
Robot task planning and execution has always played an important role in contemporary artificial intelligence and automation technology. In the related art, a robot control system generally uses a single instruction set to control a robot to perform task planning in response to user input, and uses a fixed task execution sequence to control the robot to perform task execution, but this approach lacks flexibility and adaptability in task adjustment and execution. As the application scenarios of robots become more and more complex, such a robot control system will find it difficult to control the robot to smoothly perform task planning and execution, owing to its lack of flexible task adjustment and execution strategies.
Based on the above, the embodiment of the application provides a control method, equipment, medium and program product of a robot, which aim to improve the flexibility and accuracy of task planning and execution of the robot, so that the robot can adapt to dynamically-changed complex scenes to perform efficient work.
The method comprises: generating a behavior tree based on task intention information corresponding to a user instruction, where the task intention information is used for representing the intention of a user to instruct a robot to execute an operation task, tree nodes of the behavior tree are used for representing subtasks of the operation task executed by the robot, and node information of each tree node is used for representing execution conditions of the subtask corresponding to that tree node; controlling the robot to execute the operation task based on the behavior tree; receiving task execution state information fed back by the robot in the process of executing the operation task; and, when the task execution state information represents that the operation task has an execution abnormality, performing a dynamic adjustment operation on the behavior tree to obtain an updated behavior tree, and controlling the robot to continue executing the operation task based on the updated behavior tree, where the dynamic adjustment operation comprises adjustment of the subtasks represented by the tree nodes of the behavior tree and/or adjustment of the execution conditions represented by the node information of the tree nodes.
In this way, compared with controlling the robot to perform task planning and execution with a single instruction set and a fixed execution sequence in response to user input, the embodiment of the application converts the task intention information corresponding to the user instruction into tasks in a behavior tree, so that the robot is controlled, through the behavior tree, to execute the operation task indicated by the user's intention. Then, task execution state information fed back by the robot in the process of executing the operation task is received; when the task execution state information characterizes that the robot's operation task has an execution abnormality, the behavior tree is dynamically adjusted to obtain an updated behavior tree, and the robot is controlled to continue executing the operation task through the updated behavior tree. The dynamic adjustment of the behavior tree mainly adjusts the subtasks represented by the tree nodes of the behavior tree and/or the execution conditions represented by the node information of the tree nodes, that is, it dynamically adjusts the execution strategies and/or execution sequences of the operation task. Therefore, robot task planning and execution in response to the user instruction can be realized more flexibly and accurately, and even in increasingly complex scenarios, the strategy and/or sequence of the operation task can be dynamically adjusted, so that the operation task can be planned and executed with high adaptability. That is, the embodiment of the application improves the flexibility and accuracy of controlling the robot to carry out task planning and execution, so that the robot can adapt to dynamically changing complex scenes and work efficiently.
Next, a method, apparatus, medium, and program product for controlling a robot according to the embodiments of the present application will be specifically described by the following embodiments, and first, a method for controlling a robot according to the embodiments of the present application will be described in detail.
The embodiment of the application provides a control method of a robot, and relates to the technical field of robots. The control method of the robot provided by the embodiment of the application can be applied to the terminal, can be applied to the server side, and can also be software running in the terminal or the server side. In some embodiments, the terminal may be a terminal configured by the robot itself, or may be an electronic device such as a smart phone, a tablet computer, a notebook computer, or a desktop computer associated with the robot, where associating the terminal with the robot means that the terminal may perform communication data interaction with the robot based on a network. The server side can be a background server terminal device of the robot, can be configured as an independent physical server, can be configured as a server cluster or a distributed system formed by a plurality of physical servers, and can be configured as a cloud server for providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution networks (Content Delivery Network, CDNs), basic cloud computing services such as big data and artificial intelligent platforms and the like. The software may be an application implementing a control method of the robot, a computer program, a storage medium carrying the computer program, or the like. It should be understood that, based on different design requirements of practical applications, in different possible embodiments, the terminal, the server side, the software, etc. applying the control method of the robot provided by the embodiment of the present application may be other forms not listed herein, and the control method of the robot provided by the embodiment of the present application is not limited specifically.
Furthermore, the application is operational with numerous general purpose or special purpose computer system environments or configurations. Such as robots, personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, personal computers (Personal Computer, PCs), minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
For easy understanding and explanation, the control method of the robot provided by the embodiment of the application is taken as an example for the electronic equipment configured by the robot, and each specific embodiment of the application is described in detail. The implementation of the control method of the robot provided by the embodiment of the application can refer to the process of the control method of the robot applied by the electronic equipment explained below.
It should be noted that, in each specific embodiment of the present application, when related processing is required according to user information, user behavior data, user history data, user location information, and other data related to user identity or characteristics, permission or consent of the user is obtained first, and the collection, use, processing, and the like of the data comply with related laws and regulations and standards. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the user, the independent permission or independent consent of the user is acquired through popup or jump to a confirmation page and the like, and after the independent permission or independent consent of the user is definitely acquired, the necessary relevant data of the user for enabling the embodiment of the application to normally operate is acquired.
Referring to fig. 1, fig. 1 is a schematic flow chart of steps in some embodiments of a control method of a robot according to an embodiment of the present application. It should be understood that, although fig. 1 and the subsequent other step flowcharts illustrate the execution sequence of some method steps, the control method of the robot provided in the embodiment of the present application may certainly use different execution sequences of the method steps illustrated in the drawings based on different design needs of practical applications. That is, the sequence of the steps of the method shown in fig. 1 does not constitute a limitation on the execution logic sequence of the control method of the robot provided in the embodiment of the present application, and any other reasonable change based on the sequence of the steps of the method shown in fig. 1 should be included in the protection scope of the control method of the robot provided in the embodiment of the present application.
As shown in fig. 1, in some embodiments, the electronic device may include steps S101 to S104 by applying the method for controlling a robot provided in the embodiments of the present application.
Step S101, generating a behavior tree based on task intention information corresponding to a user instruction, wherein the task intention information is used for representing the intention of the user instructing the robot to execute an operation task, tree nodes of the behavior tree are used for representing subtasks of the operation task executed by the robot, and node information of each tree node is used for representing execution conditions of the subtask corresponding to that tree node.
After receiving an instruction (user instruction) issued by a user for the robot, the electronic device obtains an intention (task intention information) of the user for instructing the robot to execute an operation task by intention understanding of the user instruction. Then, the electronic device generates a behavior tree based on the task intention information, that is, the electronic device characterizes subtasks of the operation task performed by the robot through tree nodes of the behavior tree, and characterizes execution conditions of the subtasks corresponding to the tree nodes through node information of the respective tree nodes of the behavior tree.
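The conversion just described, from structured task intention information to a behavior tree whose nodes carry subtasks and execution conditions, can be sketched as follows; the intent format and helper name are assumptions for illustration only:

```python
# Illustrative sketch: turning structured task-intent information into a
# behavior tree. Each subtask in the intent becomes a tree node, and each
# subtask's precondition becomes that node's node information. The dict
# layout and function name are hypothetical, not the application's API.

def generate_behavior_tree(task_intent):
    return {
        "task": task_intent["task"],
        "children": [
            {"subtask": st["name"], "condition": st.get("condition")}
            for st in task_intent["subtasks"]
        ],
    }

intent = {
    "task": "fetch_cup",
    "subtasks": [
        {"name": "navigate_to_kitchen", "condition": "path_clear"},
        {"name": "grasp_cup", "condition": "cup_visible"},
    ],
}
bt = generate_behavior_tree(intent)
print(bt["children"][0]["subtask"])  # navigate_to_kitchen
```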
In some embodiments, the user instruction may be a natural voice instruction issued by the user for the robot. The electronic device may receive, through a microphone of the robot, a voice command input to the robot by a user through natural language.
In other embodiments, the user instruction may also be a text instruction issued by the user for the robot. The electronic equipment can receive a text instruction input by a user for instructing the robot to execute an operation task through a visual human-computer interaction interface provided by the robot. Or the electronic equipment can also receive a text instruction input by a user on a visual man-machine interaction interface provided by the terminal equipment through the terminal equipment which is in communication connection with the robot.
The electronic device can perform intention recognition on the user instruction through the intent large model, so as to obtain task intention information of the user instructing the robot to execute an operation task.
In some embodiments, the technical architecture of the control method of the robot provided by the embodiment of the application may include an intent analysis module. The intent analysis module may take the user instruction as input and then parse the user instruction using natural language processing techniques to extract the user intent and translate it into a structured task description (task intention information).
It should be noted that the intent analysis module, the task planning module, the dynamic adjustment module, the navigation module, the operation module, and the perception module mentioned later in the technical architecture of the control method of the robot provided by the embodiment of the application communicate and cooperate through standard interfaces to form an overall solution. The intent analysis module analyzes the user instruction to generate task intention information, and the task planning module then converts the task intention into a behavior tree. The dynamic adjustment module optimizes the behavior tree in real time according to environment and task feedback, the navigation module and the operation module concretely execute the task, and the perception module provides environment data support for the intent analysis module, the dynamic adjustment module, and the like.
In some embodiments, the method for controlling a robot provided by the embodiment of the present application may further include the following steps:
inputting the obtained user instruction into a preset user intention big model, analyzing the user instruction through the user intention big model and outputting task intention information corresponding to the user instruction.
After the electronic equipment acquires an instruction input by a user for instructing the robot to execute an operation task, the user instruction is input into a pre-trained user intention big model, so that analysis and intention judgment are carried out on the user instruction through the user intention big model, and then task intention information corresponding to the user instruction is output.
It should be noted that the user intention large model may be obtained by the electronic device performing model training based on constructed user samples. Each user sample includes a user instruction sample and a user intention sample corresponding to the user instruction sample. For example, the electronic device performs model training on the intention model through the user samples in advance, that is, performs intention judgment on the user instruction samples through the intention model, and continuously adjusts and optimizes the model parameters of the intention model based on the user intention samples corresponding to the user instruction samples, so that the user intention output by the intention model is substantially consistent with the user intention samples.
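As a toy illustration of the supervised training loop described above (a real intent large model would be a neural network trained by gradient descent; this keyword-count "model" and its sample data are stand-ins only):

```python
# Toy illustration of the described training scheme: "parameters" are
# adjusted from labelled (instruction sample, intent sample) pairs, and
# the trained model then predicts an intent for new instructions.
# Samples, labels, and the counting "model" are all illustrative.

samples = [
    ("go to the kitchen", "navigation"),
    ("pick up the red cup", "manipulation"),
    ("move to the charging dock", "navigation"),
]

# "Parameters": per-word intent weights, updated from each labelled sample.
weights = {}
for text, label in samples:
    for word in text.split():
        weights.setdefault(word, {}).setdefault(label, 0)
        weights[word][label] += 1

def predict_intent(text):
    # Score each intent label by the accumulated weights of the words seen.
    scores = {}
    for word in text.split():
        for label, w in weights.get(word, {}).items():
            scores[label] = scores.get(label, 0) + w
    return max(scores, key=scores.get) if scores else None

print(predict_intent("go to the dock"))  # navigation
```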
In some embodiments, considering generalization capability of the large model, in the process of training the large model of user intention, the electronic device can introduce not only two intentions of a user for indicating the robot to execute a navigation task and execute a specific action task, but also more fine-grained intentions according to the scene of the robot for executing the task, thereby helping the robot to complete the task more accurately. It should be understood that, based on different design needs of practical applications, the electronic device may of course introduce specific different intents for understanding intention information of a user for instructing the robot to execute an operation task when designing the intention model, and the control method of the robot provided by the embodiment of the present application is not limited to specific types of fine-grained intents introduced by the electronic device in the intention model.
Step S102, controlling the robot to execute the operation task based on the behavior tree.
Immediately after generating the behavior tree based on the task intention information corresponding to the user instruction, the electronic device controls the robot, based on the behavior tree, to execute the operation task that the user instructed it to execute. For example, the electronic device sequentially controls the robot to execute the subtasks represented by each tree node in the whole behavior tree according to each tree node of the behavior tree and the node information of that tree node, until the goal of the operation task represented by the whole behavior tree is completed or the maximum number of execution steps of the operation task is reached.
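The execution loop just described, which runs subtasks in tree order until the goal is completed or a maximum number of execution steps is reached, might be sketched as follows (function and variable names are illustrative):

```python
# Sketch of the step-S102 execution loop: subtasks run in tree order,
# stopping when the goal is completed or a step budget is exhausted.
# Names and the flat subtask list are illustrative simplifications.

MAX_STEPS = 100

def execute_behavior_tree(subtasks, run_subtask, max_steps=MAX_STEPS):
    steps = 0
    for subtask in subtasks:
        if steps >= max_steps:
            return "max_steps_reached"
        run_subtask(subtask)   # robot executes this tree node's subtask
        steps += 1
    return "goal_completed"

executed = []
result = execute_behavior_tree(["navigate", "grasp", "deliver"], executed.append)
print(result, executed)  # goal_completed ['navigate', 'grasp', 'deliver']
```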
Step S103, task execution state information fed back by the robot in the process of executing the operation task is received.
After the electronic device controls the robot to execute the operation task based on the behavior tree, the robot continuously feeds back task execution state information of the currently executed task to the electronic device in the process of executing the operation task. And the electronic equipment receives the task execution state information fed back by the robot.
The task execution state information is information reflecting the task execution condition of the task currently executed by the robot (such as execution success, execution failure, or a subtask being executed). Each time the robot executes the subtask represented by one tree node in the behavior tree, the robot can feed back task execution state information to the electronic equipment.
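One possible shape of the task execution state feedback described above is sketched below; the field names and state values are assumptions for illustration, not the application's actual message format:

```python
# Hypothetical per-node feedback message and the anomaly check used in
# step S104: "failure" or "blocked" counts as an execution abnormality
# that should trigger dynamic adjustment of the behavior tree.

NORMAL_STATES = {"success", "executing"}

def is_execution_abnormal(feedback):
    return feedback["status"] not in NORMAL_STATES

feedback = {"node": "navigate_to_kitchen", "status": "failure",
            "reason": "obstacle on path"}
print(is_execution_abnormal(feedback))  # True
```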
Step S104, under the condition that the task execution state information represents that the operation task has the execution abnormality, carrying out dynamic adjustment operation on the behavior tree to obtain an updated behavior tree, and controlling the robot to continue to execute the operation task based on the updated behavior tree, wherein the dynamic adjustment operation comprises the steps of adjusting subtasks represented by tree nodes of the behavior tree and/or adjusting execution conditions represented by node information of the tree nodes.
It should be noted that, in the process of controlling the robot to execute the operation task by the electronic device, if the robot fails to execute the currently executed subtask or there is a failure risk due to the self-cause or the change of the external environment, the task execution state information fed back by the robot to the electronic device at this time indicates that the operation task has abnormal execution.
After receiving task execution state information fed back by the robot, if task execution state information fed back by the robot in real time currently received by the electronic device indicates that an operation task has execution abnormality, the electronic device immediately carries out dynamic adjustment operation on a currently used behavior tree to obtain an updated behavior tree, and then the electronic device controls the robot to continue to execute the operation task executed by the user-indicated robot based on the updated behavior tree until the goal of the operation task is completed or the maximum execution step number of the operation task is reached. The dynamic adjustment operation of the electronic device for the used behavior tree can be that only subtasks represented by the tree nodes of the behavior tree are adjusted, only execution conditions represented by the node information of the tree nodes are adjusted, or the subtasks represented by the tree nodes and the execution conditions represented by the node information of the tree nodes are adjusted at the same time.
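The three dynamic adjustment modes just listed, adjusting a node's subtask, adjusting its execution condition (node information), or both, can be sketched as follows; the node structure and function name are hypothetical:

```python
# Sketch of the three dynamic-adjustment modes: replace a tree node's
# subtask, replace its execution condition, or replace both at once.
# The dict-based node and the helper are illustrative only.

def adjust_node(node, new_subtask=None, new_condition=None):
    adjusted = dict(node)  # leave the original node untouched
    if new_subtask is not None:
        adjusted["subtask"] = new_subtask       # adjust the represented subtask
    if new_condition is not None:
        adjusted["condition"] = new_condition   # adjust the execution condition
    return adjusted

node = {"subtask": "pass_door", "condition": "door_open"}
print(adjust_node(node, new_subtask="open_door_first"))
print(adjust_node(node, new_condition="door_unlocked"))
print(adjust_node(node, "open_door_first", "door_unlocked"))
```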
The electronic device may specifically control the robot to perform task planning and execution in a complex scenario according to the following detailed technical steps, so that the robot adapts to a dynamically changing environment and smoothly performs the operation task indicated by the user. That is, the electronic device first performs an initial behavior tree generation operation to generate an initial behavior tree BT = GenerateInitialBehaviorTree(I, E) according to the task intention information I obtained by analyzing the user instruction and the environment information E of the environment where the robot is located, so as to characterize the operation task that the user instructs the robot to perform. In the behavior tree BT, each tree node represents a task or subtask, and the relationships between nodes represent the order and conditions of task execution. In addition, before the initial behavior tree is generated, the electronic device may perform a context-aware operation to continuously collect environment data Et = PerceptionModule.getEnvironmentData() of the environment where the robot is located through the perception module. Then, in the process of controlling the robot to execute the operation task based on the behavior tree BT, the electronic device performs a task execution monitoring operation to monitor the execution of each task node Ti in the behavior tree BT, obtaining task execution feedback Ft = MonitorTaskExecution(Ti). The electronic device performs feedback analysis and adjustment operations on the acquired task execution feedback Ft, that is, it analyzes the execution of the current behavior tree according to the environment data Et and the task execution feedback Ft, and identifies the task nodes and sequences that need to be adjusted: AnalysisResult = AnalyzeFeedback(BT, Et, Ft). If the analysis result shows that adjustment is required, an operation of updating the behavior tree BT is performed: BT' = AdjustBehaviorTree(BT, AnalysisResult). When the electronic device performs the operation of updating the behavior tree BT, the task nodes and execution sequence in the behavior tree BT may be dynamically adjusted according to the analysis result and preset adjustment rules, so as to obtain an updated new behavior tree BT' = UpdateTree(BT, AnalysisResult). Then, the electronic device continues to control the robot to execute the adjusted behavior tree BT', ExecuteBehaviorTree(BT'), thereby causing the robot to continue executing the operation task instructed by the user. While the robot continues to execute the operation task, the electronic device still continues task execution monitoring, so that dynamic adjustment of the behavior tree is performed cyclically until the goal of the robot's operation task is achieved or the maximum number of execution steps is reached.
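The monitoring and adjustment cycle outlined above can be sketched as runnable pseudocode; the bodies given here to the named operations (GenerateInitialBehaviorTree, MonitorTaskExecution, AnalyzeFeedback, UpdateTree) are illustrative stand-ins, not the application's implementations:

```python
# Runnable sketch of the generate -> execute -> monitor -> analyze ->
# update cycle. Every function body below is a simplified stand-in.

def generate_initial_behavior_tree(intent, env):
    # BT = GenerateInitialBehaviorTree(I, E)
    return {"subtasks": list(intent["subtasks"]), "done": []}

def monitor_task_execution(subtask, env):
    # Ft = MonitorTaskExecution(Ti): fails when the environment blocks Ti.
    return "failure" if env.get("blocked") == subtask else "success"

def analyze_feedback(bt, env, feedback):
    # AnalysisResult = AnalyzeFeedback(BT, Et, Ft)
    return {"adjust": feedback == "failure"}

def update_tree(bt, subtask):
    # BT' = UpdateTree(BT, AnalysisResult): swap in a detour subtask.
    i = bt["subtasks"].index(subtask)
    bt["subtasks"][i] = subtask + "_detour"
    return bt

def execute_behavior_tree(bt, env, max_steps=20):
    steps = 0
    while bt["subtasks"] and steps < max_steps:
        subtask = bt["subtasks"][0]
        ft = monitor_task_execution(subtask, env)
        if analyze_feedback(bt, env, ft)["adjust"]:
            bt = update_tree(bt, subtask)          # dynamic adjustment
        else:
            bt["done"].append(bt["subtasks"].pop(0))
        steps += 1
    return bt

env = {"blocked": "cross_hall"}
bt = generate_initial_behavior_tree({"subtasks": ["cross_hall", "grasp"]}, env)
bt = execute_behavior_tree(bt, env)
print(bt["done"])  # ['cross_hall_detour', 'grasp']
```

The loop keeps monitoring after each adjustment, matching the cyclic behavior described: the blocked subtask is replaced, then execution resumes until the task list is empty or the step budget runs out.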
In the embodiment of the application, the task intention information corresponding to the user instruction is converted into tasks in a behavior tree, so that the robot is controlled, through the behavior tree, to execute the operation task indicated by the user's intention. Then, task execution state information fed back by the robot in the process of executing the operation task is received; when the task execution state information characterizes that the robot's operation task has an execution abnormality, the behavior tree is dynamically adjusted to obtain an updated behavior tree, and the robot is controlled to continue executing the operation task through the updated behavior tree. The dynamic adjustment of the behavior tree mainly adjusts the subtasks represented by the tree nodes of the behavior tree and/or the execution conditions represented by the node information of the tree nodes, that is, it dynamically adjusts the execution strategies and/or execution sequences of the operation task. Therefore, robot task planning and execution in response to the user instruction can be realized more flexibly and accurately, and even in increasingly complex scenarios, the strategy and/or sequence of the operation task can be dynamically adjusted, so that the operation task can be planned and executed with high adaptability and environmental changes can be responded to in real time, making the method suitable for complex scenarios. That is, the embodiment of the application improves the flexibility and accuracy of controlling the robot to carry out task planning and execution, so that the robot can adapt to dynamically changing complex scenes and work efficiently.
In addition, in the embodiment of the application, the electronic device performs intention recognition on the user instruction using the intent large model, thereby obtaining task intention information of the user instructing the robot to execute the operation task; it then generates and constructs a behavior tree according to the different intents in the task intention information, and controls the robot to execute the operation task indicated by the user by executing the behavior tree. Therefore, the embodiment of the application significantly improves the flexibility and accuracy of robot task planning and execution by combining the intent large model with the behavior tree.
In the process of controlling the robot to execute the operation task indicated by the user, the electronic device can dynamically adjust the behavior tree according to the task execution state information fed back by the robot, which represents the task execution condition, and according to changes in the environmental state of the environment where the robot is located.
In some embodiments, the method for controlling a robot provided by the embodiment of the present application may further include the following steps:
Acquiring first environment information of the environment where the robot is located.
It should be noted that the first environment information characterizes environmental factors whose changes influence the robot's performance of the operation task, for example, an obstacle appearing on the robot's navigation path.
In the process that the electronic equipment controls the robot to execute the operation task based on the behavior tree, the robot continuously collects first environment information of the environment where the robot is located through an environment sensing device (such as an image acquisition device, a microphone, a temperature sensor, a pressure sensor, or the like) and uploads the first environment information to the electronic equipment, and the electronic equipment obtains the first environment information.
In some embodiments, the electronic device may obtain the first environmental information using a perception module in the above-described technical architecture. That is, the electronic device takes environmental data of the environment where the robot is located, obtained by acquiring environmental information of the robot or a terminal device associated with the robot, as input through the sensing module, and then processes the environmental data to output first environmental information of the environment where the robot is located.
It should be noted that the sensing module used by the electronic device may collect environmental data through sensors, so as to provide real-time environment information for the robot's task planning, execution, and dynamic adjustment in support of task execution.
In some embodiments, in the step S104, the step of performing the dynamic adjustment operation on the behavior tree to obtain the updated behavior tree may include the following steps:
Dynamically adjusting at least one tree node of the behavior tree based on the first environment information.
When the electronic equipment receives task execution state information fed back by the robot in real time and the task execution state information characterizes that the operation task has an execution abnormality, the electronic equipment dynamically adjusts the currently used behavior tree based on the first environment information collected and uploaded by the robot through the environment sensing device at the current moment, so as to obtain an updated behavior tree. For example, based on the first environment information, the electronic device adjusts only the subtasks represented by the tree nodes of the currently used behavior tree, or adjusts only the execution conditions of subtasks represented by the node information of the tree nodes, or adjusts both the subtasks represented by the tree nodes and the execution conditions represented by the node information of the tree nodes.
In some embodiments, the electronic device may dynamically adjust at least one tree node of the behavior tree through a dynamic adjustment module in the above technical architecture. For example, the electronic device takes the behavior tree used for controlling the robot to execute the operation task and the first environment information as input of the dynamic adjustment module, so that the dynamic adjustment module dynamically adjusts the task execution sequence and strategy in the behavior tree according to the real-time environmental feedback reflected by the first environment information, and outputs the updated behavior tree.
Referring to fig. 2, fig. 2 is a flowchart illustrating steps involved in a method for controlling a robot according to an embodiment of the present application.
As shown in fig. 2, in some embodiments, the step of performing the dynamic adjustment operation on the at least one tree node of the behavior tree based on the first environmental information may include steps S201 to S203 as follows.
Step S201, performing state evaluation on the subtasks represented by the tree nodes in the behavior tree based on the first environment information, to obtain the execution states of the subtasks represented by the tree nodes.
When the electronic device dynamically adjusts the behavior tree of the robot based on the first environment information, it first performs state evaluation on the subtasks represented by the tree nodes in the behavior tree, according to whether the first environment information affects the robot's smooth execution of the subtask currently being executed, so as to obtain the execution states of the subtasks represented by the tree nodes.
The execution state obtained by the electronic device through the state evaluation may indicate that a subtask has been executed successfully, has failed, is blocked, or is being executed. For example, if the electronic device confirms, based on the first environment information, that the environment in which the robot is located is one in which the execution of the operation task can be completed once the current subtask is executed, the electronic device may evaluate the execution state of the subtask currently executed by the robot as successful. For another example, if the electronic device confirms, based on the first environment information, that the environment is one in which the current subtask is ready to be executed, the electronic device may evaluate the execution state as in progress. For another example, if the electronic device confirms, based on the first environment information, that there is an obstacle in the robot's environment that prevents the robot from successfully executing the current subtask, and the obstacle may cause the robot to fail to complete the subtask, the electronic device may evaluate the execution state as execution failure. For another example, if such an obstacle causes the robot to fail to even start executing the current subtask, the electronic device may evaluate the execution state as execution blocked.
The execution states of the subtasks obtained by the electronic device may be the result of uniformly performing the state evaluation on the subtasks respectively represented by all tree nodes of the behavior tree, indicating the execution condition of the subtask represented by each tree node. Alternatively, the execution state may be the result of performing the state evaluation separately on the subtask represented by a single tree node in the behavior tree, indicating only the execution condition of the subtask represented by that tree node.
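The four execution states described above can be sketched as a minimal evaluation routine; the boolean signals `started`, `finished` and `obstacle` are illustrative reductions of the first environment information, not names from the application:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1   # subtask executed successfully
    RUNNING = 2   # subtask is being executed
    FAILURE = 3   # obstacle appeared mid-execution: subtask cannot be completed
    BLOCKED = 4   # obstacle present before execution: subtask cannot start

def evaluate_subtask(started: bool, finished: bool, obstacle: bool) -> Status:
    """State evaluation of one subtask from reduced environment signals."""
    if finished and not obstacle:
        return Status.SUCCESS
    if obstacle:
        # before the subtask starts, an obstacle blocks it; afterwards it fails
        return Status.FAILURE if started else Status.BLOCKED
    return Status.RUNNING
```

In a real system the two boolean environment signals would be derived from the sensing module rather than passed in directly.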
Step S202, in the case that the execution state indicates that the execution of the target subtask represented by the target tree node in the tree nodes fails, determining the target tree node as a problem node.
After the electronic device performs the state evaluation on the subtasks represented by the tree nodes in the behavior tree, if the execution state of the subtasks represented by one or more target tree nodes is that task execution has failed, the electronic device determines the one or more target tree nodes as problem nodes that need dynamic adjustment.
In some embodiments, after performing the state evaluation, if the execution state of the subtasks represented by one or more target tree nodes is that task execution is blocked, the electronic device may likewise determine the one or more target tree nodes as problem nodes that need dynamic adjustment.
Step S203, performing a dynamic adjustment operation on the problem node.
After confirming one or more problem nodes that need dynamic adjustment in the behavior tree, the electronic device immediately performs the dynamic adjustment operation on them so as to obtain the updated behavior tree. The electronic device may adjust only the subtasks represented by a problem node, only the execution conditions of the subtasks represented by the node information of the problem node, or both at the same time.
It should be noted that the electronic device may adjust the subtask represented by a problem node by deleting the subtask and binding a newly re-planned subtask to the problem node. In addition, when adjusting the execution condition of the subtask represented by the node information of the problem node, the electronic device may adjust the task execution sequence or other logic in the execution condition.
In some embodiments, the electronic device may further split the subtask represented by the current problem node into a plurality of subtasks, newly generate tree nodes and their node information based on the split subtasks, and establish hierarchical relationships between the newly generated tree nodes and the upper-level and lower-level nodes of the current problem node, respectively.
In other embodiments, when performing the dynamic adjustment operation on the problem node, the electronic device may instead plan one or more new subtasks after the subtask represented by the current problem node, newly generate tree nodes and their node information based on the newly planned subtasks, and establish the hierarchical relationships between the current problem node and the newly generated tree nodes.
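The node-splitting adjustment described in the embodiments above can be sketched as follows; `TreeNode` and `split_problem_node` are hypothetical names, and a real behavior tree node would carry richer node information than a single condition string:

```python
class TreeNode:
    def __init__(self, subtask, condition, children=None):
        self.subtask = subtask        # subtask represented by the node
        self.condition = condition    # execution condition (node information)
        self.children = children or []

def split_problem_node(parent, problem, replacements):
    """Replace a problem node with newly generated nodes for the split
    subtasks, re-linking the parent to the first new node and the last
    new node to the problem node's former lower-level nodes."""
    idx = parent.children.index(problem)
    new_nodes = [TreeNode(t, c) for t, c in replacements]
    # chain the new nodes so the split subtasks execute in sequence
    for upper, lower in zip(new_nodes, new_nodes[1:]):
        upper.children.append(lower)
    new_nodes[-1].children.extend(problem.children)  # re-attach lower levels
    parent.children[idx] = new_nodes[0]
    return new_nodes
```

A usage example: splitting a failed "navigate to A" node into "plan path" followed by "move to A" keeps the node's original children attached below the last new node.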
When dynamically adjusting a problem node in the behavior tree, the electronic device can take into account the specific reason why the robot failed to execute the subtask represented by the problem node, so as to adjust the subtask and/or its execution condition in a targeted manner.
Based on this, in step S202 described above, after determining the target tree node as the problem node in the case where the execution state indicates that the execution of the target subtask represented by the target tree node in the tree nodes fails, the control method of the robot provided by the embodiment of the present application may further include the following steps:
And generating an adjustment strategy based on the failure reason of the target subtask represented by the problem node.
Before dynamically adjusting the problem node in the behavior tree, the electronic device further analyzes the failure cause of the target subtask represented by the problem node. For example, if the robot fails to execute the target subtask represented by the problem node due to an environmental change (such as a sudden obstacle), insufficient resources (such as insufficient power), or the like, the electronic device may take the environmental change, insufficient resources, or other factor as the failure cause.
Then, the electronic device generates a corresponding adjustment strategy based on the failure cause obtained by the analysis. For example, when the failure cause is an environmental change in which an obstacle suddenly appears in front of the robot, the electronic device may generate an adjustment strategy of re-planning the task (specifically, re-planning the navigation path of the robot). In addition, the electronic device may generate adjustment strategies of changing the task order, adding a new task, and the like, based on other failure causes.
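One way to sketch this mapping from failure causes to adjustment strategies is a simple lookup table; the cause and strategy identifiers below are illustrative, with full task re-planning as an assumed fallback for unrecognized causes:

```python
# Hypothetical mapping from analyzed failure causes to adjustment strategies.
STRATEGY_TABLE = {
    "ObstacleInPath":      "ReplanPath",       # re-plan the navigation path
    "InsufficientPower":   "ChangeTaskOrder",  # e.g. charge first, then continue
    "MissingPrecondition": "AddNewTask",       # insert a preparatory subtask
}

def generate_adjustment_strategy(failure_cause: str) -> str:
    """Generate the adjustment strategy corresponding to a failure cause,
    falling back to re-planning the whole task when the cause is unknown."""
    return STRATEGY_TABLE.get(failure_cause, "ReplanTask")
```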
Based on this, the step S203 of dynamically adjusting the problem node may include the following steps:
And carrying out dynamic adjustment operation on the problem node based on the adjustment strategy.
After the electronic device analyzes the failure reason of the target subtask represented by the problem node and generates the adjustment strategy corresponding to the failure reason, when the problem node is dynamically adjusted, the electronic device can dynamically adjust the problem node based on the adjustment strategy.
Referring to fig. 3, fig. 3 is a flowchart illustrating steps involved in a method for controlling a robot according to an embodiment of the present application for dynamically adjusting a behavior tree based on an adjustment policy in some embodiments.
As shown in fig. 3, in some embodiments, the step of performing the dynamic adjustment operation on the problem node based on the adjustment policy may include at least one of the following steps S301 to S303.
Step S301, under the condition that the adjustment strategy is to re-plan the task, switching the subtask represented by the problem node to the subtask obtained by re-planning.
When dynamically adjusting the problem node in the behavior tree based on the adjustment strategy, if the adjustment strategy the electronic device generated for the failure cause of the target subtask represented by the problem node is to re-plan the task, the electronic device switches the target subtask represented by the problem node to the newly re-planned subtask. For example, the electronic device deletes the target subtask and binds the newly planned subtask to the problem node. For another example, the electronic device splits the target subtask into a plurality of subtasks, newly generates tree nodes and their node information based on the split subtasks, and establishes the hierarchical relationships between the upper-level and lower-level nodes of the current problem node and the newly generated tree nodes, respectively.
Step S302, under the condition that the adjustment strategy is to change the task execution sequence, adjusting the task execution sequence in the execution condition represented by the node information of the problem node.
When dynamically adjusting the problem node in the behavior tree based on the adjustment strategy, if the adjustment strategy generated by the electronic device is to change the task execution sequence, the electronic device adjusts the execution condition of the target subtask represented by the node information of the problem node; that is, the electronic device adjusts the task execution sequence of the target subtask within the execution condition.
Step S303, under the condition that the adjustment strategy is to add a new task, adding a new subtask to the subtasks represented by the problem node, and adding the execution condition of the new subtask to the execution condition represented by the condition information of the problem node.
When dynamically adjusting the problem node in the behavior tree based on the adjustment strategy, if the generated adjustment strategy is to add a new task, the electronic device adds the new subtask among the target subtasks represented by the problem node, and at the same time adds the execution condition of the new subtask to the execution condition represented by the condition information of the problem node.
In some embodiments, if the adjustment policy generated by the electronic device is to add a new task, the electronic device may further plan one or more new subtasks after the subtasks represented by the current problem node, and then newly generate a tree node and node information of the tree node based on the subtasks obtained by the new planning, and also establish a hierarchical relationship between the current problem node and the newly generated tree node, respectively.
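Steps S301 to S303 can be sketched as a single dispatch function over a dictionary-based node; the strategy names and node fields here are assumptions for illustration, not terms fixed by the application:

```python
def adjust_problem_node(node: dict, strategy: str, payload) -> dict:
    """Apply one of the three adjustment operations to a problem node."""
    if strategy == "replan_task":       # S301: swap in the re-planned subtask
        node["subtask"] = payload
    elif strategy == "change_order":    # S302: reorder within the execution condition
        node["condition"]["order"] = payload
    elif strategy == "add_task":        # S303: add a new subtask and its condition
        node.setdefault("extra_subtasks", []).append(payload["subtask"])
        node["condition"].setdefault("extra", []).append(payload["condition"])
    return node
```

For example, `adjust_problem_node(node, "replan_task", "plan detour to A")` replaces the failed navigation subtask while leaving the node's other information intact.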
Referring to fig. 4, fig. 4 is a flowchart illustrating a dynamic adjustment strategy in the execution of a behavior tree according to some embodiments of the control method of a robot according to the present application.
As shown in fig. 4, in some embodiments, the process of building and dynamically adjusting the behavior tree referenced by the electronic device when the robot performs the operation task may be:
After generating an initial behavior tree based on intention judgment of the user instruction, the electronic device performs state evaluation on each tree node of the behavior tree while controlling the robot to execute the operation task based on the behavior tree, so as to determine the execution state (success, failure, in progress, blocked, etc.) of the subtask represented by each tree node, and then identifies problem nodes. That is, based on the results of the state evaluation of the tree nodes, the electronic device identifies the problem nodes at which a problem (execution failure or blockage) occurs while the robot performs the task. The electronic device then analyzes the failure causes of the subtasks represented by the problem nodes, which may be environmental changes, insufficient resources or other factors, so as to generate corresponding adjustment strategies (re-planning the task, changing the task sequence, adding a new task, etc.) according to the failure causes. Finally, the electronic device adjusts the problem nodes based on the generated adjustment strategies to update the behavior tree, and controls the robot to execute the newly adjusted behavior tree, thereby continuing to execute the operation task indicated by the user.
Illustratively, when generating an adjustment strategy based on analyzing the failure cause of a target subtask represented by a problem node, assume that the target subtask executed by the robot is "carry article A to position B", and that the subtask fails because the robot encounters an obstacle while executing it, resulting in a path planning failure. The electronic device may generate a corresponding adjustment strategy through the following procedure 1 to 5:
1. Obtain the environment data Et = ObstacleDetected and the robot's task execution feedback Ft = PathPlanningFailure.
2. Perform state evaluation on the current tree node based on Et and Ft: NodeStatus(NavigateToA) = Failure.
3. Identify the problem node based on the result of the state evaluation: ProblemNodes = {NavigateToA}.
4. Analyze the cause of the task execution failure of the problem node: FailureReasons = ObstacleInPath.
5. Generate a corresponding adjustment strategy based on the failure cause (re-plan the path around the obstacle): AdjustmentStrategy = ReplanPathAroundObstacle.
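The five-step procedure in the example above can be condensed into a sketch; the literal values mirror the example (Et, Ft, NavigateToA), and the function name is illustrative:

```python
def dynamic_adjustment_pipeline(env_data, feedback):
    """Walk the five-step example: evaluate, identify, analyze, generate."""
    # Step 1: environment data Et and execution feedback Ft arrive as inputs.
    # Step 2: state evaluation of the current tree node based on Et and Ft.
    node_status = "Failure" if feedback == "PathPlanningFailure" else "Running"
    # Step 3: identify problem nodes from the evaluation result.
    problem_nodes = ["NavigateToA"] if node_status == "Failure" else []
    # Step 4: analyze the cause of the task execution failure.
    failure_reason = "ObstacleInPath" if env_data == "ObstacleDetected" else None
    # Step 5: generate the corresponding adjustment strategy.
    strategy = ("ReplanPathAroundObstacle"
                if failure_reason == "ObstacleInPath" else None)
    return problem_nodes, failure_reason, strategy
```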
In some embodiments, the electronic device may control the robot to perform specific tasks using the navigation module and the operation module in the above technical architecture. For example, the electronic device inputs a target position that the robot needs to reach when performing a certain task to the navigation module, and the navigation module calculates and outputs an optimal navigation path according to the target position and an environment map of an environment in which the robot is located, and simultaneously controls the robot to move according to the navigation path. In addition, the electronic device inputs an operation instruction for instructing the robot to perform a specific operation to the operation module, so that the operation module performs a specific operation task (such as grabbing, carrying, etc.) in response to the operation instruction and outputs an execution result.
In this embodiment, the electronic device combines the first environment information reflecting real-time environment feedback with the task execution condition fed back by the robot (the task execution state information) to dynamically adjust the behavior tree in real time, i.e., to adjust the task execution sequence and strategy of the subtasks represented by one or more tree nodes in the behavior tree, thereby improving the flexibility and adaptability of planning robot tasks and of controlling the robot to execute them.
In addition to adjusting the behavior tree by combining the task execution state information fed back by the robot with the changed environment information (the first environment information), the electronic device can also generate the initial behavior tree representing the operation task that the user instructs the robot to execute, by combining the task intention information corresponding to the user instruction with the environment information of the environment where the robot is located (the second environment information).
In some embodiments, the method for controlling a robot provided by the embodiment of the present application may further include the following steps:
And acquiring second environment information of the environment where the robot is located.
The second environment information is environmental information collected by the environment sensing device for the environment where the robot is located. For example, the second environment information may be one or more of environment image information, environment text description information, and even environment point cloud information of the environment in which the robot is located.
The electronic device can control the environment sensing device of the robot to collect environment information while acquiring the user instruction, so as to obtain the second environment information.
Referring to fig. 5, fig. 5 is a flowchart illustrating a refinement step of step S101 in fig. 1.
As shown in fig. 5, in some embodiments, step S101 of generating a behavior tree based on task intent information corresponding to a user instruction may include step S501 and step S502 as shown below.
Step S501, performing robot task planning based on task intention information corresponding to a user instruction and the second environment information to obtain at least one subtask of the robot and execution conditions of the at least one subtask.
In the case that the electronic device receives a user instruction input for the robot and simultaneously acquires the second environment information of the environment where the robot is located, the electronic device inputs the task intention information obtained by parsing the user instruction, together with the second environment information, into a large language model (Large Language Model, LLM), so that the LLM performs robot task planning based on the task intention information and the second environment information, and outputs at least one subtask and the execution condition of the at least one subtask. In this way, the electronic device can take the at least one subtask and its execution condition output by the LLM as the at least one subtask, and the execution condition thereof, of the operation task that the user instructs the robot to execute in the robot's current environment.
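Feeding the task intention information and second environment information to an LLM might be wrapped as below; the prompt format, the JSON reply schema and the name `plan_tasks_with_llm` are assumptions, and `llm_call` stands in for any real model endpoint:

```python
import json

def plan_tasks_with_llm(intent_info: dict, env_info: dict, llm_call):
    """Combine task intention and second environment information into one
    prompt, ask the LLM to plan, and parse subtasks plus execution
    conditions from its (assumed JSON) reply."""
    prompt = (
        "Plan robot subtasks with execution conditions. Reply as JSON with "
        "keys 'subtasks' and 'conditions'.\n"
        f"Intent: {json.dumps(intent_info)}\n"
        f"Environment: {json.dumps(env_info)}"
    )
    plan = json.loads(llm_call(prompt))
    return plan["subtasks"], plan["conditions"]
```

In practice `llm_call` would wrap a real model API; here any callable returning the expected JSON string works, which also makes the wrapper easy to test.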
Step S502, binding the at least one subtask with a tree node of a behavior tree, and fusing the execution condition with node condition information of the tree node to construct the behavior tree of the robot.
After the electronic device obtains the at least one subtask of the operation task and the execution condition of the at least one subtask, it binds the at least one subtask one by one with the tree nodes of the behavior tree, and fuses the execution condition of each subtask with the node information of the tree node to which the subtask is bound, thereby constructing the initial behavior tree of the robot used to represent the operation task.
In some embodiments, after obtaining at least one subtask of the operation task and the execution condition of the at least one subtask, the electronic device may further directly generate the behavior tree based on the hierarchical structure of the at least one subtask, that is, each subtask is a tree node of the behavior tree, and the execution condition of each subtask is node information of the tree node.
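Binding subtasks to tree nodes and fusing execution conditions into node information, as in step S502, can be sketched as a simple nested-dictionary construction; the field names are illustrative, and each subtask here becomes one level of the hierarchy:

```python
def build_behavior_tree(subtasks, conditions):
    """Bind each planned subtask to a tree node and fuse its execution
    condition into the node information, nesting the nodes hierarchically."""
    root = None
    current = None
    for subtask, condition in zip(subtasks, conditions):
        node = {"subtask": subtask, "node_info": condition, "children": []}
        if root is None:
            root = node           # first subtask becomes the root node
        else:
            current["children"].append(node)  # later subtasks nest below
        current = node
    return root
```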
In some embodiments, the electronic device may generate the initial behavior tree of the robot using the task planning module in the architecture described above. That is, the electronic device takes the task intention information obtained by parsing the user instruction and the second environment information as inputs to the task planning module, so that the task planning module generates an initial behavior tree from them, representing the hierarchical structure of the tasks and subtasks executed by the user-instructed robot through the tree nodes of the behavior tree and the node information of the tree nodes. The electronic device can thus obtain the behavior tree output by the task planning module.
In this embodiment, by combining the LLM with the behavior tree, the electronic device inputs the task intention information and the second environment information into the LLM, uses the LLM to perform robot task planning, and outputs a behavior tree representing the operation task to be executed by the robot. The natural language instruction of the user can thus be accurately converted into a specific task execution scheme for the robot, improving the flexibility and adaptability of task execution in complex environments. That is, the present embodiment enhances both the accuracy of user intent parsing, through the LLM, and the coordination and execution efficiency of the robot's multiple tasks, through the use of the behavior tree.
Next, a complete embodiment of the control method of the robot provided by the embodiment of the present application, based on the technical architecture described above, is presented.
Referring to fig. 6, fig. 6 is a schematic diagram of the overall structure of a control method of a robot according to some embodiments of the present application.
As shown in fig. 6, in some embodiments, the electronic device uses the intent big model, through the intention analysis module, to perform intention judgment on the user instruction (parsing the user instruction with natural language processing technology, extracting the user intent from it, and converting the intent into a structured task description), so as to obtain the task intention information (navigation intent and operation intent) corresponding to the user instruction that instructs the robot to execute the operation task. The electronic device then generates and constructs the behavior tree according to the different intents in the task intention information through the task planning module, and controls the robot to execute the operation task represented by the behavior tree through the navigation module and the operation module. While the robot executes the operation, the electronic device uses the dynamic adjustment module to dynamically adjust the behavior tree according to the environment state and the execution result, so as to keep the operation task executing smoothly, until the final target of the operation task is completed or the number of steps executed by the robot reaches the maximum execution step count.
In some embodiments, the navigation module in the technical architecture may implement optimal path planning by calling a mature technical scheme or a third-party API. For example, when the electronic device uses the navigation module, the navigation module first performs a target-position input operation to receive the target position G = GetGoalPosition(). The navigation module then builds an environment map of the environment where the robot is located using sensor data, i.e., M = BuildMapFromSensors(). Thereafter, the navigation module performs path planning by calling a third-party API (e.g., Google Maps API, ROS Navigation Stack, etc.), generating an optimal path P = ThirdPartyAPI.PlanPath(M, G). Finally, the navigation module performs the path execution operation based on the optimal path P, controlling the robot to move along the planned path: ExecutePath(P).
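The navigation module's four operations can be sketched with stub functions; the stubbed goal, map and planner below stand in for real sensor data and a real third-party API such as the ROS Navigation Stack:

```python
def get_goal_position():           # G = GetGoalPosition()
    return (5, 5)

def build_map_from_sensors():      # M = BuildMapFromSensors()
    return {"obstacles": [(2, 2)]}

def third_party_plan_path(m, g):   # P = ThirdPartyAPI.PlanPath(M, G)
    # a real navigation module would call an external planner here
    return [(0, 0), (1, 3), g] if g not in m["obstacles"] else None

def execute_path(p):               # ExecutePath(P)
    # stand-in for commanding the robot along the planned path
    return p is not None

def navigation_module():
    """Chain the four operations: goal -> map -> plan -> execute."""
    g = get_goal_position()
    m = build_map_from_sensors()
    p = third_party_plan_path(m, g)
    return execute_path(p)
```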
In the process of controlling the robot to perform task planning and execution based on the technical architecture, the electronic device continuously collects environment data through the sensors of the sensing module, thereby providing real-time environmental data support for the robot's task planning and execution.
In this embodiment, by introducing the intent big model to parse the user instruction, the electronic device can more accurately understand the user instruction and convert it into specific tasks and subtasks in the behavior tree. In addition, the electronic device dynamically adjusts the behavior tree through the dynamic adjustment module, that is, dynamically adjusts the task execution sequence and strategy in the behavior tree according to environment feedback and task execution conditions, thereby improving the flexibility and adaptability of controlling the robot's task planning and execution. Furthermore, through the modularized control architecture, the electronic device enables independent functional modules to cooperate through standard interfaces, which simplifies maintenance and extension of the whole system and improves the robot's capability for parallel multi-task processing.
Next, another complete embodiment of the control method of the robot provided by the embodiment of the present application is presented.
Referring to fig. 7, fig. 7 is an application flow chart of a control method of a robot according to an embodiment of the application.
As shown in fig. 7, in some embodiments, when controlling the robot to perform task planning and execution, the electronic device first receives an instruction U = UserInput() input to the robot by the user through natural language. The electronic device then parses the intention of the instruction U, generating the task intention I = IntentModel(U). Next, the electronic device performs the task planning operation to generate an initial behavior tree BT = GenerateInitialBehaviorTree(I, E) according to the task intention I and the environment information E. The electronic device then controls the robot to execute the operation task represented by the behavior tree, ExecuteBehaviorTree(BT), continuously monitoring the task execution condition and environmental changes in the process, so as to dynamically adjust the behavior tree and obtain an updated behavior tree BT' = AdjustBehaviorTree(BT, E, F). Finally, after the robot completes the task, the electronic device feeds the execution result back to the user.
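The overall flow U → I → BT → BT' can be sketched end to end; every function body here is a placeholder for the corresponding module, and the uppercase variables mirror the symbols used above:

```python
def user_input():                          # U = UserInput()
    return "carry article A to position B"

def intent_model(u):                       # I = IntentModel(U)
    return {"intent": "navigation+operation", "instruction": u}

def generate_initial_behavior_tree(i, e):  # BT = GenerateInitialBehaviorTree(I, E)
    return ["locate A", "navigate to A", "grasp A", "navigate to B", "release A"]

def adjust_behavior_tree(bt, e, f):        # BT' = AdjustBehaviorTree(BT, E, F)
    # when feedback F reports a blockage, prepend a re-planning subtask
    return ["re-plan path"] + bt if f == "blocked" else bt

E = {"map": "warehouse"}                   # placeholder environment information
U = user_input()
I = intent_model(U)
BT = generate_initial_behavior_tree(I, E)
BT2 = adjust_behavior_tree(BT, E, "blocked")
```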
Illustratively, assume that in a complex warehouse management scenario, a user indicates that a robot needs to perform a number of tasks, including item handling and path navigation, etc. The electronic device can control the robot to implement the following steps.
First, the electronic device controls the robot to perform an article handling task. That is, the electronic device receives the user instruction U = "move from the position of article A to the position of article B". The electronic device generates the task intention I = Navigation through intention parsing. Then, the electronic device performs behavior tree generation based on the task intention I, generating an initial behavior tree BT = {locate position of article A → navigate to position A → locate position of article B → navigate to position B}. Thereafter, the electronic device controls the robot to execute the tasks in the behavior tree BT. While the robot is navigating to article A, the electronic device detects through the sensing module that an obstacle is present on the path, Et = {ObstacleDetected}; the electronic device then immediately and dynamically adjusts the behavior tree to obtain a new behavior tree BT' = {plan a new path → navigate to the position of article A → locate position B → navigate to position B → complete the task}. In this way, the electronic device controls the robot to continue executing the tasks in the behavior tree BT', planning the new path to bypass the obstacle.
It should be noted that, in practical application, the control method of the robot provided by the embodiment of the application can be applied to various scenarios such as service robots, industrial robots and mobile robots. The control method of the robot provided by the embodiment of the application enables the robot system, or the system controlling the robot, to be flexibly extended and maintained through the modularized design. In addition, each functional module (the intention analysis module, task planning module, dynamic adjustment module, navigation module, operation module and perception module) can be independently developed and optimized, wherein the intent big model enables the robot to interact with the user more naturally, while the task planning module generates the behavior tree and the dynamic adjustment module dynamically adjusts it, ensuring flexibility and adaptability of the robot's task execution.
Referring to fig. 8, the embodiment of the present application further provides a control device for a robot, which can implement the control method for a robot described above. The device comprises a task planning module 801, a task execution module 802 and a task dynamic adjustment module 803. Wherein:
The task planning module 801 is used for generating a behavior tree based on task intention information corresponding to a user instruction, wherein the task intention information is used for representing the intention of a user for indicating a robot to execute an operation task, tree nodes of the behavior tree are used for representing subtasks of the operation task executed by the robot, and node information of each tree node is used for representing execution conditions of the subtasks corresponding to the tree nodes;
a task execution module 802, configured to control the robot to execute the operation task based on the behavior tree;
The task dynamic adjustment module 803 is configured to receive task execution status information fed back by the robot in a process of executing the operation task, and dynamically adjust the behavior tree to obtain an updated behavior tree when the task execution status information indicates that the operation task has an execution abnormality, where the dynamic adjustment operation includes adjusting subtasks represented by tree nodes of the behavior tree, and/or adjusting execution conditions represented by node information of the tree nodes;
The task execution module 802 is further configured to control the robot to continue executing the operation task based on the updated behavior tree.
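By way of non-limiting illustration only, the behavior tree described above, in which each tree node carries a subtask and node information encoding its execution condition, might be sketched as follows. All class names, field names and the example task are hypothetical and do not limit the embodiments.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TreeNode:
    """One tree node: a subtask plus its node information (execution condition)."""
    subtask: str                               # subtask of the operation task
    condition: Callable[[], bool]              # execution condition from node information
    children: List["TreeNode"] = field(default_factory=list)

def execute(node: TreeNode) -> bool:
    """Depth-first execution: a subtask runs only when its condition holds."""
    if not node.condition():
        return False                           # condition not met -> execution abnormality
    return all(execute(child) for child in node.children)

# A small illustrative tree for a "fetch cup" operation task.
root = TreeNode("fetch cup", lambda: True, [
    TreeNode("navigate to table", lambda: True),
    TreeNode("grasp cup", lambda: True),
])
result = execute(root)    # True when every execution condition is satisfied
```

In this sketch, a failed condition propagates upward as an execution abnormality, which is the signal the dynamic adjustment described below would react to.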
In some embodiments, the control device for a robot provided by the embodiment of the present application further includes:
the acquisition module is used for acquiring first environment information of the environment where the robot is located;
the task dynamic adjustment module 803 is further configured to dynamically adjust at least one tree node of the behavior tree based on the first environmental information.
In some embodiments, the task dynamic adjustment module 803 is further configured to: perform a state evaluation, based on the first environment information, on the subtasks represented by the tree nodes in the behavior tree to obtain the execution states of those subtasks; determine a target tree node as a problem node if the execution state indicates that the target subtask represented by the target tree node fails to execute; and perform the dynamic adjustment operation on the problem node.
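A minimal sketch of this state evaluation, assuming the tree nodes are plain dictionaries and the first environment information is reduced to a per-subtask pass/fail lookup (both assumptions are for illustration only):

```python
def find_problem_nodes(root: dict, env_info: dict) -> list:
    """Evaluate each subtask against the first environment information and
    collect the nodes whose subtasks fail to execute as problem nodes."""
    problems, stack = [], [root]
    while stack:
        node = stack.pop()
        # Hypothetical evaluation: the environment reports failures by subtask name.
        if not env_info.get(node["subtask"], True):
            problems.append(node)
        stack.extend(node.get("children", []))
    return problems

tree = {"subtask": "fetch cup",
        "children": [{"subtask": "navigate to table", "children": []},
                     {"subtask": "grasp cup", "children": []}]}
# The environment reports that the grasp failed:
problems = find_problem_nodes(tree, {"grasp cup": False})
```

The nodes returned here are the "problem nodes" on which the dynamic adjustment operation would then be performed.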
In some embodiments, the task dynamic adjustment module 803 is further configured to generate an adjustment policy based on a failure cause of the target subtask characterized by the problem node, and perform a dynamic adjustment operation on the problem node based on the adjustment policy.
In some embodiments, the task dynamic adjustment module 803 is further configured to: switch the subtask represented by the problem node to a subtask obtained by re-planning if the adjustment policy is to re-plan the task; adjust the task execution sequence in the execution condition represented by the node information of the problem node if the adjustment policy is to change the task execution sequence; and, if the adjustment policy is to add a new task, add a new subtask to the subtasks represented by the problem node and add the execution condition of the new subtask to the execution conditions represented by the node information of the problem node.
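The three adjustment policies above can be sketched as a simple dispatch; the node layout, the policy names and the placeholder values are all hypothetical and serve only to illustrate the branching logic:

```python
def adjust_problem_node(node: dict, policy: str, replanned: str = "") -> dict:
    """Apply one of the three adjustment policies to a problem node."""
    if policy == "replan":
        node["subtask"] = replanned                 # switch to the re-planned subtask
    elif policy == "reorder":
        node["order"].reverse()                     # change the task execution sequence
    elif policy == "add_task":
        node["subtasks"] = node.get("subtasks", []) + ["new subtask"]
        node["conditions"].append("condition of new subtask")
    return node

node = {"subtask": "grasp cup",
        "order": ["approach", "grip"],
        "conditions": ["cup visible"]}
adjusted = adjust_problem_node(node, "reorder")
```

In practice the adjustment policy would be generated from the failure cause, as described above, rather than passed in as a string.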
In some embodiments, the acquiring module is further configured to acquire second environmental information of an environment in which the robot is located;
The task planning module 801 is further configured to perform task planning on the robot based on the task intention information corresponding to a user instruction and the second environment information, obtain at least one subtask of the robot and an execution condition of the at least one subtask, bind the at least one subtask with a tree node of a behavior tree, and fuse the execution condition with the node information of the tree node to construct the behavior tree of the robot.
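The binding and fusing steps above might look like the following sketch, which assumes (purely for illustration) that the planner yields parallel lists of subtasks and execution conditions and that the subtasks chain sequentially:

```python
def build_behavior_tree(subtasks: list, conditions: list) -> dict:
    """Bind each planned subtask to a tree node and fuse its execution
    condition into the node information; a sequential chain is assumed."""
    nodes = [{"subtask": s, "node_info": {"condition": c}, "children": []}
             for s, c in zip(subtasks, conditions)]
    for parent, child in zip(nodes, nodes[1:]):
        parent["children"].append(child)            # execute children in order
    return nodes[0]                                 # root of the behavior tree

tree = build_behavior_tree(["navigate to table", "grasp cup"],
                           ["path is clear", "cup is visible"])
```

A real planner could of course emit richer structures (parallel branches, fallback nodes); the sequential chain is the simplest shape consistent with the description.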
In some embodiments, the task planning module 801 is further configured to input the obtained user instruction into a preset user intention big model, so as to parse the user instruction through the user intention big model and output task intention information corresponding to the user instruction, where the user intention big model is obtained by performing model training based on a constructed user sample, and the user sample includes a user instruction sample and a user intention sample corresponding to the user instruction sample.
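From the perspective of the task planning module, the user intention big model is simply a callable that maps a user instruction to task intention information. A minimal interface sketch, with a trivial stand-in in place of the trained model (the stand-in and its output keys are hypothetical):

```python
def parse_user_instruction(instruction: str, intent_model) -> dict:
    """Pass the user instruction through the user intention big model and
    return the task intention information it outputs."""
    return intent_model(instruction)

# Stand-in for the trained intent big model: any callable with this
# interface works; a real model would be a trained deep learning network.
stub_model = lambda text: {"task": "fetch", "target": text.split()[-1]}
intent = parse_user_instruction("bring me the cup", stub_model)
```

The training of the model itself (on user instruction samples paired with user intention samples) is outside the scope of this interface sketch.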
The specific implementation manner of the control device of the robot provided by the embodiment of the application is basically the same as that of the control method of the robot, and is not repeated here.
The embodiment of the application also provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the control method of the robot when executing the computer program. The electronic device may be any intelligent terminal, including a tablet computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
The processor 901 may be implemented by a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc., for executing related programs, so as to implement the technical solution provided by the embodiments of the present application;
the memory 902 may be implemented in the form of read-only memory (Read-Only Memory, ROM), static storage, dynamic storage, or random access memory (Random Access Memory, RAM), among others. The memory 902 may store an operating system and other application programs. When the technical solution provided in the embodiments of the present application is implemented by software or firmware, the relevant program codes are stored in the memory 902 and invoked by the processor 901 to perform the control method of the robot of the embodiments of the present application;
an input/output interface 903 for inputting and outputting information;
The communication interface 904 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or in a wireless manner (e.g. mobile network, Wi-Fi, Bluetooth, etc.);
a bus 905 that transfers information between the various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
Wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively coupled to each other within the device via a bus 905.
The embodiment of the application also provides a robot, wherein the robot is provided with electronic equipment, the electronic equipment comprises a memory and a processor, the memory stores a computer program, and the processor realizes the control method of the robot when executing the computer program.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the control method of the robot when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the steps realized when the computer program is executed by a processor are basically the same as those of the specific embodiment of the control method of the robot, and are not repeated herein.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the application are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" is used to describe an association relationship of an associated object, and indicates that three relationships may exist, for example, "a and/or B" may indicate that only a exists, only B exists, and three cases of a and B exist simultaneously, where a and B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one of a, b or c may represent a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present application. The storage medium includes various media capable of storing programs, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and are not thereby limiting the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.