Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
Fig. 1 illustrates a flow chart of a method for multi-robot collaboration at a robot side and a network device side according to an aspect of the application.
The method comprises steps S11 and S12, performed at the robot side, and step S21, performed at the network device side.
The embodiment of the application provides a method for multi-robot cooperation, which can be implemented at a corresponding robot end and network device end. The robot includes various kinds of machine equipment capable of automatically executing work, such as machine equipment having a moving function, a carrying and loading function, or other functions, or machine equipment having several of these functions at the same time, for example, various kinds of artificial intelligence equipment having moving and carrying functions. In the present application, a plurality of robots that perform the same cooperative task may have the same or different functions. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server, where the cloud server may be a virtual supercomputer operating in a distributed system and composed of a group of loosely coupled computers, providing a simple, efficient, safe, reliable computing service with scalable processing capacity. In the present application, the robot is referred to as the robot 1, and the network device is referred to as the network device 2.
Specifically, in step S21, the network device 2 may provide matching cooperation instructions to one or more robots 1, wherein the robots 1 execute the corresponding multi-robot cooperative task based on their respective cooperation instructions. Correspondingly, in step S11, each corresponding robot 1 acquires the cooperation instruction matching itself from the network device 2. Here, the multi-robot cooperative task may be any of various tasks cooperatively performed by the plurality of robots 1. For example, the plurality of robots 1 maintain synchronized movement while keeping similar distances; as another example, multiple robots 1 collectively carry the same object; as yet another example, the plurality of robots 1 perform an assembly task for the components of one object. In one implementation, the network device 2 may match corresponding cooperation instructions to different robots 1 based on the type of the cooperative task or the specific cooperative operation.
In one implementation, the cooperation instruction may include at least any one of: multi-robot formation state information for the robot; a speed control rule for the robot; coordinate information of a target object to be followed by the robot; other execution-related information for the robot.
Specifically, taking as an example a scene in which a plurality of robots 1 move synchronously while keeping similar distances, or in which a plurality of robots 1 carry the same object together: in one implementation, the network device 2 may provide, through a cooperation instruction, the formation state information that each robot 1 needs to maintain during its movement, for example, keeping one column, one row, or multiple columns; in another implementation, the network device 2 may control the running speed of each cooperating robot 1 through a cooperation instruction containing a speed control rule, so as to adjust the distance between the robots 1 and thereby control the movement of the whole queue; in still another implementation, the network device 2 may provide one or more robots 1 with the coordinate information of a target object to be followed, where the coordinate information may be provided when the moving operation is started, or provided in real time during movement based on a setting.
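As a concrete illustration, the following minimal sketch shows one possible in-memory representation of such a cooperation instruction; the class and field names are illustrative assumptions, not structures defined by this application.

```python
# A minimal sketch of a cooperation-instruction payload; all names and field
# choices here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class CooperationInstruction:
    # Formation state information, e.g. "single_column", "single_row", "multi_column"
    formation: Optional[str] = None
    # Speed control rule, e.g. {"max_forward": 1.0, "gap_range_m": (0.8, 1.5)}
    speed_rule: Optional[dict] = None
    # Coordinate information of the target object to be followed
    target_coordinates: Optional[Tuple[float, float]] = None
    # Other execution-related information
    extra: dict = field(default_factory=dict)

# The network device may match different instructions to different robots,
# e.g. a head-of-queue robot versus a follower:
head = CooperationInstruction(formation="single_column",
                              speed_rule={"max_forward": 1.0},
                              target_coordinates=(12.5, 3.0))
follower = CooperationInstruction(formation="single_column",
                                  speed_rule={"gap_range_m": (0.8, 1.5)})
```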
Taking as an example a scenario in which a plurality of robots 1 perform assembly tasks on the respective parts of one object, the cooperation instructions may include a speed control rule for moving each robot 1 to its corresponding assembly position; coordinate information of the target position of the robot; and information on the assembly operation steps of the robot. More generally, the content of the cooperation instruction adapts to the specific needs of other cooperative tasks.
In one implementation, the network device 2 may simultaneously and uniformly send a cooperation instruction to each robot 1 participating in the cooperative task; in another implementation, the network device 2 may send a cooperation instruction to any one or more robots 1 at any time. In one implementation, the cooperation instructions corresponding to multiple robots 1 in the same cooperative task may be the same, different, or partially the same and partially different; for example, in a synchronous moving scene in which a plurality of robots 1 keep similar distances in a queue, the cooperation instruction of the robot 1 at the head of the queue may differ from the cooperation instructions of the other robots 1 in the queue.
Next, in step S12, the robot 1 may execute the corresponding multi-robot cooperative task based on the cooperation instruction. In one implementation, the robots 1 do not need to communicate directly with each other to implement the cooperative task; instead, the network device 2 may control the one or more cooperating robots 1 in real time through the cooperation instructions, and each robot 1 executes its cooperation instruction to implement the cooperative task. In one implementation, the network device 2 may give only the instructions necessary for the robots 1 to cooperate with each other, while operations that do not require cooperation are performed by each robot 1 independently. For example, in a scenario where a plurality of robots 1 keep synchronous movement at similar distances, or a plurality of robots 1 carry the same object together, the overall formation keeping and queue running speed may be controlled by the network device 2 through the cooperation instructions, while the specific following operations of each robot 1, such as determining and identifying the object to follow, may be set and performed by each robot 1 itself.
In the present application, a plurality of independent robots 1 performing a cooperative task may collectively execute the corresponding multi-robot cooperative task based on cooperation instructions acquired from the corresponding network device 2. According to the application requirements of a specific scene, multiple independent robots can thus be flexibly combined through the cooperation instructions sent by the network device 2, so that the combined robots can cooperatively handle tasks with a large workload or a complex division of labor, which facilitates the decomposition of complex work and the optimization of overall resources.
In one implementation, in step S12, based on the cooperation instruction, the robot 1 may be controlled to move to a destination location or a target object along a corresponding moving path. Here, the multi-robot cooperative task of the present application may be a cooperative task that requires a plurality of robots to move in formation, for example, a plurality of robots 1 keeping synchronous movement at similar distances, or a plurality of robots 1 collectively carrying the same object. Specifically, in one implementation, based on the cooperation instruction, the robot 1 may be controlled to move to a destination location along a corresponding moving path; for example, the robot 1 is one of the robots located at the forefront of the queue, which may not have a specific target object but instead corresponds to a destination location to be reached. In another implementation, based on the cooperation instruction, the robot 1 may be controlled to move toward a target object along a corresponding moving path. For example, a robot 1 at the forefront of the robot queue may have a tracked object, such as a moving person or object; as another example, a robot 1 not at the forefront of the queue needs to follow a target object, i.e., a target robot, which may be the robot closest to it ahead in the queue, or another robot preset or determined based on the cooperation instruction.
In this implementation, the robot 1 may be configured to implement multi-robot formation movement; for example, the cooperating robots 1 may move to a destination location or follow a target object based on their matching cooperation instructions, thereby implementing the formation movement of multiple robots 1. Based on this implementation, various cooperative tasks that rely on the formation movement of multiple robots, such as cooperative moving and carrying tasks, can be realized flexibly and effectively.
Further, Fig. 2 illustrates a flow diagram of a method for multi-robot collaboration at the robot end in accordance with an aspect of the subject application. The method comprises steps S11 and S12, and step S12 further comprises steps S121, S122 and S123.
Specifically, in step S121, the robot 1 may determine the target object that it is to follow. In one implementation, the target object includes a target robot, and the robot and its corresponding target robot carry the same transport object; in this case, the cooperative task may correspond to a cooperative moving and carrying task. The robot 1 needs to determine the target object it is to follow at the start of the cooperative task.
In one implementation, in step S121, when the robot 1 is set to the following mode, the robot 1 may identify a corresponding matching object from the surrounding environment information it captures in real time, and then take the matching object as the target object to be followed. In one implementation, the following mode of the robot 1 may be initiated by a preset trigger operation. When the following mode begins, the robot 1 may capture surrounding environment information in real time; in one implementation, raw data of the surrounding environment may be acquired by one or more sensing devices of the robot 1, and the raw data may be an image, a picture, or a point cloud. Further, the robot 1 detects from the raw data objects of the type that needs to be followed, and one or more objects in the environment may belong to this type. A classifier may be trained in advance by a machine learning method: feature information extracted from scan data of a given class of objects is input to the classifier, which then detects objects of that class in the environment information by comparison. Since there are often several objects of the class, the matching object is the object selected from these candidates to serve as the target object.
Further, in one implementation, the matching object may include, but is not limited to, at least any one of: the object closest to the robot 1 among the objects around it; the object closest to the robot 1 among the objects in front of it; the object closest to the robot 1 among the objects directly in front of it; an object around the robot 1 that matches the object feature information of the object to be followed; the object around the robot 1 that best matches the object feature information of the object to be followed; the object closest to the robot among the objects around the robot 1 that match the object feature information of the object to be followed. In one implementation, the object feature information may include, but is not limited to, one or more of the position information, motion state information, and body feature information of the object to be followed. A sketch of one possible selection rule follows this list.
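As an illustration of how such a matching object might be selected, the following sketch picks the nearest candidate among detections that lie in front of the robot and sufficiently match the object feature information; the detection format and helper name are assumptions, not part of this application.

```python
# Illustrative selection of a matching object among detected candidates.
# Each detection is assumed to carry a position in the robot frame (+x ahead)
# and a feature score against the object to be followed; names are hypothetical.
import math

def select_matching_object(detections, front_only=True, min_feature_score=0.5):
    candidates = []
    for d in detections:  # d = {"position": (x, y), "feature_score": float}
        x, y = d["position"]
        if front_only and x <= 0.0:
            continue                      # object is behind the robot
        if d["feature_score"] < min_feature_score:
            continue                      # object feature information does not match
        candidates.append((math.hypot(x, y), d))
    if not candidates:
        return None
    return min(candidates, key=lambda c: c[0])[1]   # nearest matching object
```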
Further, in one implementation, in step S121, the robot 1 may determine the coordinate information of the target object to be followed based on the cooperation instruction; the robot 1 then acquires its surrounding environment information in real time, where the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to a predetermined distance threshold; the robot 1 then identifies a corresponding matching object from the surrounding environment information and takes the matching object as the target object to be followed. Here, the coordinate information may be absolute or relative coordinate information. The robot 1 obtains the surrounding environment information by scanning, and if the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to the predetermined distance threshold at that moment, a matching object that matches the coordinate information may be identified from the environment information and set as the target object.
Further, in one implementation, if the robot 1 has obtained the cooperation instruction but the distance between its position and the position of the object to be followed is greater than the predetermined distance threshold, the present application provides the following solution: when the distance between the robot 1 and the position indicated by the coordinate information is greater than the predetermined distance threshold, the robot 1 is controlled to move toward that position so as to reduce the distance; during the movement, the surrounding environment information of the robot 1 is acquired in real time, and once the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to the predetermined distance threshold, a corresponding matching object can be identified from the surrounding environment information and taken as the target object to be followed.
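The approach-then-identify behaviour described above might be looped as in the following sketch, which assumes hypothetical helpers robot.position(), robot.scan_environment() and robot.move_toward(), plus the select_matching_object() sketch shown earlier.

```python
# Sketch of acquiring the target object when the robot starts far away from
# the coordinates given in the cooperation instruction; the robot API used
# here is an assumption for illustration only.
import math

def acquire_target(robot, target_xy, distance_threshold=5.0):
    while True:
        rx, ry = robot.position()
        gap = math.hypot(target_xy[0] - rx, target_xy[1] - ry)
        if gap <= distance_threshold:
            match = select_matching_object(robot.scan_environment())
            if match is not None:
                return match              # target object to be followed
        robot.move_toward(target_xy)      # reduce the distance, then scan again
```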
Next, in step S122, the robot 1 may identify the target object from the scene it captures in real time. Since the objects in the environment also change constantly while the robot 1 is moving, the robot 1 needs to repeat the target object identification operation against the environment as it changes in real time. In one implementation, the robot 1 may periodically scan the surrounding environment to obtain real-time environment data, detect all objects belonging to the same class as the target object in that data, and finally identify the matched target object from the detection results of one scanning period or of several consecutive periods.
Specifically, in one implementation, in step S122, the robot 1 may scan and acquire its surrounding environment information in real time, and then detect from that information one or more observed objects matching the object feature information of the target object. Because the target object determined by the most recent identification operation, together with its object feature information, has been stored (for example, in the form of a historical observation record), the object feature information of the observed objects determined by the current scan can be matched for similarity against the stored object feature information of the target object. Here, the object feature information of an observed object or of the target object may include, but is not limited to, any of the following: position information of the object, which refers to the position of the object at the corresponding scanning moment; motion state information of the object, which includes motion information such as motion direction and speed; and body feature information of the object, which refers to the appearance characteristics of the object body, including shape, size, and color information. The robot 1 may then identify the target object from the one or more observed objects; for example, an observed object satisfying a certain matching degree may be taken as the target object.
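One way to compute such a matching degree from the three feature groups named above is sketched below; the weights and the exact scoring formula are illustrative assumptions, not values specified by this application.

```python
# Illustrative matching degree between a currently observed object and the
# stored feature record of the target object; weights and formulas are assumed.
import math

def matching_degree(observed, stored, w_pos=0.5, w_motion=0.3, w_body=0.2):
    # Position information: score decays with distance from the last record
    dx = observed["position"][0] - stored["position"][0]
    dy = observed["position"][1] - stored["position"][1]
    pos_score = math.exp(-math.hypot(dx, dy))
    # Motion state information: compare speed and motion direction
    motion_score = math.exp(-abs(observed["speed"] - stored["speed"])
                            - abs(observed["heading"] - stored["heading"]))
    # Body feature information: a scalar size descriptor stands in for
    # shape/size/color features in this sketch
    body_score = math.exp(-abs(observed["size"] - stored["size"]))
    return w_pos * pos_score + w_motion * motion_score + w_body * body_score
```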
Further, in one implementation, identifying the target object from the one or more observed objects may include: determining association information between each of the one or more observed objects corresponding to the robot 1 and the historical observation records, where the one or more observed objects include the target object and the historical observation records contain object-related information of one or more historically observed objects; the robot 1 then identifies the target object from the one or more observed objects based on this association information.
Specifically, once the robot 1 has determined the target object by repeating the identification operation against the environment as it changes in real time, the target object and its corresponding object feature information may be recorded in the historical observation records; at the same time, the other observed objects determined together with the target object, along with their object feature information, may also be recorded. When a target object identification operation is subsequently performed, data association may be carried out between each of the currently acquired observed objects and the historical observation records. In one implementation, data association means matching each currently acquired observed object against each stored historical observation record, and the result of this association is the association information. For example, suppose the current scanning cycle yields N observed objects in the environment while the robot has previously stored historical observation records of M objects, where M and N may be equal or not, and the N observed objects may overlap with the objects corresponding to the M records. Data association then matches the N observed objects one by one against the M historical observation records, yielding a matching degree for each pair; the overall result is an N-row, M-column matrix whose elements are the corresponding matching degrees, and this matrix is the association information. The observed objects include the target object. In one implementation, the matching may be feature matching over one or more items of the objects' feature information. The target object is then identified based on the obtained association information: after the matching degree matrix is obtained, the association pattern with the highest overall matching degree is selected through a comprehensive analysis operation, thereby determining the target object.
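The "association pattern with the highest overall matching degree" can be computed as an assignment problem; the sketch below uses the Hungarian method via scipy.optimize.linear_sum_assignment as one standard choice (the application does not name a specific algorithm) and reuses the hypothetical matching_degree() from the previous sketch.

```python
# Sketch of the data association step: build the N-by-M matching degree matrix,
# then select the assignment that maximizes the overall matching degree.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(observations, records, score=matching_degree):
    # matrix[i][j] = matching degree of observation i against record j
    matrix = np.array([[score(o, r) for r in records] for o in observations])
    rows, cols = linear_sum_assignment(matrix, maximize=True)
    return {int(i): int(j) for i, j in zip(rows, cols)}

# The observation associated with the target object's stored record is then
# identified as the target object in the current scan.
```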
In one implementation, the method further includes step S13 (not shown). In step S13, the robot 1 may update the historical observation records based on the one or more observed objects, where the updated objects in the historical observation records include the target object identified from the one or more observed objects. The observed objects corresponding to the robot 1 change continuously with the environment; in one implementation, if a new observed object appears, a corresponding observation record is added; if an existing observed object disappears, its observation record is deleted; and if an existing observed object is still present, the relevant information in its observation record is updated.
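These three update rules might look as follows, assuming the association mapping from the previous sketch and an integer-keyed record store; this structure is an illustrative assumption.

```python
# Sketch of updating the historical observation records: add records for new
# observations, delete records whose objects disappeared, refresh the rest.
# A real implementation would likely also reject associations whose matching
# degree is too low before applying these rules.
def update_history(history, observations, association):
    # association: observation index -> record id, as produced above
    matched_records = set(association.values())
    for obs_idx, rec_id in association.items():
        history[rec_id] = observations[obs_idx]      # still present: update
    for rec_id in list(history):
        if rec_id not in matched_records:
            del history[rec_id]                      # disappeared: delete
    next_id = max(history, default=-1) + 1
    for obs_idx, obs in enumerate(observations):
        if obs_idx not in association:
            history[next_id] = obs                   # newly appeared: add
            next_id += 1
    return history
```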
Next, in step S123, the robot 1 may be controlled, based on the cooperation instruction, to move to the target object along the corresponding moving path. Specifically, the robot 1 may determine its moving path to the target object and then be controlled to move along that path. Both the determination of the moving path and the control of the movement may be performed based on the cooperation instruction of the network device 2, or only one of the two may be based on the cooperation instruction.
In one implementation, the robot 1 may be controlled, based on the cooperation instruction, to move to the target object along the corresponding moving path, where the formation state between the robot and the target object matches the multi-robot formation state information in the cooperation instruction, and the relative distance between the robot and the target object falls within a preset relative distance range threshold. The network device 2 may provide, through a cooperation instruction, the formation state information that each robot 1 needs to maintain during its movement, for example, keeping one column, one row, or multiple columns to form a queue; in one implementation, these formation states may be realized by setting parameters such as the moving path and motion state of each robot 1. In still another implementation, the network device 2 may control the running speed of each cooperating robot 1 through a cooperation instruction containing a speed control rule, so as to adjust the distances between the robots 1 and thereby control the movement of the whole queue. Here, the queue shape of the multiple robots in the cooperative task, or specifically the relative positions of the robots with respect to each other, may be controlled by the cooperation instructions. This raises the degree of coordination among the robots 1 and improves the completion efficiency of the cooperative task.
In one implementation, the step S123 may include a step S1231 (not shown) and a step S1232 (not shown). Specifically, in step S1231, the robot 1 may determine a moving path of the robot 1 to the target object based on the cooperation instruction; in step S1232, the robot 1 may control the robot 1 to move along the movement path based on the cooperation instruction.
Further, in step S1231, the robot 1 may acquire obstacle information from its surrounding environment information; next, determine its target coordinates based on the identified position information of the target object; and then, based on the cooperation instruction, determine the moving path of the robot to the target object by combining the target coordinates and the obstacle information, where the cooperation instruction includes multi-robot formation state information.
Specifically, the robot 1 first determines the obstacle information between itself and the target object, where obstacles are all objects in the environment other than the target object; the obstacles therefore include both static obstacles, such as walls and pillars when tracking indoors, and moving obstacles, such as observed objects that are not the target object. Next, the position information of the current target object, for example, the position recorded in the corresponding historical observation record, is set as the target coordinates of the robot 1. Finally, based on the cooperation instruction, the moving path of the robot to the target object is determined according to the distribution of the obstacles and the target coordinates. In practice, since the path from one location to another is not unique, the moving path determined for the robot is not unique either; rather, the most suitable path is selected from several candidates. In a multi-robot cooperative task, the independent motions of the robots must be considered jointly. Here, the cooperation instruction provided by the network device 2 to each robot 1 includes multi-robot formation state information indicating the movement formation of each robot 1, for example, keeping one column, one row, or multiple columns; the moving path of the robot to the target object is then planned according to this formation state information. For example, if the robots 1 advance side by side in a row, the available width along the moving path must be considered, and candidate paths whose width is insufficient are excluded. In one implementation, the cooperation instruction containing the formation state information may be received by the robot 1 before it starts moving, or may be provided to it in real time as the scene changes during movement.
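The width-based filtering of candidate paths might be expressed as in the following sketch; candidate path generation (for example, with a grid planner over the obstacle map) is assumed to exist elsewhere, and all names and the clearance factor are illustrative.

```python
# Sketch of choosing a moving path under formation constraints: candidate
# paths are filtered by the corridor width the formation requires, then the
# most suitable (here: shortest) feasible path is selected.
def choose_path(candidate_paths, formation, robot_width=0.6):
    # Assumed mapping from formation state to the number of robots moving
    # side by side; a real system would derive this from the queue layout
    abreast = {"single_column": 1, "multi_column": 2}.get(formation, 1)
    required_width = abreast * robot_width * 1.5      # illustrative clearance factor
    feasible = [p for p in candidate_paths
                if p["min_width"] >= required_width]  # exclude too-narrow paths
    if not feasible:
        return None                                   # no path fits the formation
    return min(feasible, key=lambda p: p["length"])
```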
Further, in step S1232, the robot 1 may determine its moving speed based on the cooperation instruction, where the cooperation instruction includes a speed control rule, and may then be controlled to move along the moving path at that speed, where the moving speed keeps the relative distance between the robot 1 and the target object within the preset relative distance range threshold. Specifically, when multiple robots move in cooperative formation, the relative positions between individual robots 1 must be considered in addition to the formation itself. For example, in a cooperative moving and carrying task where the robots 1 move in a single column and the carried object is N meters long, ensuring that every robot bears the transport load simultaneously means the relative positions of adjacent robots 1 cannot be arbitrary: the distance between two adjacent robots 1 must be kept within a certain range. Here, the moving speed of the robot 1 may be determined by the speed control rule in the cooperation instruction, so that the robot 1 moves along the moving path at that speed while maintaining the preset distance range to the target robot it follows (which may itself be another robot 1).
Further, in one implementation, determining the moving speed of the robot 1 based on the cooperation instruction, where the cooperation instruction includes a speed control rule, includes: determining the moving speed of the robot 1 based on the speed control rule, where the moving speed includes a forward speed and/or a steering speed. Here, the movement of the robot 1 is constrained by the kinematics and dynamics of the robot body, and the size of the robot 1 must also be considered for collision avoidance. When the robot 1 is controlled to move along the moving path, its moving speed must be controlled while keeping its movement direction from deviating from the path. Preferably, the moving speed of the robot 1 is divided into two components, a forward speed and a steering speed: the forward speed is the speed component along the direction the robot 1 faces, and the steering speed is the speed component perpendicular to the forward speed.
On this basis, a further implementation is as follows: when the distance between the robot 1 and the target object is greater than or equal to a distance threshold, the forward speed and the steering speed are planned and controlled together; when the distance between the robot 1 and the target object is less than the distance threshold, that is, when the robot approaches the target object, only the movement direction of the robot, i.e., the steering speed, needs fine adjustment.
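A minimal sketch of such a speed control rule follows; the gains, limits, and pose format are illustrative assumptions rather than values from this application.

```python
# Sketch of the two-regime speed control: far from the target, forward and
# steering speed are planned together; near the target, only steering is
# fine-tuned. All gains and thresholds are assumed for illustration.
import math

def speed_command(robot_pose, target_xy, distance_threshold=1.0, max_forward=1.0):
    rx, ry, heading = robot_pose
    dx, dy = target_xy[0] - rx, target_xy[1] - ry
    distance = math.hypot(dx, dy)
    bearing_error = math.atan2(dy, dx) - heading
    # Normalize the error into [-pi, pi]
    bearing_error = math.atan2(math.sin(bearing_error), math.cos(bearing_error))
    steering = 1.5 * bearing_error                  # steering-speed component
    if distance >= distance_threshold:
        # Far from the target: plan forward speed to close the gap toward
        # the preset relative-distance range
        forward = min(max_forward, 0.8 * (distance - distance_threshold))
    else:
        forward = 0.0                               # near: fine-tune steering only
    return forward, steering
```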
In the present application, after the robot 1 obtains a cooperation instruction, it determines the target object to be followed, identifies that target object from the scene it captures in real time, and is thereby controlled, based on the cooperation instruction, to move to the target object along the corresponding moving path. Compared with existing robot following techniques, the present application can accurately lock onto the target object in a natural environment that changes in real time and contains many interference factors, and can track it effectively, thereby improving the accuracy of robot following and addressing the problem that existing robots frequently follow the wrong target or lose the target. At the same time, by controlling each robot to move to its target object along the corresponding moving path based on the cooperation instructions, the mutually coordinated formation movement of multiple robots is realized as a whole.
In one implementation, in step S21, the network device 2 may provide a first cooperation instruction to a first robot, where the first robot, based on the first cooperation instruction, moves to the target object or the destination location along the corresponding moving path; the network device 2 then provides a second cooperation instruction to a second robot, where the second robot, based on the second cooperation instruction, follows the first robot along the corresponding moving path. Further, in one implementation, the formation state between the second robot and the first robot matches the multi-robot formation state information in the cooperation instructions, and the relative distance between the second robot and the first robot falls within a preset relative distance range threshold. Here, the first robot and the second robot may each correspond to different robots 1, and in one implementation, the same multi-robot cooperative task may be cooperatively executed by one or more first robots and one or more second robots. In one implementation, the first and second cooperation instructions may be the same or different.
FIG. 3 illustrates a system diagram for multi-robot collaboration in accordance with an aspect of the subject application. Wherein the system comprises a robot 1 and a network device 2.
Wherein the robot 1 comprises a first device 31 and a second device 32, and the network device 2 comprises a fourth device 41.
The embodiment of the application provides a system for multi-robot cooperation. The robot includes various kinds of machine equipment capable of automatically executing work, such as machine equipment having a moving function, a carrying and loading function, or other functions, or machine equipment having several of these functions at the same time, for example, various kinds of artificial intelligence equipment having moving and carrying functions. In the present application, a plurality of robots that perform the same cooperative task may have the same or different functions. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server, where the cloud server may be a virtual supercomputer operating in a distributed system and composed of a group of loosely coupled computers, providing a simple, efficient, safe, reliable computing service with scalable processing capacity. In the present application, the robot is referred to as the robot 1, and the network device is referred to as the network device 2.
In particular, the fourth device 41 may provide matching cooperation instructions to one or more robots 1, wherein the robots 1 execute the corresponding multi-robot cooperative task based on their respective cooperation instructions. Correspondingly, the first device 31 acquires the cooperation instruction matching its robot from the network device 2. Here, the multi-robot cooperative task may be any of various tasks cooperatively performed by the plurality of robots 1. For example, the plurality of robots 1 maintain synchronized movement while keeping similar distances; as another example, multiple robots 1 collectively carry the same object; as yet another example, the plurality of robots 1 perform an assembly task for the components of one object. In one implementation, the network device 2 may match corresponding cooperation instructions to different robots 1 based on the type of the cooperative task or the specific cooperative operation.
In one implementation, the cooperation instruction may include at least any one of: multi-robot formation state information for the robot; a speed control rule for the robot; coordinate information of a target object to be followed by the robot; other execution-related information for the robot.
Specifically, taking as an example a scene in which a plurality of robots 1 move synchronously while keeping similar distances, or in which a plurality of robots 1 carry the same object together: in one implementation, the network device 2 may provide, through a cooperation instruction, the formation state information that each robot 1 needs to maintain during its movement, for example, keeping one column, one row, or multiple columns; in another implementation, the network device 2 may control the running speed of each cooperating robot 1 through a cooperation instruction containing a speed control rule, so as to adjust the distance between the robots 1 and thereby control the movement of the whole queue; in still another implementation, the network device 2 may provide one or more robots 1 with the coordinate information of a target object to be followed, where the coordinate information may be provided when the moving operation is started, or provided in real time during movement based on a setting.
Taking as an example a scenario in which a plurality of robots 1 perform assembly tasks on the respective parts of one object, the cooperation instructions may include a speed control rule for moving each robot 1 to its corresponding assembly position; coordinate information of the target position of the robot; and information on the assembly operation steps of the robot. More generally, the content of the cooperation instruction adapts to the specific needs of other cooperative tasks.
In one implementation, the fourth device 41 may simultaneously and uniformly send a cooperation instruction to each robot 1 participating in the cooperative task; in another implementation, the fourth device 41 may send a cooperation instruction to any one or more robots 1 at any time. In one implementation, the cooperation instructions corresponding to multiple robots 1 in the same cooperative task may be the same, different, or partially the same and partially different; for example, in a synchronous moving scene in which a plurality of robots 1 keep similar distances in a queue, the cooperation instruction of the robot 1 at the head of the queue may differ from the cooperation instructions of the other robots 1 in the queue.
The second device 32 may then execute the corresponding multi-robot cooperative task based on the cooperation instruction. In one implementation, the robots 1 do not need to communicate directly with each other to implement the cooperative task; instead, the network device 2 may control the one or more cooperating robots 1 in real time through the cooperation instructions, and each robot 1 executes its cooperation instruction to implement the cooperative task. In one implementation, the network device 2 may give only the instructions necessary for the robots 1 to cooperate with each other, while operations that do not require cooperation are performed by each robot 1 independently. For example, in a scenario where a plurality of robots 1 keep synchronous movement at similar distances, or a plurality of robots 1 carry the same object together, the overall formation keeping and queue running speed may be controlled by the network device 2 through the cooperation instructions, while the specific following operations of each robot 1, such as determining and identifying the object to follow, may be set and performed by each robot 1 itself.
In the present application, a plurality of independent robots 1 performing a cooperative task may collectively execute the corresponding multi-robot cooperative task based on cooperation instructions acquired from the corresponding network device 2. According to the application requirements of a specific scene, multiple independent robots can thus be flexibly combined through the cooperation instructions sent by the network device 2, so that the combined robots can cooperatively handle tasks with a large workload or a complex division of labor, which facilitates the decomposition of complex work and the optimization of overall resources.
In one implementation, the second device 32 may control the robot 1, based on the cooperation instruction, to move to a destination location or a target object along a corresponding moving path. Here, the multi-robot cooperative task of the present application may be a cooperative task that requires a plurality of robots to move in formation, for example, a plurality of robots 1 keeping synchronous movement at similar distances, or a plurality of robots 1 collectively carrying the same object. Specifically, in one implementation, based on the cooperation instruction, the robot 1 may be controlled to move to a destination location along a corresponding moving path; for example, the robot 1 is one of the robots located at the forefront of the queue, which may not have a specific target object but instead corresponds to a destination location to be reached. In another implementation, based on the cooperation instruction, the robot 1 may be controlled to move toward a target object along a corresponding moving path. For example, a robot 1 at the forefront of the robot queue may have a tracked object, such as a moving person or object; as another example, a robot 1 not at the forefront of the queue needs to follow a target object, i.e., a target robot, which may be the robot closest to it ahead in the queue, or another robot preset or determined based on the cooperation instruction.
In this implementation, the robot 1 may be configured to implement multi-robot formation movement; for example, the cooperating robots 1 may move to a destination location or follow a target object based on their matching cooperation instructions, thereby implementing the formation movement of multiple robots 1. Based on this implementation, various cooperative tasks that rely on the formation movement of multiple robots, such as cooperative moving and carrying tasks, can be realized flexibly and effectively.
Further, in one implementation, the second device 32 includes a first unit (not shown), a second unit (not shown), and a third unit (not shown).
In particular, the first unit may determine the target object that the robot 1 is to follow. In one implementation, the target object includes a target robot, and the robot and its corresponding target robot carry the same transport object; in this case, the cooperative task may correspond to a cooperative moving and carrying task. The robot 1 needs to determine the target object it is to follow at the start of the cooperative task.
In one implementation, when the robot 1 is set to the following mode, the first unit may identify a corresponding matching object from the surrounding environment information captured by the robot 1 in real time, and then take the matching object as the target object to be followed. In one implementation, the following mode of the robot 1 may be initiated by a preset trigger operation. When the following mode begins, the robot 1 may capture surrounding environment information in real time; in one implementation, raw data of the surrounding environment may be acquired by one or more sensing devices of the robot 1, and the raw data may be an image, a picture, or a point cloud. Further, the robot 1 detects from the raw data objects of the type that needs to be followed, and one or more objects in the environment may belong to this type. A classifier may be trained in advance by a machine learning method: feature information extracted from scan data of a given class of objects is input to the classifier, which then detects objects of that class in the environment information by comparison. Since there are often several objects of the class, the matching object is the object selected from these candidates to serve as the target object.
Further, in one implementation, the matching object may include, but is not limited to, at least any one of: the object closest to the robot 1 among the objects around it; the object closest to the robot 1 among the objects in front of it; the object closest to the robot 1 among the objects directly in front of it; an object around the robot 1 that matches the object feature information of the object to be followed; the object around the robot 1 that best matches the object feature information of the object to be followed; the object closest to the robot among the objects around the robot 1 that match the object feature information of the object to be followed. In one implementation, the object feature information may include, but is not limited to, one or more of the position information, motion state information, and body feature information of the object to be followed.
Further, in one implementation, the first unit may determine, based on the cooperation instruction, the coordinate information of the target object to be followed; the robot 1 then acquires its surrounding environment information in real time, where the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to a predetermined distance threshold; the robot 1 then identifies a corresponding matching object from the surrounding environment information and takes the matching object as the target object to be followed. Here, the coordinate information may be absolute or relative coordinate information. The robot 1 obtains the surrounding environment information by scanning, and if the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to the predetermined distance threshold at that moment, a matching object that matches the coordinate information may be identified from the environment information and set as the target object.
Further, in one implementation, if the robot 1 has obtained the cooperation instruction but the distance between its position and the position of the object to be followed is greater than the predetermined distance threshold, the present application provides the following solution: when the distance between the robot 1 and the position indicated by the coordinate information is greater than the predetermined distance threshold, the robot 1 is controlled to move toward that position so as to reduce the distance; during the movement, the surrounding environment information of the robot 1 is acquired in real time, and once the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to the predetermined distance threshold, a corresponding matching object can be identified from the surrounding environment information and taken as the target object to be followed.
Then, the second unit may identify the target object from the scene captured by the robot 1 in real time. Since the objects in the environment also change constantly while the robot 1 is moving, the robot 1 needs to repeat the target object identification operation against the environment as it changes in real time. In one implementation, the robot 1 may periodically scan the surrounding environment to obtain real-time environment data, detect all objects belonging to the same class as the target object in that data, and finally identify the matched target object from the detection results of one scanning period or of several consecutive periods.
Specifically, in one implementation, the second unit may scan and acquire the surrounding environment information of the robot 1 in real time, and then detect from that information one or more observed objects matching the object feature information of the target object. Because the target object determined by the most recent identification operation, together with its object feature information, has been stored (for example, in the form of a historical observation record), the object feature information of the observed objects determined by the current scan can be matched for similarity against the stored object feature information of the target object. Here, the object feature information of an observed object or of the target object may include, but is not limited to, any of the following: position information of the object, which refers to the position of the object at the corresponding scanning moment; motion state information of the object, which includes motion information such as motion direction and speed; and body feature information of the object, which refers to the appearance characteristics of the object body, including shape, size, and color information. The robot 1 may then identify the target object from the one or more observed objects; for example, an observed object satisfying a certain matching degree may be taken as the target object.
Further, in one implementation, identifying the target object from the one or more observed objects may include: determining association information between each of the one or more observed objects corresponding to the robot 1 and the historical observation records, where the one or more observed objects include the target object and the historical observation records contain object-related information of one or more historically observed objects; the robot 1 then identifies the target object from the one or more observed objects based on this association information.
Specifically, once the robot 1 has determined the target object by repeating the identification operation against the environment as it changes in real time, the target object and its corresponding object feature information may be recorded in the historical observation records; at the same time, the other observed objects determined together with the target object, along with their object feature information, may also be recorded. When a target object identification operation is subsequently performed, data association may be carried out between each of the currently acquired observed objects and the historical observation records. In one implementation, data association means matching each currently acquired observed object against each stored historical observation record, and the result of this association is the association information. For example, suppose the current scanning cycle yields N observed objects in the environment while the robot has previously stored historical observation records of M objects, where M and N may be equal or not, and the N observed objects may overlap with the objects corresponding to the M records. Data association then matches the N observed objects one by one against the M historical observation records, yielding a matching degree for each pair; the overall result is an N-row, M-column matrix whose elements are the corresponding matching degrees, and this matrix is the association information. The observed objects include the target object. In one implementation, the matching may be feature matching over one or more items of the objects' feature information. The target object is then identified based on the obtained association information: after the matching degree matrix is obtained, the association pattern with the highest overall matching degree is selected through a comprehensive analysis operation, thereby determining the target object.
In one implementation, the robot 1 further comprises a third device (not shown), which may update the historical observation records based on the one or more observed objects, where the updated objects in the historical observation records include the target object identified from the one or more observed objects. The observed objects corresponding to the robot 1 change continuously with the environment; in one implementation, if a new observed object appears, a corresponding observation record is added; if an existing observed object disappears, its observation record is deleted; and if an existing observed object is still present, the relevant information in its observation record is updated.
Then, the third unit may control the robot, based on the cooperation instruction, to move to the target object along the corresponding moving path. Specifically, the robot 1 may determine its moving path to the target object and then be controlled to move along that path. Both the determination of the moving path and the control of the movement may be performed based on the cooperation instruction of the network device 2, or only one of the two may be based on the cooperation instruction.
In one implementation, the third unit may control the robot, based on the cooperation instruction, to move to the target object along the corresponding moving path, where the formation state between the robot and the target object matches the multi-robot formation state information in the cooperation instruction, and the relative distance between the robot and the target object falls within a preset relative distance range threshold. The network device 2 may provide, through a cooperation instruction, the formation state information that each robot 1 needs to maintain during its movement, for example, keeping one column, one row, or multiple columns to form a queue; in one implementation, these formation states may be realized by setting parameters such as the moving path and motion state of each robot 1. In still another implementation, the network device 2 may control the running speed of each cooperating robot 1 through a cooperation instruction containing a speed control rule, so as to adjust the distances between the robots 1 and thereby control the movement of the whole queue. Here, the queue shape of the multiple robots in the cooperative task, or specifically the relative positions of the robots with respect to each other, may be controlled by the cooperation instructions. This raises the degree of coordination among the robots 1 and improves the completion efficiency of the cooperative task.
In one implementation, the third unit may include a first sub-unit (not shown) and a second sub-unit (not shown). Specifically, the first subunit may determine, based on the cooperation instruction, a movement path of the robot 1 to the target object; the second subunit may control the robot 1 to move along the movement path based on the cooperation instruction.
Further, the first subunit may acquire obstacle information from the surrounding environment information of the robot; next, determine the target coordinates of the robot 1 based on the identified position information of the target object; and then, based on the cooperation instruction, determine the moving path of the robot to the target object by combining the target coordinates and the obstacle information, where the cooperation instruction includes multi-robot formation state information.
Specifically, the first subunit first determines the obstacle information between the robot body and the target object, where obstacles are all objects in the environment other than the target object; the obstacles therefore include both static obstacles, such as walls and pillars when tracking indoors, and moving obstacles, such as observed objects that are not the target object. Next, the position information of the current target object, for example, the position recorded in the corresponding historical observation record, is set as the target coordinates of the robot 1. Finally, based on the cooperation instruction, the moving path of the robot to the target object is determined according to the distribution of the obstacles and the target coordinates. In practice, since the path from one location to another is not unique, the moving path determined for the robot is not unique either; rather, the most suitable path is selected from several candidates. In a multi-robot cooperative task, the independent motions of the robots must be considered jointly. Here, the cooperation instruction provided by the network device 2 to each robot 1 includes multi-robot formation state information indicating the movement formation of each robot 1, for example, keeping one column, one row, or multiple columns; the moving path of the robot to the target object is then planned according to this formation state information. For example, if the robots 1 advance side by side in a row, the available width along the moving path must be considered, and candidate paths whose width is insufficient are excluded. In one implementation, the cooperation instruction containing the formation state information may be received by the robot 1 before it starts moving, or may be provided to it in real time as the scene changes during movement.
Further, the second subunit may determine the moving speed of the robot 1 based on the cooperation instruction, where the cooperation instruction includes a speed control rule, and may then control the robot 1 to move along the moving path at that speed, where the moving speed keeps the relative distance between the robot 1 and the target object within the preset relative distance range threshold. Specifically, when multiple robots move in cooperative formation, the relative positions between individual robots 1 must be considered in addition to the formation itself. For example, in a cooperative moving and carrying task where the robots 1 move in a single column and the carried object is N meters long, ensuring that every robot bears the transport load simultaneously means the relative positions of adjacent robots 1 cannot be arbitrary: the distance between two adjacent robots 1 must be kept within a certain range. Here, the moving speed of the robot 1 may be determined by the speed control rule in the cooperation instruction, so that the robot 1 moves along the moving path at that speed while maintaining the preset distance range to the target robot it follows (which may itself be another robot 1).
Further, in one implementation, determining the moving speed of the robot 1 based on the cooperation instruction, where the cooperation instruction includes a speed control rule, includes: determining the moving speed of the robot 1 based on the speed control rule, where the moving speed includes a forward speed and/or a steering speed. Here, the movement of the robot 1 is constrained by the kinematics and dynamics of the robot body, and the size of the robot 1 must also be considered for collision avoidance. When the robot 1 is controlled to move along the moving path, its moving speed must be controlled while keeping its movement direction from deviating from the path. Preferably, the moving speed of the robot 1 is divided into two components, a forward speed and a steering speed: the forward speed is the speed component along the direction the robot 1 faces, and the steering speed is the speed component perpendicular to the forward speed.
On this basis, a further implementation is as follows: when the distance between the robot 1 and the target object is greater than or equal to a distance threshold, the forward speed and the steering speed are planned and controlled together; when the distance between the robot 1 and the target object is less than the distance threshold, that is, when the robot approaches the target object, only the movement direction of the robot, i.e., the steering speed, needs fine adjustment.
In the present application, after the robot 1 obtains a cooperation instruction, it determines the target object to be followed, identifies that target object from the scene it captures in real time, and is thereby controlled, based on the cooperation instruction, to move to the target object along the corresponding moving path. Compared with existing robot following techniques, the present application can accurately lock onto the target object in a natural environment that changes in real time and contains many interference factors, and can track it effectively, thereby improving the accuracy of robot following and addressing the problem that existing robots frequently follow the wrong target or lose the target. At the same time, by controlling each robot to move to its target object along the corresponding moving path based on the cooperation instructions, the mutually coordinated formation movement of multiple robots is realized as a whole.
In one implementation, the fourth device 41 of the network device 2 may provide a first cooperation instruction to a first robot, where the first robot, based on the first cooperation instruction, moves to the target object or the destination location along the corresponding moving path; the fourth device 41 then provides a second cooperation instruction to a second robot, where the second robot, based on the second cooperation instruction, follows the first robot along the corresponding moving path. Further, in one implementation, the formation state between the second robot and the first robot matches the multi-robot formation state information in the cooperation instructions, and the relative distance between the second robot and the first robot falls within a preset relative distance range threshold. Here, the first robot and the second robot may each correspond to different robots 1, and in one implementation, the same multi-robot cooperative task may be cooperatively executed by one or more first robots and one or more second robots. In one implementation, the first and second cooperation instructions may be the same or different.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.