Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The repositioning method provided by the embodiments of the application can be applied to the application environment shown in figure 1, in which the terminal 102 communicates with the robot 104 via a network. The terminal 102 and the robot 104 may cooperate to perform the repositioning method provided in the embodiments of the present application, or the robot 104 may perform the repositioning method alone. The robot 104 may be any of various self-moving devices such as a cleaning robot, a dispensing robot, a tour robot, an automatic guided vehicle (Automatic Guided Vehicle, AGV), a sweeper, etc. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices; the internet of things devices may be smart televisions, smart car-mounted devices, etc., and the portable wearable devices may be smart watches, etc.
The robot 104 may also cooperate with a server to implement the repositioning method provided in the embodiments of the present application. The server may be an independent physical server, or may be a service node in a blockchain system, where a Peer-To-Peer (P2P) network is formed between the service nodes in the blockchain system, and the P2P protocol is an application layer protocol running on top of the Transmission Control Protocol (TCP). In addition, the server may be a server cluster formed by a plurality of physical servers, or may be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and big data and artificial intelligence platforms. The robot 104 and the server may be connected by a communication connection such as Bluetooth, USB (Universal Serial Bus) or a network, which is not limited herein.
In one embodiment, as shown in fig. 2, a repositioning method is provided. This embodiment is described by taking the application of the method to a robot as an example, that is, the robot serves as the execution body. The method includes steps 202 to 210.
Step 202, obtaining operation parameter values under the condition that the obstacle outline in the obstacle map is matched with the obstacle outline in the static map, wherein the operation parameter values are parameter values corresponding to triggering operation aiming at a display screen, and the display screen displays the obstacle map and the static map.
The obstacle map is a layer displaying an obstacle outline corresponding to the current position of the robot. It can be understood that the robot acquires obstacle data through a sensor at the position where pose information is lost, and forms the obstacle outline in an initial obstacle map according to the obstacle data to obtain the obstacle map. The sensor can be a laser radar sensor, a depth camera, a vision sensor, an infrared sensor or the like, and the obstacle data can be an obstacle point cloud, image data or the like, neither of which is limited herein. The obstacle map may be a layer displayed in the display screen, where the layer includes an obstacle contour corresponding to the current position of the robot, and the obstacle contour is formed according to an obstacle point cloud acquired by a laser radar sensor of the robot. The obstacle map includes, but is not limited to, a virtual robot located at the center of the display screen and an obstacle outline located near the virtual robot. The virtual robot refers to a virtualized representation of the robot in the obstacle map, and the obstacle outline refers to a graph characterizing the shape of the obstacle; the obstacle outline may be composed of multiple curves or multiple points, which is not limited herein.
The static map refers to a map used for representing environmental information during navigation and positioning of the robot. The static map can be generated in advance through SLAM (Simultaneous Localization and Mapping) or other mapping technologies, and remains unchanged while the robot operates. The static map includes the outlines of the obstacles in the robot's operating environment. For example, as shown in fig. 3, when the operator clicks the start control shown in fig. 3, an obstacle map and a static map are displayed in the display screen. The virtual robot 302 and the obstacle profile 304 are located in the obstacle map, and the virtual robot 302 is located at the center of the display screen. The obstacle profile 304 in the obstacle map represents the obstacle profile corresponding to the current position of the robot, that is, the obstacle profile corresponding to the position where the robot lost pose information, which is a profile of local obstacles in the operating environment. The obstacle profile 306 is located in the static map and represents the global obstacle profile of the robot's operating environment, that is, the profile of all obstacles in the operating environment. For example, if the operating environment of the robot is a living room, the obstacle profile in the obstacle map is the obstacle profile detected by the laser radar sensor at the position where the robot lost pose information, and the obstacle profile in the static map is the profile of all obstacles in the living room. The positional relationship between the obstacle map and the static map is an upper-lower layer relationship: the obstacle map may be located on a first layer and the static map on a second layer, that is, the obstacle map is on the upper layer of the static map; or the obstacle map may be located on the second layer and the static map on the first layer, that is, the obstacle map is on the lower layer of the static map.
Matching refers to the obstacle outline in the obstacle map overlapping a local obstacle outline in the static map. For example, as shown in fig. 4, the obstacle profile 304 in the obstacle map locally overlaps the obstacle profile 306 in the static map; in this case the obstacle outline in the obstacle map matches the obstacle outline in the static map. The operation parameter value represents a parameter value of the triggering operation on the static map, and the triggering operation on the static map includes, but is not limited to, moving, zooming and rotating the static map. It can be understood that an operator performs the triggering operation on the static map through the display screen so that the obstacle profile in the obstacle map matches the obstacle profile in the static map; the parameter values of this triggering operation on the static map are the operation parameter values, which include, but are not limited to, a movement parameter value, a zooming parameter value, a rotation parameter value and the like. Alternatively, the operator may perform the triggering operation on the obstacle map through the display screen so that the obstacle profile in the obstacle map matches the obstacle profile in the static map; in that case the parameter values of the triggering operation on the obstacle map are the operation parameter values, which likewise include, but are not limited to, a movement parameter value, a zooming parameter value, a rotation parameter value and the like.
In an exemplary embodiment, when the robot loses pose information, an obstacle map and a static map are displayed in the display screen, and the operator adjusts the static map in the display screen through a triggering operation so that the obstacle profile in the obstacle map matches the obstacle profile in the static map. When the obstacle profile in the obstacle map matches the obstacle profile in the static map, the robot acquires the operation parameter value corresponding to the triggering operation for the display screen.
In one embodiment, step 202 includes obtaining obstacle data in the event that the robot loses pose information, generating an obstacle map based on the obstacle data, obtaining a static map, displaying the obstacle map and the static map in a display screen, adjusting a position of the static map in the display screen in response to a triggering operation for the static map, and obtaining operating parameter values in the event that an obstacle profile in the obstacle map matches an obstacle profile in the static map. The obstacle data characterizes the detected obstacle, and the obstacle data may be an obstacle point cloud or image data, and the like, which is not limited herein.
In one embodiment, the obstacle data acquired when the robot loses pose information is an obstacle point cloud, and the obstacle map is generated based on the obstacle point cloud. The obstacle point cloud is a set of points obtained by the laser radar sensor detecting obstacles.
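As an illustrative sketch only, the following Kotlin snippet shows one possible way to project a lidar scan into display-screen pixels so that the obstacle outline can be drawn around a virtual robot placed at the screen center. The ScanPoint format, the pixels-per-meter scale and the y-axis flip are assumptions for illustration and are not prescribed by this embodiment.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// One lidar return: bearing in radians (robot frame) and range in metres.
data class ScanPoint(val angle: Double, val range: Double)

/**
 * Projects a lidar scan into display-screen pixels so the obstacle outline
 * can be drawn around a virtual robot placed at the screen centre.
 * pixelsPerMeter and the centre coordinates are illustrative assumptions.
 */
fun scanToObstacleOutline(
    scan: List<ScanPoint>,
    centerX: Float,
    centerY: Float,
    pixelsPerMeter: Float
): List<Pair<Float, Float>> =
    scan.filter { it.range.isFinite() && it.range > 0.0 }
        .map { p ->
            val x = centerX + (p.range * cos(p.angle) * pixelsPerMeter).toFloat()
            // Screen y grows downwards, so the robot-frame y axis is flipped here.
            val y = centerY - (p.range * sin(p.angle) * pixelsPerMeter).toFloat()
            x to y
        }
```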
Step 204, determining an initial position of the robot in the display screen coordinate system based on the displacement parameter values in the operation parameter values.
The displacement parameter values refer to parameter values for moving the static map, and include a horizontal movement parameter value and a vertical movement parameter value. The initial position refers to the position of the robot in the static map in the display screen coordinate system before the triggering operation. It can be understood that, before the triggering operation, the robot in the obstacle map is located at the center of the obstacle map and the obstacle outline in the obstacle map is located around the robot; at this time, the robot in the obstacle map is located at the center of the display screen. The display screen coordinate system takes the lower left corner of the display screen as the origin, and the position of the robot in the obstacle map in the display screen coordinate system is the center point of the display screen, denoted (x0, y0). The operator moves the static map through the triggering operation so that the obstacle outline in the static map matches the obstacle outline in the obstacle map; at this moment, the robot in the static map coincides with the robot in the obstacle map, so the position of the robot in the static map in the display screen coordinate system is also (x0, y0). The position of the robot in the static map in the display screen coordinate system before the triggering operation can therefore be derived from its position after the triggering operation and the movement distances of the static map. The horizontal movement distance of the static map is dx and the vertical movement distance is dy, where dx = eventx - lastx and dy = eventy - lasty, (eventx, eventy) being the position coordinate when the gesture of the triggering operation is put down and (lastx, lasty) being the position coordinate when the gesture of the triggering operation is lifted; the position of the robot in the static map in the display screen coordinate system before the triggering operation is then (x0 + dx, y0 + dy).
Illustratively, the robot obtains the displacement parameter values from the operation parameter values, as well as the map height and map width of the static map in the display screen, and determines the initial position of the robot in the display screen coordinate system based on the displacement parameter values, the map height and the map width.
Step 206, based on the operation parameter values, determining the conversion relation between the display screen coordinate system and the static map coordinate system.
The static map coordinate system is a coordinate system with the upper left corner of the static map as the origin, the horizontal direction of the static map as the horizontal axis and the vertical direction of the static map as the vertical axis. The conversion relation refers to conversion parameters representing the transformation from the display screen coordinate system to the static map coordinate system; it can be understood that the conversion parameters convert a position coordinate in the display screen coordinate system into a position coordinate in the static map coordinate system, and the conversion relation can be represented by a matrix or a set of conversion parameters.
Illustratively, the robot determines a static map transformation matrix based on the operation parameter values, and determines the conversion relationship from the display screen coordinate system to the static map coordinate system based on the static map transformation matrix.
Step 208, determining the repositioning position of the robot based on the initial position and the conversion relation.
The repositioning position refers to the current position of the robot in the world coordinate system, obtained by re-localizing the robot after it loses pose information.
Illustratively, the robot determines a static position of the robot in a static map coordinate system based on the initial position and the conversion relationship, and determines a repositioning position of the robot in a world coordinate system based on the static position.
In one embodiment, in the case that the repositioning of the robot fails, the robot rotates by a preset angle, acquires an updated lidar point cloud through the lidar sensor, generates an obstacle map based on the updated lidar point cloud, and repeatedly performs steps 202 to 208 until the repositioning position is obtained. The preset angle refers to a preset rotation angle, and may be determined according to the rotation speed of the robot. For example, if the robot takes 5-6 seconds to rotate 90 degrees, 10-12 seconds to rotate 180 degrees, and 20-24 seconds to rotate 360 degrees, then a preset angle of 360 degrees means a rotation time of 20-24 seconds, which results in a longer repositioning time; therefore the preset angle may be set to 90 degrees, or the preset angle may be set to 180 degrees.
In one embodiment, after step 208, the method further includes: the robot acquires matching feature data corresponding to the repositioning position from the map configuration file, determines obstacle feature data corresponding to the obstacle point cloud based on the obstacle point cloud corresponding to the obstacle outline in the obstacle map, calculates the similarity between the matching feature data and the obstacle feature data, and compares the similarity with a similarity threshold; if the similarity is equal to or greater than the similarity threshold, the repositioning position is determined to be correct, and if the similarity is less than the similarity threshold, the repositioning position is determined to be incorrect and the repositioning has failed.
In one embodiment, rotating the robot by a preset angle in the event of a repositioning failure includes rotating the robot by the preset angle in an obstacle avoidance rotation mode in the event that the repositioning of the robot fails and the chassis of the robot is non-circular. If the chassis of the robot is circular, the robot will not collide with an obstacle while rotating in place; if the chassis of the robot is non-circular, the robot may collide with an obstacle during rotation, and rotating in the obstacle avoidance rotation mode can prevent the robot from colliding with obstacles during rotation.
In one embodiment, rotating the robot by a preset angle in the obstacle avoidance rotation mode includes: acquiring a historical position, determining a target position based on the historical position and the preset angle, determining a rotation speed based on the historical position and the target position, and rotating based on the rotation speed. The historical position refers to the position corresponding to the moment before the pose information was lost. The rotation speed refers to the speed at which the robot rotates, and includes a rotation linear speed and a rotation angular speed. For example, as shown in fig. 5, the robot acquires the historical position corresponding to the moment before the pose information was lost, determines the target position according to the historical position and the preset angle, determines the rotation speed from the historical position and the target position through a planning algorithm, and rotates according to the rotation speed. Obstacle recognition is performed during rotation; if an obstacle is recognized, the rotation path is re-planned (for example, reverse rotation) and the rotation speed is re-determined to obtain an updated rotation speed, and the robot then rotates based on the updated rotation speed.
In one embodiment, as shown in fig. 6, in the case of a repositioning failure of the robot, the historical position corresponding to the moment before positioning was lost is obtained, and a plurality of candidate positioning points are determined based on the historical position (for example, candidate points arranged around the historical position in a "米"-shaped, asterisk-like pattern within a range of 1 m). The robot moves to one of the candidate positioning points, acquires an updated lidar point cloud through the lidar sensor, generates an updated obstacle map based on the updated lidar point cloud, and performs steps 202 to 208. If the repositioning of the robot at that candidate positioning point succeeds, the repositioning position is obtained; if the positioning of the robot at that candidate positioning point fails, the robot moves to the next candidate positioning point, and the process is repeated until the repositioning position is obtained. After the robot fails to reposition in situ, a plurality of candidate positioning points are determined near the historical position and repositioning is attempted at each of them in turn, so that a successful repositioning is found and the repositioning position is obtained.
In the repositioning method, at the position where the robot lost pose information, an obstacle map is formed from the laser radar point cloud acquired by the laser radar sensor of the robot, and the obstacle outline in the obstacle map reflects the environment information of the current position of the robot. The static map is adjusted through the triggering operation so that the obstacle outline in the obstacle map matches the obstacle outline in the static map, that is, the local obstacle outline in the static map that is similar or identical to the obstacle outline of the obstacle map is determined through the triggering operation. The operation parameter value corresponding to the triggering operation is acquired, the initial position of the robot in the display screen coordinate system is determined according to the displacement parameter value in the operation parameter values, the conversion relation between the display screen coordinate system and the static map coordinate system is determined according to the operation parameter values, and the repositioning position of the robot is determined according to the initial position and the conversion relation. Compared with pushing the robot back to the starting point, this repositioning method can achieve in-situ repositioning, thereby shortening the repositioning time and improving the repositioning efficiency.
In one embodiment, determining the initial position of the robot in the display screen based on the displacement parameter values in the operation parameter values comprises:
The method comprises the steps of obtaining a map height and a map width of a static map in a display screen, obtaining displacement parameter values from operation parameter values, wherein the displacement parameter values comprise horizontal displacement parameter values and vertical displacement parameter values, determining a horizontal initial position of a robot in a coordinate system of the display screen based on the map width and the horizontal displacement parameter values, determining a vertical initial position of the robot in the coordinate system of the display screen based on the map height and the vertical displacement parameter values, and determining an initial position of the robot in the display screen based on the horizontal initial position and the vertical initial position.
The map height refers to the height of the static map in the display screen, and the map width refers to the width of the static map in the display screen. The horizontal displacement parameter value refers to a parameter value for performing horizontal movement on the static map, i.e., a parameter value for lateral movement. The vertical displacement parameter value refers to a parameter value for performing a vertical movement of the static map, i.e., a parameter value for a longitudinal movement.
Illustratively, the robot obtains the map height and map width of the static map in the display screen, obtains the horizontal displacement parameter value and the vertical displacement parameter value from the operation parameter values, adds the horizontal displacement parameter value to one half of the map width to obtain the horizontal initial position of the robot in the display screen coordinate system, adds the vertical displacement parameter value to one half of the map height to obtain the vertical initial position of the robot in the display screen coordinate system, and combines the horizontal initial position and the vertical initial position into the initial position of the robot in the display screen.
In one embodiment, the robot obtains the map width nWidth and the map height nHeight through the control displaying the static map, obtains the horizontal displacement parameter value dx and the vertical displacement parameter value dy from the operation parameter values, and determines the initial position of the robot in the display screen to be ((nWidth/2).toFloat() + dx, (nHeight/2).toFloat() + dy). Here, (nWidth/2).toFloat() means converting nWidth/2 into floating point data, and (nHeight/2).toFloat() means converting nHeight/2 into floating point data.
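As a minimal sketch of this computation only, assuming the map dimensions and displacement values are already available, the initial position in the display screen coordinate system may be computed as follows:

```kotlin
/**
 * Initial position of the robot in the display screen coordinate system,
 * mirroring ((nWidth / 2).toFloat() + dx, (nHeight / 2).toFloat() + dy).
 * nWidth/nHeight: width/height of the static map in the display control;
 * dx/dy: accumulated horizontal/vertical displacement parameter values.
 */
fun initialScreenPosition(nWidth: Int, nHeight: Int, dx: Float, dy: Float): Pair<Float, Float> =
    Pair((nWidth / 2).toFloat() + dx, (nHeight / 2).toFloat() + dy)
```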
In this embodiment, after the trigger operation is performed on the static map, the initial position of the robot in the display screen is determined through the horizontal displacement parameter value and the vertical displacement parameter value, and basic data is provided for subsequently determining the static position of the robot in the static map.
In one embodiment, determining the conversion relationship from the display screen coordinate system to the static map coordinate system based on the operating parameter values includes:
The method comprises the steps of determining a static map transformation matrix based on operation parameter values, determining an inverse matrix of the static map transformation matrix, and determining the inverse matrix as a conversion relation between a display screen coordinate system and a static map coordinate system.
The static map transformation matrix is a matrix representing the change caused by the triggering operation on the static map; it represents the movement, scaling and rotation applied to the static map by the triggering operation, and encodes the horizontal movement distance, the vertical movement distance, the scaling multiple and the rotation angle of the triggering operation on the static map. The static map transformation matrix can be understood as the transformation of the points of the static map in the display screen coordinate system, and the position coordinates of the points of the static map in the display screen coordinate system after the triggering operation can be determined according to the static map transformation matrix. For example, before any triggering operation on the static map, the initial static map transformation matrix is the identity matrix [[1, 0, 0], [0, 1, 0], [0, 0, 1]]. A horizontal displacement parameter value dx and a vertical displacement parameter value dy are converted into the static map transformation matrix [[1, 0, dx], [0, 1, dy], [0, 0, 1]], that is, the static map transformation matrix can represent the horizontal movement distance dx and the vertical movement distance dy of the static map. Likewise, the static map transformation matrix can represent the scaling and rotation of the static map: for example, scaling the static map by 2 times corresponds to the matrix [[2, 0, 0], [0, 2, 0], [0, 0, 1]], and rotating the static map 45 degrees clockwise corresponds to the rotation matrix [[cos 45°, -sin 45°, 0], [sin 45°, cos 45°, 0], [0, 0, 1]] (taking the screen convention in which a positive angle is clockwise). For example, after scaling the static map by 2 times, shifting it horizontally by dx and vertically by dy gives the static map transformation matrix [[2, 0, dx], [0, 2, dy], [0, 0, 1]]; and first zooming the static map by 2 times, then moving it horizontally by dx and vertically by dy, and then rotating it clockwise by 45 degrees gives a static map transformation matrix that is the product of the corresponding rotation, translation and scaling matrices. The inverse matrix refers to the inverse transformation matrix of the static map transformation matrix; for example, if the static map transformation matrix is Tview_map, then the inverse matrix of the static map transformation matrix is Tmap_view, and the product of Tview_map and Tmap_view is the identity matrix.
It will be appreciated that displaying the static map in the display screen requires converting the static map coordinate system of the static map to the display screen coordinate system of the display screen. The static map transformation matrix Tview_map is typically a composite of scaling, translation and rotation transformations, and a point Pmap in the static map coordinate system can be converted into a point Pview in the display screen coordinate system by the static map transformation matrix Tview_map, i.e., Pview = Tview_map × Pmap; thus the static map transformation matrix characterizes the conversion relationship from the static map coordinate system to the display screen coordinate system. Tmap_view = Tview_map⁻¹, i.e., Tmap_view is the inverse of Tview_map and Tmap_view × Tview_map = I, where I is the identity matrix; the point Pview in the display screen coordinate system can then be converted into Pmap in the static map coordinate system by Tmap_view, i.e., Pmap = Tmap_view × Pview. Thus the inverse of the static map transformation matrix characterizes the conversion relationship from the display screen coordinate system to the static map coordinate system.
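For illustration only, the following plain-Kotlin sketch composes a scaling, translation and rotation into a Tview_map-style matrix and inverts it to obtain Tmap_view. It uses a minimal hand-rolled affine matrix type rather than any particular UI framework's matrix class, and the composition order and rotation sign convention shown in main are assumptions.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

/**
 * Minimal 2D affine transform in homogeneous form, row-major:
 *   [ a b c ]
 *   [ d e f ]
 *   [ 0 0 1 ]
 */
data class Affine(val a: Double, val b: Double, val c: Double,
                  val d: Double, val e: Double, val f: Double) {

    /** Matrix product this * o (apply o first, then this). */
    operator fun times(o: Affine) = Affine(
        a * o.a + b * o.d, a * o.b + b * o.e, a * o.c + b * o.f + c,
        d * o.a + e * o.d, d * o.b + e * o.e, d * o.c + e * o.f + f
    )

    /** Maps the homogeneous point (x, y, 1)^T through the matrix. */
    fun map(x: Double, y: Double) = Pair(a * x + b * y + c, d * x + e * y + f)

    /** Analytic inverse of an invertible affine transform (Tmap_view = Tview_map^-1). */
    fun inverse(): Affine {
        val det = a * e - b * d
        require(det != 0.0) { "matrix is not invertible" }
        val ia = e / det; val ib = -b / det
        val id = -d / det; val ie = a / det
        return Affine(ia, ib, -(ia * c + ib * f), id, ie, -(id * c + ie * f))
    }

    companion object {
        fun translate(dx: Double, dy: Double) = Affine(1.0, 0.0, dx, 0.0, 1.0, dy)
        fun scale(s: Double) = Affine(s, 0.0, 0.0, 0.0, s, 0.0)
        // The sign convention of the angle depends on the screen's y-axis direction.
        fun rotate(rad: Double) = Affine(cos(rad), -sin(rad), 0.0, sin(rad), cos(rad), 0.0)
    }
}

fun main() {
    // Example: scale by 2, then translate by (dx, dy), then rotate by 45 degrees.
    val dx = 30.0; val dy = -12.0
    val tViewMap = Affine.rotate(Math.toRadians(45.0)) *
            Affine.translate(dx, dy) * Affine.scale(2.0)
    val tMapView = tViewMap.inverse()      // display screen -> static map
    println(tMapView.map(100.0, 200.0))    // a screen point expressed in static-map pixels
}
```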
Illustratively, the robot determines a static map transformation matrix based on the operation parameter values, then determines an inverse of the static map transformation matrix, and determines the inverse as a conversion relationship between the display screen coordinate system and the static map coordinate system.
In this embodiment, the conversion relationship between the display screen coordinate system and the static map coordinate system is obtained by determining the inverse matrix of the static map transformation matrix. The static map transformation matrix is a composite of scaling, translation and rotation transformations and characterizes the conversion relationship from the static map coordinate system to the display screen coordinate system, so its inverse matrix characterizes the conversion relationship from the display screen coordinate system to the static map coordinate system. Determining the inverse matrix as the conversion relationship between the display screen coordinate system and the static map coordinate system therefore allows the initial position of the robot in the display screen coordinate system to be converted into the static position in the static map coordinate system.
In one embodiment, determining the repositioning position of the robot based on the initial position and the conversion relationship includes:
The method comprises the steps of determining an initial position matrix of the robot based on the initial position, multiplying the initial position matrix by a conversion relation to obtain a target position matrix, and determining the repositioning position of the robot based on the target position matrix.
Wherein the initial position matrix refers to a matrix representing the initial position of the robot. For example, if the initial position is (nWidth/2 + dx, nHeight/2 + dy), the initial position matrix may be written as the homogeneous column vector [nWidth/2 + dx, nHeight/2 + dy, 1]^T.
Illustratively, the robot converts the initial position into an initial position matrix, multiplies the initial position matrix by a conversion relationship to obtain a target position matrix, and determines a repositioning position of the robot based on the target position matrix.
In this embodiment, the initial position matrix is multiplied by the conversion relationship to obtain the target position matrix, where the target position matrix includes the static position of the robot in the static map coordinate system, that is, the initial position of the robot in the display screen coordinate system is converted into the static position in the static map coordinate system, and the repositioning position of the robot can be determined according to the static position.
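As an illustrative sketch, assuming the conversion relation is held as a row-major 3x3 array, multiplying it by the initial position written as the homogeneous column (x, y, 1)^T yields the static position:

```kotlin
/**
 * Applies the display-screen -> static-map conversion relation (a 3x3 matrix,
 * row-major) to the initial position (x, y), treated as the homogeneous
 * column vector (x, y, 1)^T, and returns the static position in map pixels.
 */
fun toStaticPosition(tMapView: Array<DoubleArray>, x: Double, y: Double): Pair<Double, Double> {
    val sx = tMapView[0][0] * x + tMapView[0][1] * y + tMapView[0][2]
    val sy = tMapView[1][0] * x + tMapView[1][1] * y + tMapView[1][2]
    return Pair(sx, sy)
}
```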
In one embodiment, determining a repositioning position of the robot based on the target location matrix includes:
The method comprises the steps of obtaining a static position of a robot in a static map from a target position matrix, obtaining a map building initial position of the robot and resolution of the static map, wherein the map building initial position is the position of a starting point of the robot for building the static map in a world map coordinate system, and determining a repositioning position of the robot based on the static position, the map building initial position and the resolution.
The static position refers to a position in the static map coordinate system corresponding to the initial position in the display screen coordinate system, namely, the position of the robot in the static map. The resolution refers to a correspondence relationship between a unit distance of the static map and an actual unit distance, and the resolution may be expressed as a distance/pixel, i.e., a distance represented by each pixel. The map-building start position refers to the position of the robot in the world map coordinate system at the start point of building the static map. The world map coordinate system refers to a global coordinate system describing the absolute position and orientation of the robot in its operating environment.
The method includes the steps that a robot obtains a static position of the robot in a static map from a target position matrix, obtains a map building initial position of the static map and resolution of the static map, wherein the map building initial position comprises a map building horizontal initial position and a map building vertical initial position, determines a repositioning horizontal position of the robot based on the static position, the map building horizontal initial position and the resolution, determines a repositioning vertical position of the robot based on the static position, the map building vertical initial position and the resolution, and obtains the repositioning position of the robot based on the repositioning horizontal position and the repositioning vertical position.
In one embodiment, the robot acquires the static position (x, y) of the robot in the static map from the target position matrix, and acquires the map-building starting position (originX, originY) and the resolution of the static map; the repositioning horizontal position val_x and the repositioning vertical position val_y are then as follows:
val_x = x × resolution + originX    Formula (1)
val_y = ((nHeight - 1) - y) × resolution + originY    Formula (2)
Where nHeight is the map height of the static map in the display screen.
It can be understood that in val_x = x × resolution + originX, val_x is the X-axis coordinate value of the robot in the world coordinate system, that is, the repositioning horizontal position; because the X-axis direction of the world coordinate system is the same as the X-axis direction of the static map coordinate system, the repositioning horizontal position of the robot in the world coordinate system can be calculated directly as x × resolution + originX. However, the Y axis of the world coordinate system points upward while the Y axis of the static map coordinate system points downward; nHeight - 1 is the largest Y-axis coordinate value in the static map coordinate system, and (nHeight - 1) - y flips the y value of the static map coordinate system into an upward-pointing Y-axis coordinate value, so the repositioning vertical position of the robot in the world coordinate system can be calculated as ((nHeight - 1) - y) × resolution + originY. In one embodiment, the robot determines an initial angle of the robot based on the target position matrix, and determines the repositioning angle of the robot based on the initial angle; for example, if the initial angle is z, the repositioning angle is val_z = -z - π/2.
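A minimal Kotlin sketch of this conversion, assuming the static position, initial angle, map-building starting position and resolution are already known, might look as follows:

```kotlin
import kotlin.math.PI

/** Repositioning pose of the robot in the world coordinate system. */
data class WorldPose(val x: Double, val y: Double, val theta: Double)

/**
 * Converts the static position (x, y) in static-map pixels into the world
 * coordinate system using formulas (1) and (2); z is the initial angle read
 * from the target position matrix.
 */
fun staticToWorld(
    x: Double, y: Double, z: Double,
    resolution: Double,               // metres per pixel of the static map
    originX: Double, originY: Double, // map-building starting position
    nHeight: Int                      // map height of the static map
): WorldPose {
    val valX = x * resolution + originX                      // formula (1)
    val valY = ((nHeight - 1) - y) * resolution + originY    // formula (2): flip the y axis
    val valZ = -z - PI / 2                                   // repositioning angle
    return WorldPose(valX, valY, valZ)
}
```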
In this embodiment, the repositioning position of the robot is determined by the static position, the map-building starting position and the resolution, i.e. the static position of the robot in the static map is converted into the repositioning position in the world coordinate system.
In one embodiment, as shown in fig. 7, determining the repositioning position of the robot based on the static position, the mapping starting position, and the resolution includes:
Step 702, determining an initial positioning position based on the static position, the mapping starting position and the resolution.
The initial positioning position is a positioning position determined according to the static position, the map-building starting position and the resolution, which has not yet been verified.
Illustratively, the map creation start position includes a map creation horizontal start position and a map creation vertical start position, the robot determines a repositioning horizontal position of the robot based on the static position, the map creation horizontal start position, and the resolution, determines a repositioning vertical position of the robot based on the static position, the map creation vertical start position, and the resolution, and obtains an initial positioning position of the robot based on the repositioning horizontal position and the repositioning vertical position.
Step 704, determining obstacle characteristic data corresponding to the obstacle point cloud based on the obstacle point cloud corresponding to the obstacle outline in the obstacle map.
The obstacle point cloud refers to a set corresponding to an obstacle detected by the robot through the laser sensor. The obstacle characteristic data refers to data representing characteristics of an obstacle corresponding to the obstacle point cloud, and the obstacle characteristic data can be represented by vectors, matrixes or the like.
Illustratively, the robot inputs an obstacle point cloud corresponding to an obstacle outline in the obstacle map to the neural network model, and the neural network model outputs obstacle characteristic data corresponding to the obstacle point cloud.
Step 706, obtaining matching feature data corresponding to the initial positioning position, and determining the similarity between the matching feature data and the obstacle feature data.
The matching feature data represents the feature data of the obstacles corresponding to the initial positioning position. The similarity is a numerical value characterizing how similar the matching feature data and the obstacle feature data are; the greater the value, the higher the degree of similarity.
Illustratively, the robot acquires matching feature data corresponding to the initial positioning position from the map configuration file, and calculates the similarity between the matching feature data and the obstacle feature data.
In step 708, in the event that the similarity is greater than the similarity threshold, the initial positioning position is determined to be the relocation position.
The similarity threshold is a preset numerical value for judging whether repositioning is accurate or not.
Illustratively, the robot compares the similarity with the similarity threshold, and if the similarity is greater than the similarity threshold, determines the initial positioning position to be the repositioning position.
In one embodiment, if the similarity is smaller than or equal to the similarity threshold, the robot rotates by the preset angle to obtain an updated obstacle point cloud, determines updated obstacle feature data corresponding to the updated obstacle point cloud based on the updated obstacle point cloud, calculates the similarity between the matching feature data and the updated obstacle feature data, and compares this similarity with the similarity threshold; if the similarity is still smaller than or equal to the similarity threshold, the process is repeated until a cycle stop condition is met, and if the similarity is greater than the similarity threshold, the initial positioning position is determined to be the repositioning position. The cycle stop condition refers to a condition for stopping the cycle; for example, the cycle stop condition may be that the number of rotations reaches a preset number.
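Purely as a sketch of this verify-and-retry loop: the snippet below assumes the feature data are plain vectors, uses cosine similarity as one possible similarity measure (the embodiment does not prescribe a specific metric), and leaves the rotation and feature-extraction steps as injected functions, since their implementations are not specified here. The threshold and rotation count are placeholder values.

```kotlin
import kotlin.math.sqrt

/**
 * Verify-and-retry sketch: compares matching feature data against obstacle
 * feature data and, on failure, rotates by the preset angle and tries again.
 * rotateByDegrees and captureObstacleFeatures are hypothetical hooks; the
 * threshold and maxRotations values are placeholders.
 */
class RelocationVerifier(
    private val rotateByDegrees: (Double) -> Unit,
    private val captureObstacleFeatures: () -> DoubleArray,
    private val similarityThreshold: Double = 0.8,   // assumed value
    private val maxRotations: Int = 4                // cycle stop condition: preset number of rotations
) {
    /** Cosine similarity between two feature vectors (one possible similarity measure). */
    private fun similarity(a: DoubleArray, b: DoubleArray): Double {
        require(a.size == b.size) { "feature vectors must have equal length" }
        var dot = 0.0; var na = 0.0; var nb = 0.0
        for (i in a.indices) {
            dot += a[i] * b[i]
            na += a[i] * a[i]
            nb += b[i] * b[i]
        }
        return if (na == 0.0 || nb == 0.0) 0.0 else dot / (sqrt(na) * sqrt(nb))
    }

    /** Returns true if the initial positioning position can be accepted as the repositioning position. */
    fun verify(matchingFeatures: DoubleArray, presetAngleDeg: Double = 90.0): Boolean {
        var rotations = 0
        while (true) {
            val obstacleFeatures = captureObstacleFeatures()
            if (similarity(matchingFeatures, obstacleFeatures) > similarityThreshold) return true
            if (rotations >= maxRotations) return false  // cycle stop condition met
            rotateByDegrees(presetAngleDeg)              // rotate by the preset angle and retry
            rotations++
        }
    }
}
```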
In this embodiment, by determining the magnitude relation between the similarity between the matching feature data and the obstacle feature data and the similarity threshold, it is determined whether the repositioning position is accurate, thereby improving the accuracy of the repositioning position.
In one embodiment, obtaining the operating parameter values in the event that the obstacle profile in the obstacle map matches the obstacle profile in the static map comprises:
The method comprises the steps of responding to repositioning operation, displaying a static map in a display screen, acquiring laser radar point cloud, displaying an obstacle map on the upper layer of the static map based on the laser radar point cloud, responding to triggering operation for the static map, and acquiring an operation parameter value corresponding to the triggering operation for the display screen under the condition that the outline of the obstacle in the obstacle map is matched with the outline of the obstacle in the static map.
The repositioning operation refers to an operation of clicking a repositioning control by an operator, wherein the repositioning control can be an entity control or a virtual control, and can be located on a robot or equipment for controlling the robot. The lidar point cloud is a point set obtained by a lidar sensor and used for representing an obstacle. The triggering operation refers to an operation of adjusting the static map by an operator, and the triggering operation includes, but is not limited to, at least one of movement, zooming, and rotation.
Illustratively, in the case that the robot loses pose information, an operator clicks the repositioning control, and the robot displays the static map in the display screen in response to the repositioning operation, then obtains the laser radar point cloud and displays the obstacle map on the upper layer of the static map based on the laser radar point cloud. The operator triggers the static map through the display screen; in response to the triggering operation on the static map, the robot adjusts the display position of the static map in the display screen, and obtains the operation parameter values when the outline of the obstacle in the obstacle map matches the outline of the obstacle in the static map.
In one embodiment, acquiring the operation parameter value under the condition that the obstacle outline in the obstacle map is matched with the obstacle outline in the static map comprises the steps that the robot responds to the triggering operation aiming at the static map, acquires a first position when the gesture is put down and a second position after the gesture moves, determines single horizontal displacement and single vertical displacement corresponding to the single movement based on the second position and the first position, adds the plurality of single horizontal displacements under the condition that the obstacle outline in the obstacle map is matched with the obstacle outline in the static map to obtain the horizontal displacement parameter value, and adds the plurality of single vertical displacements to obtain the vertical displacement parameter value. Similarly, rotation parameter values and scaling factors may be obtained.
In one embodiment, in response to a triggering operation for the static map, dx and dy corresponding to different events are obtained as dx = eventx - lastx + oldx and dy = eventy - lasty + oldy. For example, for a movement event, (eventx, eventy) is the position when a single movement gesture is lifted, (lastx, lasty) is the position when the single movement gesture is put down, oldx is the historical horizontal movement accumulated over the single movements before this one, and oldy is the historical vertical movement accumulated over the single movements before this one. In the case that the obstacle profile in the obstacle map matches the obstacle profile in the static map, dx is the horizontal displacement parameter value and dy is the vertical displacement parameter value.
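A minimal sketch of this accumulation, assuming each single move reports its put-down and lifted coordinates (the event names are illustrative, e.g. touch-event callbacks of a display control):

```kotlin
/**
 * Accumulates the horizontal/vertical displacement parameter values across
 * successive single-move gestures: dx = eventx - lastx + oldx,
 * dy = eventy - lasty + oldy.
 */
class DisplacementAccumulator {
    private var oldx = 0f
    private var oldy = 0f
    var dx = 0f; private set
    var dy = 0f; private set

    /** Called once per single movement: (lastx, lasty) is where the gesture was
     *  put down, (eventx, eventy) is where it was lifted. */
    fun onSingleMove(lastx: Float, lasty: Float, eventx: Float, eventy: Float) {
        dx = eventx - lastx + oldx
        dy = eventy - lasty + oldy
        oldx = dx   // history carried into the next single movement
        oldy = dy
    }
}
```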
In this embodiment, an operator clicks a repositioning control to control the robot to display a static map and an obstacle map in a display screen, the operator triggers the static map through the display screen to enable an obstacle profile in the obstacle map to be matched with an obstacle profile in the static map, the robot obtains an operation parameter value under the condition that the obstacle profile in the obstacle map is matched with the obstacle profile in the static map, accurate basic data is provided for realizing in-situ repositioning of a subsequent robot, and compared with the condition that the robot needs to be pushed back to a starting point, the repositioning method can realize in-situ repositioning, so that repositioning time is shortened, and repositioning efficiency is improved.
In one exemplary embodiment, the repositioning flow is shown in fig. 8. In the case that the robot loses pose information, an operator walks to the position where the robot is located and clicks the repositioning control in the display screen of the robot; the robot loads the static map in the display screen in response to the repositioning operation, then acquires a laser radar point cloud, and loads the obstacle map on the upper layer of the static map based on the laser radar point cloud.
An operator performs a triggering operation on the static map through the display screen, where the triggering operation includes at least one of translation, rotation and scaling; the robot, in response to the triggering operation on the static map, adjusts the display position of the static map in the display screen and acquires the operation parameter values when the obstacle outline in the obstacle map matches the obstacle outline in the static map. The robot acquires the map width nWidth and the map height nHeight through the control displaying the static map.
The horizontal displacement parameter value dx and the vertical displacement parameter value dy are obtained from the operation parameter values, and the initial position of the robot in the display screen is determined to be ((nWidth/2).toFloat() + dx, (nHeight/2).toFloat() + dy). The initial position matrix of the robot is determined based on the initial position, and may be written as the homogeneous column vector [nWidth/2 + dx, nHeight/2 + dy, 1]^T.
The robot determines a static map transformation matrix based on the operation parameter values, then determines an inverse matrix of the static map transformation matrix, and determines the inverse matrix as a conversion relationship between the display screen coordinate system and the static map coordinate system.
The robot multiplies the initial position matrix by the conversion relation to obtain the target position matrix, acquires the static position (x, y) of the robot in the static map from the target position matrix, and acquires the map-building starting position (originX, originY) and the resolution of the static map. Substituting the horizontal coordinate x of the robot in the static map, the resolution and the horizontal coordinate originX of the map-building starting position into formula (1) gives the repositioning horizontal position val_x; substituting the vertical coordinate y of the robot in the static map, the resolution and the vertical coordinate originY of the map-building starting position into formula (2) gives the repositioning vertical position val_y; (val_x, val_y) is then the repositioning position of the robot.
The robot acquires the matching feature data corresponding to the repositioning position from the map configuration file, determines the obstacle feature data corresponding to the obstacle point cloud based on the obstacle point cloud, calculates the similarity between the matching feature data and the obstacle feature data, and compares the similarity with the similarity threshold; if the similarity is equal to or greater than the similarity threshold, the repositioning position is determined to be correct, that is, the repositioning succeeds, and if the similarity is less than the similarity threshold, the repositioning position is determined to be incorrect, that is, the repositioning fails.
And under the condition that the repositioning fails, the robot rotates for 90 degrees, acquires updated laser radar point clouds through a laser radar sensor, generates updated obstacle maps based on the updated laser radar point clouds, displays the updated obstacle maps and the static maps in a display screen, and repeatedly executes the repositioning process.
In the repositioning method, at the position where the robot lost pose information, an obstacle map is formed from the laser radar point cloud acquired by the laser radar sensor of the robot, and the obstacle outline in the obstacle map reflects the environment information of the current position of the robot. The static map is adjusted through the triggering operation so that the obstacle outline in the obstacle map matches the obstacle outline in the static map, that is, the local obstacle outline in the static map that is similar or identical to the obstacle outline of the obstacle map is determined through the triggering operation. The operation parameter value corresponding to the triggering operation is acquired, the initial position of the robot in the display screen coordinate system is determined according to the displacement parameter value in the operation parameter values, the conversion relation between the display screen coordinate system and the static map coordinate system is determined according to the operation parameter values, and the repositioning position of the robot is determined according to the initial position and the conversion relation. Compared with pushing the robot back to the starting point, this repositioning method can achieve in-situ repositioning, thereby shortening the repositioning time and improving the repositioning efficiency.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a repositioning device for realizing the repositioning method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitations in one or more embodiments of the repositioning apparatus provided below may be referred to above for limitations of the repositioning method, and are not repeated here.
In one embodiment, as shown in FIG. 9, a repositioning device is provided, including an acquisition module 902, an initial position determination module 904, a conversion relationship determination module 906 and a repositioning module 908, wherein:
The acquisition module 902 is configured to acquire an operation parameter value when an obstacle contour in the obstacle map matches an obstacle contour in the static map, where the operation parameter value is a parameter value corresponding to a triggering operation for a display screen;
An initial position determining module 904, configured to determine an initial position of the robot in the display screen coordinate system based on the displacement parameter values in the operation parameter values;
A conversion relation determining module 906, configured to determine a conversion relation between the display screen coordinate system and the static map coordinate system based on the operation parameter value;
A repositioning module 908 for determining a repositioning position of the robot based on the initial position and the conversion relationship.
In one embodiment, the initial position determining module 904 is further configured to obtain a map height and a map width of the static map in the display screen, obtain a displacement parameter value from the operation parameter values, the displacement parameter value includes a horizontal displacement parameter value and a vertical displacement parameter value, determine a horizontal initial position of the robot in the display screen coordinate system based on the map width and the horizontal displacement parameter value, determine a vertical initial position of the robot in the display screen coordinate system based on the map height and the vertical displacement parameter value, and determine an initial position of the robot in the display screen based on the horizontal initial position and the vertical initial position.
In one embodiment, the transformation relationship determination module 906 is further configured to determine a static map transformation matrix based on the values of the operating parameters, determine an inverse of the static map transformation matrix, and determine the inverse as a transformation relationship between the display screen coordinate system to the static map coordinate system.
In one embodiment, the repositioning module 908 is further configured to determine an initial position matrix of the robot based on the initial position, multiply the initial position matrix by the transformation relationship to obtain a target position matrix, and determine a repositioning position of the robot based on the target position matrix.
In one embodiment, the repositioning module 908 is further configured to obtain the static position of the robot in the static map from the target position matrix, obtain the map-building starting position of the robot and the resolution of the static map, where the map-building starting position is the position, in the world map coordinate system, of the starting point at which the robot built the static map, and determine the repositioning position of the robot based on the static position, the map-building starting position and the resolution.
In one embodiment, the repositioning module 908 is further configured to determine an initial positioning location based on the static location, the map-building starting location, and the resolution, determine obstacle feature data corresponding to the obstacle point cloud based on the obstacle point cloud corresponding to the obstacle outline in the obstacle map, obtain matching feature data corresponding to the initial positioning location, determine a similarity between the matching feature data and the obstacle feature data, and determine the initial positioning location as a repositioning location if the similarity is greater than a similarity threshold.
In one embodiment, the obtaining module 902 is further configured to display a static map on the display screen in response to the repositioning operation, obtain a laser radar point cloud, display an obstacle map on an upper layer of the static map based on the laser radar point cloud, and obtain an operation parameter value corresponding to a triggering operation for the display screen in a case that an obstacle profile in the obstacle map matches an obstacle profile in the static map in response to the triggering operation for the static map.
The various modules in the relocation apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in the processor in the robot or independent of the processor in the robot in a hardware mode, and can also be stored in a memory in the robot in a software mode, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a robot, which may be a terminal, is provided, and an internal structure thereof may be as shown in fig. 10. The robot includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the robot is adapted to provide computing and control capabilities. The memory of the robot includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the robot is used for exchanging information between the processor and the external device. The communication interface of the robot is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a repositioning method. The display unit of the robot is used for forming a visual picture and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the robot can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on a robot shell, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the robots to which the present inventive arrangements are applied, and that a particular robot may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a robot is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high density embedded nonvolatile memory, resistive random access memory (ReRAM), magneto-resistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in various forms such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc. The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.