Disclosure of Invention
Aiming at the technical problems in the related art, the invention provides a dynamic updating method of a robot 3D point cloud map, which comprises the following steps:
S1, acquiring a first pose pose_map of the robot based on 3D point cloud map positioning;
S2, maintaining a sub-map sub_map over a predetermined time according to a second pose pose_odom of the robot output by the laser odometer;
S3, judging whether the angle difference between the first pose pose_map and the second pose pose_odom remains greater than a first threshold, or the displacement difference remains greater than a second threshold, throughout a continuous second time period; if yes, executing step S4;
S4, extracting a local map local_map corresponding to the current position from the existing 3D point cloud map;
S5, projecting the sub-map sub_map and the local map local_map from point cloud format into depth image format, denoted the sub-map depth map sub_map_img and the local map depth map local_map_img respectively;
And S6, when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than a third threshold, directly replacing the local map local_map with the sub-map sub_map to complete the dynamic update of the map.
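For illustration only, the decision flow of steps S1 to S6 can be sketched as follows in Python; every helper name here (locate_in_map, run_lidar_odometry, deviation_persistent, extract_local, project_to_depth, replace_region) is a hypothetical placeholder, not an element of the invention:

```python
# Hypothetical sketch of steps S1-S6; all helper functions are assumed,
# not specified by the invention.
def dynamic_map_update(robot, map_3d):
    pose_map = robot.locate_in_map(map_3d)              # S1: map-based localization
    pose_odom, sub_map = robot.run_lidar_odometry(5.0)  # S2: odometry pose + 5 s sub-map
    # S3: the deviation must persist throughout the whole 5 s window
    if robot.deviation_persistent(pose_map, pose_odom,
                                  angle_thr_deg=3.0, disp_thr_m=0.3, window_s=5.0):
        local_map = map_3d.extract_local(pose_map)      # S4: crop map at current position
        local_map_img = project_to_depth(local_map)     # S5: point cloud -> depth image
        sub_map_img = project_to_depth(sub_map)
        if abs(local_map_img.var() - sub_map_img.var()) > 0.5:  # S6: variance difference
            map_3d.replace_region(local_map, sub_map)   # sub_map replaces local_map
```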
Specifically, the predetermined time is 5s.
Specifically, the second time period is 5s.
Specifically, the first threshold is 3 degrees and the second threshold is 0.3m.
Specifically, the third threshold is 0.5.
In a second aspect, another embodiment of the present invention discloses a robot 3D point cloud map dynamic updating apparatus, which includes:
The first pose acquisition unit is used for acquiring a first pose pose_map of the robot based on 3D point cloud map positioning;
The second pose acquisition unit is used for maintaining a sub-map sub_map over a predetermined time according to the second pose pose_odom of the robot output by the laser odometer;
The pose comparison unit is used for judging whether the angle difference between the first pose pose_map and the second pose pose_odom remains greater than a first threshold, or the displacement difference remains greater than a second threshold, throughout a continuous second time period; if yes, triggering the local map acquisition unit;
The local map acquisition unit is used for extracting a local map local_map corresponding to the current position from the existing 3D point cloud map;
The depth map projection unit is used for projecting the sub-map sub_map and the local map local_map from point cloud format into depth image format, denoted the sub-map depth map sub_map_img and the local map depth map local_map_img respectively;
And the local map updating unit is used for directly replacing the local map local_map with the sub-map sub_map when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than a third threshold, so as to complete the dynamic update of the map.
Specifically, the predetermined time and the second time period are both 5s.
Specifically, the first threshold is 3 degrees and the second threshold is 0.3m.
Specifically, the third threshold is 0.5.
In a third aspect, another embodiment of the present invention provides a robot, where the robot includes a central processor and a storage unit, the storage unit storing instructions that, when executed by the processor, implement the above method for dynamically updating a robot 3D point cloud map.
The invention initially considers that the local map at the current position may need to be updated when the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold, or the displacement difference remains greater than the second threshold, throughout the second time period; it then confirms that the local map needs to be updated when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than the third threshold. By using an image algorithm to evaluate the change of the 3D point cloud map, namely by comparing the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img, the invention can efficiently and accurately detect the degree of change of the 3D point cloud map and thus accurately judge whether a map update is needed.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments derived by a person skilled in the art from the embodiments of the invention fall within the scope of protection of the invention.
Example 1
Referring to fig. 1, the embodiment discloses a dynamic update method for a robot 3D point cloud map, which includes the following steps:
S1, acquiring a first pose pose_map of the robot based on 3D point cloud map positioning;
Specifically, the robot starts positioning in the scene based on the 3D point cloud map and outputs the first pose pose_map of the robot.
Specifically, when the robot patrols in a preset scene, the positioning mode based on the 3D point cloud map is started to realize the positioning of the robot.
The robot is equipped with a laser radar and is positioned according to the laser radar and a pre-stored 3D point cloud map.
S2, maintaining a sub-map sub_map over a predetermined time according to a second pose pose_odom of the robot output by the laser odometer;
Specifically, the robot in this embodiment is equipped with a laser odometer (LiDAR odometry), which estimates the change in the robot's position and orientation over time by registering successive laser scans of the surrounding objects.
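As a purely illustrative sketch (the embodiment does not prescribe a particular odometry algorithm), scan-to-scan LiDAR odometry is commonly implemented by registering consecutive scans, for example with point-to-point ICP as provided by Open3D:

```python
# Illustrative LiDAR odometry step via ICP registration (Open3D); this is an
# assumed implementation, not the specific odometer of the embodiment.
import numpy as np
import open3d as o3d

def lidar_odometry_step(prev_scan, curr_scan, pose_prev, max_corr_dist=0.5):
    """Register curr_scan onto prev_scan and compose the motion into the pose."""
    result = o3d.pipelines.registration.registration_icp(
        curr_scan, prev_scan, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return pose_prev @ result.transformation  # new 4x4 pose pose_odom
```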
Specifically, the predetermined time is 5s.
S3, judging whether the angle difference between the first pose pose_map and the second pose pose_odom remains greater than a first threshold, or the displacement difference remains greater than a second threshold, throughout a continuous second time period; if yes, executing step S4;
Specifically, this embodiment checks whether the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold throughout a continuous second time period, for example, whether the angle difference remains greater than the first threshold for a full 5s.
Specifically, the first threshold is 3 degrees.
Alternatively, it checks whether the displacement difference between the first pose pose_map and the second pose pose_odom remains greater than the second threshold throughout a continuous second time period, for example, whether the displacement difference remains greater than the second threshold for a full 5s.
Specifically, the second threshold is 0.3m.
If the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold, or the displacement difference remains greater than the second threshold, throughout the second time period, it is initially considered that the local map at this position may need to be updated.
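A minimal sketch of this persistence check, assuming poses are sampled at fixed intervals over the 5s window and represented as (x, y, yaw in degrees) tuples (a hypothetical representation); the thresholds of 3 degrees and 0.3m come from this embodiment:

```python
import math

def deviation_persistent(samples, angle_thr_deg=3.0, disp_thr_m=0.3):
    """samples: list of (pose_map, pose_odom) pairs covering the 5 s window,
    each pose an (x, y, yaw_deg) tuple."""
    angle_always = all(
        abs((pm[2] - po[2] + 180.0) % 360.0 - 180.0) > angle_thr_deg
        for pm, po in samples)
    disp_always = all(
        math.hypot(pm[0] - po[0], pm[1] - po[1]) > disp_thr_m
        for pm, po in samples)
    # "Always larger" over the whole window, for either quantity
    return angle_always or disp_always
```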
S4, extracting a local map local_map corresponding to the current position from the existing 3D point cloud map;
Specifically, the robot stores a previously established 3D point cloud map, which may differ from the actual scene at the current time.
Based on the acquired current position of the robot, this embodiment extracts the local map of that position from the established 3D map.
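For example (the text does not specify how the local map is delimited), the local map could be cropped as all stored map points within a fixed radius of the current position; the 20m radius below is an assumed value:

```python
import numpy as np

def extract_local_map(map_points, position, radius=20.0):
    """map_points: (N, 3) array of the stored 3D point cloud map;
    position: (x, y) of the robot; the radius is an assumption."""
    dist = np.linalg.norm(map_points[:, :2] - np.asarray(position), axis=1)
    return map_points[dist <= radius]
```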
S5, projecting the sub-map sub_map and the local map local_map from point cloud format into depth image format, denoted the sub-map depth map sub_map_img and the local map depth map local_map_img respectively;
Specifically, the point cloud map (x, y, z) is converted into a depth image (u, v, pixel).
With a pixel resolution of 0.1m, this embodiment converts the point cloud map (x, y, z) into a depth image (u, v, pixel) according to the following formulas: u = round(x / 0.1), v = round(y / 0.1), pixel = round(z × 100 / 256), where round() denotes rounding up to an integer.
A depth image, also referred to as a range image, is an image whose pixel values are the distances from the image collector to points in the scene; it directly reflects the geometry of the visible surface of the scene.
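A minimal sketch of the projection above, assuming points are expressed in a local frame with non-negative x and y, and an assumed 400 × 400 pixel grid; collisions between points falling into the same cell are resolved here by keeping the larger value, which the text does not specify:

```python
import numpy as np

def project_to_depth(points, res=0.1, grid=(400, 400)):
    """Project an (N, 3) point cloud into a depth image using
    u = round(x / 0.1), v = round(y / 0.1), pixel = round(z * 100 / 256)."""
    img = np.zeros(grid, dtype=np.float32)
    # np.round is used as an approximation of the rounding in the text
    u = np.round(points[:, 0] / res).astype(int)
    v = np.round(points[:, 1] / res).astype(int)
    pix = np.round(points[:, 2] * 100.0 / 256.0)
    ok = (u >= 0) & (u < grid[0]) & (v >= 0) & (v < grid[1])
    np.maximum.at(img, (u[ok], v[ok]), pix[ok])  # keep max value per cell (assumed)
    return img
```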
S6, when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than a third threshold, directly replacing the local map local_map with the sub-map sub_map to complete the dynamic update of the map;
Specifically, this embodiment calculates the variance of the local map depth map local_map_img. First, the mean of the n pixel values is calculated: Pmean = 1/n × (P(u,v,1) + P(u,v,2) + … + P(u,v,n)); the variance of the local map depth map local_map_img is then calculated from the mean: V = 1/n × [(P(u,v,1) − Pmean)² + (P(u,v,2) − Pmean)² + … + (P(u,v,n) − Pmean)²]. The variance of the sub-map depth map sub_map_img is calculated in the same way.
Specifically, the third threshold is 0.5.
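The mean and variance above are the standard formulas over the n pixel values, so the S6 decision reduces to comparing two image variances; a direct sketch:

```python
import numpy as np

def needs_update(local_map_img, sub_map_img, third_threshold=0.5):
    """S6: compare the variance difference of the two depth maps."""
    # np.var computes V = 1/n * sum((P_i - Pmean)^2), matching the formula above
    return abs(np.var(local_map_img) - np.var(sub_map_img)) > third_threshold
```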
This embodiment initially considers that the local map at the current position may need to be updated when the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold, or the displacement difference remains greater than the second threshold, throughout the second time period; it then confirms that the local map needs to be updated when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than the third threshold. By using an image algorithm to evaluate the change of the 3D point cloud map, namely by comparing the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img, this embodiment can efficiently and accurately detect the degree of change of the 3D point cloud map and thus accurately judge whether a map update is needed.
Example 2
Referring to fig. 2, the embodiment discloses a robot 3D point cloud map dynamic updating device, which includes the following units:
The first pose acquisition unit is used for acquiring a first pose pose_map of the robot based on 3D point cloud map positioning;
Specifically, the robot starts positioning in the scene based on the 3D point cloud map and outputs the first pose pose_map of the robot.
Specifically, when the robot patrols in a preset scene, the positioning mode based on the 3D point cloud map is started to realize the positioning of the robot.
The robot is equipped with a laser radar and is positioned according to the laser radar and a pre-stored 3D point cloud map.
The second pose acquisition unit is used for maintaining a sub-map sub_map over a predetermined time according to the second pose pose_odom of the robot output by the laser odometer;
Specifically, the robot in this embodiment is equipped with a laser odometer (LiDAR odometry), which estimates the change in the robot's position and orientation over time by registering successive laser scans of the surrounding objects.
Specifically, the predetermined time is 5s.
The pose comparison unit is used for judging whether the angle difference between the first pose pose_map and the second pose pose_odom remains greater than a first threshold, or the displacement difference remains greater than a second threshold, throughout a continuous second time period; if yes, triggering the local map acquisition unit;
Specifically, this embodiment checks whether the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold throughout a continuous second time period, for example, whether the angle difference remains greater than the first threshold for a full 5s.
Specifically, the first threshold is 3 degrees.
Alternatively, it checks whether the displacement difference between the first pose pose_map and the second pose pose_odom remains greater than the second threshold throughout a continuous second time period, for example, whether the displacement difference remains greater than the second threshold for a full 5s.
Specifically, the second threshold is 0.3m.
If the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold, or the displacement difference remains greater than the second threshold, throughout the second time period, it is initially considered that the local map at this position may need to be updated.
The local map acquisition unit is used for extracting a local map local_map corresponding to the current position from the existing 3D point cloud map;
Specifically, the robot stores a previously established 3D point cloud map, which may differ from the actual scene at the current time.
Based on the acquired current position of the robot, this embodiment extracts the local map of that position from the established 3D map.
The depth map projection unit is used for projecting the sub-map sub_map and the local map local_map from point cloud format into depth image format, denoted the sub-map depth map sub_map_img and the local map depth map local_map_img respectively;
Specifically, the point cloud map (x, y, z) is converted into a depth image (u, v, pixel).
With a pixel resolution of 0.1m, this embodiment converts the point cloud map (x, y, z) into a depth image (u, v, pixel) according to the following formulas: u = round(x / 0.1), v = round(y / 0.1), pixel = round(z × 100 / 256), where round() denotes rounding up to an integer.
A depth image, also referred to as a range image, is an image whose pixel values are the distances from the image collector to points in the scene; it directly reflects the geometry of the visible surface of the scene.
And the local map updating unit is used for directly replacing the local map local_map with the sub-map sub_map when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than a third threshold, so as to complete the dynamic update of the map.
Specifically, this embodiment calculates the variance of the local map depth map local_map_img. First, the mean of the n pixel values is calculated: Pmean = 1/n × (P(u,v,1) + P(u,v,2) + … + P(u,v,n)); the variance of the local map depth map local_map_img is then calculated from the mean: V = 1/n × [(P(u,v,1) − Pmean)² + (P(u,v,2) − Pmean)² + … + (P(u,v,n) − Pmean)²]. The variance of the sub-map depth map sub_map_img is calculated in the same way.
Specifically, the third threshold is 0.5.
This embodiment initially considers that the local map at the current position may need to be updated when the angle difference between the first pose pose_map and the second pose pose_odom remains greater than the first threshold, or the displacement difference remains greater than the second threshold, throughout the second time period; it then confirms that the local map needs to be updated when the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img is greater than the third threshold. By using an image algorithm to evaluate the change of the 3D point cloud map, namely by comparing the variance difference between the local map depth map local_map_img and the sub-map depth map sub_map_img, this embodiment can efficiently and accurately detect the degree of change of the 3D point cloud map and thus accurately judge whether a map update is needed.
Example 3
Referring to fig. 3, which is a schematic structural view of the robot of this embodiment, the robot 20 comprises a processor 21, a memory 22, and a computer program stored in the memory 22 and executable on the processor 21. When executing the computer program, the processor 21 implements the steps of the above method embodiment, or implements the functions of the modules/units in the above apparatus embodiment.
Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments describe the execution of the computer program in the robot 20. For example, the computer program may be divided into the modules of the second embodiment; for the specific functions of each module, refer to the working process of the apparatus described in the foregoing embodiment, which is not repeated here.
The robot 20 may include, but is not limited to, the processor 21 and the memory 22. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the robot 20 and does not limit the robot 20, which may include more or fewer components than shown, combine certain components, or use different components; for example, the robot 20 may also include input and output devices, network access devices, buses, etc.
The processor 21 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 21 is the control center of the robot 20 and connects the various parts of the entire robot 20 using various interfaces and lines.
The memory 22 may be used to store the computer program and/or modules, and the processor 21 implements the various functions of the robot 20 by running or executing the computer program and/or modules stored in the memory 22 and invoking the data stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data (such as audio data, etc.) created according to the use of the robot 20. In addition, the memory 22 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
If the integrated modules/units of the robot 20 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by the processor 21, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that the above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided by the invention, the connection relationships between modules indicate that they have communication connections, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.