CN119247957A - A mobile robot capable of autonomously building a map in an unknown closed space and a working method thereof - Google Patents

A mobile robot capable of autonomously building a map in an unknown closed space and a working method thereof

Info

Publication number
CN119247957A
CN119247957A
Authority
CN
China
Prior art keywords
points
point
point cloud
laser radar
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411357955.2A
Other languages
Chinese (zh)
Inventor
马永远
魏东
叶轩
陈明辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing CUMTB
Priority to CN202411357955.2A
Publication of CN119247957A
Legal status: Pending

Abstract

Translated from Chinese:

The present invention discloses a mobile robot and a working method for autonomously building a map in an unknown closed space, comprising a robot body, a three-dimensional laser radar, a two-dimensional laser radar, an inertial measurement unit and a wheel odometer; by constructing a mathematical model of scene information density, the relationship between information density and navigation and map building performance of the robot body is obtained, the sparsity degree of a point cloud in an unknown environment space is quantified, and autonomous navigation and decision-making of the mobile robot in an unknown environment is realized; then, a target point is determined based on the point cloud information density, and a three-dimensional point cloud map of the unknown closed environment is constructed by using the three-dimensional laser radar, and path planning and navigation are performed on a two-dimensional grid map based on the two-dimensional laser radar, so that the mobile robot can autonomously build a map in the unknown environment, and can realize fully automatic path planning and obstacle avoidance in an unknown closed environment without relying on human intervention, thereby accurately building a map of the unknown environment with high efficiency.

Description

Mobile robot capable of autonomously building map in unknown closed space and working method
Technical Field
The invention relates to a mobile robot capable of autonomously building a map in an unknown closed space and a working method thereof, belonging to the technical field of point cloud data processing and environment map building.
Background
With the rapid development of artificial intelligence and robotics, autonomous navigation and decision-making systems are gradually exhibiting great application potential in a variety of fields, particularly industry, mining and transportation. For mapping and navigation in an unknown environment, it is important to make full use of the point cloud information of the 3D lidar. Point cloud information density refers to the amount of point cloud information contained in a unit of space, and reflects the complexity and richness of the environment. In an unknown, fully blind, confined-space scene, when a laser radar is used for mapping, the distribution of point cloud information density tends to be uneven: the density of some key areas may be higher, while that of ordinary areas is relatively lower. How to accurately evaluate and utilize these information densities is therefore critical to autonomous navigation and decision-making of the robot in this scenario.
Existing environment mapping techniques mostly rely on fully manual control or semi-autonomous operation. In fully manual mapping, an operator manually controls the mobile device, collects information about the unknown environment and completes the map; in semi-autonomous mapping, the device plans its own moving path while manual intervention improves the completeness of the map and ensures the moving safety of the device. However, these methods have application limitations: for example, in an underground closed environment it is difficult to establish stable and reliable communication between the mobile device and the control terminal, so human intervention is impractical, and environment mapping under fully manual or semi-autonomous conditions cannot be carried out successfully.
In summary, how to provide a fully automatic mapping scheme for a mobile agent in an unknown closed environment, so that it can accurately map the unknown environment without manual intervention, is the research direction addressed by the present invention.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides the mobile robot capable of autonomously building the map in the unknown closed space and the working method thereof, and the mobile robot can realize full-automatic path planning and obstacle avoidance in the unknown closed environment under the condition of completely not depending on manual intervention, thereby accurately building the map of the unknown environment.
In order to achieve the aim, the invention adopts the technical scheme that the mobile robot capable of autonomously building the map in the unknown closed space comprises a robot main body, a three-dimensional laser radar, a two-dimensional laser radar, an Inertial Measurement Unit (IMU), a wheel type odometer and a central processor;
The three-dimensional laser radar is arranged at the top of the robot main body through a bracket, can rotate by taking the bracket as a center and is used for acquiring 360-degree real-time laser scanning data of the environment where the robot main body is positioned;
the two-dimensional laser radar is arranged at the top of the robot main body through a turntable and is positioned in front of a bracket of the three-dimensional laser radar, and the turntable can drive the two-dimensional laser radar to rotate 360 degrees and is used for acquiring real-time scanning data in the moving process of the robot main body and feeding back to the central processor to construct a two-dimensional grid map for autonomous path planning and obstacle avoidance;
The inertial measurement unit is arranged at the center of gravity position inside the robot main body and is used for measuring acceleration information and angular velocity information of the robot main body;
the wheel type odometer is arranged on a driving wheel of the robot main body and is used for acquiring wheel rotation information and travelling distance information of the robot main body;
The central processor is arranged in the robot main body and is used for acquiring monitoring data of the three-dimensional laser radar, the two-dimensional laser radar, the inertial measurement unit and the wheel odometer, analyzing and processing the monitoring data and then controlling the moving path of the robot main body until the autonomous mapping work is completed.
Further, the robot main body is a wheeled robot, the central processor controls the steering angle and the steering speed of front and rear wheels of the robot main body to realize the forward, backward and turning actions of the robot, and the three-dimensional laser radar is a 256-line laser radar.
The autonomous mapping method of the mobile robot in the unknown closed space comprises the following specific steps:
Step one, determining the real-time pose of the robot main body: the central processor collects environmental information by means of the three-dimensional laser radar and the two-dimensional laser radar, performs fusion positioning with the data fed back by the inertial measurement unit and the wheel odometer, combines the inertial measurement unit and odometer data using an Extended Kalman Filter (EKF), and predicts and updates the pose of the robot main body based on a motion model;
Step two, autonomously constructing a map: extracting characteristic points, comprising corner points and plane points, from the three-dimensional point cloud data acquired by the three-dimensional laser radar; according to the characteristic point matching results among a plurality of frames, optimizing the global pose of the three-dimensional laser radar through the Levenberg-Marquardt optimization algorithm, inserting the three-dimensional laser radar data of each frame into a global point cloud map to construct a complete three-dimensional map, and updating the map at a lower frequency;
Step three, determining the target point of the movement of the robot main body: performing a radius filtering operation on the point cloud data acquired by the two-dimensional laser radar, constructing the minimum bounding box of the current point cloud data and dividing it into a plurality of voxel grids, calculating the point cloud information density of each voxel grid, and selecting a voxel grid whose density does not reach the density threshold, whose center projection point is taken as the local path planning target point in the two-dimensional grid map;
step four, robot movement and path optimization, namely ensuring that the planned movement path in the step three meets the requirements of robot kinematics and obstacle avoidance through kinematic constraint, time constraint, obstacle constraint and smoothness constraint, and simultaneously optimizing the geometric shape and time parameters of the path through a global cost function, and minimizing the cost function to obtain an optimal path;
And fifthly, continuously updating the global map by repeating the first to fourth steps until the autonomous map building work of the current environment is completed.
Further, the first step specifically comprises:
step 1.1, preprocessing data, acquiring data from an encoder of a driving wheel by a wheel type odometer, and converting the data into linear speed and angular speed;
Step 1.2, predicting the position, speed and posture of the robot at the next moment by using the monitoring data of the inertial measurement unit, and correcting the predicted state by using the measured value of the wheel type odometer;
Step 1.3, estimating the state vector and its covariance matrix based on Bayesian reasoning, combining the prior probability (prediction) and the observation probability (measurement); the data of the inertial measurement unit and the wheel odometer are received continuously, the prediction and update steps are carried out continuously, and the state estimate of the robot is corrected in real time, realizing the prediction and updating of the pose of the robot main body based on the motion model (an illustrative sketch follows).
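As an illustrative, non-limiting sketch of the fusion in steps 1.1-1.3, the following Python code predicts with IMU readings and corrects with the wheel-odometer speed; the state layout, noise covariances and all numeric values are assumptions for illustration, not the patented implementation.

```python
import numpy as np

class PoseEKF:
    """Minimal EKF fusing IMU (prediction) with wheel odometry (update).

    State: [x, y, theta, v] -- planar pose plus forward speed.
    All noise parameters are illustrative placeholders.
    """

    def __init__(self):
        self.x = np.zeros(4)                        # state estimate
        self.P = np.eye(4) * 0.1                    # state covariance
        self.Q = np.diag([0.01, 0.01, 0.005, 0.1])  # process noise
        self.R = np.array([[0.05]])                 # odometer speed noise

    def predict(self, accel, gyro, dt):
        """Propagate the state with IMU forward acceleration and yaw rate."""
        x, y, th, v = self.x
        self.x = np.array([x + v * np.cos(th) * dt,
                           y + v * np.sin(th) * dt,
                           th + gyro * dt,
                           v + accel * dt])
        F = np.array([[1, 0, -v * np.sin(th) * dt, np.cos(th) * dt],
                      [0, 1,  v * np.cos(th) * dt, np.sin(th) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])                # motion-model Jacobian
        self.P = F @ self.P @ F.T + self.Q

    def update(self, odo_v):
        """Correct the prediction with the wheel-odometer speed measurement."""
        H = np.array([[0.0, 0.0, 0.0, 1.0]])        # odometer observes v
        y = np.array([odo_v]) - H @ self.x          # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P
```

In use, one predict() is interleaved per IMU sample with an update() whenever an odometer measurement arrives, e.g. ekf.predict(accel=0.2, gyro=0.01, dt=0.02) followed by ekf.update(odo_v=0.5).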
Further, the second step specifically comprises:
Step 2.1, extracting the two kinds of characteristic points, namely corner points and plane points, from the three-dimensional point cloud data, wherein the curvature calculation formula is

$$c = \frac{1}{|S| \cdot \left\| X^{L}_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X^{L}_{(k,i)} - X^{L}_{(k,j)} \right) \right\|$$

wherein k is the scanning period, L is the current coordinate system of the three-dimensional laser radar, i represents the i-th point of the k-th scanning point cloud, $X^{L}_{(k,i)}$ represents the position of point i of the k-th scan in coordinate system L, S is the set of points designated near i by the rule (points in front of, behind, to the left of and to the right of i, taken in directions at 0.25 degree intervals), $\|\cdot\|$ is the vector norm measuring the length of a vector (the 2-norm by default), and the norm of the difference of two position vectors gives their Euclidean distance (see the sketch following step 2.10);
Step 2.2, screening characteristic points based on the curvature calculation formula in step 2.1, classifying the points according to thresholds on c over different ranges into points with particularly large curvature, points with moderately large curvature and points with small curvature;
Step 2.3, from the classified points, selecting the points with particularly large curvature as edge points and the points with small curvature as plane points;
Step 2.4, matching the continuous characteristic frames and establishing an error model, constructing an error function from the point-to-line distance and the point-to-plane distance, so that the problem becomes solving

$$\min_{T^{L}_{k+1}} \sum_{i} \left( d_{\mathcal{E},i} + d_{\mathcal{H},i} \right)$$

i.e. the sum of all point-to-line distances $d_{\mathcal{E}}$ and point-to-plane distances $d_{\mathcal{H}}$ from the transformed i-th point of the (k+1)-th frame point cloud to the reference point cloud is the problem model to be solved;
Step 2.5, adding a rotation R and a translation t, wherein the relation between the i-th point of the (k+1)-th frame original point cloud and the transformed point cloud is

$$\tilde{X}^{L}_{(k+1,i)} = R\, X^{L}_{(k+1,i)} + t;$$
Step 2.6, deriving from steps 2.4 and 2.5 the loss function

$$\mathcal{L}(R, t) = \sum_{i} d\!\left( \tilde{X}^{L}_{(k+1,i)} \right),$$

wherein $X^{L}_{(k+1,i)}$ is the position vector of the i-th point in the (k+1)-th scanning point cloud, $\tilde{X}^{L}_{(k+1,i)}$ is the rotated and translated position vector of $X^{L}_{(k+1,i)}$, and $d(\cdot)$ is the corresponding point-to-line or point-to-plane distance to the reference point cloud;
Step 2.7, modeling the three-dimensional laser radar motion with constant angular velocity and linear velocity during the scanning process, and linearly interpolating the pose transformation for points received at different times within the scanning range;
Step 2.8, setting t as the current time stamp and $t_{k+1}$ as the start time of scan k+1, let $T^{L}_{k+1}$ be the attitude transformation of the laser radar over $[t_{k+1}, t]$; $T^{L}_{k+1}$ comprises the rigid-body motion of the three-dimensional lidar in 6 degrees of freedom, $T^{L}_{k+1} = [t_x, t_y, t_z, \theta_x, \theta_y, \theta_z]^{T}$, wherein $t_x, t_y, t_z$ are the translations along the x, y and z axes of L and $\theta_x, \theta_y, \theta_z$ are the rotation angles about the three corresponding axes; given a point i, $i \in P_{k+1}$, with time stamp $t_i$, let $T^{L}_{(k+1,i)}$ be the pose transformation over $[t_{k+1}, t_i]$; assuming the radar moves at constant angular and linear velocity, $T^{L}_{(k+1,i)}$ can be calculated from $T^{L}_{k+1}$ by the linear interpolation formula

$$T^{L}_{(k+1,i)} = \frac{t_i - t_{k+1}}{t - t_{k+1}}\, T^{L}_{k+1};$$
Step 2.9, solving for $T^{L}_{k+1}$ by the nonlinear optimization method;
Step 2.10, accumulating a certain number of point cloud frames to start mapping, merging the point cloud data into the world map at a lower frequency, and accurately estimating the pose of the laser radar in the world coordinate system.
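As an illustrative sketch of the feature extraction in steps 2.1-2.3, the following Python function evaluates the curvature measure c on one ordered scan line and splits the points into edge and plane candidates; the window size and the two thresholds are assumed placeholder values, not values from the patent.

```python
import numpy as np

def extract_features(scan_line, window=5, edge_thresh=1.0, plane_thresh=0.1):
    """LOAM-style curvature and feature classification on one scan line.

    scan_line: (N, 3) array of points ordered by scan angle.
    Returns index arrays of edge (large-curvature) and plane
    (small-curvature) points.
    """
    n = len(scan_line)
    curvature = np.full(n, np.nan)
    for i in range(window, n - window):
        # Sum of difference vectors to the neighbours on the same scan line,
        # normalized by neighbourhood size |S| and the point's range, as in
        # the curvature formula of step 2.1.
        diffs = scan_line[i] - scan_line[i - window:i + window + 1]
        curvature[i] = (np.linalg.norm(diffs.sum(axis=0))
                        / (2 * window * np.linalg.norm(scan_line[i])))
    valid = ~np.isnan(curvature)
    edge_idx = np.where(valid & (curvature > edge_thresh))[0]    # corner points
    plane_idx = np.where(valid & (curvature < plane_thresh))[0]  # plane points
    return edge_idx, plane_idx
```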
Further, the third step specifically comprises:
Step 3.1, constructing a K-D tree on the point cloud data acquired by the current two-dimensional laser radar and establishing the point cloud topological relation; for any point in the point cloud, the number of adjacent points within its neighborhood range is obtained and compared with a judgment threshold; if it is smaller than the threshold, the point is regarded as a noise point and removed; this is repeated until all points in the point cloud are processed;
Step 3.2, traversing each point in the point cloud data by using the PCL by taking a map coordinate system as a reference, and comparing the values of X, Y and Z in sequence;
Step 3.3, obtaining from step 3.2 the extrema of the current local point cloud data in X, Y and Z, wherein the maximum values in the three directions are denoted $x_{max}, y_{max}, z_{max}$ and the minimum values $x_{min}, y_{min}, z_{min}$;
Step 3.4, according to the maximum and minimum values on each coordinate axis obtained in step 3.3, calculating the side lengths $l_x, l_y, l_z$ of the minimum bounding box by

$$l_x = x_{max} - x_{min}, \quad l_y = y_{max} - y_{min}, \quad l_z = z_{max} - z_{min};$$
Step 3.5, dividing the side lengths along the three coordinate axes into $m_1, m_2, m_3$ parts respectively, so that the minimum bounding box is divided into $n = m_1 m_2 m_3$ voxel grids; letting $a = l_x / m_1$, $b = l_y / m_2$, $c = l_z / m_3$, the volume of each voxel grid is $V = a \cdot b \cdot c$, wherein a, b and c are the side lengths of the voxel grid in the x, y and z directions;
Step 3.6, setting the coordinates of the p-th point in the point cloud data as $(x_p, y_p, z_p)$ and indexing each voxel grid, let

$$i = \left\lfloor \frac{x_p - x_{min}}{a} \right\rfloor, \quad j = \left\lfloor \frac{y_p - y_{min}}{b} \right\rfloor, \quad k = \left\lfloor \frac{z_p - z_{min}}{c} \right\rfloor, \quad q = 4i + j + k,$$

wherein $\lfloor \cdot \rfloor$ represents the downward-rounding (floor) symbol; the point $(x_p, y_p, z_p)$ is then located in the q-th voxel grid;
Step 3.7, traversing each point in the point cloud data and assigning it to the corresponding voxel grid according to step 3.6; setting the total number of points in the q-th voxel grid as $W_q$, the point cloud density of the q-th voxel grid is

$$\rho_q = \frac{W_q}{V},$$

wherein $V = a \cdot b \cdot c$;
Step 3.8, comparing the point cloud density of each voxel grid with the set point cloud density threshold; if the density of the q-th region having the minimum point cloud density does not reach the threshold requirement, the center of the q-th voxel grid is taken as the target point of the local path planning (see the sketch below).
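The voxel-grid density computation of steps 3.2-3.8 can be sketched in Python as follows; the grid divisions reuse the m1=4, m2=4, m3=1 values of the embodiment, while the density threshold is an assumed placeholder.

```python
import numpy as np

def density_target(points, m=(4, 4, 1), density_thresh=50.0):
    """Select the sparsest under-threshold voxel centre as the next target.

    points: (N, 3) filtered local point cloud.
    Returns the centre of the sparsest voxel whose density is below the
    threshold, or None when every voxel meets the threshold.
    """
    pmin, pmax = points.min(axis=0), points.max(axis=0)  # minimum bounding box
    side = (pmax - pmin) / np.asarray(m)                 # voxel edges a, b, c
    vol = side.prod()                                    # V = a * b * c
    ijk = np.floor((points - pmin) / side).astype(int)
    ijk = np.minimum(ijk, np.asarray(m) - 1)             # clamp boundary points
    counts = np.zeros(m)
    for i, j, k in ijk:
        counts[i, j, k] += 1                             # W_q per voxel
    density = counts / vol                               # rho_q = W_q / V
    q = np.unravel_index(np.argmin(density), counts.shape)
    if density[q] < density_thresh:
        return pmin + (np.asarray(q) + 0.5) * side       # voxel centre
    return None                                          # area fully mapped
```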
Further, the step four specifically includes:
Step 4.1, detecting the position of an obstacle through a two-dimensional laser radar, mapping the position of the obstacle into a map, and endowing a grid where the obstacle is positioned with high cost value;
Step 4.2, marking a certain range around the obstacle as an expansion area, and updating the position information of the obstacle in real time by utilizing a local cost map;
step 4.3, taking an initial track generated by the global path planner as input, dispersing an initial motion route into a series of path points, and distributing a time segment for each path point;
Step 4.4, each path point is represented by $X_i = (x_i, y_i, t_i)^{T}$, wherein $x_i$ and $y_i$ are the spatial position of the path point and $t_i$ is its time information; the pose coordinate sequence of the points where the robot main body is located in the motion space is represented by $Q = \{X_i\},\ i = 0,\dots,n$;
Step 4.5, the time interval $\Delta t_i = t_{i+1} - t_i$ between two adjacent path points represents the moving time of the robot between the two path points; the time sequence is represented by $\tau = \{\Delta t_i\},\ i = 0,\dots,n$, wherein n represents the number of control points;
Step 4.6, merging the time sequence and the position sequence of the control points to obtain $B = (Q, \tau)$, and performing weighted multi-objective optimization over both the time sequence and the control point sequence;
Step 4.7, minimizing the angular variation between the path points and constraining their smoothness through the cost function

$$f_1(B) = \sum_{i=1}^{n-1} \left\| X_{i+1} - 2X_i + X_{i-1} \right\|^2,$$

wherein n represents the number of control points;
Step 4.8, minimizing the total time of the path while maintaining an even distribution of time through the cost function

$$f_2(B) = \sum_{i=0}^{n-1} \left( \Delta t_i - \overline{\Delta t} \right)^2,$$

wherein $\overline{\Delta t}$ is the average time interval between path points and n represents the number of control points;
Step 4.9, keeping the path points away from obstacles through the obstacle-distance cost function

$$f_3(B) = \sum_{i=0}^{n} \max\!\left(0,\; d_0 - d(X_i, o)\right)^2,$$

wherein $d(X_i, o)$ is the distance from path point $X_i$ to the obstacle o and $d_0$ is the set safety distance;
Step 4.10, keeping the speed and acceleration of the robot on the path within its kinematic constraints through the cost function

$$f_4(B) = \sum_{i} \left[ \max\!\left(0,\; \|v_i\| - v_{max}\right)^2 + \max\!\left(0,\; \|a_i\| - a_{max}\right)^2 \right],$$

wherein $v_i$ and $a_i$ are the velocity and acceleration estimated from adjacent path points and time intervals, and $v_{max}$ and $a_{max}$ are the kinematic limits;
Step 4.11, obtaining the optimal solution of B through

$$B^{*} = \arg\min_{B} f(B), \qquad f(B) = \sum_{k=1}^{4} \gamma_k f_k(B),$$

and finding the optimal path point positions and time distribution, wherein $B^{*}$ represents the optimal result, f(B) is the total cost comprehensively considering the various targets, and $\gamma_k$ represents the weight of the cost function of each optimization target, k = 1, 2, 3, 4 (see the sketch following step 4.12);
Step 4.12, moving the robot main body according to the determined path, speed and acceleration data until it reaches the target point calculated in step three.
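A minimal Python sketch of the weighted multi-objective cost of steps 4.7-4.11 is given below, assuming the quadratic penalty forms written above; the weights, the safety distance d_min and the speed limit v_max are illustrative assumptions.

```python
import numpy as np

def trajectory_cost(Q, dt, obstacles, gamma=(1.0, 0.5, 2.0, 1.0),
                    d_min=0.5, v_max=1.0):
    """Weighted sum f(B) = sum_k gamma_k * f_k(B) for a candidate path.

    Q: (n, 2) waypoint positions; dt: (n-1,) time segments;
    obstacles: (m, 2) obstacle positions. All parameters are placeholders.
    """
    # f1: smoothness -- penalize bending via second differences of waypoints
    f1 = np.sum(np.linalg.norm(Q[2:] - 2 * Q[1:-1] + Q[:-2], axis=1) ** 2)
    # f2: even time distribution around the mean interval
    f2 = np.sum((dt - dt.mean()) ** 2)
    # f3: obstacle clearance -- penalize waypoints closer than d_min
    d = np.linalg.norm(Q[:, None, :] - obstacles[None, :, :], axis=2).min(axis=1)
    f3 = np.sum(np.maximum(0.0, d_min - d) ** 2)
    # f4: kinematics -- penalize segment speeds above v_max
    v = np.linalg.norm(np.diff(Q, axis=0), axis=1) / dt
    f4 = np.sum(np.maximum(0.0, v - v_max) ** 2)
    return gamma[0] * f1 + gamma[1] * f2 + gamma[2] * f3 + gamma[3] * f4
```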
Compared with the prior art, the invention has the following advantages:
1. according to the invention, the relation between the information density and the navigation and map construction performance of the robot main body is obtained by constructing the mathematical model of the scene information density, so that the quantification of the sparseness degree of the space point cloud of the unknown environment is realized, and the autonomous navigation and decision-making of the mobile robot in the unknown environment are realized.
2. According to the method, the target point is determined based on the point cloud information density, the three-dimensional point cloud map of the unknown closed environment is constructed by utilizing the three-dimensional laser radar, the path planning and the navigation are carried out on the two-dimensional grid map based on the two-dimensional laser radar, the mobile robot can autonomously construct the map in the unknown environment, the full-automatic path planning and obstacle avoidance in the unknown closed environment can be realized under the condition of completely not depending on manual intervention, and therefore the accurate map construction is carried out on the unknown environment, and the efficiency is high.
Drawings
FIG. 1 is a frame diagram of the autonomous mapping process of the present invention;
FIG. 2 is a schematic diagram showing the relative positional relationship of the sensors according to the present invention;
FIG. 3 is a schematic diagram of a three-dimensional point cloud map generation process according to the present invention;
FIG. 4 is a schematic flow chart of determining a target point according to the point cloud information density in the present invention;
fig. 5 is a schematic diagram of a path planning procedure in the present invention.
In the figure, 1, a three-dimensional laser radar, 2, a bracket, 3, a two-dimensional laser radar, 4, a robot main body and 5, an inertial measurement unit.
Detailed Description
The present invention will be further described below.
As shown in fig. 2, a mobile robot autonomously constructing a map in an unknown closed space includes a robot main body 4, a three-dimensional laser radar 1, a two-dimensional laser radar 3, an inertial measurement unit 5, a wheel type odometer and a central processor;
the three-dimensional laser radar 1 is arranged at the top of the robot main body 4 through a bracket 2, can rotate by taking the bracket 2 as a center and is used for acquiring 360-degree real-time laser scanning data of the environment where the robot main body 4 is positioned;
The two-dimensional laser radar 3 is arranged at the top of the robot main body 4 through a turntable and is positioned in front of a bracket of the three-dimensional laser radar 1, and the turntable can drive the two-dimensional laser radar to rotate 360 degrees and is used for acquiring real-time scanning data in the moving process of the robot main body 4 and feeding back to a central processor to construct a two-dimensional grid map for autonomous path planning and obstacle avoidance;
The inertial measurement unit 5 is arranged at the center of gravity position inside the robot main body 4 and is used for measuring acceleration information and angular velocity information of the robot main body 4;
the wheel type odometer is arranged on a driving wheel of the robot main body 4 and is used for acquiring wheel rotation information and travelling distance information of the robot main body 4;
The central processor is arranged in the robot main body 4 and is used for acquiring monitoring data of the three-dimensional laser radar 1, the two-dimensional laser radar 3, the inertial measurement unit 5 and the wheel odometer, and for controlling the moving path of the robot main body 4 after analysis and processing, until the autonomous mapping work is completed;
The robot main body 4 is a wheeled robot, the central processor controls the steering angle and the steering speed of front and rear wheels of the robot main body 4 to realize the forward, backward and turning actions of the robot, and the three-dimensional laser radar 1 is a 256-line laser radar.
As shown in fig. 1, the autonomous mapping method of the mobile robot in the unknown closed space completes autonomous mapping and positioning of the unknown closed space through a mapping algorithm based on radar odometry and nonlinear optimization, an exploration strategy and a path planning algorithm, quantifies the sparseness of a space point cloud by constructing a point cloud information density model, realizes voxel grid division of a local point cloud map, provides a path planning target point for the mobile robot, and realizes autonomous exploration of the unknown environment, and the specific steps are as follows:
Step one, determining the real-time pose of the robot main body: the central processor acquires environmental information through the three-dimensional laser radar 1 and the two-dimensional laser radar 3, carries out fusion positioning with the data fed back by the inertial measurement unit 5 and the wheel odometer, combines the inertial measurement unit 5 and the odometer data through an Extended Kalman Filter (EKF), and predicts and updates the pose of the robot main body 4 based on a motion model; the specific steps are as follows:
step 1.1, preprocessing data, acquiring data from an encoder of a driving wheel by a wheel type odometer, and converting the data into linear speed and angular speed;
Step 1.2, predicting the position, speed and posture of the robot at the next moment by using the monitoring data of the inertial measurement unit 5, and correcting the predicted state by using the measured value of the wheel type odometer, wherein the respective data of the wheel type odometer and the inertial measurement unit 5 are fused in a state space model, the wheel type odometer is regarded as an observed value, and the state estimation of a Kalman filter is updated;
Step 1.3, estimating the state vector and its covariance matrix based on Bayesian reasoning, combining the prior probability (prediction) and the observation probability (measurement); the inertial measurement unit 5 and wheel odometer data are received continuously, the prediction and update steps are carried out continuously, and the state estimate of the robot is corrected in real time, realizing the prediction and updating of the pose of the robot main body 4 based on the motion model.
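Written compactly, the prediction-update cycle of steps 1.2-1.3 takes the standard EKF form, wherein f is the motion model driven by the IMU input $u_t$, h is the odometer observation model, $F_t$ and $H_t$ are their Jacobians, and $Q_t$, $R_t$ are the process and measurement noise covariances:

$$\begin{aligned} \text{Predict:}\quad & \hat{x}_{t|t-1} = f(\hat{x}_{t-1|t-1}, u_t), \qquad P_{t|t-1} = F_t P_{t-1|t-1} F_t^{\top} + Q_t \\ \text{Update:}\quad & K_t = P_{t|t-1} H_t^{\top} \left( H_t P_{t|t-1} H_t^{\top} + R_t \right)^{-1}, \\ & \hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t \left( z_t - h(\hat{x}_{t|t-1}) \right), \qquad P_{t|t} = (I - K_t H_t) P_{t|t-1} \end{aligned}$$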
Step two, autonomously constructing a map: characteristic points, comprising corner points and plane points, are extracted from the three-dimensional point cloud data acquired by the three-dimensional laser radar 1; the three-dimensional laser radar 1 matches the scanned characteristic points between frames, using the corner points to match edge features and the plane points to match planar features, and estimates the relative pose change between the current frame and the previous frame; according to the characteristic point matching results among a plurality of frames, the global pose of the three-dimensional laser radar 1 is optimized through the Levenberg-Marquardt optimization algorithm, the data of each frame of the three-dimensional laser radar 1 are inserted into a global point cloud map to construct a complete three-dimensional map, and the map is updated at a lower frequency, the local point cloud data of each frame being converted into point cloud information in the global coordinate system and added to the global point cloud map, the process being shown in fig. 3; the specific steps are as follows:
Step 2.1, extracting the two kinds of characteristic points, namely corner points and plane points, from the three-dimensional point cloud data, wherein the curvature calculation formula is

$$c = \frac{1}{|S| \cdot \left\| X^{L}_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X^{L}_{(k,i)} - X^{L}_{(k,j)} \right) \right\|$$

wherein k is the scanning period, L is the current coordinate system of the three-dimensional laser radar, i represents the i-th point of the k-th scanning point cloud, $X^{L}_{(k,i)}$ represents the position of point i of the k-th scan in coordinate system L, S is the set of points designated near i by the rule (points in front of, behind, to the left of and to the right of i, taken in directions at 0.25 degree intervals), $\|\cdot\|$ is the vector norm measuring the length of a vector (the 2-norm by default), and the norm of the difference of two position vectors gives their Euclidean distance;
Step 2.2, screening characteristic points based on the curvature calculation formula in step 2.1, classifying the points according to thresholds on c over different ranges into points with particularly large curvature, points with moderately large curvature and points with small curvature;
Step 2.3, from the classified points, selecting the points with particularly large curvature as edge points and the points with small curvature as plane points;
Step 2.4, matching the continuous characteristic frames and establishing an error model, constructing an error function from the point-to-line distance and the point-to-plane distance, so that the problem becomes solving

$$\min_{T^{L}_{k+1}} \sum_{i} \left( d_{\mathcal{E},i} + d_{\mathcal{H},i} \right)$$

i.e. the sum of all point-to-line distances $d_{\mathcal{E}}$ and point-to-plane distances $d_{\mathcal{H}}$ from the transformed i-th point of the (k+1)-th frame point cloud to the reference point cloud is the problem model to be solved;
Step 2.5, adding a rotation R and a translation t, wherein the relation between the i-th point of the (k+1)-th frame original point cloud and the transformed point cloud is

$$\tilde{X}^{L}_{(k+1,i)} = R\, X^{L}_{(k+1,i)} + t;$$
Step 2.6, deriving from steps 2.4 and 2.5 the loss function

$$\mathcal{L}(R, t) = \sum_{i} d\!\left( \tilde{X}^{L}_{(k+1,i)} \right),$$

wherein $X^{L}_{(k+1,i)}$ is the position vector of the i-th point in the (k+1)-th scanning point cloud, $\tilde{X}^{L}_{(k+1,i)}$ is the rotated and translated position vector of $X^{L}_{(k+1,i)}$, and $d(\cdot)$ is the corresponding point-to-line or point-to-plane distance to the reference point cloud;
Step 2.7, modeling the motion of the three-dimensional laser radar 1 with constant angular velocity and linear velocity during the scanning process, and linearly interpolating the pose transformation for points received at different times within the scanning range;
Step 2.8, setting t as the current time stamp and $t_{k+1}$ as the start time of scan k+1, let $T^{L}_{k+1}$ be the attitude transformation of the laser radar over $[t_{k+1}, t]$; $T^{L}_{k+1}$ comprises the rigid-body motion of the three-dimensional lidar in 6 degrees of freedom, $T^{L}_{k+1} = [t_x, t_y, t_z, \theta_x, \theta_y, \theta_z]^{T}$, wherein $t_x, t_y, t_z$ are the translations along the x, y and z axes of L and $\theta_x, \theta_y, \theta_z$ are the rotation angles about the three corresponding axes; given a point i, $i \in P_{k+1}$, with time stamp $t_i$, let $T^{L}_{(k+1,i)}$ be the pose transformation over $[t_{k+1}, t_i]$; assuming the radar moves at constant angular and linear velocity, $T^{L}_{(k+1,i)}$ can be calculated from $T^{L}_{k+1}$ by the linear interpolation formula

$$T^{L}_{(k+1,i)} = \frac{t_i - t_{k+1}}{t - t_{k+1}}\, T^{L}_{k+1};$$
Step 2.9, solving for $T^{L}_{k+1}$ by the nonlinear optimization method (see the iteration sketched after step 2.10);
Step 2.10, accumulating a certain number of point cloud frames to start mapping, merging the point cloud data into the world map at a lower frequency, and accurately estimating the pose of the laser radar in the world coordinate system.
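Step 2.9 names only a nonlinear optimization method; consistent with the Levenberg-Marquardt algorithm cited in step two, one iteration takes the sketched form below, wherein d stacks the point-to-line and point-to-plane distances, J is the Jacobian of d with respect to $T^{L}_{k+1}$, and $\lambda$ is the damping factor:

$$T^{L}_{k+1} \leftarrow T^{L}_{k+1} - \left( J^{\top} J + \lambda \operatorname{diag}\!\left( J^{\top} J \right) \right)^{-1} J^{\top} d$$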
Step three, determining a target point of the movement of the robot main body, namely performing radius filtering operation on point cloud data acquired by the two-dimensional laser radar 3, constructing a minimum bounding box of the current point cloud data and dividing the minimum bounding box into a plurality of voxel grids, calculating the point cloud information density of each voxel grid, and selecting a voxel grid which does not reach a density threshold value, wherein a central projection point of the voxel grid is used as a local path planning target point in a two-dimensional grid map, as shown in fig. 4, specifically comprising the following steps:
Step 3.1, constructing a K-D tree on the point cloud data acquired by the current two-dimensional laser radar 3 and establishing the point cloud topological relation; for any point in the point cloud, the number of adjacent points within its neighborhood range is obtained and compared with a judgment threshold; if it is smaller than the threshold, the point is regarded as a noise point and removed; this is repeated until all points in the point cloud are processed (see the sketch following step 3.8);
Step 3.2, traversing each point in the point cloud data by using the PCL by taking a map coordinate system as a reference, and comparing the values of X, Y and Z in sequence;
Step 3.3, obtaining from step 3.2 the extrema of the current local point cloud data in X, Y and Z, wherein the maximum values in the three directions are denoted $x_{max}, y_{max}, z_{max}$ and the minimum values $x_{min}, y_{min}, z_{min}$;
Step 3.4, according to the maximum and minimum values on each coordinate axis obtained in step 3.3, calculating the side lengths $l_x, l_y, l_z$ of the minimum bounding box by

$$l_x = x_{max} - x_{min}, \quad l_y = y_{max} - y_{min}, \quad l_z = z_{max} - z_{min};$$
Step 3.5, dividing the side lengths along the three coordinate axes into $m_1, m_2, m_3$ parts respectively (in this embodiment $m_1 = 4$, $m_2 = 4$, $m_3 = 1$), so that the minimum bounding box is divided into $n = m_1 m_2 m_3$ voxel grids; letting $a = l_x / m_1$, $b = l_y / m_2$, $c = l_z / m_3$, the volume of each voxel grid is $V = a \cdot b \cdot c$, wherein a, b and c are the side lengths of the voxel grid in the x, y and z directions;
Step 3.6, setting the coordinates of the p-th point in the point cloud data as $(x_p, y_p, z_p)$ and indexing each voxel grid, let

$$i = \left\lfloor \frac{x_p - x_{min}}{a} \right\rfloor, \quad j = \left\lfloor \frac{y_p - y_{min}}{b} \right\rfloor, \quad k = \left\lfloor \frac{z_p - z_{min}}{c} \right\rfloor, \quad q = 4i + j + k,$$

wherein $\lfloor \cdot \rfloor$ represents the downward-rounding (floor) symbol; the point $(x_p, y_p, z_p)$ is then located in the q-th voxel grid;
Step 3.7, traversing each point in the point cloud data and assigning it to the corresponding voxel grid according to step 3.6; setting the total number of points in the q-th voxel grid as $W_q$, the point cloud density of the q-th voxel grid is

$$\rho_q = \frac{W_q}{V},$$

wherein $V = a \cdot b \cdot c$;
Step 3.8, comparing the point cloud density of each voxel grid with the set point cloud density threshold; if the density of the q-th region having the minimum point cloud density does not reach the threshold requirement, the center of the q-th voxel grid is taken as the target point of the local path planning.
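The radius filtering of step 3.1 can be sketched in Python with a K-D tree as follows; the search radius and the neighbour-count judgment threshold are assumed placeholder values.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_filter(points, radius=0.3, min_neighbors=5):
    """Radius outlier removal: drop points with too few neighbours.

    points: (N, 2) or (N, 3) scan points. Builds the point cloud topology
    with a K-D tree, counts neighbours within `radius` of every point, and
    removes points whose count falls below the judgment threshold.
    """
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, r=radius)
    counts = np.array([len(idx) - 1 for idx in neighbors])  # exclude the point itself
    return points[counts >= min_neighbors]
```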
Step four, robot movement and path optimization, which is to ensure that the planned movement path in step three meets the requirements of robot kinematics and obstacle avoidance through kinematic constraint, time constraint, obstacle constraint and smoothness constraint, then optimize the geometric shape and time parameters of the path simultaneously through a global cost function, minimize the cost function to obtain an optimal path, and perform environment sensing and real-time obstacle avoidance by utilizing the two-dimensional laser radar 3 until the robot main body 4 moves to the target point calculated in step three, as shown in fig. 5, specifically:
step 4.1, detecting the position of an obstacle through a two-dimensional laser radar 3, mapping the position of the obstacle into a map, and endowing a grid where the obstacle is positioned with high cost value;
Step 4.2, marking a certain range around the obstacle as an expansion area, and updating the position information of the obstacle in real time by utilizing a local cost map;
step 4.3, taking an initial track generated by the global path planner as input, dispersing an initial motion route into a series of path points, and distributing a time segment for each path point;
Step 4.4, each path point is represented by $X_i = (x_i, y_i, t_i)^{T}$, wherein $x_i$ and $y_i$ are the spatial position of the path point and $t_i$ is its time information; the pose coordinate sequence of the points where the robot main body is located in the motion space is represented by $Q = \{X_i\},\ i = 0,\dots,n$;
Step 4.5, the time interval $\Delta t_i = t_{i+1} - t_i$ between two adjacent path points represents the moving time of the robot between the two path points; the time sequence is represented by $\tau = \{\Delta t_i\},\ i = 0,\dots,n$, wherein n represents the number of control points;
Step 4.6, merging the time sequence and the position sequence of the control points to obtain $B = (Q, \tau)$, and performing weighted multi-objective optimization over both the time sequence and the control point sequence;
Step 4.7, minimizing the angular variation between the path points and constraining their smoothness through the cost function

$$f_1(B) = \sum_{i=1}^{n-1} \left\| X_{i+1} - 2X_i + X_{i-1} \right\|^2,$$

wherein n represents the number of control points;
Step 4.8, minimizing the total time of the path while maintaining an even distribution of time through the cost function

$$f_2(B) = \sum_{i=0}^{n-1} \left( \Delta t_i - \overline{\Delta t} \right)^2,$$

wherein $\overline{\Delta t}$ is the average time interval between path points and n represents the number of control points;
Step 4.9, keeping the path points away from obstacles through the obstacle-distance cost function

$$f_3(B) = \sum_{i=0}^{n} \max\!\left(0,\; d_0 - d(X_i, o)\right)^2,$$

wherein $d(X_i, o)$ is the distance from path point $X_i$ to the obstacle o and $d_0$ is the set safety distance;
Step 4.10, keeping the speed and acceleration of the robot on the path within its kinematic constraints through the cost function

$$f_4(B) = \sum_{i} \left[ \max\!\left(0,\; \|v_i\| - v_{max}\right)^2 + \max\!\left(0,\; \|a_i\| - a_{max}\right)^2 \right],$$

wherein $v_i$ and $a_i$ are the velocity and acceleration estimated from adjacent path points and time intervals, and $v_{max}$ and $a_{max}$ are the kinematic limits;
Step 4.11, obtaining the optimal solution of B through

$$B^{*} = \arg\min_{B} f(B), \qquad f(B) = \sum_{k=1}^{4} \gamma_k f_k(B),$$

and finding the optimal path point positions and time distribution, wherein $B^{*}$ represents the optimal result, f(B) is the total cost comprehensively considering the various targets, and $\gamma_k$ represents the weight of the cost function of each optimization target, k = 1, 2, 3, 4 (see the sketch following step 4.12);
Step 4.12, moving the robot main body 4 according to the determined path, speed and acceleration data while avoiding obstacles in real time, until the robot main body 4 moves to the target point calculated in step three.
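As one plausible realization of the minimization $B^{*} = \arg\min_B f(B)$ in step 4.11, the Python sketch below hands the illustrative trajectory_cost defined after step four of the disclosure to a general-purpose optimizer; the solver choice and every parameter are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_path(Q0, dt0, obstacles):
    """Jointly optimize waypoint positions and time segments.

    Q0: (n, 2) initial waypoints from the global planner; dt0: (n-1,)
    initial time segments. Assumes trajectory_cost (sketched earlier)
    is in scope.
    """
    n = len(Q0)

    def unpack(b):
        # First 2n entries are waypoint coordinates, the rest time segments
        return b[:2 * n].reshape(n, 2), np.maximum(b[2 * n:], 1e-3)

    def f(b):
        Q, dt = unpack(b)
        return trajectory_cost(Q, dt, obstacles)

    b0 = np.concatenate([Q0.ravel(), dt0])
    res = minimize(f, b0, method="Nelder-Mead")  # B* = argmin_B f(B)
    return unpack(res.x)
```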
And fifthly, continuously updating the global map by repeating the first to fourth steps until the point cloud information density values of all the divided areas reach the threshold requirement, and completing autonomous mapping work of the current environment.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (7)

Application CN202411357955.2A, filed 2024-09-27: A mobile robot capable of autonomously building a map in an unknown closed space and a working method thereof (Pending; published as CN119247957A).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411357955.2A | 2024-09-27 | 2024-09-27 | A mobile robot capable of autonomously building a map in an unknown closed space and a working method thereof


Publications (1)

Publication Number | Publication Date
CN119247957A (en) | 2025-01-03

Family

ID=94017936

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411357955.2A | 2024-09-27 | 2024-09-27 | A mobile robot capable of autonomously building a map in an unknown closed space and a working method thereof (Pending; published as CN119247957A)

Country Status (1)

Country | Link
CN | CN119247957A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN120370956A (en)* | 2025-06-23 | 2025-07-25 | 山东理工大学 | Three-dimensional dynamic path planning method based on high-density radar point cloud

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103926925A (en)* | 2014-04-22 | 2014-07-16 | 江苏久祥汽车电器集团有限公司 | Improved VFH algorithm-based positioning and obstacle avoidance method and robot
CN109313810A (en)* | 2016-07-06 | 2019-02-05 | 高通股份有限公司 | System and method for mapping an environment
CN112965495A (en)* | 2021-02-10 | 2021-06-15 | 苏州清乐智能科技有限公司 | Disinfection robot and autonomous navigation method thereof
CN114581519A (en)* | 2022-02-24 | 2022-06-03 | 江苏云幕智造科技有限公司 | Laser autonomous positioning and mapping method for a quadruped robot in a cross-country environment
CN115188026A (en)* | 2022-07-27 | 2022-10-14 | 深圳市普渡科技有限公司 | Pedestrian detection method, device, computer equipment and storage medium
CN117095382A (en)* | 2023-09-05 | 2023-11-21 | 北京星网船电科技有限公司 | Obstacle detection method, device, equipment and medium based on camera and radar
CN117109624A (en)* | 2023-10-13 | 2023-11-24 | 淮安中科晶上智能网联研究院有限公司 | Hybrid path planning method for a low-speed unmanned vehicle based on A* and parallel TEB
CN118640910A (en)* | 2024-06-27 | 2024-09-13 | 南昌航空大学 | A ROS unmanned vehicle mapping and navigation method in unstructured environments




Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
