CN110645974A - Mobile robot indoor map construction method fusing multiple sensors - Google Patents

Info

Publication number: CN110645974A (granted as CN110645974B)
Application number: CN201910915091.4A
Authority: CN (China)
Prior art keywords: mobile robot, pose, vertex, information, constructing
Other languages: Chinese (zh)
Inventors: 刘冉, 秦正泓, 张华, 何永平, 肖宇峰, 付文鹏, 张静, 刘满禄
Applicant and current assignee: Southwest University of Science and Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Priority and filing date: 2019-09-26
Publication date of CN110645974A: 2020-01-03
Grant publication date of CN110645974B: 2020-11-27
Family ID: 68992647
Legal status: Granted; Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Abstract

The invention discloses a multi-sensor fusion method for constructing indoor maps with a mobile robot, comprising the following steps: collecting the distance between the mobile robot and anchor points, the pose of the mobile robot, and environment information through UWB, an odometer, and a lidar, respectively; constructing a vertex-constraint graph from the distance, pose, and environment information; optimizing the vertex-constraint graph with a graph optimization algorithm to obtain optimized mobile robot trajectory data; and constructing a grid map from the optimized trajectory data and the environment information. The odometer of the present invention provides accurate short-term pose changes, fusing UWB positioning information provides accurate long-term pose changes, and combining these with lidar data for map construction solves the problem of low accuracy when lidar alone builds maps of complex indoor environments.

Description

A method for constructing indoor maps with a mobile robot by fusing multiple sensors

Technical Field

The invention belongs to the technical field of indoor mapping for mobile robots, and in particular relates to a method for constructing indoor maps with a mobile robot by fusing multiple sensors.

Background

In recent years, mobile robot technology has played an important role in the industrial, medical, and service fields, and has also found good use in hazardous settings such as defense and space exploration. Within mobile robotics research, SLAM has long been a popular topic: it provides the robot with a navigation map and a real-time position, both prerequisites for path planning and path tracking, so it occupies a very important place in mobile robot navigation.

Because lidar offers high precision, wide range, and fast data transmission, it is used ever more widely in mobile robot navigation; map construction for indoor environments based on laser scanning is widely applied to robot localization, environment mapping, and path planning. General-purpose laser scanners are very expensive, and although many cheap lidars are now on the market, their measurement range is often limited and their resolution low. A robot's odometry is usually obtained from photoelectric encoders, and the error it introduces grows over time, eventually causing severe deviation in the robot's pose estimate. As a result, the mapping accuracy of a 2D lidar degrades severely in complex indoor environments.

Summary of the Invention

In view of the above deficiencies in the prior art, the present invention provides a multi-sensor fusion method for constructing indoor maps with a mobile robot, which solves the low indoor mapping accuracy of the prior art.

To achieve this, the invention adopts the following technical solution. A multi-sensor fusion mobile robot indoor map construction method comprises the following steps:

S1. Collect the distance between the mobile robot and anchor points, the pose of the mobile robot, and environment information through UWB, an odometer, and a lidar, respectively;

S2. Construct a vertex-constraint graph from the distance, pose, and environment information;

S3. Optimize the vertex-constraint graph with a graph optimization algorithm to obtain optimized mobile robot trajectory data;

S4. Construct a grid map from the optimized trajectory data and the environment information.

Further, step S1 comprises the following steps:

S1.1. Mount a UWB tag and a lidar on the mobile robot, the robot being provided with an odometer and an encoder;

S1.2. Collect the distance between the mobile robot and the anchor points through the UWB tag;

S1.3. Obtain the robot's pose data through the odometer, which reads its data from the encoder;

S1.4. Obtain environment information by lidar scanning.

Further, step S2 comprises the following steps:

S2.1. Form vertices from the poses of the mobile robot and construct odometry-based edges to obtain a first initial graph;

S2.2. Add UWB constraints to the first initial graph and construct UWB-based edges to obtain a second initial graph;

S2.3. Perform loop-closure detection on the environment data, add laser-based edges to the second initial graph, and construct laser loop-closure edges to obtain the vertex-constraint graph.

Further, the laser loop-closure edges in step S2.3 are constructed as follows:

A1. Construct the source point cloud set Q = {q1, q2, ..., qN} and the target point cloud set P = {p1, p2, ..., pN};

A2. Construct the rotation matrix R and translation matrix T for the target point cloud set P, and build the objective function E(R, T) from R and T;

A3. Set a threshold and judge whether E(R, T) is below it; if so, declare a loop closure and finish constructing this laser loop-closure edge; otherwise go to step A4;

A4. Apply the rotation matrix R and translation matrix T to the source point cloud set Q to obtain the point set M;

A5. Register the point set M against the target point cloud set P to obtain a new rotation matrix R' and a new translation matrix T';

A6. Set R = R' and T = T', substitute the updated R and T into the objective function E(R, T), and return to step A3.

Further, the objective function E(R, T) is:

E(R, T) = (1/N) \sum_{i=1}^{N} \| p_i - (R q_i + T) \|^2

where N is the total number of points in the target point cloud set, i = 1, 2, ..., N, q_i is the i-th point in the source point cloud set, and p_i is the i-th point in the target point cloud set.

Further, the specific method for optimizing the vertex-constraint graph with the graph optimization algorithm in step S3 is: adjust the pose vertices in the vertex-constraint graph so that the error function F(x) of the pose information is minimized, obtaining the pose vertices that best satisfy the constraints.

Further, the error function F(x) is:

F(x) = \sum_{(i,j) \in C} e_{ij}(x_i, x_j, z_{ij})^T \Omega_{ij} e_{ij}(x_i, x_j, z_{ij})

where x_i denotes pose vertex i, x_j denotes pose vertex j, C is the set of vertex pairs of the graph between which a constraint exists, \Omega_{ij} is the information matrix of the observation between x_i and x_j, e_{ij}(x_i, x_j, z_{ij}) expresses the degree to which x_i and x_j satisfy the constraint z_{ij}, and z_{ij} is the actual observation between pose vertices i and j obtained by a sensor. The observation z_{ij} comprises the pose transformation between adjacent pose vertices i and j, the distance between pose vertex i and anchor j, and the pose transformation between non-adjacent pose vertices i and j.

Further, step S4 comprises the following sub-steps:

S4.1. Divide the environment into a number of grid cells according to the environment information;

S4.2. Calculate the occupancy value l_{t,ij} of each grid cell; grid cells whose occupancy l_{t,ij} is greater than or equal to 0.8 are treated as obstacles. Render cells containing an obstacle in gray and obstacle-free cells in white to obtain the grid map.

Further, the occupancy value l_{t,ij} of each grid cell is:

l_{t,ij} = l_{t-1,ij} + \log \frac{p(m_{ij} | z_t, x_t)}{1 - p(m_{ij} | z_t, x_t)}

where l_{t-1,ij} is the occupancy value of the cell at the previous time step, p(m_{ij} | z_t, x_t) is the posterior probability that grid cell m_{ij} is occupied, m_{ij} is the grid cell with abscissa i and ordinate j, z_t is the observation at time t, and x_t is the pose of the mobile robot at time t.

The beneficial effects of the present invention are:

(1) The present invention collects the distance between the mobile robot and anchor points, the robot's pose, and environment information through UWB, an odometer, and a lidar, respectively, for map construction; gathering data from multiple sensors makes the constructed map more accurate.

(2) By constructing a vertex-constraint graph and optimizing it with a graph optimization algorithm, the present invention makes the pose information of the mobile robot more accurate, laying a solid foundation for building a precise indoor map.

(3) The present invention uses the odometer to obtain accurate short-term pose changes of the robot and fuses UWB positioning information to obtain accurate long-term pose changes, while combining lidar measurements for precise map construction; this solves the problem of low accuracy when lidar builds maps of complex indoor environments.

Brief Description of the Drawings

FIG. 1 is a flow chart of the multi-sensor fusion mobile robot indoor map construction method proposed by the present invention.

FIG. 2 is a comparison of experimental results of the present invention.

Detailed Description of Embodiments

Specific embodiments of the present invention are described below so that those skilled in the art can understand the invention. It should be clear, however, that the invention is not limited to the scope of these specific embodiments: for a person of ordinary skill in the art, all variations within the spirit and scope of the invention as defined and determined by the appended claims are obvious, and every invention or creation that draws on the inventive concept falls within the scope of protection.

Embodiments of the present invention are described in detail below with reference to the accompanying drawings.

As shown in FIG. 1, a multi-sensor fusion mobile robot indoor map construction method comprises the following steps:

S1. Collect the distance between the mobile robot and anchor points, the pose of the mobile robot, and environment information through UWB, an odometer, and a lidar, respectively;

S2. Construct a vertex-constraint graph from the distance, pose, and environment information;

S3. Optimize the vertex-constraint graph with a graph optimization algorithm to obtain optimized mobile robot trajectory data;

S4. Construct a grid map from the optimized trajectory data and the environment information.

Step S1 comprises the following steps:

S1.1. Mount a UWB tag and a lidar on the mobile robot, the robot being provided with an odometer and an encoder;

S1.2. Collect the distance between the mobile robot and the anchor points through the UWB tag;

S1.3. Obtain the robot's pose data through the odometer, which reads its data from the encoder;

S1.4. Obtain environment information by lidar scanning.

Step S2 comprises the following steps:

S2.1. Form vertices from the poses of the mobile robot and construct odometry-based edges to obtain a first initial graph;

S2.2. Add UWB constraints to the first initial graph and construct UWB-based edges to obtain a second initial graph;

S2.3. Perform loop-closure detection on the environment data, add laser-based edges to the second initial graph, and construct laser loop-closure edges to obtain the vertex-constraint graph (a data-structure sketch follows below).
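The patent specifies no concrete data structures for the vertex-constraint graph, but steps S2.1 to S2.3 map naturally onto a small container type. The following Python sketch is illustrative only, and every name in it (PoseVertex, Edge, VertexConstraintGraph, and so on) is hypothetical rather than taken from the patent: pose vertices plus the three kinds of edges, each carrying a measurement and an information matrix.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PoseVertex:
    """One robot pose (x, y, theta) at a time step (step S2.1)."""
    id: int
    x: float
    y: float
    theta: float

@dataclass
class Edge:
    """A constraint between two graph vertices."""
    i: int              # first pose vertex id
    j: int              # second pose vertex id, or UWB anchor id for "uwb" edges
    kind: str           # "odom" (S2.1), "uwb" (S2.2), or "loop" (S2.3)
    measurement: tuple  # relative pose (dx, dy, dtheta), or (range,) for "uwb"
    information: list   # information matrix (inverse covariance) of the measurement

@dataclass
class VertexConstraintGraph:
    vertices: List[PoseVertex] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)

    def add_odometry_edge(self, i, j, rel_pose, info):
        self.edges.append(Edge(i, j, "odom", rel_pose, info))      # S2.1 edges

    def add_uwb_edge(self, pose_id, anchor_id, range_m, info):
        self.edges.append(Edge(pose_id, anchor_id, "uwb", (range_m,), info))  # S2.2

    def add_loop_edge(self, i, j, rel_pose, info):
        self.edges.append(Edge(i, j, "loop", rel_pose, info))      # S2.3 edges
```

Each edge type later contributes one error term e_ij to the error function F(x) of step S3.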

The laser loop-closure edges in step S2.3 are computed as follows:

A1. Construct the source point cloud set Q = {q1, q2, ..., qN} and the target point cloud set P = {p1, p2, ..., pN};

A2. Construct the rotation matrix R and translation matrix T for the target point cloud set P, and build the objective function E(R, T) from R and T;

A3. Set a threshold and judge whether E(R, T) is below it; if so, declare a loop closure and finish constructing this laser loop-closure edge; otherwise go to step A4;

A4. Apply the rotation matrix R and translation matrix T to the source point cloud set Q to obtain the point set M;

A5. Register the point set M against the target point cloud set P to obtain a new rotation matrix R' and a new translation matrix T';

A6. Set R = R' and T = T', substitute the updated R and T into the objective function E(R, T), and return to step A3.

The objective function E(R, T) is:

E(R, T) = (1/N) \sum_{i=1}^{N} \| p_i - (R q_i + T) \|^2

where N is the total number of points in the target point cloud set, i = 1, 2, ..., N, q_i is the i-th point in the source point cloud set, and p_i is the i-th point in the target point cloud set.
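Steps A1 to A6 describe an iterative-closest-point (ICP) style registration whose residual is exactly E(R, T). A minimal Python/NumPy sketch of the loop is shown below; it assumes, as the index-paired objective above does, that q_i corresponds to p_i (a real implementation would recompute nearest-neighbour correspondences every iteration), and it realizes step A5 with the standard SVD (Kabsch) alignment, which the patent does not prescribe.

```python
import numpy as np

def icp_loop_closure(Q, P, threshold=0.01, max_iters=50):
    """ICP-style loop-closure test for steps A1-A6.

    Q: (N, 2) source point cloud, P: (N, 2) target point cloud,
    paired by index for simplicity. Returns (is_loop, R, T).
    """
    R, T = np.eye(2), np.zeros(2)                  # A2: initial transform
    for _ in range(max_iters):
        M = Q @ R.T + T                            # A4: M = {R q_i + T}
        E = np.mean(np.sum((P - M) ** 2, axis=1))  # objective E(R, T)
        if E < threshold:                          # A3: below threshold -> loop
            return True, R, T
        # A5: best rigid alignment of M onto P via SVD (Kabsch)
        mu_m, mu_p = M.mean(axis=0), P.mean(axis=0)
        H = (M - mu_m).T @ (P - mu_p)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R_new = Vt.T @ D @ U.T
        T_new = mu_p - R_new @ mu_m
        R, T = R_new @ R, R_new @ T + T_new        # A6: compose and iterate
    return False, R, T
```

When the loop test succeeds, the final (R, T) gives the relative pose that would be used as the laser loop-closure edge between the two scans' pose vertices.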

In step S3, the vertex-constraint graph is optimized with the graph optimization algorithm as follows: adjust the pose vertices in the vertex-constraint graph so that the error function F(x) of the pose information is minimized, obtaining the pose vertices that best satisfy the constraints.

The error function F(x) is:

F(x) = \sum_{(i,j) \in C} e_{ij}(x_i, x_j, z_{ij})^T \Omega_{ij} e_{ij}(x_i, x_j, z_{ij})

where x_i denotes pose vertex i, x_j denotes pose vertex j, C is the set of vertex pairs of the graph between which a constraint exists, \Omega_{ij} is the information matrix of the observation between x_i and x_j, e_{ij}(x_i, x_j, z_{ij}) expresses the degree to which x_i and x_j satisfy the constraint z_{ij}, and z_{ij} is the actual observation between pose vertices i and j obtained by a sensor. The observation z_{ij} comprises the pose transformation between adjacent pose vertices i and j, the distance between pose vertex i and anchor j, and the pose transformation between non-adjacent pose vertices i and j.
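The patent does not name a solver for minimizing F(x); Gauss-Newton or Levenberg-Marquardt over the pose graph (as in g2o) is typical. As a hedged illustration only, the sketch below minimizes the same objective for 2D poses with scipy.optimize.least_squares, whitening each error e_ij by a square root W of its information matrix so that the summed squared residuals equal e_ij^T Ω_ij e_ij; all function and parameter names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose_graph(poses0, rel_edges, uwb_edges, anchors):
    """Minimize F(x) = sum_ij e_ij^T Omega_ij e_ij over 2D poses (x, y, theta).

    poses0:    (M, 3) initial pose vertices
    rel_edges: [(i, j, (dx, dy, dth), W)] relative-pose constraints, with
               W = sqrt(Omega_ij); odometry edges and laser loop-closure
               edges both take this form
    uwb_edges: [(i, a, r, w)] range constraints between pose i and anchor a
    anchors:   (K, 2) known UWB anchor positions
    """
    def residuals(x):
        poses = x.reshape(-1, 3)
        res = []
        for i, j, (dx, dy, dth), W in rel_edges:
            xi, xj = poses[i], poses[j]
            c, s = np.cos(xi[2]), np.sin(xi[2])
            dp = xj[:2] - xi[:2]
            # predicted pose of j expressed in the frame of i
            pred = np.array([c * dp[0] + s * dp[1],
                             -s * dp[0] + c * dp[1],
                             xj[2] - xi[2]])
            e = pred - np.array([dx, dy, dth])
            e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi  # wrap angle error
            res.extend(W @ e)                            # whitened residual
        for i, a, r, w in uwb_edges:
            # range error between pose vertex i and UWB anchor a
            res.append(w * (np.linalg.norm(poses[i][:2] - anchors[a]) - r))
        return np.asarray(res)

    sol = least_squares(residuals, np.asarray(poses0, dtype=float).ravel())
    return sol.x.reshape(-1, 3)
```

With W taken as the Cholesky factor of Ω_ij, the sum of squared residuals equals F(x); a practical implementation would also fix the first pose to remove the free choice of global reference frame.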

Step S4 comprises the following sub-steps:

S4.1. Divide the environment into a number of grid cells according to the environment information;

S4.2. Calculate the occupancy value l_{t,ij} of each grid cell; grid cells whose occupancy l_{t,ij} is greater than or equal to 0.8 are treated as obstacles. Render cells containing an obstacle in gray and obstacle-free cells in white to obtain the grid map.

The occupancy value l_{t,ij} of each grid cell is:

l_{t,ij} = l_{t-1,ij} + \log \frac{p(m_{ij} | z_t, x_t)}{1 - p(m_{ij} | z_t, x_t)}

where l_{t-1,ij} is the occupancy value of the cell at the previous time step, p(m_{ij} | z_t, x_t) is the posterior probability that grid cell m_{ij} is occupied, m_{ij} is the grid cell with abscissa i and ordinate j, z_t is the observation at time t, here the distance and angle between the mobile robot and an obstacle measured by the lidar, and x_t is the pose of the mobile robot at time t.
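The update above is the standard log-odds form of the binary Bayes filter for occupancy grids. A minimal sketch follows, assuming a precomputed inverse sensor model and reading the 0.8 threshold of step S4.2 as a probability; the patent applies the threshold to l_{t,ij} directly, so this reading is an assumption.

```python
import numpy as np

def update_occupancy(l_prev, p_occ):
    """One log-odds update l_t = l_{t-1} + log(p / (1 - p)) per grid cell.

    l_prev: (H, W) log-odds occupancy map from the previous time step
    p_occ:  (H, W) inverse-sensor-model posterior p(m_ij | z_t, x_t), e.g.
            ~0.9 for cells a lidar beam ends in, ~0.3 for cells it passes
            through, and exactly 0.5 (no information) everywhere else.
    """
    return l_prev + np.log(p_occ / (1.0 - p_occ))

def to_grid_map(l, threshold=0.8):
    """Render gray (0.5) where the cell counts as an obstacle, white (1.0) elsewhere."""
    prob = 1.0 / (1.0 + np.exp(-l))   # log-odds back to probability
    return np.where(prob >= threshold, 0.5, 1.0)
```

Cells with p_occ = 0.5 contribute log(1) = 0, so unobserved cells keep their previous value, matching the recursive form of the update.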

In this embodiment, the experimental scene is a corridor whose two opposite sides are walls.

As shown in FIG. 2, panel (a) is the map built from raw odometry alone, panel (b) the map built jointly from odometry and UWB, and panel (c) the map built by the present invention from UWB, odometry, and lidar. Comparative analysis of the experimental results shows that the map constructed by the present invention reproduces the walls more accurately and effectively identifies the obstacles.

The present invention collects the distance between the mobile robot and anchor points, the robot's pose, and environment information through UWB, an odometer, and a lidar, respectively, for map construction; gathering data from multiple sensors makes the constructed map more accurate.

By constructing a vertex-constraint graph and optimizing it with a graph optimization algorithm, the present invention makes the pose information of the mobile robot more accurate, laying a solid foundation for building a precise indoor map.

The present invention uses the odometer to obtain accurate short-term pose changes of the robot and fuses UWB positioning information to obtain accurate long-term pose changes, while combining lidar measurements for precise map construction; this solves the problem of low accuracy when lidar builds maps of complex indoor environments.

Claims (9)

1. A multi-sensor fusion mobile robot indoor map construction method, characterized by comprising the following steps:
S1, collecting the distance between the mobile robot and anchor points, the pose of the mobile robot, and environment information through UWB, an odometer, and a lidar, respectively;
S2, constructing a vertex-constraint graph from the distance, pose, and environment information;
S3, optimizing the vertex-constraint graph with a graph optimization algorithm to obtain optimized mobile robot trajectory data;
S4, constructing a grid map from the optimized trajectory data and the environment information.

2. The multi-sensor fusion mobile robot indoor map construction method of claim 1, wherein step S1 comprises the steps of:
S1.1, mounting a UWB tag and a lidar on the mobile robot, the robot being provided with an odometer and an encoder;
S1.2, collecting the distance between the mobile robot and the anchor points through the UWB tag;
S1.3, obtaining the pose data of the robot through the odometer, the odometer reading its data from the encoder;
S1.4, obtaining environment information by lidar scanning.

3. The multi-sensor fusion mobile robot indoor map construction method of claim 1, wherein step S2 comprises the steps of:
S2.1, forming vertices from the poses of the mobile robot and constructing odometry-based edges to obtain a first initial graph;
S2.2, adding UWB constraints to the first initial graph and constructing UWB-based edges to obtain a second initial graph;
S2.3, performing loop-closure detection on the environment data, adding laser-based edges to the second initial graph, and constructing laser loop-closure edges to obtain the vertex-constraint graph.

4. The multi-sensor fusion mobile robot indoor map construction method of claim 3, wherein the laser loop-closure edges in step S2.3 are constructed as follows:
A1, constructing a source point cloud set Q = {q1, q2, ..., qN} and a target point cloud set P = {p1, p2, ..., pN};
A2, constructing a rotation matrix R and a translation matrix T for the target point cloud set P, and building an objective function E(R, T) from R and T;
A3, setting a threshold and judging whether E(R, T) is below it; if so, declaring a loop closure and finishing the construction of this laser loop-closure edge; otherwise proceeding to step A4;
A4, applying the rotation matrix R and translation matrix T to the source point cloud set Q to obtain a point set M;
A5, registering the point set M against the target point cloud set P to obtain a new rotation matrix R' and a new translation matrix T';
A6, setting R = R' and T = T', substituting the updated R and T into the objective function E(R, T), and returning to step A3.

5. The multi-sensor fusion mobile robot indoor map construction method of claim 4, wherein the objective function E(R, T) is:

E(R, T) = (1/N) \sum_{i=1}^{N} \| p_i - (R q_i + T) \|^2

where N is the total number of points in the target point cloud set, i = 1, 2, ..., N, q_i is the i-th point in the source point cloud set, and p_i is the i-th point in the target point cloud set.

6. The multi-sensor fusion mobile robot indoor map construction method of claim 1, wherein the specific method for optimizing the vertex-constraint graph with the graph optimization algorithm in step S3 is: adjusting the pose vertices in the vertex-constraint graph so as to minimize the error function F(x) of the pose information, obtaining the pose vertices that best satisfy the constraints.

7. The multi-sensor fusion mobile robot indoor map construction method of claim 6, wherein the error function F(x) is:

F(x) = \sum_{(i,j) \in C} e_{ij}(x_i, x_j, z_{ij})^T \Omega_{ij} e_{ij}(x_i, x_j, z_{ij})

where x_i denotes pose vertex i, x_j denotes pose vertex j, C is the set of vertex pairs of the graph between which a constraint exists, \Omega_{ij} is the information matrix of the observation between x_i and x_j, e_{ij}(x_i, x_j, z_{ij}) expresses the degree to which x_i and x_j satisfy the constraint z_{ij}, and z_{ij} is the actual observation between pose vertices i and j obtained by a sensor; the observation z_{ij} comprises the pose transformation between adjacent pose vertices i and j, the distance between pose vertex i and anchor j, and the pose transformation between non-adjacent pose vertices i and j.

8. The multi-sensor fusion mobile robot indoor map construction method of claim 1, wherein step S4 comprises the following sub-steps:
S4.1, dividing the environment into a number of grid cells according to the environment information;
S4.2, calculating the occupancy value l_{t,ij} of each grid cell; grid cells whose occupancy l_{t,ij} is greater than or equal to 0.8 are treated as obstacles, cells containing an obstacle are rendered in gray and obstacle-free cells in white, yielding the grid map.

9. The multi-sensor fusion mobile robot indoor map construction method of claim 8, wherein the occupancy value l_{t,ij} of each grid cell is:

l_{t,ij} = l_{t-1,ij} + \log \frac{p(m_{ij} | z_t, x_t)}{1 - p(m_{ij} | z_t, x_t)}

where l_{t-1,ij} is the occupancy value of the cell at the previous time step, p(m_{ij} | z_t, x_t) is the posterior probability that grid cell m_{ij} is occupied, m_{ij} is the grid cell with abscissa i and ordinate j, z_t is the observation at time t, and x_t is the pose of the mobile robot at time t.





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant (granted publication date: 2020-11-27)
CF01: Termination of patent right due to non-payment of annual fee
