

Technical Field
The invention belongs to the technical field of indoor mapping for mobile robots, and in particular relates to a multi-sensor fusion method for constructing indoor maps with a mobile robot.
Background
In recent years, mobile robot technology has played an important role in the industrial, medical, and service fields, and has also been applied successfully in hazardous and dangerous settings such as national defense and space exploration. Within mobile robotics research, SLAM has long been a hot topic: it provides a robot with a navigation map and its real-time position, which are prerequisites for path planning and path tracking, so it occupies a very important position in mobile robot navigation.
Because lidar offers high precision, wide range, and fast data transmission, it is increasingly used in mobile robot navigation; indoor environment mapping based on laser scanning systems is widely applied to robot localization, environment map construction, and path planning. General-purpose laser scanners are very expensive, and although many cheap lidars are now on the market, their measurement range is often limited and their resolution low. A robot's odometry is usually obtained from photoelectric encoders, and the error it accumulates grows over time, causing serious drift in the robot's pose estimate. As a result, the mapping accuracy of a 2D lidar degrades severely in complex indoor environments.
Summary of the Invention
In view of the above deficiencies in the prior art, the present invention provides a multi-sensor fusion method for constructing indoor maps with a mobile robot, which solves the problem of low indoor mapping accuracy in the prior art.
To achieve the above purpose, the technical solution adopted by the present invention is a multi-sensor fusion method for constructing an indoor map with a mobile robot, comprising the following steps:
S1. Collect the distances between the mobile robot and the anchor points, the pose of the mobile robot, and environment information through UWB, odometry, and lidar, respectively;
S2. Construct a vertex-constraint graph from the distance information, pose information, and environment information;
S3. Optimize the vertex-constraint graph with a graph optimization algorithm to obtain the optimized trajectory of the mobile robot;
S4. Construct a grid map from the optimized trajectory and the environment information.
Further, step S1 comprises the following steps:
S1.1. Mount a UWB tag and a lidar on the mobile robot, the robot being equipped with an odometer and encoders;
S1.2. Collect the distances between the mobile robot and the anchor points through the UWB tag;
S1.3. Obtain the pose data of the robot through the odometer, which derives its data from the encoders;
S1.4. Obtain environment information by lidar scanning.
Further, step S2 comprises the following steps:
S2.1. Form vertices from the pose information of the mobile robot and construct odometry-based edges to obtain a first initial graph;
S2.2. Add UWB constraints to the first initial graph, constructing UWB-based edges, to obtain a second initial graph;
S2.3. Perform loop-closure detection on the environment data and add laser-based edges to the second initial graph, constructing laser loop-closure edges, to obtain the vertex-constraint graph.
Further, the laser loop-closure edges in step S2.3 are constructed as follows:
A1. Construct the source point cloud set Q = {q1, q2, ..., qN} and the target point cloud set P = {p1, p2, ..., pN};
A2. Construct the rotation matrix R and translation matrix T of the target point cloud set P, and build the objective function E(R, T) from R and T;
A3. Set a threshold and check whether E(R, T) is below it; if so, declare a loop closure and end this construction of the laser loop-closure edge; otherwise go to step A4;
A4. Apply the rotation matrix R and translation matrix T to the source point cloud set Q to obtain the point set M;
A5. Register the point set M against the target point cloud set P to obtain a new rotation matrix R' and a new translation matrix T';
A6. Set R = R' and T = T', substitute the updated R and T into the objective function E(R, T), and return to step A3.
Further, the objective function E(R, T) is:

E(R, T) = \frac{1}{N} \sum_{i=1}^{N} \left\| p_i - (R q_i + T) \right\|^2

where N denotes the total number of points in the target point cloud set, i = 1, 2, ..., N, q_i denotes the i-th point in the source point cloud set, and p_i denotes the i-th point in the target point cloud set.
Further, the specific method for optimizing the vertex-constraint graph with the graph optimization algorithm in step S3 is: adjust the pose vertices in the vertex-constraint graph so as to minimize the error function F(x) of the pose information, yielding the pose vertices that best satisfy the constraints.
Further, the error function F(x) is:

F(x) = \sum_{(i,j) \in C} e_{ij}(x_i, x_j, z_{ij})^{T} \, \Omega_{ij} \, e_{ij}(x_i, x_j, z_{ij})

where x_i denotes pose vertex i, x_j denotes pose vertex j, C denotes the set of vertex pairs of the graph between which a constraint exists, Ω_ij denotes the information matrix of the observation between x_i and x_j, e_ij(x_i, x_j, z_ij) measures the degree to which x_i and x_j satisfy the constraint z_ij, and z_ij denotes the actual observation between pose vertex i and pose vertex j obtained from the sensors. The actual observation z_ij comprises the pose transformation between adjacent pose vertices i and j, the distance between pose vertex i and anchor j, and the pose transformation between non-adjacent pose vertices i and j.
Further, step S4 comprises the following sub-steps:
S4.1. Divide the environment into a number of grid cells according to the environment information;
S4.2. Compute the occupancy probability l_{t,ij} of each grid cell; cells whose occupancy probability l_{t,ij} is greater than or equal to 0.8 are treated as obstacles; render the cells containing obstacles in gray and the obstacle-free cells in white to obtain the grid map.
Further, the occupancy probability l_{t,ij} of each grid cell is:

l_{t,ij} = l_{t-1,ij} + \log \frac{p(m_{ij} \mid z_t, x_t)}{1 - p(m_{ij} \mid z_t, x_t)}

where l_{t-1,ij} denotes the occupancy probability of the cell at the previous time step, p(m_ij | z_t, x_t) denotes the posterior probability that cell m_ij is occupied, m_ij denotes the grid cell with abscissa i and ordinate j, z_t is the observation at time t, and x_t is the pose of the mobile robot at time t.
The beneficial effects of the present invention are:
(1) The present invention collects the robot-to-anchor distances, the robot pose, and environment information through UWB, odometry, and lidar respectively for map construction; data from multiple sensors makes the constructed map more accurate.
(2) By constructing a vertex-constraint graph and optimizing it with a graph optimization algorithm, the present invention makes the pose information of the mobile robot more accurate, laying a solid foundation for constructing an accurate indoor map.
(3) The present invention uses odometry to obtain accurate short-term pose changes of the robot, fuses UWB positioning information to obtain accurate long-term pose changes, and combines lidar measurements for accurate map construction, solving the problem of the low accuracy of lidar when mapping complex indoor environments.
Brief Description of the Drawings
FIG. 1 is a flowchart of the multi-sensor fusion indoor map construction method for a mobile robot proposed by the present invention.
FIG. 2 is a comparison of the experimental results of the present invention.
Detailed Description of the Embodiments
Specific embodiments of the present invention are described below to help those skilled in the art understand the invention, but it should be clear that the invention is not limited to the scope of the specific embodiments. To those of ordinary skill in the art, all variations that fall within the spirit and scope of the invention as defined and determined by the appended claims are obvious, and every invention or creation that draws on the inventive concept is within the scope of protection.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in FIG. 1, a multi-sensor fusion method for constructing an indoor map with a mobile robot comprises the following steps:
S1. Collect the distances between the mobile robot and the anchor points, the pose of the mobile robot, and environment information through UWB, odometry, and lidar, respectively;
S2. Construct a vertex-constraint graph from the distance information, pose information, and environment information;
S3. Optimize the vertex-constraint graph with a graph optimization algorithm to obtain the optimized trajectory of the mobile robot;
S4. Construct a grid map from the optimized trajectory and the environment information.
Step S1 comprises the following steps:
S1.1. Mount a UWB tag and a lidar on the mobile robot, the robot being equipped with an odometer and encoders;
S1.2. Collect the distances between the mobile robot and the anchor points through the UWB tag;
S1.3. Obtain the pose data of the robot through the odometer, which derives its data from the encoders;
S1.4. Obtain environment information by lidar scanning (a sketch of the resulting data layout follows this list).
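The three data streams of step S1 can be held in simple typed containers before graph construction. The sketch below is a minimal assumed layout; the class and field names (UwbRange, OdomPose, LaserScan) and their units are illustrative, not part of the disclosed method.

```python
# Minimal containers for the three sensor streams of step S1.
# All names and units here are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class UwbRange:
    t: float           # timestamp, seconds
    anchor_id: int     # fixed anchor that answered the tag
    dist: float        # tag-to-anchor distance, meters

@dataclass
class OdomPose:
    t: float
    x: float           # meters, integrated from encoder ticks
    y: float
    theta: float       # heading, radians

@dataclass
class LaserScan:
    t: float
    angles: List[float]   # beam angles, radians
    ranges: List[float]   # measured distances, meters
```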
Step S2 comprises the following steps:
S2.1. Form vertices from the pose information of the mobile robot and construct odometry-based edges to obtain a first initial graph;
S2.2. Add UWB constraints to the first initial graph, constructing UWB-based edges, to obtain a second initial graph;
S2.3. Perform loop-closure detection on the environment data and add laser-based edges to the second initial graph, constructing laser loop-closure edges, to obtain the vertex-constraint graph (a sketch of the resulting graph layout follows this list).
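The vertex-constraint graph of step S2 maps directly onto code: one vertex type for robot poses and three edge types mirroring S2.1 (odometry), S2.2 (UWB ranges to fixed anchors), and S2.3 (laser loop closures). The layout below is an assumed sketch, not a prescribed data structure.

```python
# Sketch of the vertex-constraint graph of step S2; names are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PoseVertex:
    vid: int
    pose: Tuple[float, float, float]      # (x, y, theta) initial estimate

@dataclass
class OdomEdge:              # S2.1: relative pose between consecutive vertices
    i: int
    j: int
    delta: Tuple[float, float, float]

@dataclass
class UwbEdge:               # S2.2: measured range from vertex i to an anchor
    i: int
    anchor: Tuple[float, float]           # known anchor position
    dist: float

@dataclass
class LoopEdge:              # S2.3: relative pose from laser scan matching
    i: int
    j: int
    delta: Tuple[float, float, float]

@dataclass
class PoseGraph:
    vertices: List[PoseVertex] = field(default_factory=list)
    odom_edges: List[OdomEdge] = field(default_factory=list)
    uwb_edges: List[UwbEdge] = field(default_factory=list)
    loop_edges: List[LoopEdge] = field(default_factory=list)
```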
The laser loop-closure edge in step S2.3 is computed as follows:
A1. Construct the source point cloud set Q = {q1, q2, ..., qN} and the target point cloud set P = {p1, p2, ..., pN};
A2. Construct the rotation matrix R and translation matrix T of the target point cloud set P, and build the objective function E(R, T) from R and T;
A3. Set a threshold and check whether E(R, T) is below it; if so, declare a loop closure and end this construction of the laser loop-closure edge; otherwise go to step A4;
A4. Apply the rotation matrix R and translation matrix T to the source point cloud set Q to obtain the point set M;
A5. Register the point set M against the target point cloud set P to obtain a new rotation matrix R' and a new translation matrix T';
A6. Set R = R' and T = T', substitute the updated R and T into the objective function E(R, T), and return to step A3.
The objective function E(R, T) is:

E(R, T) = \frac{1}{N} \sum_{i=1}^{N} \left\| p_i - (R q_i + T) \right\|^2

where N denotes the total number of points in the target point cloud set, i = 1, 2, ..., N, q_i denotes the i-th point in the source point cloud set, and p_i denotes the i-th point in the target point cloud set. A worked sketch of the A1-A6 iteration follows.
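The sketch below illustrates the A1-A6 loop for 2D scans, assuming index-matched points so that the closed-form SVD (Kabsch) solution can stand in for the re-registration of step A5; a full ICP implementation would also re-estimate nearest-neighbour correspondences each round. It accumulates the per-iteration transforms into a single (R, T) rather than literally overwriting them as in A6.

```python
# Hedged sketch of the loop-closure test A1-A6 on 2D lidar points.
import numpy as np

def icp_loop_closure(Q, P, threshold=0.05, max_iters=50):
    """Q, P: (N, 2) arrays (source and target scans). Returns (is_loop, R, T)."""
    R, T = np.eye(2), np.zeros(2)
    M = Q.copy()                                   # A4: transformed source points
    for _ in range(max_iters):
        # A3: evaluate E(R, T), the mean squared alignment error
        E = np.mean(np.sum((P - M) ** 2, axis=1))
        if E < threshold:
            return True, R, T                      # loop closure detected
        # A5: closed-form re-registration of M against P (Kabsch / SVD)
        mu_m, mu_p = M.mean(axis=0), P.mean(axis=0)
        H = (M - mu_m).T @ (P - mu_p)              # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R_new = Vt.T @ U.T
        if np.linalg.det(R_new) < 0:               # guard against a reflection
            Vt[-1, :] *= -1
            R_new = Vt.T @ U.T
        T_new = mu_p - R_new @ mu_m
        # A6: fold the update into the running (R, T), then re-transform Q (A4)
        R, T = R_new @ R, R_new @ T + T_new
        M = Q @ R.T + T
    return False, R, T                             # threshold never reached

# Example: a scan rotated by 10 degrees and shifted should close the loop.
rng = np.random.default_rng(0)
Q = rng.uniform(-5, 5, size=(200, 2))
a = np.deg2rad(10.0)
Rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
P = Q @ Rot.T + np.array([0.3, -0.2])
print(icp_loop_closure(Q, P)[0])                   # True
```

With noise-free, index-matched scans the SVD step converges in a single round; the iteration matters once correspondences must be re-estimated from noisy data.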
In step S3, the vertex-constraint graph is optimized by the graph optimization algorithm as follows: adjust the pose vertices in the vertex-constraint graph so as to minimize the error function F(x) of the pose information, yielding the pose vertices that best satisfy the constraints.
The error function F(x) is:

F(x) = \sum_{(i,j) \in C} e_{ij}(x_i, x_j, z_{ij})^{T} \, \Omega_{ij} \, e_{ij}(x_i, x_j, z_{ij})

where x_i denotes pose vertex i, x_j denotes pose vertex j, C denotes the set of vertex pairs of the graph between which a constraint exists, Ω_ij denotes the information matrix of the observation between x_i and x_j, e_ij(x_i, x_j, z_ij) measures the degree to which x_i and x_j satisfy the constraint z_ij, and z_ij denotes the actual observation between pose vertex i and pose vertex j obtained from the sensors. The actual observation z_ij comprises the pose transformation between adjacent pose vertices i and j, the distance between pose vertex i and anchor j, and the pose transformation between non-adjacent pose vertices i and j. A sketch of this optimization follows.
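The minimization of F(x) can be prototyped with an off-the-shelf nonlinear least-squares solver. The sketch below assumes 2D poses (x, y, theta) and identity information matrices Ω_ij, and folds the odometry and loop-closure constraints into one relative-pose edge list; it is a prototype of the optimization, not the disclosed solver.

```python
# Sketch of step S3: minimize F(x) over all pose vertices with scipy.
# Identity information matrices are assumed for brevity; a real system would
# weight each residual by the square root of its Omega_ij.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals(x, rel_edges, uwb_edges):
    """x: flattened (x, y, theta) per vertex.
    rel_edges: (i, j, dx, dy, dth) -- odometry and loop-closure constraints.
    uwb_edges: (i, ax, ay, d)      -- range from vertex i to anchor at (ax, ay)."""
    poses = x.reshape(-1, 3)
    res = []
    for i, j, dx, dy, dth in rel_edges:
        xi, xj = poses[int(i)], poses[int(j)]
        c, s = np.cos(xi[2]), np.sin(xi[2])
        # pose of vertex j expressed in the frame of vertex i
        px = c * (xj[0] - xi[0]) + s * (xj[1] - xi[1])
        py = -s * (xj[0] - xi[0]) + c * (xj[1] - xi[1])
        res += [px - dx, py - dy, wrap(xj[2] - xi[2] - dth)]
    for i, ax, ay, d in uwb_edges:
        p = poses[int(i)]
        res.append(np.hypot(p[0] - ax, p[1] - ay) - d)   # UWB range error
    return np.asarray(res)

def optimize_trajectory(init_poses, rel_edges, uwb_edges):
    sol = least_squares(residuals, np.asarray(init_poses, float).ravel(),
                        args=(rel_edges, uwb_edges))
    return sol.x.reshape(-1, 3)        # optimized (x, y, theta) per vertex
```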
Step S4 comprises the following sub-steps:
S4.1. Divide the environment into a number of grid cells according to the environment information;
S4.2. Compute the occupancy probability l_{t,ij} of each grid cell; cells whose occupancy probability l_{t,ij} is greater than or equal to 0.8 are treated as obstacles; render the cells containing obstacles in gray and the obstacle-free cells in white to obtain the grid map.
The occupancy probability l_{t,ij} of each grid cell is:

l_{t,ij} = l_{t-1,ij} + \log \frac{p(m_{ij} \mid z_t, x_t)}{1 - p(m_{ij} \mid z_t, x_t)}

where l_{t-1,ij} denotes the occupancy probability of the cell at the previous time step, p(m_ij | z_t, x_t) denotes the posterior probability that cell m_ij is occupied, m_ij denotes the grid cell with abscissa i and ordinate j, z_t is the observation at time t, which denotes the distance and angle between the mobile robot and an obstacle as measured by the lidar, and x_t is the pose of the mobile robot at time t. A sketch of this update follows.
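The l_{t,ij} update and the 0.8 decision rule translate directly into a log-odds grid. The sketch below assumes a fixed inverse-sensor-model value P_HIT for cells struck by a lidar beam; lowering the log-odds of the free cells along each beam is omitted for brevity.

```python
# Sketch of step S4: log-odds occupancy update and gray/white rendering.
import numpy as np

P_HIT = 0.7        # assumed p(m_ij | z_t, x_t) for a cell hit by a beam

def update_grid(logodds, hit_cells):
    """logodds: (H, W) array of l_{t-1,ij}; hit_cells: list of (i, j) indices."""
    delta = np.log(P_HIT / (1.0 - P_HIT))
    for i, j in hit_cells:
        logodds[i, j] += delta         # l_t,ij = l_{t-1,ij} + log-odds term
    return logodds

def render_map(logodds, p_occ=0.8):
    """Gray (128) for cells with occupancy probability >= 0.8, white (255) otherwise."""
    prob = 1.0 / (1.0 + np.exp(-logodds))     # log-odds back to probability
    grid = np.full(logodds.shape, 255, dtype=np.uint8)
    grid[prob >= p_occ] = 128
    return grid
```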
In this embodiment, the experimental scene is a corridor whose two opposite sides are walls.
As shown in FIG. 2, panel (a) is the map built from raw odometry alone, panel (b) is the map built jointly from odometry and UWB, and panel (c) is the map built by the present invention from UWB, odometry, and lidar. Comparative analysis of the experimental results shows that the map constructed by the present invention renders the walls more accurately and effectively identifies the obstacles.
The present invention collects the robot-to-anchor distances, the robot pose, and environment information through UWB, odometry, and lidar respectively for map construction; data from multiple sensors makes the constructed map more accurate.
By constructing a vertex-constraint graph and optimizing it with a graph optimization algorithm, the present invention makes the pose information of the mobile robot more accurate, laying a solid foundation for constructing an accurate indoor map.
The present invention uses odometry to obtain accurate short-term pose changes of the robot, fuses UWB positioning information to obtain accurate long-term pose changes, and combines lidar measurements for accurate map construction, solving the problem of the low accuracy of lidar when mapping complex indoor environments.