CN115407357B - Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes - Google Patents

Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes

Info

Publication number
CN115407357B
Authority
CN
China
Prior art keywords
imu
points
point cloud
rtk
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210784265.XA
Other languages
Chinese (zh)
Other versions
CN115407357A (en)
Inventor
高旺
刘宏
潘树国
乔龙雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN202210784265.XA
Publication of CN115407357A
Application granted
Publication of CN115407357B
Active (current legal status)
Anticipated expiration

Abstract

The invention discloses a low-beam LiDAR-IMU-RTK positioning and mapping algorithm for large scenes, comprising the following steps: in the initialization stage, RTK position information is used to complete IMU alignment quickly and accurately, and the correct initial pose of the LiDAR-IMU-RTK system in global coordinates is solved; features of the surrounding point cloud are extracted, and corner points and planar points are distinguished by curvature; a local point cloud map is built in real time, and the current LiDAR pose is computed by dynamically matching the current frame's point cloud against the local map; a graph optimization model constructs a LiDAR odometry factor, an IMU pre-integration factor, and an RTK absolute position factor, solves for the optimized positioning result, and stitches the point cloud map in real time. Through the graph optimization model, the method solves the positioning and mapping problem of low-beam LiDAR in large-scene, feature-sparse environments while guaranteeing real-time acquisition.

Description

Translated from Chinese
Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes

Technical Field

The present invention belongs to the field of LiDAR SLAM (Simultaneous Localization and Mapping), relates to multi-sensor fusion technology, and specifically concerns a low-beam LiDAR-IMU-RTK positioning and mapping algorithm for large scenes.

Background

Accurate and reliable positioning is a basic requirement of mobile robots and autonomous driving. In recent years, visual and laser-based simultaneous localization and mapping have made great progress. Visual positioning can estimate a six-degree-of-freedom state with a camera alone, but it is restricted in many environments and is easily affected by system initialization, illumination, and low-texture scenes. Laser sensors, by contrast, measure depth directly at high resolution, work at night, and enable more accurate pose estimation.

Point cloud matching was traditionally done with the iterative closest point (ICP) algorithm, which is prone to local minima and slow to converge. LOAM, released in 2014, was the first to compute poses by matching point-to-line and point-to-plane features; during mapping, it uses the scan-to-scan odometry result as an initial value and refines it with scan-to-map matching to jointly update and maintain the point cloud map. LIO-SAM, released in 2020, fuses three sensors: LiDAR, IMU, and GNSS. It combines high-frequency IMU odometry with LiDAR odometry feature matching, while GNSS and loop-closure information suppress positioning drift during mapping, yielding a point cloud map with good global consistency.

Following these principles, LiDAR SLAM typically extracts corner and planar features in the front end and obtains pose predictions from IMU odometry or scan-to-scan matching. The optimization stage matches the current frame against a local map to obtain a more accurate positioning result, and the back end generally fuses multi-source information within a factor graph framework. Existing algorithms, however, mainly target motion in small scenes; in complex scenes with large changes in the carrier's environment, they accumulate error and lose robustness. It is therefore necessary to explore reliable laser SLAM technology for large-scale urban environments that preserves both positioning and mapping accuracy.

Summary of the Invention

To solve the above problems, the present invention discloses a large-scene low-beam LiDAR-IMU-RTK positioning and mapping algorithm. Given the complexity of positioning in today's urban environments, the low-beam LiDAR, IMU, and RTK complement one another, so that data are collected in real time and reliable positioning and mapping results are obtained.

To achieve the above object, the technical solution of the present invention is as follows:

The large-scene low-beam LiDAR-IMU-RTK positioning and mapping algorithm comprises the following steps:

Step 1: During initialization, use RTK position information to complete IMU alignment quickly and accurately, and solve for the correct initial pose of the LiDAR-IMU-RTK system in global coordinates.

Step 2: Extract features of the surrounding point cloud, distinguish corner points from planar points by scan-point curvature, and use IMU data to correct motion distortion in the point cloud.

Step 3: Build a local point cloud map in real time, voxel-filter the map and the current frame to speed up subsequent computation, and iteratively compute the current LiDAR pose by dynamically matching the current frame's point cloud against the local map.

Step 4: Build a graph optimization model with a LiDAR odometry factor, an IMU pre-integration factor, and an RTK absolute position factor; solve for the optimized positioning result and stitch the point cloud map in real time to obtain a globally consistent map.

The specific steps are as follows:

Step 1. Fast and accurate initial alignment of the IMU

$$\begin{bmatrix} a^b & b^b & c^b \end{bmatrix} = C_n^b \begin{bmatrix} a^n & b^n & c^n \end{bmatrix} \qquad (1)$$

where $C_n^b$ is the transformation matrix from the navigation frame to the body frame; $a^b$, $b^b$, $c^b$ are the position coordinates in the body frame; and $a^n$, $b^n$, $c^n$ are the coordinates of the corresponding points in the navigation frame. Initial alignment determines the attitude of the inertial device in the navigation frame and provides the initial attitude of the LiDAR-IMU-RTK fused positioning system in global coordinates.
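For illustration, the attitude solution in Eq. (1) can be posed as a least-squares rotation fit over matched vector observations in the two frames (Wahba's problem). The following numpy sketch uses the Kabsch/SVD solution; the solver choice and the particular vector observations (RTK-derived baselines plus gravity) are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

def align_attitude(vecs_nav, vecs_body):
    """Least-squares rotation C_b^n mapping body-frame vectors to the
    navigation frame (Kabsch/SVD solution of Wahba's problem).

    vecs_nav, vecs_body: (N, 3) arrays of matched unit vectors, e.g.
    baselines derived from RTK positions and IMU-sensed gravity.
    """
    B = vecs_nav.T @ vecs_body                       # 3x3 attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    M = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # enforce a proper rotation
    return U @ M @ Vt                                # C_b^n

# Usage: with identical vectors in both frames, the attitude is the identity
nav = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]])
print(np.round(align_attitude(nav, nav), 3))
```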

Step 2. Extract features of the surrounding point cloud and distinguish corner points from planar points by scan-point curvature

Extract environmental point cloud features and distinguish corner points from planar points by the curvature of each scan point, computed as:

$$c = \frac{1}{|S| \cdot r_i} \left| \sum_{j \in S,\, j \neq i} \left( r_j - r_i \right) \right| \qquad (2)$$

where $S$ is the set of neighboring points on the same scan line as the current point and $r_i$ is the point's range (depth). Evaluating this formula gives the curvature of every point scanned by the LiDAR in each frame. The depth of each point is also used to reject points on nearly parallel surfaces and occluded points. To speed up traversal, the point cloud coordinates of each frame are projected into a two-dimensional image divided into six sub-images, and corner and planar points are extracted from each sub-image separately. Classification uses a threshold $c_{th}$: points with $c > c_{th}$ are corner points and points with $c < c_{th}$ are planar points. Because the carrier moves during each scan, the point cloud is distorted; short-time integration of high-frequency IMU data provides motion compensation.
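As a concrete illustration of this step, the sketch below evaluates the curvature of Eq. (2) over one scan line from its range values and splits points by a threshold; the neighborhood half-width and the threshold value are assumed here, not taken from the patent.

```python
import numpy as np

def classify_by_curvature(ranges, k=5, c_th=0.05):
    """LOAM-style smoothness per point on one scan line: high-curvature
    points become corner candidates, low-curvature points planar candidates.

    ranges: 1-D array of ranges along a single scan line.
    k: half-width of the same-line neighborhood S (assumed value).
    c_th: classification threshold (assumed value).
    """
    corners, planars = [], []
    for i in range(k, len(ranges) - k):
        neighbors = np.r_[ranges[i - k:i], ranges[i + 1:i + k + 1]]
        c = abs(np.sum(neighbors - ranges[i])) / (len(neighbors) * ranges[i])
        (corners if c > c_th else planars).append(i)
    return corners, planars

# Usage: a range jump marks corner candidates, flat stretches are planar
line = np.r_[np.full(50, 10.0), np.full(50, 12.0)]
corners, planars = classify_by_curvature(line)
print(len(corners), len(planars))
```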

Step 3. Build a local point cloud map and optimize the pose by point cloud registration

Key-frame information within a certain range is merged using a sliding window. To speed up point cloud matching, the local map's points are stored in a kd-tree data structure for efficient nearest-neighbor search. The motion estimation module then solves for the motion state by matching the current frame against the local map.
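A minimal sketch of this storage-and-query pattern, using scipy's cKDTree as a stand-in for whatever kd-tree implementation the system actually uses:

```python
import numpy as np
from scipy.spatial import cKDTree

# Local map: (N, 3) points merged from the key frames in the sliding window
rng = np.random.default_rng(0)
local_map = rng.random((10000, 3)) * 50.0
tree = cKDTree(local_map)

# For each feature point of the current frame, fetch its five nearest map
# points; these neighbors feed the line/plane fits of the next step.
query_point = np.array([25.0, 25.0, 25.0])
dists, idxs = tree.query(query_point, k=5)
neighbors = local_map[idxs]           # (5, 3) nearest map points
print(dists.shape, neighbors.shape)
```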

For each corner point, its five nearest points in the local map are retrieved, and their mean and covariance matrix are computed. When these points form a valid line feature, the distance from the current corner point to the line is computed as:

$$f(p_\varepsilon) = \left\| \left( p_\varepsilon - \bar{p}_\varepsilon \right) \times p_n \right\| \qquad (4)$$

where $\cdot$ and $\times$ denote the dot-product and cross-product operations, $p_n$ is the unit eigenvector associated with the largest eigenvalue of the covariance matrix (the direction of the line), and $p_\varepsilon$ is the current corner point. Similarly, for a planar point $p_s$, five points are retrieved from the local map to form a plane. The eigenvector $n_s$ associated with the smallest eigenvalue of the five points' covariance matrix is the normal vector of that plane. The point-to-plane distance is computed as:

$$f(p_s) = \left| \left( p_s - \bar{p}_s \right) \cdot n_s \right| \qquad (5)$$

Here $\bar{p}_\varepsilon$ and $\bar{p}_s$ denote the geometric centers of the corner and planar points extracted from the local map. By jointly minimizing the point-to-line residual $f(p_\varepsilon)$ and the point-to-plane residual $f(p_s)$, the optimal pose estimate $T_k$ of the current frame in the local map is obtained by solving:

$$\min \left\{ \sum \left( f(p_\varepsilon) + f(p_s) \right) \right\} \qquad (6)$$
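To make the geometry of Eqs. (4)-(6) concrete, the following numpy sketch fits the five nearest map points by an eigen-decomposition of their covariance, then measures the point-to-line distance with a cross product and the point-to-plane distance with a dot product; the function names are illustrative, not from the patent.

```python
import numpy as np

def point_line_residual(p, neighbors):
    """f(p_eps): distance from corner point p to the line fitted to its neighbors."""
    center = neighbors.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(neighbors.T))
    direction = eigvecs[:, -1]          # largest eigenvalue -> line direction p_n
    return np.linalg.norm(np.cross(p - center, direction))

def point_plane_residual(p, neighbors):
    """f(p_s): distance from planar point p to the plane fitted to its neighbors."""
    center = neighbors.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(neighbors.T))
    normal = eigvecs[:, 0]              # smallest eigenvalue -> plane normal n_s
    return abs(np.dot(p - center, normal))

# Usage: five points on the z = 0 plane give a plane residual equal to |z|
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
print(point_plane_residual(np.array([0.3, 0.3, 0.01]), pts))   # ~0.01
```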

Step 4. Solve the positioning result with the graph optimization model and stitch the point cloud map in real time

For the $n$ key frames within the window, the state quantities $\mathcal{X}$ are optimized by solving the minimization problem:

$$\min_{\mathcal{X}} \left\{ \left\| R_p(x) \right\|^2 + \sum \left\| R_L(x) \right\|^2 + \sum \left\| R_I(x) \right\|^2 + \sum \left\| R_g(x) \right\|^2 \right\} \qquad (7)$$

Here $R_p(x)$ is the marginalization residual. Marginalization reduces the size of the optimization problem: the information of key frames that slide out of the window is retained as a prior for the current optimization.

$R_L(x)$ represents the inter-frame constraints of the LiDAR odometry within the sliding window; the residual is computed as:

$$R_L(x) = \operatorname{Log}\left( \Delta T_{ij}^{-1}\, T_i^{-1}\, T_j \right) \qquad (8)$$

where $i$ and $j$ denote the previous and the current frame, $T$ is the homogeneous (Euclidean) matrix representation of the pose, and $\Delta T_{ij}$ is the measured pose change between the two frames.
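A small sketch of evaluating this inter-frame residual with 4x4 homogeneous matrices, using scipy's rotation-vector form as the Log map; the 6-vector residual layout is an assumption made for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_pose_residual(T_i, T_j, delta_T):
    """6-DoF residual between the predicted relative pose T_i^-1 T_j and the
    LiDAR odometry measurement delta_T (all 4x4 homogeneous matrices)."""
    err = np.linalg.inv(delta_T) @ np.linalg.inv(T_i) @ T_j   # error transform
    rot_err = R.from_matrix(err[:3, :3]).as_rotvec()          # Log of rotation part
    return np.r_[rot_err, err[:3, 3]]                         # [rotation, translation]

# Usage: identical poses and an identity measurement give a zero residual
I4 = np.eye(4)
print(relative_pose_residual(I4, I4, I4))
```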

$R_I(x)$ represents the IMU pre-integration constraints within the sliding window; the residual is obtained as:

$$R_I(x) = \begin{bmatrix} \delta\alpha \\ \delta\beta \\ \delta\theta \\ \delta b_a \\ \delta b_w \end{bmatrix} = \begin{bmatrix} R_W^{b_i}\left( p_{b_j}^W - p_{b_i}^W - v_{b_i}^W \Delta t + \tfrac{1}{2} g^W \Delta t^2 \right) - \hat{\alpha}_{b_i b_j} \\ R_W^{b_i}\left( v_{b_j}^W - v_{b_i}^W + g^W \Delta t \right) - \hat{\beta}_{b_i b_j} \\ 2\left[ \hat{\gamma}_{b_i b_j}^{-1} \otimes \left( q_{b_i}^W \right)^{-1} \otimes q_{b_j}^W \right]_{xyz} \\ b_{a_j} - b_{a_i} \\ b_{w_j} - b_{w_i} \end{bmatrix} \qquad (9)$$

where $R_W^{b_i}$ is the attitude in rotation-matrix form and $W$ is the world coordinate system; $b_i$ and $b_j$ are the body frames of the $i$-th and $j$-th frames; $\delta\alpha$, $\delta\beta$ and $\delta\theta$ are the position, velocity and angle residuals; $\delta b_a$ and $\delta b_w$ are the accelerometer and gyroscope bias changes; $\Delta t$ is the time interval between the two frames; $g^W$ is the gravity value in the world frame; $p^W$, $v^W$ and $q^W$ are the position, velocity and quaternion-represented attitude in the world frame; $[\cdot]_{xyz}$ is the imaginary part of a quaternion; and $\hat{\alpha}_{b_i b_j}$, $\hat{\beta}_{b_i b_j}$, $\hat{\gamma}_{b_i b_j}$ are the pre-integrated position, velocity and rotation between the two adjacent frames. The accelerometer and gyroscope biases estimated at each frame are $b_a$ and $b_w$.
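The sketch below evaluates this residual for one pair of frames, with scipy Rotation objects standing in for the quaternions; the gravity sign convention and the state/measurement containers are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])   # g^W (assumed sign convention)

def imu_preintegration_residual(s_i, s_j, meas, dt):
    """Pre-integration residual between frames i and j.

    s_i, s_j: dicts with 'p', 'v', 'ba', 'bw' as (3,) arrays and 'q' a Rotation.
    meas: pre-integrated terms 'alpha' (3,), 'beta' (3,), 'gamma' (Rotation).
    """
    R_wi = s_i['q'].inv()                # rotates world-frame vectors into b_i
    r_p = R_wi.apply(s_j['p'] - s_i['p'] - s_i['v'] * dt
                     + 0.5 * GRAVITY * dt**2) - meas['alpha']
    r_v = R_wi.apply(s_j['v'] - s_i['v'] + GRAVITY * dt) - meas['beta']
    q_err = meas['gamma'].inv() * s_i['q'].inv() * s_j['q']
    r_q = 2.0 * q_err.as_quat()[:3]      # quaternion imaginary part [x, y, z]
    return np.concatenate([r_p, r_v, r_q,
                           s_j['ba'] - s_i['ba'], s_j['bw'] - s_i['bw']])

# Usage: a stationary carrier whose measurements match gravity gives ~zero
s = dict(p=np.zeros(3), v=np.zeros(3), q=R.identity(),
         ba=np.zeros(3), bw=np.zeros(3))
dt = 0.1
meas = dict(alpha=0.5 * GRAVITY * dt**2, beta=GRAVITY * dt, gamma=R.identity())
print(imu_preintegration_residual(s, s, meas, dt))
```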

$R_g(x)$ denotes the RTK absolute position residual constraint, computed as:

$$R_g(x) = p_{rtk}^W - p_k^W \qquad (10)$$

where $p_{rtk}^W$ is the position measured by RTK and $p_k^W$ is the estimated position of frame $k$. The weight of the RTK factor in the optimization problem varies with its covariance.
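A sketch of the covariance-based weighting: the residual of Eq. (10) is whitened by the square root of the information matrix, the usual factor-graph convention, so that uncertain RTK fixes contribute less to the cost; the numbers below are placeholders.

```python
import numpy as np

def weighted_rtk_residual(p_est, p_rtk, cov):
    """Whiten the RTK position residual by its measurement covariance."""
    sqrt_info = np.linalg.cholesky(np.linalg.inv(cov)).T   # information square root
    return sqrt_info @ (p_rtk - p_est)

# Usage: the same 10 cm error costs far less under a 1 m sigma (occluded sky)
# than under a 1 cm sigma (open-sky fixed solution)
p_est, p_rtk = np.zeros(3), np.array([0.1, 0.0, 0.0])
print(np.linalg.norm(weighted_rtk_residual(p_est, p_rtk, np.eye(3) * 0.01**2)))
print(np.linalg.norm(weighted_rtk_residual(p_est, p_rtk, np.eye(3) * 1.0**2)))
```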

In this step, the Ceres optimization library is used to solve for the state of every frame in the sliding window according to Eq. (7). The poses of the collected point clouds are updated in real time, the LiDAR scan points taken while the carrier is moving are transformed into the world frame, and the frames are stitched in sequence into a globally consistent point cloud map.
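The stitching itself reduces to applying each optimized pose to its scan and concatenating the results; a minimal sketch with poses as 4x4 matrices and scans as (N, 3) arrays (the patent's own pipeline solves the poses with Ceres in C++):

```python
import numpy as np

def stitch_map(scans, poses):
    """Transform each scan into the world frame with its optimized pose T
    (4x4 homogeneous matrix) and concatenate into one global point cloud."""
    world_points = []
    for scan, T in zip(scans, poses):
        homo = np.hstack([scan, np.ones((len(scan), 1))])   # (N, 4) homogeneous
        world_points.append((homo @ T.T)[:, :3])            # p_w = T p_l
    return np.vstack(world_points)

# Usage: two copies of one scan, the second shifted 1 m along x by its pose
scan = np.random.default_rng(1).random((100, 3))
T0, T1 = np.eye(4), np.eye(4)
T1[0, 3] = 1.0
print(stitch_map([scan, scan], [T0, T1]).shape)   # (200, 3)
```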

The beneficial effects of the present invention include:

The proposed large-scene low-beam LiDAR-IMU-RTK positioning and mapping algorithm uses a graph optimization model to exploit the data of the three sensors together, overcoming the tendency of low-beam LiDAR positioning to diverge in large scenes, RTK signal occlusion in urban environments, and long-term cumulative IMU drift, thereby realizing a high-precision positioning and mapping SLAM system. In vehicle-mounted tests in urban environments, the method clearly improves positioning accuracy and robustness, which is of practical value for industries such as three-dimensional real-scene surveying and autonomous driving.

Brief Description of the Drawings

FIG. 1 is a flow chart of the low-beam LiDAR-IMU-RTK positioning and mapping algorithm according to the present invention;

FIG. 2 is a schematic diagram of the graph optimization model solving the positioning result in the back end;

FIG. 3 shows the LiDAR-IMU-RTK experimental equipment used to collect data and perform real-time positioning and mapping;

FIG. 4 is a schematic diagram of the experimental system's large-scene positioning and mapping during the vehicle-mounted test; the system outputs high-precision results in real time and runs stably.

Detailed Description of the Embodiments

The present invention is further explained below with reference to the accompanying drawings and specific embodiments. It should be understood that the following embodiments are intended only to illustrate the invention, not to limit its scope.

As shown in FIG. 1, this embodiment discloses a low-beam LiDAR-IMU-RTK positioning and mapping algorithm whose Steps 1 through 4 are carried out exactly as described above.


FIG. 3 shows the experimental equipment for positioning and mapping in an urban environment: a Velodyne VLP-16 LiDAR, an ADIS16488 inertial navigation unit, and a Trimble receiver. The mounting parameters between the sensors are known, and time synchronization is implemented in hardware. FIG. 4 shows the vehicle-mounted platform's positioning and mapping results in a large scene: over a 23.6 km urban vehicle test (including elevated and tree-lined sections where RTK is occluded, and large scenes with sparse, degraded point clouds), the full-route positioning accuracy (root mean square error, RMSE) is 0.24 m. The system can also use the point cloud to reconstruct the surrounding scene in real time for autonomous-driving mapping and perception.

It should be noted that the above merely illustrates the technical idea of the present invention and does not limit its scope of protection. Those of ordinary skill in the art may make improvements and refinements without departing from the principle of the invention, and all such improvements and refinements fall within the scope of the claims of the present invention.

Claims (3)

where $R_W^{b_i}$ is the attitude in rotation-matrix form, $W$ is the world coordinate system, $b_i$ and $b_j$ are the body frames of the $i$-th and $j$-th frames, $\delta\alpha$, $\delta\beta$ and $\delta\theta$ respectively represent the variations of position, velocity and angle, $\delta b_a$ and $\delta b_w$ respectively represent the bias variations of the accelerometer and gyroscope, $\Delta t$ is the time interval between the two frames, $g^W$ represents the gravity value in the world coordinate system, $p^W$, $v^W$ and $q^W$ are the position, velocity and quaternion-represented attitude in the world frame, $[\cdot]_{xyz}$ is the quaternion imaginary part, $\hat{\alpha}_{b_i b_j}$, $\hat{\beta}_{b_i b_j}$ and $\hat{\gamma}_{b_i b_j}$ respectively represent the position, velocity and angle between two adjacent frames calculated by IMU pre-integration, and the accelerometer and gyroscope biases estimated at each frame are $b_a$ and $b_w$,
CN202210784265.XA · 2022-07-05 · 2022-07-05 · Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes · Active · CN115407357B (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202210784265.XA (CN115407357B) · 2022-07-05 · 2022-07-05 · Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202210784265.XA (CN115407357B) · 2022-07-05 · 2022-07-05 · Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes

Publications (2)

Publication Number · Publication Date
CN115407357A (en) · 2022-11-29
CN115407357B (en) · 2024-05-31

Family

Family ID: 84157404

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN202210784265.XA (Active, CN115407357B) · Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes · 2022-07-05 · 2022-07-05

Country Status (1)

Country · Link
CN (1) · CN115407357B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN113551666A (en) * · 2021-07-06 · 2021-10-26 · 杭州鸿泉物联网技术股份有限公司 · Automatic driving multi-sensor fusion positioning method and device, equipment and medium
CN115561731B (en) * · 2022-12-05 · 2023-03-10 · 安徽蔚来智驾科技有限公司 · Pose optimization method, point cloud map establishment method, computer device and medium
CN116242372B (en) * · 2023-01-03 · 2024-07-19 · 东南大学 · UWB-laser radar-inertial navigation fusion positioning method under GNSS-denied environment
CN115797425B (en) * · 2023-01-19 · 2023-06-16 · 中国科学技术大学 · A laser global positioning method based on point cloud bird's-eye view and coarse-to-fine strategy
CN116295341B (en) * · 2023-03-06 · 2025-06-06 · 酷哇科技有限公司 · Laser positioning method and system based on building wall feature extraction and matching
CN116358525A (en) * · 2023-03-31 · 2023-06-30 · 三一重机有限公司 · Mapping and positioning method, system and engineering vehicle based on lidar
CN116299527B (en) * · 2023-04-06 · 2025-09-19 · 重庆邮电大学 · Indoor high-precision self-positioning method based on laser odometry
CN116592888A (en) * · 2023-05-08 · 2023-08-15 · 五八智能科技(杭州)有限公司 · Global positioning method, system, device and medium for patrol robot
CN117250623B (en) * · 2023-11-20 · 2024-02-27 · 苏州新坐标智能装备有限公司 · Positioning method, system and mobile robot fusing laser radar and complementary positioning
CN117671008B (en) * · 2023-12-14 · 2025-01-28 · 安徽蔚来智驾科技有限公司 · Position and posture estimation method, readable storage medium and intelligent device
CN117433511B (en) * · 2023-12-20 · 2024-03-12 · 绘见科技(深圳)有限公司 · Multi-sensor fusion positioning method
CN117928573B (en) * · 2024-01-18 · 2025-02-18 · 广州行深智能科技有限公司 · SLAM factor graph optimization method, device and equipment based on plane information
CN118816909B (en) * · 2024-06-25 · 2025-06-03 · 江苏大学 · Vehicle positioning method in highway tunnel environment
CN119091411B (en) * · 2024-11-07 · 2025-05-16 · 江苏大块头智驾科技有限公司 · Environment modeling method and system based on RTK and laser radar
CN119439133B (en) * · 2024-11-14 · 2025-05-27 · 江苏岚江智能科技有限公司 · A fast joint calibration method of laser radar and GPS-RTK
CN119229036B (en) * · 2024-12-03 · 2025-03-07 · 中科云谷科技有限公司 · Three-dimensional image construction method, device and storage medium
CN119573709B (en) * · 2025-02-07 · 2025-04-04 · 杭州旗晟智能科技有限公司 · Continuous-time pose estimation method based on multi-resolution hash map
CN120125628B (en) * · 2025-05-14 · 2025-08-22 · 石家庄铁道大学 · A geometric registration method and system based on BIM model and AR real scene
CN120252748B (en) * · 2025-06-09 · 2025-09-16 · 中国电子科技集团公司第五十四研究所 · Underground large-space positioning method based on map/laser/inertial tight coupling

Citations (5)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN108303710A (en) * · 2018-06-12 · 2018-07-20 · 江苏中科院智能科学技术应用研究院 · Multi-scene UAV positioning and mapping method based on three-dimensional laser radar
CN110009739A (en) * · 2019-01-29 · 2019-07-12 · 浙江省北大信息技术高等研究院 · Extraction and coding method of motion features of the digital retina of mobile cameras
CN113781582A (en) * · 2021-09-18 · 2021-12-10 · 四川大学 · Simultaneous localization and map creation method based on joint calibration of laser radar and inertial navigation
CN114136311A (en) * · 2021-11-08 · 2022-03-04 · 上海应用技术大学 · Laser SLAM positioning method based on IMU pre-integration
CN114659514A (en) * · 2022-03-21 · 2022-06-24 · 东南大学 · LiDAR-IMU-GNSS fusion positioning method based on voxelized fine registration

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN113945206B (en) * · 2020-07-16 · 2024-04-19 · 北京图森未来科技有限公司 · Positioning method and device based on multi-sensor fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vehicle heading angle estimation based on line detection and digital map matching; 彭雅慧 (Peng Yahui) et al.; Journal of Electronic Measurement and Instrumentation; 2022-03-31; Vol. 36, No. 3; pp. 194-199. *

Also Published As

Publication number · Publication date
CN115407357A (en) · 2022-11-29

Similar Documents

Publication · Publication Date · Title
CN115407357B (en) · 2024-05-31 · Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes
CN113837277B (en) · A multi-source fusion SLAM system based on visual point-line feature optimization
CN112484725B (en) · Intelligent automobile high-precision positioning and space-time situation safety method based on multi-sensor fusion
CN113706626B (en) · Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
Lin et al. · Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV
CN114019552B (en) · A method for optimizing location reliability based on Bayesian multi-sensor error constraints
CN114526745A (en) · Mapping method and system for tightly coupled laser radar and inertial odometry
CN111929699A (en) · Laser radar inertial navigation odometry considering dynamic obstacles and mapping method and system
CN109166149A (en) · A positioning and three-dimensional wire-frame reconstruction method and system fusing a binocular camera and an IMU
CN115479598A (en) · Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN113933818A (en) · Method, device, storage medium and program product for calibration of external parameters of lidar
Wen et al. · TM3Loc: Tightly-coupled monocular map matching for high precision vehicle localization
CN112556719A (en) · Visual inertial odometry implementation method based on CNN-EKF
CN117419719A (en) · IMU-fused three-dimensional laser radar positioning and mapping method
CN117968660B (en) · Ground constraint multi-sensor fusion positioning and mapping method
CN115218889A (en) · Multi-sensor indoor positioning method based on point-line feature fusion
CN117606495A (en) · A multi-sensor fusion vehicle positioning method for complex and challenging scenarios
CN115291227A (en) · Indoor and outdoor seamless positioning and 3D mapping method and system
CN117128970A (en) · Multi-sensor fusion automatic driving pose estimation method
CN114463374A (en) · Visual SLAM method and system based on panoramic annular camera
CN119687919B (en) · Laser radar point cloud positioning method based on feature point enhancement in degraded environments
CN117421384A (en) · Visual-inertial SLAM sliding window optimization method based on co-view projection matching
CN119756338B (en) · An underwater visual-inertial-acoustic odometry positioning method integrating DVL
CN116468786B (en) · Semantic SLAM method based on point-line combination oriented to dynamic environments
Li et al. · Feature assessment and enhanced vertical constraint lidar odometry and mapping on quadruped robot

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
