CN114964236B - Map building and vehicle positioning system and method for underground parking garage environment - Google Patents

Map building and vehicle positioning system and method for underground parking garage environment

Info

Publication number
CN114964236B
Authority
CN
China
Prior art keywords
module
vehicle
global
semantic
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210580720.4A
Other languages
Chinese (zh)
Other versions
CN114964236A (en)
Inventor
许洋
王宽
康轶非
任凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202210580720.4A
Publication of CN114964236A
Application granted
Publication of CN114964236B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses a mapping and vehicle positioning system and method for an underground parking garage environment. A front-view camera in the system acquires an original image of the area in front of the vehicle; the inverse perspective transformation module produces a top view and inputs it to the semantic feature detection module; the semantic feature detection module obtains the semantically segmented top view, extracts semantic features, and inputs them to the mapping module and the positioning module; the odometer module obtains the vehicle pose; the mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose to obtain a global semantic map; the positioning module obtains the global coordinate points of the vehicle's current semantic features and matches them against the global semantic map to obtain the vehicle's current semantic positioning result. The invention enables long-term stable positioning of the vehicle in an underground parking environment, copes with environmental changes more robustly, and has a low cost.

Description

Translated from Chinese

A mapping and vehicle positioning system and method for an underground parking garage environment

Technical Field

The present invention relates to the technical field of vehicle positioning, and in particular to a mapping and vehicle positioning system and method for an underground parking garage environment.

Background Art

Automatic parking is a specific application in the field of autonomous driving. In this task, vehicles often need to navigate autonomously in narrow, crowded, poorly lit parking garages without GPS signal, so accurate positioning of the vehicle is crucial. Over the past decade a large number of positioning solutions have emerged, including vision-based, visual-inertial and lidar-based approaches. To save cost, much of this research has focused on visual positioning. Traditional visual positioning schemes mostly use geometric features of the environment such as sparse points, line segments or planes; among them, corner points are widely used in visual odometry algorithms. The general pipeline of these schemes is to estimate map point positions through feature point matching, build a map, and estimate the camera pose based on this map.

In recent years, mapping and positioning schemes based on ORB features have attracted much attention in academia and industry. For example, the invention patent application with publication number CN113808203A discloses a navigation and positioning method based on the LK optical flow method and ORB-SLAM2. The method adds a GPU-based LK optical flow algorithm in front of ORB-SLAM2 and uses the number of feature points tracked by optical flow as the criterion for deciding whether the current frame is a keyframe; non-keyframes do not enter the three ORB-SLAM2 threads, which avoids feature extraction and subsequent computation for those frames, thereby speeding up the ORB-SLAM2 tracking thread and improving real-time performance without harming robustness, making it suitable for positioning and navigation of self-driving cars and AGV logistics vehicles. However, this kind of positioning method is easily affected by changes in illumination, viewpoint and environmental appearance, and cannot provide effective localization over long periods. Underground parking garages in particular pose a huge challenge to traditional visual positioning schemes such as ORB-SLAM. On the one hand, an underground garage consists mainly of walls, floors and pillars; this weakly textured structure makes feature point detection and matching very unstable, so localization is prone to tracking loss. On the other hand, the garage environment changes as different vehicles enter and leave at different times, making long-term vehicle relocalization almost impossible for traditional visual positioning schemes.

Summary of the Invention

In view of the above deficiencies of the prior art, the technical problem to be solved by the present invention is how to provide a mapping and vehicle positioning system and method for underground parking garage environments that achieves long-term stable positioning of the vehicle, copes with environmental changes more robustly, and has a low cost.

To solve the above technical problem, the present invention adopts the following technical solution:

A mapping and vehicle positioning system for an underground parking garage environment, comprising a front-view camera, an inverse perspective transformation module, a semantic feature detection module, an odometer module, a mapping module and a positioning module;

The front-view camera is used to acquire the original image of the area in front of the vehicle and input it to the inverse perspective transformation module;

The inverse perspective transformation module is used to apply an inverse perspective transformation to the original image from the front-view camera to obtain a top view and input it to the semantic feature detection module;

The semantic feature detection module is used to obtain the semantically segmented top view through a convolutional neural network, extract semantic features and input them to the mapping module and the positioning module;

The odometer module is used to obtain the vehicle pose and input it to the mapping module and the positioning module;

The mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a global semantic map;

The positioning module is used to obtain the global coordinate points of the vehicle's current semantic features and match them against the global semantic map to obtain the vehicle's current semantic positioning result.

Preferably, the odometer module comprises one inertial measurement unit and two wheel speed sensors;

The mapping module comprises a local map module and a global map module;

The local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and input it to the global map module;

The global map module is used to apply loop closure detection and global optimization to the local maps to generate the global semantic map.

A mapping and vehicle positioning method for an underground parking garage environment, using the above mapping and vehicle positioning system; the mapping and vehicle positioning method comprises a mapping method and a vehicle positioning method;

The mapping method comprises the following steps:

Step A1) the front-view camera acquires an original image of the area in front of the vehicle and inputs it to the inverse perspective transformation module;

Step A2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module;

Step A3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the mapping module;

Step A4) the odometer module obtains the vehicle pose and inputs it to the mapping module;

Step A5) the mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a global semantic map;

Step A6) the mapping is completed;

The vehicle positioning method comprises the following steps:

Step S1) the front-view camera acquires an original image of the area in front of the vehicle's current position and inputs it to the inverse perspective transformation module;

Step S2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module;

Step S3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the positioning module;

Step S4) the odometer module obtains the vehicle's current pose and inputs it to the positioning module;

Step S5) the positioning module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the current vehicle pose provided by the odometer module to obtain the global coordinate points of the vehicle's current semantic features;

Step S6) the global coordinate points obtained in step S5) are matched against the global semantic map obtained by the mapping method to obtain the vehicle's current semantic positioning result, completing the vehicle positioning.

Preferably, the formula for the inverse perspective transformation is:

where: π_c(·) is the front-view camera projection model, [R_c t_c] is the extrinsic matrix from the front-view camera coordinate system to the vehicle body coordinate system, [u v] are the pixel coordinates of a semantic feature, [x_v y_v] are the coordinates of the semantic feature in the vehicle body coordinate system, and λ is a scale factor.
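In a form consistent with these definitions, the relation can be sketched as follows (assuming the feature lies on the ground plane, i.e. z_v = 0 in the vehicle body frame, and that [R_c t_c] maps camera coordinates into the vehicle body frame; the exact arrangement used in the patent may differ):

\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \pi_c\!\left( R_c^{\top} \left( \begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix} - t_c \right) \right)

Inverting this relation, i.e. intersecting each back-projected pixel ray with the ground plane, is what produces the top-view (inverse perspective) image.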

Preferably, the convolutional neural network of the semantic feature detection module is trained for classification using a set of parking garage images collected by the front-view camera; the classification categories of the convolutional neural network include lane lines, stop lines, guide lines, speed bumps, passable areas, obstacles and walls; and the semantic features obtained by the semantic feature detection module through the convolutional neural network include lane lines, stop lines, guide lines and speed bumps.
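As an illustration of how the map-worthy classes can be pulled out of the segmentation output, the following sketch (with hypothetical numeric class IDs, which the text does not specify) collects the pixel coordinates of each feature class from a per-pixel label mask:

import numpy as np

# Hypothetical class IDs; the patent only names the categories.
LANE_LINE, STOP_LINE, GUIDE_LINE, SPEED_BUMP = 1, 2, 3, 4
FEATURE_CLASSES = (LANE_LINE, STOP_LINE, GUIDE_LINE, SPEED_BUMP)

def extract_semantic_features(seg_mask: np.ndarray) -> dict:
    """Collect [u, v] pixel coordinates of the map-worthy semantic classes.

    seg_mask: HxW array of per-pixel class IDs predicted by the segmentation
    network (e.g. a U-Net) on the inverse-perspective top view.
    """
    features = {}
    for cls in FEATURE_CLASSES:
        v, u = np.nonzero(seg_mask == cls)        # rows are v, columns are u
        features[cls] = np.stack([u, v], axis=1)  # N x 2 pixel coordinates
    return features

These pixel coordinates are what the inverse perspective relation above converts into vehicle-frame ground coordinates.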

Preferably, the formula for projecting a semantic feature from the vehicle body coordinate system to the global coordinate system is:

where: [x_w y_w z_w] are the coordinates of the semantic feature in the global coordinate system, R_o is the rotation matrix from the vehicle body coordinate system to the global coordinate system, and t_o is the translation vector from the vehicle body coordinate system to the global coordinate system.
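Consistent with these definitions, the projection presumably takes the standard rigid-body form below (a sketch, assuming the feature's height in the vehicle body frame is zero):

\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = R_o \begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix} + t_o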

Preferably, the odometer module comprises one inertial measurement unit and two wheel speed sensors;

The mapping module comprises a local map module and a global map module;

The local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and input it to the global map module;

The global map module is used to apply loop closure detection and global optimization to the local maps to generate the global semantic map;

In step A5), the local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and inputs it to the global map module; the global map module then applies loop closure detection and global optimization to the local maps to generate the global semantic map.

Preferably, in step A5), the method by which the global map module applies loop closure detection to the local maps is: use a data registration method to match the current local map with previously generated local maps; if the matching result satisfies a set threshold, a loop closure is deemed to have occurred, and the computed relative pose is used for pose graph optimization to eliminate the accumulated error.
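The data registration step can be sketched as a plain point-to-point ICP between the 2D semantic point sets of two local maps (a minimal illustration; the patent does not fix a particular registration variant or threshold):

import numpy as np

def icp_2d(src, dst, iters=30, tol=1e-6):
    """Align Nx2 source points to Mx2 destination points.

    Returns (R, t, err): 2x2 rotation, translation and final matching RMSE,
    such that src @ R.T + t approximately overlays dst.
    """
    R, t = np.eye(2), np.zeros(2)
    err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours (adequate for local-map-sized sets)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        new_err = np.sqrt(d2[np.arange(len(src)), nn].mean())
        converged = abs(err - new_err) < tol
        err = new_err
        if converged:
            break
        # closed-form rigid alignment (Kabsch/SVD) of the matched pairs
        p, q = moved, dst[nn]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = q.mean(0) - R_step @ p.mean(0)
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, err

A loop closure would then be declared when the final error falls below the set threshold, and the recovered relative pose feeds the pose graph optimization.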

Preferably, in step A5), the global optimization applied to the local maps by the global map module includes constraints from the odometer measurements between two consecutive local map frames, as well as constraints between loop frames from the relative poses obtained by the data registration method during loop closure detection.

Preferably, in step A5), the global map module globally optimizes the local maps using the Gauss-Newton method, with the objective function:

where: χ is the set of poses, T_{t,t+1} is the estimated relative pose of the front-view camera between frame t and frame t+1, T̃_{t,t+1} is the corresponding relative pose measurement obtained from the odometer module, L is the set of loop frame pairs, T̃_{i,j} is the relative pose between frame i and frame j obtained by the data registration method and used as a measurement, and T_{i,j} is the estimated relative pose between frame i and frame j, which carries accumulated error.
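Written out, a typical pose-graph objective consistent with these terms is the following sketch (assuming standard relative-pose residuals between consecutive frames and between loop pairs, with unweighted squared norms):

\min_{\chi} \; \sum_{t} \left\| \log\!\left( \tilde{T}_{t,t+1}^{-1}\, T_{t,t+1} \right)^{\vee} \right\|^{2} + \sum_{(i,j)\in L} \left\| \log\!\left( \tilde{T}_{i,j}^{-1}\, T_{i,j} \right)^{\vee} \right\|^{2}

where the tilde marks measured relative poses and the plain T the estimates being optimized.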

Compared with the prior art, the present invention has the following advantages:

1. Unlike the geometric features used by traditional visual positioning schemes, the present invention uses semantic features of the environment, which for a parking garage mainly include lane lines, guide lines, stop lines and speed bumps. Compared with geometric features, semantic features persist stably over long periods and remain robust to changes in illumination, viewpoint and environment. These semantic features are detected by a convolutional neural network and used to build a global semantic map, which is then used to localize the vehicle. The scheme therefore copes with environmental changes more robustly than traditional positioning schemes and remains stable and accurate over long periods of use.

2. The sensors used by the present invention are only one front-view camera, one IMU (inertial measurement unit) and two wheel speed sensors; the IMU and wheel speed sensors form the odometer module, which provides the vehicle's relative pose during mapping and positioning. The cost is therefore very low and mass-produced vehicles can easily be equipped.

3. The framework of the present invention contains two main parts, mapping and positioning. Mapping builds the global semantic map of the parking garage: the front-view camera acquires the original image of the area in front of the vehicle; an inverse perspective transformation produces a top view; the top view is fed into the convolutional neural network to obtain a semantically segmented image, yielding semantic features such as lane lines, stop lines, guide lines and speed bumps; the semantic features are then projected into the global coordinate system using the pose provided by the odometer module. Because the odometer drifts, the invention also uses loop closure detection and global optimization to eliminate the accumulated error, and finally the feature points are saved to form the global semantic map of the parking garage. After the global semantic map has been generated, a vehicle entering the garage obtains the global coordinate points of its semantic features through image acquisition by the front-view camera, inverse perspective transformation, semantic feature detection and the pose provided by the odometer module, and then matches them against the built global semantic map with the ICP algorithm (a data registration method) to correct the vehicle pose, finally obtaining an accurate vehicle positioning result.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a system block diagram of the mapping and vehicle positioning system for an underground parking garage environment according to the present invention.

DETAILED DESCRIPTION

The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

As shown in FIG. 1, a mapping and vehicle positioning system for an underground parking garage environment comprises a front-view camera, an inverse perspective transformation module, a semantic feature detection module, an odometer module, a mapping module and a positioning module;

The front-view camera acquires the original image of the area in front of the vehicle and inputs it to the inverse perspective transformation module;

The inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module;

The semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the mapping module and the positioning module;

The odometer module obtains the vehicle pose and inputs it to the mapping module and the positioning module;

The mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a global semantic map;

The positioning module obtains the global coordinate points of the vehicle's current semantic features and matches them against the global semantic map to obtain the vehicle's current semantic positioning result.

In this embodiment, the odometer module comprises one inertial measurement unit and two wheel speed sensors;

The mapping module comprises a local map module and a global map module;

The local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and inputs it to the global map module;

The global map module applies loop closure detection and global optimization to the local maps to generate the global semantic map.

A mapping and vehicle positioning method for an underground parking garage environment uses the above mapping and vehicle positioning system; the mapping and vehicle positioning method comprises a mapping method and a vehicle positioning method;

The mapping method comprises the following steps:

Step A1) the front-view camera acquires an original image of the area in front of the vehicle and inputs it to the inverse perspective transformation module;

Step A2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module;

Step A3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the mapping module;

Step A4) the odometer module obtains the vehicle pose and inputs it to the mapping module;

Step A5) the mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a global semantic map;

Step A6) the mapping is completed;

The vehicle positioning method comprises the following steps:

Step S1) the front-view camera acquires an original image of the area in front of the vehicle's current position and inputs it to the inverse perspective transformation module;

Step S2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module;

Step S3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the positioning module;

Step S4) the odometer module obtains the vehicle's current pose and inputs it to the positioning module;

Step S5) the positioning module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the current vehicle pose provided by the odometer module to obtain the global coordinate points of the vehicle's current semantic features;

Step S6) the global coordinate points obtained in step S5) are matched against the global semantic map obtained by the mapping method to obtain the vehicle's current semantic positioning result, completing the vehicle positioning.

In this embodiment, the intrinsic and extrinsic parameters of the front-view camera have been calibrated offline, and the original image acquired by the front-view camera is projected onto the ground through the inverse perspective transformation, whose formula is:

where: π_c(·) is the front-view camera projection model, [R_c t_c] is the extrinsic matrix from the front-view camera coordinate system to the vehicle body coordinate system, [u v] are the pixel coordinates of a semantic feature, [x_v y_v] are the coordinates of the semantic feature in the vehicle body coordinate system, and λ is a scale factor.

In this embodiment, the semantic feature detection module uses the convolutional neural network U-Net for semantic segmentation; the network is trained for classification using a set of parking garage images collected by the front-view camera; the classification categories include lane lines, stop lines, guide lines, speed bumps, passable areas, obstacles and walls; and the semantic features obtained through the network include lane lines, stop lines, guide lines and speed bumps, because these semantic features are highly recognizable and stable and therefore well suited to the mapping and positioning of the present invention.

In this embodiment, using the pose provided by the odometer module, the semantic features are projected from the vehicle body coordinate system to the global coordinate system by the following formula, and the global coordinate points of the semantic features are saved to generate a local map with a range of 30 m; the formula for projecting a semantic feature from the vehicle body coordinate system to the global coordinate system is:

where: [x_w y_w z_w] are the coordinates of the semantic feature in the global coordinate system, R_o is the rotation matrix from the vehicle body coordinate system to the global coordinate system, and t_o is the translation vector from the vehicle body coordinate system to the global coordinate system.
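As a concrete illustration of this step, the sketch below applies a planar pose (x, y, yaw) from the odometer to a set of vehicle-frame feature points (a minimal example, i.e. a 2D specialization of R_o and t_o; the 30 m local-map bookkeeping and any height handling are omitted):

import numpy as np

def project_to_global(points_v, x, y, yaw):
    """Transform Nx2 vehicle-frame points [x_v, y_v] into global [x_w, y_w].

    (x, y, yaw) is the planar vehicle pose in the global frame.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R_o = np.array([[c, -s],
                    [s,  c]])   # planar rotation from vehicle to global frame
    t_o = np.array([x, y])      # planar translation of the vehicle origin
    return points_v @ R_o.T + t_o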

In this embodiment, in step A5), the local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and inputs it to the global map module; the global map module then applies loop closure detection and global optimization to the local maps to generate the global semantic map.

In this embodiment, in step A5), the method by which the global map module applies loop closure detection to the local maps is: use a data registration method to match the current local map with previously generated local maps; if the matching result satisfies a set threshold, a loop closure is deemed to have occurred, and the computed relative pose is used for pose graph optimization to eliminate the accumulated error.

In this embodiment, in step A5), the global optimization applied to the local maps by the global map module includes constraints from the odometer measurements between two consecutive local map frames, as well as constraints between loop frames from the relative poses obtained by the data registration method during loop closure detection.

In this embodiment, in step A5), the global map module globally optimizes the local maps using the Gauss-Newton method, with the objective function:

where: χ is the set of poses, T_{t,t+1} is the estimated relative pose of the front-view camera between frame t and frame t+1, T̃_{t,t+1} is the corresponding relative pose measurement obtained from the odometer module, L is the set of loop frame pairs, T̃_{i,j} is the relative pose between frame i and frame j obtained by the data registration method and used as a measurement, and T_{i,j} is the estimated relative pose between frame i and frame j, which carries accumulated error.

Compared with the prior art, the present invention differs from traditional visual positioning schemes, which use geometric features of the environment, in that it uses semantic features of the environment, which for a parking garage mainly include lane lines, guide lines, stop lines and speed bumps. Compared with geometric features, semantic features persist stably over long periods and remain robust to changes in illumination, viewpoint and environment. These semantic features are detected by a convolutional neural network and used to build a global semantic map, which is then used to localize the vehicle; the scheme therefore copes with environmental changes more robustly than traditional positioning schemes and remains stable and accurate over long periods of use. The sensors used by the present invention are only one front-view camera, one IMU (inertial measurement unit) and two wheel speed sensors; the IMU and wheel speed sensors form the odometer module, which provides the vehicle's relative pose during mapping and positioning, so the cost is very low and mass-produced vehicles can easily be equipped. The framework of the present invention contains two main parts, mapping and positioning. Mapping builds the global semantic map of the parking garage: the front-view camera acquires the original image of the area in front of the vehicle; an inverse perspective transformation produces a top view; the top view is fed into the convolutional neural network to obtain a semantically segmented image, yielding semantic features such as lane lines, stop lines, guide lines and speed bumps; the semantic features are then projected into the global coordinate system using the pose provided by the odometer module. Because the odometer drifts, the invention also uses loop closure detection and global optimization to eliminate the accumulated error, and finally the feature points are saved to form the global semantic map of the parking garage. After the global semantic map has been generated, a vehicle entering the garage obtains the global coordinate points of its semantic features through image acquisition by the front-view camera, inverse perspective transformation, semantic feature detection and the pose provided by the odometer module, and then matches them against the built global semantic map with the ICP algorithm (a data registration method) to correct the vehicle pose, finally obtaining an accurate vehicle positioning result.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention rather than to limit it. Those skilled in the art should understand that modifications or equivalent substitutions of the technical solution of the present invention that do not depart from its spirit and scope shall all be covered by the scope of the claims of the present invention.

Claims (9)

Translated from Chinese

1. A mapping and vehicle positioning system for an underground parking garage environment, characterized by comprising a front-view camera, an inverse perspective transformation module, a semantic feature detection module, an odometer module, a mapping module and a positioning module; the front-view camera is used to acquire the original image of the area in front of the vehicle and input it to the inverse perspective transformation module; the inverse perspective transformation module is used to apply an inverse perspective transformation to the original image from the front-view camera to obtain a top view and input it to the semantic feature detection module; the semantic feature detection module is used to obtain the semantically segmented top view through a convolutional neural network, extract semantic features and input them to the mapping module and the positioning module; the odometer module is used to obtain the vehicle pose and input it to the mapping module and the positioning module; the mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a global semantic map; the positioning module is used to obtain the global coordinate points of the vehicle's current semantic features and match them against the global semantic map to obtain the vehicle's current semantic positioning result; the mapping module comprises a local map module and a global map module; the global map module globally optimizes the local maps using the Gauss-Newton method, with an objective function in which χ is the set of poses, T_{t,t+1} is the estimated relative pose of the front-view camera between frame t and frame t+1, T̃_{t,t+1} is the corresponding relative pose measurement obtained from the odometer module, L is the set of loop frame pairs, T̃_{i,j} is the relative pose between frame i and frame j obtained by the data registration method and used as a measurement, and T_{i,j} is the estimated relative pose between frame i and frame j, which carries accumulated error.

2. The mapping and vehicle positioning system for an underground parking garage environment according to claim 1, characterized in that the odometer module comprises one inertial measurement unit and two wheel speed sensors; the local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and inputs it to the global map module; and the global map module is used to apply loop closure detection and global optimization to the local maps to generate the global semantic map.

3. A mapping and vehicle positioning method for an underground parking garage environment, characterized in that the mapping and vehicle positioning system according to claim 1 is used; the mapping and vehicle positioning method comprises a mapping method and a vehicle positioning method; the mapping method comprises the following steps: Step A1) the front-view camera acquires an original image of the area in front of the vehicle and inputs it to the inverse perspective transformation module; Step A2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module; Step A3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the mapping module; Step A4) the odometer module obtains the vehicle pose and inputs it to the mapping module; Step A5) the mapping module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a global semantic map; Step A6) the mapping is completed; the vehicle positioning method comprises the following steps: Step S1) the front-view camera acquires an original image of the area in front of the vehicle's current position and inputs it to the inverse perspective transformation module; Step S2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera to obtain a top view and inputs it to the semantic feature detection module; Step S3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network, extracts semantic features and inputs them to the positioning module; Step S4) the odometer module obtains the vehicle's current pose and inputs it to the positioning module; Step S5) the positioning module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the current vehicle pose provided by the odometer module to obtain the global coordinate points of the vehicle's current semantic features; Step S6) the global coordinate points obtained in step S5) are matched against the global semantic map obtained by the mapping method to obtain the vehicle's current semantic positioning result, completing the vehicle positioning.

4. The mapping and vehicle positioning method for an underground parking garage environment according to claim 3, characterized in that the inverse perspective transformation is given by a formula in which π_c(·) is the front-view camera projection model, [R_c t_c] is the extrinsic matrix from the front-view camera coordinate system to the vehicle body coordinate system, [u v] are the pixel coordinates of a semantic feature, [x_v y_v] are the coordinates of the semantic feature in the vehicle body coordinate system, and λ is a scale factor.

5. The mapping and vehicle positioning method for an underground parking garage environment according to claim 3, characterized in that the convolutional neural network of the semantic feature detection module is trained for classification using a set of parking garage images collected by the front-view camera, the classification categories of the convolutional neural network include lane lines, stop lines, guide lines, speed bumps, passable areas, obstacles and walls, and the semantic features obtained by the semantic feature detection module through the convolutional neural network include lane lines, stop lines, guide lines and speed bumps.

6. The mapping and vehicle positioning method for an underground parking garage environment according to claim 3, characterized in that a semantic feature is projected from the vehicle body coordinate system to the global coordinate system by a formula in which [x_w y_w z_w] are the coordinates of the semantic feature in the global coordinate system, R_o is the rotation matrix from the vehicle body coordinate system to the global coordinate system, and t_o is the translation vector from the vehicle body coordinate system to the global coordinate system.

7. The mapping and vehicle positioning method for an underground parking garage environment according to claim 6, characterized in that the odometer module comprises one inertial measurement unit and two wheel speed sensors; the mapping module comprises a local map module and a global map module; the local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and inputs it to the global map module; the global map module is used to apply loop closure detection and global optimization to the local maps to generate the global semantic map; and in step A5), the local map module projects the semantic features from the vehicle body coordinate system to the global coordinate system based on the vehicle pose provided by the odometer module to obtain a local map and inputs it to the global map module, and the global map module then applies loop closure detection and global optimization to the local maps to generate the global semantic map.

8. The mapping and vehicle positioning method for an underground parking garage environment according to claim 7, characterized in that in step A5), the method by which the global map module applies loop closure detection to the local maps is: use a data registration method to match the current local map with previously generated local maps; if the matching result satisfies a set threshold, a loop closure is deemed to have occurred, and the computed relative pose is used for pose graph optimization to eliminate the accumulated error.

9. The mapping and vehicle positioning method for an underground parking garage environment according to claim 8, characterized in that in step A5), the global optimization applied to the local maps by the global map module includes constraints from the odometer measurements between two consecutive local map frames, as well as constraints between loop frames from the relative poses obtained by the data registration method during loop closure detection.
CN202210580720.4A | Priority date: 2022-05-25 | Filing date: 2022-05-25 | Map building and vehicle positioning system and method for underground parking garage environment | Active | Granted as CN114964236B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210580720.4A (granted as CN114964236B) | 2022-05-25 | 2022-05-25 | Map building and vehicle positioning system and method for underground parking garage environment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210580720.4A (granted as CN114964236B) | 2022-05-25 | 2022-05-25 | Map building and vehicle positioning system and method for underground parking garage environment

Publications (2)

Publication Number | Publication Date
CN114964236A (en) | 2022-08-30
CN114964236B | 2024-10-29

Family

ID=82956434

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202210580720.4A | Active | CN114964236B (en) | 2022-05-25 | 2022-05-25 | Map building and vehicle positioning system and method for underground parking garage environment

Country Status (1)

Country | Link
CN (1) | CN114964236B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115456898A (en)* | 2022-09-09 | 2022-12-09 | 雄狮汽车科技(南京)有限公司 | Method and device for building image of parking lot, vehicle and storage medium
CN115546313A (en)* | 2022-09-30 | 2022-12-30 | 重庆长安汽车股份有限公司 | Vehicle-mounted camera self-calibration method, device, electronic equipment and storage medium
CN115752476B (en)* | 2022-11-29 | 2024-06-18 | 重庆长安汽车股份有限公司 | Vehicle ground library repositioning method, device, equipment and medium based on semantic information
CN116295457B (en)* | 2022-12-21 | 2024-05-24 | 辉羲智能科技(上海)有限公司 | Vehicle vision positioning method and system based on two-dimensional semantic map
CN116052127B (en)* | 2023-02-01 | 2025-07-15 | 东软睿驰汽车技术(上海)有限公司 | Method, device and electronic device for constructing parking lot semantic map
CN116817887B (en)* | 2023-06-28 | 2024-03-08 | 哈尔滨师范大学 | Semantic visual SLAM map construction method, electronic equipment and storage medium
CN116817892B (en)* | 2023-08-28 | 2023-12-19 | 之江实验室 | Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107610175A (en)* | 2017-08-04 | 2018-01-19 | 华南理工大学 | The monocular vision SLAM algorithms optimized based on semi-direct method and sliding window
CN107869989A (en)* | 2017-11-06 | 2018-04-03 | 东北大学 | A positioning method and system based on visual inertial navigation information fusion

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110375738B (en)* | 2019-06-21 | 2023-03-14 | 西安电子科技大学 | Monocular synchronous positioning and mapping attitude calculation method fused with inertial measurement unit
CN111780754B (en)* | 2020-06-23 | 2022-05-20 | 南京航空航天大学 | Visual-inertial odometry pose estimation method based on sparse direct method
CN112304307B (en)* | 2020-09-15 | 2024-09-06 | 浙江大华技术股份有限公司 | Positioning method and device based on multi-sensor fusion and storage medium
CN113763466B (en)* | 2020-10-10 | 2024-06-14 | 北京京东乾石科技有限公司 | A loop detection method, device, electronic device and storage medium
CN113624223B (en)* | 2021-07-30 | 2024-05-24 | 中汽创智科技有限公司 | Indoor parking lot map construction method and device
CN113903011B (en)* | 2021-10-26 | 2024-06-11 | 江苏大学 | Semantic map construction and positioning method suitable for indoor parking lot


Also Published As

Publication number | Publication date
CN114964236A (en) | 2022-08-30

Similar Documents

Publication | Title
CN114964236B (en) | Map building and vehicle positioning system and method for underground parking garage environment
CN113903011B (en) | Semantic map construction and positioning method suitable for indoor parking lot
CN114184200B (en) | A Multi-source Fusion Navigation Method Combined with Dynamic Mapping
CN108802785A (en) | Vehicle method for self-locating based on High-precision Vector map and monocular vision sensor
CN112734841B (en) | Method for realizing positioning by using wheel type odometer-IMU and monocular camera
CN113657224B (en) | Method, device and equipment for determining object state in vehicle-road coordination
CN111801711A (en) | Image annotation
CN111862673A (en) | Parking lot vehicle self-positioning and map construction method based on top view
CN113804182B (en) | Grid map creation method based on information fusion
CN105976402A (en) | Real scale obtaining method of monocular vision odometer
CN113781562B (en) | A Road Model-Based Method for Virtual and Real Registration of Lane Lines and Self-Vehicle Location
CN112339748B (en) | Method and device for correcting vehicle pose information through environment scanning in automatic parking
CN113920198B (en) | Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN114120075A (en) | A 3D Object Detection Method Fusion Monocular Camera and LiDAR
CN114325634A (en) | Method for extracting passable area in high-robustness field environment based on laser radar
CN113673462B (en) | A logistics AGV positioning method based on lane lines
CN111986261A (en) | A vehicle positioning method, device, electronic device and storage medium
CN115311349A (en) | Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof
WO2022062480A1 (en) | Positioning method and positioning apparatus of mobile device
CN111238490B (en) | Visual positioning method and device and electronic equipment
CN115294211A (en) | A method, system, device and storage medium for calibrating external parameters of vehicle camera installation
CN115371695A (en) | A synchronous positioning and mapping method for behavioral semantics assisted loop detection
CN118379349A (en) | A visual SLAM method based on 3D semantics in underground parking scenarios
Zhao et al. | L-VIWO: Visual-inertial-wheel odometry based on lane lines
CN118196205A (en) | On-line self-calibration method and system for external parameters of vehicle-mounted camera

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
