Technical Field
The invention belongs to the technical field of navigation and relates to a map element extraction method, in particular to a map element extraction method for an automatic mapping robot.
Background Art
In recent years, with the gradual deployment of autonomous vehicles in certain fields, a sharp contradiction has emerged between the enormous demand for the high-precision maps these vehicles require and the high acquisition cost and cumbersome acquisition workflow of current high-precision semantic maps, making their large-scale practical application considerably difficult. In current map acquisition, local maps can be built as lidar point cloud maps; automatic algorithms then stitch the local point cloud maps and extract contours, finally producing a global contour map.
However, the accuracy and speed with which current automated algorithms can segment and recognise the semantic information of the map elements themselves (such as type, size, and position) still fall short of manual annotation. The post-processing stage of high-precision map production therefore still relies on large amounts of manpower and time to manually annotate element semantics on the program-acquired global contour map. This incurs substantial labour and time costs and considerably hinders the larger-scale application of high-precision maps on autonomous vehicles.
Summary of the Invention
The object of the present invention is to provide a method that fully automatically completes point cloud mapping and semantic annotation to obtain a high-precision semantic map.
To achieve the above object, the solution of the present invention is as follows:
A map element extraction method for an automatic mapping robot: acquire first map element information of an environment to be mapped and construct a point cloud contour map of that environment; detect visual markers at predetermined positions in the environment and read the second map element information contained in each detected marker; annotate the second map element information onto the point cloud contour map to obtain a map containing concrete semantic information. Here, the first map element information comprises the point cloud information of the environment's location, and the second map element information comprises the concrete semantic information of the map element at the position of the corresponding visual marker.
The visual markers are two-dimensional codes.
The first map element information includes the contour, distance, and height-difference information of the map elements, but not their semantic information.
A lidar is used to acquire the first map element information of the environment to be mapped; preferably, the lidar is a multi-line lidar.
The second map element information includes encoded information on the position, type, and size of the map element at the position of the corresponding visual marker.
A vision sensor is used to detect the visual markers at the predetermined positions in the environment to be mapped; preferably, the vision sensor is a forward-looking monocular camera.
The first map element information of the environment to be mapped is acquired by lidar, and the point cloud data is preprocessed with a SLAM (simultaneous localization and mapping) algorithm to generate a scene map based on the lidar point cloud, from which a point cloud contour map containing no map element semantics is built.
The visual markers are two-dimensional codes. A forward-looking monocular camera acquires image information of the ground ahead of the automatic mapping robot. From this image information, the type of the two-dimensional code and its four corner points are recognised and extracted, and the relative position between the robot and the code is computed under the planarity assumption of the code. The second map element information corresponding to the code is read and recorded. An extended Kalman filter then optimally fuses the robot's relative position solved from the code with the relative displacement of the robot observed by inertial navigation, producing a map of semantic marker points for map elements. According to the information recorded in the codes, one or more nearby code marker points are joined into polylines, grids, or regions to form semantic contours, yielding a semantic contour map of the environment to be mapped.
Annotating the second map element information onto the point cloud contour map to obtain a map containing concrete semantic information comprises: fusing the semantic contour map with the point cloud contour map, recognising the point cloud contour map, and marking the second map element information from the semantic contour map onto the point cloud contour map to obtain a map with complete semantics.
The different encoded meanings of the visual markers are predefined in the rows of a table; incremental extension of the map element types is accommodated by adding rows to the table. After the predefined table is extended, already-acquired maps, existing visual markers, and the unextended table all remain mutually compatible.
Owing to the above scheme, the beneficial effects of the present invention are as follows. The rapid high-precision map element extraction method for an automatic mapping robot of the present invention fully automatically completes lidar point cloud mapping, generates a global contour map, and performs real-time semantic annotation on the contour map by reading artificial visual markers. The vision sensor it uses is inexpensive and easy to operate; the artificial visual markers are simple to deploy and the underlying algorithms are mature: Apriltag two-dimensional code recognition is fast, accurate, simple to implement, and reliable. Reading code information directly by vision (recognition rate close to 100%) is more accurate than vision-based deep-learning recognition (recognition rates typically 80-90%). The method requires no traditional manual annotation of individual map elements, saving substantial labour cost and annotation-software licence fees: once the codes are posted, the annotated semantic information points of the map elements are collected automatically during laser mapping, at low cost. The invention therefore obtains a high-precision map containing complete semantic information in a low-cost, convenient, and fast way; it is particularly suited to the automatic acquisition of high-precision semantic maps for autonomous vehicles and can effectively promote the large-scale application of high-precision maps on such vehicles. The method can be applied to scenarios such as automated valet parking lots.
Description of the Drawings
Fig. 1 is a flowchart of a map element extraction method for an automatic mapping robot according to an embodiment of the present invention;
Fig. 2 shows the two-dimensional code layout on a single road in this embodiment;
Fig. 3 shows the two-dimensional code layout on a branching road in this embodiment;
Fig. 4 shows the two-dimensional code layout in an open area in this embodiment;
Fig. 5 is a schematic diagram of the result of recognising and locating the corner points of a two-dimensional code in this embodiment.
Reference numerals in the drawings: 1. straight-road start-point code; 2. straight-road end-point code; 3. intersection turning-point code; 4. open-area boundary-point code; 5. open-area exit-point code; 6. open-area entry-point code.
Detailed Description
The present invention is further described below with reference to the embodiments shown in the accompanying drawings.
The present invention proposes a map element extraction method for an automatic mapping robot. Based on a vision sensor and a lidar, the method extracts high-precision map elements for the robot: it acquires the first map element information of the environment to be mapped and constructs a point cloud contour map of that environment; detects visual markers at predetermined positions in the environment and reads the second map element information contained in the detected markers; and annotates the second map element information onto the point cloud contour map to obtain a map containing concrete semantic information. The first map element information comprises the point cloud information of the environment's location; the second comprises the concrete semantic information of the map element at the position of the corresponding visual marker.
That is, the mapping process of the robot consists of two main parts, the robot's laser mapping and its automatic annotation of map element information, as follows:
Laser mapping of the automatic mapping robot: a multi-line lidar acquires information such as the contours, distances, and height differences of map elements in the surrounding local environment; combined with the robot's own integrated positioning sensor data, a high-precision point cloud map of the local environment, without map element semantics, is constructed.
Automatic annotation of map element information: on top of the laser mapping, a vision sensor captures artificial visual markers (such as two-dimensional codes) placed on target map elements or at defined positions around them; reading a marker directly yields the concrete information of that element, including its position, type, and size. Combining the previously built semantics-free local point cloud map with the annotated element information, the contours of the point cloud map elements are precisely recognised and labelled, and the information is recorded at the corresponding contour positions in the map, completing automatic annotation of map element information with concrete semantics.
Fig. 1 is a flowchart of the map element extraction method for the automatic mapping robot in this embodiment.
In this embodiment, the method is implemented as follows:
A modified electric vehicle serves as the acquisition platform. In this embodiment it is a short-wheelbase self-driving vehicle fitted with a GPS positioning module, a lidar, a forward-looking monocular camera, a trajectory planning module, a control module, and other components, enabling it to drive itself and thus constitute an automatic mapping robot. In the present invention, these components may instead be installed on other modified vehicles or other mobile platforms to form the automatic mapping robot.
The forward-looking monocular camera is mounted front-centre on the vehicle, tilted downward, with a resolution of 1080p or higher, a frame rate of 25 Hz, and a latency under 5 ms; its intrinsic parameters are calibrated with the checkerboard method. The lidar, with 32 or 64 lines and a 360-degree horizontal field of view, is mounted directly above the vehicle roof; its extrinsic and intrinsic parameters are likewise calibrated before use.
Map elements such as lane lines, walls, pillars, traffic signal lines, and obstacle boundaries are important environmental landmarks but are prone to false detection in traditional visual SLAM (simultaneous localization and mapping). Although RGB-D cameras can detect them effectively, they are costly and fail on highly reflective materials such as glass. Artificial visual markers (two-dimensional codes in this embodiment) are therefore introduced to assist detection.
The Apriltag two-dimensional code scheme is adopted; each code carries a unique, unambiguous serial number. A code encodes the following information: the marker point's serial number; the position, size, and type of the corresponding map element; and the serial numbers and ordering of the other marker points to be combined into a semantic contour. A definition table is drawn up with one row per distinct meaning, so the code itself stores only the row number of the definition table.
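As a sketch of this row-number indirection (the field names and table contents below are illustrative assumptions, not fixed by the text), resolving a detected tag to its semantics could look like:

```python
# Hypothetical sketch: a tag's payload is only a row number; all semantics
# live in a predefined table that can grow by appending rows, so unknown
# rows are reported rather than rejected (keeping old maps compatible).
DEFINITION_TABLE = {
    # row_id: (element_type, size_in_metres, linked_row_ids)
    1: ("lane_start", 0.0, (2,)),
    2: ("lane_end", 0.0, (1,)),
    3: ("open_area_corner", 0.0, (4, 5)),
}

def decode_tag(row_id):
    """Resolve a detected tag ID to its semantic definition."""
    entry = DEFINITION_TABLE.get(row_id)
    if entry is None:
        # Row defined in a newer, extended table: degrade gracefully.
        return {"row": row_id, "type": "unknown"}
    element_type, size_m, linked = entry
    return {"row": row_id, "type": element_type, "size": size_m, "links": linked}

print(decode_tag(1)["type"])  # prints "lane_start"
```

The lookup-by-row design is what makes the incremental extension described later possible: appending rows never changes the meaning of existing tag IDs.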
The codes are printed on matte waterproof paper of size A2 or larger and posted at the start points, end points, corner points, and similar positions of each map element in the scene to be mapped. The posting density is roughly one code per 10 m along straight lane lines, increased to one per 5 m at turns. On walls and pillars the posting height is about 1.7-2.5 m, so that the vehicle's forward-looking monocular camera can observe the codes without occlusion by other vehicles. Fig. 2 shows the code layout on a single road; Fig. 3, on a branching road; Fig. 4, in an open area.
The vision sensor (the forward-looking monocular camera) reads the content of a code and, from the row number read, looks up the corresponding meaning and encoding in the definition table. At the same time it computes the relative position between the mapping robot and the code itself (from the code's corner points and the planar PnP model); combined with the robot's own positioning information (RTK GPS), the position of the map element the code represents is obtained. The procedure is as follows:
The code is detected with the forward-looking monocular camera. Since its four corner points lie on one plane in the world coordinate system and the code's side length is known, the pose can be solved with the PnP (Perspective-n-Point) model.
With the PnP model, the camera extrinsic parameters can be determined from n (n > 2) pairs of 3-D point coordinates and their corresponding projected point coordinates on the 2-D image. The solution proceeds as follows:
The correspondence between a 3-D point and its 2-D image point can be expressed as:
s·p_c = K [R | T] p_w
where p_w = [x y z 1]^T is the homogeneous form of the 3-D point coordinates in the world coordinate system (x, y, z being the spatial coordinates on the three axes; the x-, y-, and z-axes are defined in the next paragraph); p_c = [u v 1]^T is the homogeneous form of the 2-D point coordinates in the image coordinate system (u and v being the two image-plane coordinates); s is the projective scale factor; and K is the camera intrinsic matrix, obtained beforehand in the calibration step. Since the extrinsics [R | T] (R and T being the rotation and translation matrices among the camera extrinsic parameters) have six degrees of freedom (three translations and three rotations), [R | T] can be solved iteratively by SVD decomposition when n > 2.
In this method, a code coordinate system is defined with its origin at the centre of the code, the x-axis horizontal to the right along the code, the y-axis vertically upward along the code, and the z-axis perpendicular to the code plane pointing inward. Since the relative positions of the four corner points are known, their coordinate values follow directly (see Fig. 5, where the length s is half the side length of the code). Thus p_w is known, p_c is determined by code detection, K comes from the calibrated camera, and n = 4 > 2, so the PnP model can be solved iteratively for [R | T].
Applying the Rodrigues transform to R_3×3 (the R matrix above, a 3-by-3 matrix in three-dimensional space) yields the angle a between the vehicle heading and the line from the forward-looking monocular camera to the code centre; taking the 2-norm of t_3×1 (the T matrix above, with 3 rows and 1 column) gives the distance d from the camera to the centre of the code marker. The coordinates of the visual marker in the automatic mapping robot's coordinate frame are then (x_tag = sin(a)·d, y_tag = cos(a)·d), where x_tag and y_tag denote the abscissa and ordinate in that frame.
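The final step from the solved extrinsics to robot-frame marker coordinates can be sketched as follows. The text applies a Rodrigues transform to R; for a rotation about the vertical axis, the equivalent yaw angle can be read directly off R, which is what this numpy-only sketch does:

```python
import numpy as np

def tag_position_in_robot_frame(R, t):
    """Convert solved extrinsics [R | t] into 2-D tag coordinates.

    Following the text: the rotation yields the angle a between the
    camera-to-tag line and the vehicle heading, and ||t||_2 is the
    distance d to the tag centre; the tag then sits at
    (sin(a)*d, cos(a)*d) in the robot frame.
    """
    a = np.arctan2(R[1, 0], R[0, 0])  # yaw angle extracted from R
    d = np.linalg.norm(t)             # camera-to-tag distance
    return np.sin(a) * d, np.cos(a) * d
```

For example, an identity rotation with t = (0, 0, 5) places the tag 5 m straight ahead, i.e. at (0, 5) in the robot frame.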
A CSV file is generated for each map element type, and rows are appended in observation order: the semantic information read from the definition table, the code's position coordinates, and a sequence number.
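A minimal sketch of this per-type CSV logging (the column order and precision are assumptions; the text does not fix a file layout):

```python
import csv
import os
import tempfile

def append_observation(path, seq, semantic, x, y):
    """Append one observed marker row in observation order:
    sequence number, semantic label, position coordinates."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([seq, semantic, f"{x:.3f}", f"{y:.3f}"])

# Demonstrate on a temporary file standing in for one element type's CSV.
path = os.path.join(tempfile.mkdtemp(), "lane_markers.csv")
append_observation(path, 1, "lane_start", 12.503, 4.210)
append_observation(path, 2, "lane_end", 22.018, 4.195)
```

One file per element type keeps each map layer independently readable at load time.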
In practice, to make the connection relations between code marker points explicit, an additional "edge" marker is defined for each map element type, in the same way as above; it is posted on the straight line between the vertex code markers to mark their connection, and is collected by the same procedure and stored in a separate CSV file.
The resulting set of files is the required vector high-precision map: it contains, for every map element type (traffic lights, lanes, stop lines, open areas, etc.), the position of each vertex or start point (or other feature point), together with the positions of the "edge" markers that indicate how these points are connected.
At load time, the system automatically reads all CSV files and generates marker vertices on the map from the coordinates of the element points; it then generates marker edges from the coordinates of the element edge points. If an edge point lies on the straight line between two vertices, a straight-line connection is created, and the connected vertices and edges are combined into a map element object. If the connection between vertices must be a curve, it can be fitted with several straight segments when the curvature is small; large-curvature cases still require subsequent manual annotation.
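The load-time assembly could be sketched like this (the two-column edge file and all field names are illustrative assumptions, not specified by the text):

```python
import csv
import os
import tempfile
from collections import defaultdict

def load_map(vertex_file, edge_file):
    """Rebuild map element objects at load time: read vertex rows into
    points, then record which vertex pairs an 'edge' marker connects."""
    vertices = {}
    with open(vertex_file) as f:
        for seq, semantic, x, y in csv.reader(f):
            vertices[int(seq)] = {"type": semantic, "pos": (float(x), float(y))}
    edges = defaultdict(list)
    with open(edge_file) as f:
        for seq_a, seq_b in csv.reader(f):
            edges[int(seq_a)].append(int(seq_b))
    return vertices, edges

# Tiny demonstration: two vertices joined by one edge marker record.
d = tempfile.mkdtemp()
vf, ef = os.path.join(d, "v.csv"), os.path.join(d, "e.csv")
with open(vf, "w", newline="") as f:
    csv.writer(f).writerows([[1, "lane_start", 0.0, 0.0],
                             [2, "lane_end", 10.0, 0.0]])
with open(ef, "w", newline="") as f:
    csv.writer(f).writerow([1, 2])
verts, conn = load_map(vf, ef)
```

Curve fitting with multiple straight segments would happen after this step, on the connected vertex chains.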
Because dead reckoning based on inertial navigation accumulates error, the accumulated error must be corrected by repeatedly observing the code markers or parking-slot positions. During lidar mapping, whenever an artificial visual marker is recognised, the accumulated dead-reckoning error is largely eliminated, making loop-closure detection unnecessary.
In this method an extended Kalman filter is used: from the observed relative positions (straight lines) between marker points, each observation of a new code reduces the error of the position estimate for the newly observed code marker. A graph optimization method then iteratively refines the already-obtained marker points to minimise the error, yielding the final marker position results. Specifically:
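As a toy illustration of the filtering idea (a full EKF over the joint robot-and-landmark state is beyond this sketch), each repeated observation of a marker shrinks the variance of its position estimate via a linear Kalman measurement update with an identity measurement model:

```python
import numpy as np

def kalman_update(mean, cov, z, meas_cov):
    """One linear Kalman measurement update for a 2-D landmark position:
    fuse prior (mean, cov) with a direct observation z of that position."""
    K = cov @ np.linalg.inv(cov + meas_cov)   # Kalman gain, H = I
    new_mean = mean + K @ (z - mean)
    new_cov = (np.eye(2) - K) @ cov
    return new_mean, new_cov

# Prior guess of a tag position with variance 4; one observation with variance 1.
mean, cov = np.array([1.0, 2.0]), np.eye(2) * 4.0
mean, cov = kalman_update(mean, cov, np.array([1.2, 1.8]), np.eye(2))
# mean -> [1.16, 1.84]; per-axis variance shrinks to 4*1/(4+1) = 0.8
```

Each further observation would shrink the variance again, which is the sense in which new detections reduce the error of the marker position estimate.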
While the lidar builds the semantics-free point cloud map, whenever an artificial visual feature marker such as a two-dimensional code is detected, the map element position, type, and other information contained in the marker itself is recognised and read according to the predefined rules; by locating the marker's corner points, the vehicle's own position is computed, achieving precise localisation within the map. The semantic information read from the marker is simultaneously written to the reading position on the map being built, completing automatic information acquisition and annotation of semantic marker points at specific map positions.
From the acquired marker point map, the semantic marker points can be connected and combined according to the information recorded in the codes to form semantic contours, including the corresponding curves or shapes. In this method, the vertices are the positions of the semantic marker points, i.e. the positions of the codes, and the edges result from the connection information encoded in the codes. Each time a new observation arrives or the vehicle moves, new vertices and edges are added to the graph, and the semantic contours formed so far are corrected by the extended Kalman filter algorithm. After all data have been added, the edge error equations are set up from the edge-vertex relations of the graph and optimised iteratively with the Gauss-Newton method.
The graph model can be described by the following error function:

F(x) = Σ_{k∈C} e_k(x_k, z_k)^T · Ω_k · e_k(x_k, z_k),    x* = argmin_x F(x)

where:
x denotes the stacked position states of all nodes, the argument of the error function;
k denotes any single node (a visual marker or the vehicle) in the graph model;
C denotes the set of all nodes in the graph model;
x_k denotes the position state of the k-th node: for a visual marker node, the marker's position coordinates; for the vehicle node, the vehicle's position coordinates and heading;
z_k denotes the position of the k-th node obtained from the observations associated with that node;
Ω_k denotes the information matrix derived from the covariance of the observations associated with the k-th node;
e_k(x_k, z_k) denotes the error function between x_k and z_k.
F(x) is thus the error function of the whole graph model, and x* is its global optimum. To find the global optimum x*, the above expression is solved iteratively with the Gauss-Newton method.
A first-order Taylor expansion of F(x) converts the problem into solving a linear system; the resulting new global optimum is substituted back into F(x) as the initial value for the next iteration. Combined with the extended Kalman filter update, this yields the vehicle's current position incrementally and updates the positions of the visual markers in the map in real time.
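A compact Gauss-Newton loop for residuals of this form (a generic unweighted least-squares sketch with a numerically differentiated Jacobian; the information matrices Ω_k would enter as residual weights):

```python
import numpy as np

def gauss_newton(residual, x0, iters=10, eps=1e-6):
    """Minimise ||residual(x)||^2 by repeated linearisation:
    at each step solve the linearised system J*dx = -r and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        # Numerical Jacobian of the residual at x, column by column.
        J = np.zeros((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Toy graph: estimate one landmark position from two noisy observations.
obs = np.array([[1.0, 2.0], [1.2, 1.8]])

def res(x):
    return (obs - x).ravel()

x_opt = gauss_newton(res, np.zeros(2))
# converges to the mean of the two observations, [1.1, 1.9]
```

In the full problem, x would stack all vehicle and marker states and the residuals would come from the graph's edges, but the iterate-linearise-solve structure is the same.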
A multi-line lidar collects point cloud information of the environmental obstacles around the vehicle; through the extended Kalman filter and a graph-matching algorithm, point cloud contour mapping and real-time stitching of the local maps are completed online, finally producing a global scene map composed of point clouds. The map consists entirely of point cloud information and is stored as such.
According to the position, size, type, and other information of the offline-optimised semantic contour map, the semantic contour mapping results for the map elements are marked onto the semantics-free point cloud contour map produced by the lidar, generating a high-precision map with map element information.
In the present invention, semantic information points for multiple different types of map elements can be collected automatically through the content of the row of the predefined table that a QR code's number designates, for example QR code markers predefined for lanes, open areas, stop lines, and traffic lights; collection is not limited to the recognition of parking slots. The collected map element types are therefore not restricted to the single specific type of parking slot, but comprise the multiple types (or a single type) of map elements defined in the predefined table. Because the different encoded meanings of the visual markers are predefined in the rows of a table, the set of map element types can be expanded incrementally simply by adding rows: appending row content in order to the bottom of the predefined table defines new QR code types, after which the newly added element types can be collected. Moreover, after the predefined table is expanded, maps already collected, visual markers already deployed, and the unexpanded table all remain mutually compatible: their contents can still be parsed with the new predefined table when the map is used, without re-collection under the expanded table or any other extra work.
Therefore, the types of map elements to be collected are not fixed. By adding definitions for new kinds of element points to the predefined table, semantic information for new and different kinds of map elements can be collected as required. In use, it is only necessary to ensure that the predefined table relied on at collection time is identical to the table relied on when the map is used, or is a proper subset of it.
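The subset-compatibility rule described above can be illustrated with a small sketch. The table contents and the helper `is_compatible` are assumptions for illustration only; the patent does not specify the table's concrete encoding:

```python
# Illustrative predefined table: QR-code number -> map element type.
# New element types are added only by appending rows, never by changing
# existing ones, so an older table is always a subset of a newer one.

TABLE_V1 = {
    1: "parking_slot",
    2: "lane",
    3: "stop_line",
}

# Incremental extension: append new rows to the bottom of the table.
TABLE_V2 = {**TABLE_V1, 4: "traffic_light", 5: "open_area"}

def is_compatible(collect_table: dict, use_table: dict) -> bool:
    """A map is usable when the table relied on at collection time equals,
    or is a subset of, the table relied on at map-consumption time."""
    return all(use_table.get(code) == kind for code, kind in collect_table.items())

# Maps collected under the old table remain readable with the new table,
# but not the other way around.
assert is_compatible(TABLE_V1, TABLE_V2)
assert not is_compatible(TABLE_V2, TABLE_V1)
```

This is why already-collected maps and already-deployed markers need no rework when the table grows: every code they use still resolves to the same row.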
The above description of the embodiments is provided so that those of ordinary skill in the art can understand and apply the present invention. Those skilled in the art can evidently make various modifications to these embodiments and apply the general principles described herein to other embodiments without inventive effort. Therefore, the present invention is not limited to the embodiments herein; improvements and modifications made by those skilled in the art according to the disclosure of the present invention, without departing from its scope, shall fall within the protection scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811329208.2A | 2018-11-09 | 2018-11-09 | A kind of map elements extracting method of automatic Jian Tu robot |
| Publication Number | Publication Date |
|---|---|
| CN109556617A (en) | 2019-04-02 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811329208.2A (Pending) | A kind of map elements extracting method of automatic Jian Tu robot | 2018-11-09 | 2018-11-09 |
| Country | Link |
|---|---|
| CN (1) | CN109556617A (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110057373A (en)* | 2019-04-22 | 2019-07-26 | 上海蔚来汽车有限公司 | For generating the method, apparatus and computer storage medium of fine semanteme map |
| CN110220517A (en)* | 2019-07-08 | 2019-09-10 | 紫光云技术有限公司 | A kind of Indoor Robot robust slam method of the combining environmental meaning of one's words |
| CN110440811A (en)* | 2019-08-29 | 2019-11-12 | 湖北三江航天红峰控制有限公司 | A kind of universal automatic navigation control method, device and equipment terminal |
| CN110555801A (en)* | 2019-07-26 | 2019-12-10 | 纵目科技(上海)股份有限公司 | Correction method, terminal and storage medium for track deduction |
| CN110736465A (en)* | 2019-11-15 | 2020-01-31 | 北京云迹科技有限公司 | Navigation method, navigation device, robot and computer readable storage medium |
| CN110861082A (en)* | 2019-10-14 | 2020-03-06 | 北京云迹科技有限公司 | Auxiliary mapping method and device, mapping robot and storage medium |
| CN111256689A (en)* | 2020-01-15 | 2020-06-09 | 北京智华机器人科技有限公司 | Robot positioning method, robot and storage medium |
| CN111551185A (en)* | 2020-06-12 | 2020-08-18 | 弗徕威智能机器人科技(上海)有限公司 | Method for adding traffic lane |
| CN111652248A (en)* | 2020-06-02 | 2020-09-11 | 上海岭先机器人科技股份有限公司 | Positioning method and device for flexible cloth |
| CN111723173A (en)* | 2020-06-15 | 2020-09-29 | 中国第一汽车股份有限公司 | Vehicle-mounted map making method and device, electronic equipment and storage medium |
| CN111765892A (en)* | 2020-05-12 | 2020-10-13 | 驭势科技(北京)有限公司 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
| CN112149471A (en)* | 2019-06-28 | 2020-12-29 | 北京初速度科技有限公司 | Loopback detection method and device based on semantic point cloud |
| CN112365606A (en)* | 2020-11-05 | 2021-02-12 | 日立楼宇技术(广州)有限公司 | Method and device for marking device position, computer device and storage medium |
| CN112581533A (en)* | 2020-12-16 | 2021-03-30 | 百度在线网络技术(北京)有限公司 | Positioning method, positioning device, electronic equipment and storage medium |
| WO2021129345A1 (en)* | 2019-12-27 | 2021-07-01 | 炬星科技(深圳)有限公司 | Scene map building method, device, and storage medium |
| CN113375682A (en)* | 2021-06-09 | 2021-09-10 | 深圳朗道智通科技有限公司 | System and method for automatically marking real-time high-precision map through data fusion |
| CN113535868A (en)* | 2021-06-11 | 2021-10-22 | 上海追势科技有限公司 | A method for generating high-precision maps for autonomous parking based on public navigation maps |
| WO2022068781A1 (en)* | 2020-09-29 | 2022-04-07 | 炬星科技(深圳)有限公司 | Guided mapping method and device, and computer-readable storage medium |
| CN114581617A (en)* | 2022-02-24 | 2022-06-03 | 中航华东光电(上海)有限公司 | Multi-floor map construction method and system integrating Z-axis data through two-dimensional codes |
| CN115381354A (en)* | 2022-07-28 | 2022-11-25 | 广州宝乐软件科技有限公司 | Obstacle avoidance method and obstacle avoidance device for cleaning robot, storage medium and equipment |
| CN115655262A (en)* | 2022-12-26 | 2023-01-31 | 广东省科学院智能制造研究所 | Deep learning perception-based multi-level semantic map construction method and device |
| TWI836366B (en)* | 2022-03-04 | 2024-03-21 | 歐特明電子股份有限公司 | Automatic parking mapping system mounted on vehicle |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104330090A (en)* | 2014-10-23 | 2015-02-04 | 北京化工大学 | Robot distributed type representation intelligent semantic map establishment method |
| CN105678476A (en)* | 2016-03-01 | 2016-06-15 | 浙江大学 | Video-based intelligent guidance system and guidance method for self-study room |
| CN107180215A (en)* | 2017-05-31 | 2017-09-19 | 同济大学 | Figure and high-precision locating method are built in parking lot based on warehouse compartment and Quick Response Code automatically |
| CN107449427A (en)* | 2017-07-27 | 2017-12-08 | 京东方科技集团股份有限公司 | A kind of method and apparatus for generating navigation map |
| CN108303101A (en)* | 2018-03-05 | 2018-07-20 | 弗徕威智能机器人科技(上海)有限公司 | A kind of construction method of navigation map |
| Publication | Publication Date | Title |
|---|---|---|
| CN109556617A (en) | A kind of map elements extracting method of automatic Jian Tu robot | |
| CN111238494B (en) | Carrier, carrier positioning system and carrier positioning method | |
| CN107180215B (en) | Automatic mapping and high-precision positioning method of parking lot based on storage location and two-dimensional code | |
| CN113985429B (en) | Unmanned aerial vehicle environment scanning and reconstructing method based on three-dimensional laser radar | |
| CN110146910B (en) | Positioning method and device based on data fusion of GPS and laser radar | |
| US11288526B2 (en) | Method of collecting road sign information using mobile mapping system | |
| Agrawal et al. | Localization and mapping for autonomous navigation in outdoor terrains: A stereo vision approach | |
| CN109556616A (en) | A kind of automatic Jian Tu robot of view-based access control model label builds figure dressing method | |
| CN114076956A (en) | A lane line calibration method based on lidar point cloud assistance | |
| CN112667837A (en) | Automatic image data labeling method and device | |
| CN111928862A (en) | Method for constructing semantic map on line by fusing laser radar and visual sensor | |
| Nehme et al. | Lidar-based structure tracking for agricultural robots: Application to autonomous navigation in vineyards | |
| CN110361027A (en) | Robot path planning method based on single line laser radar Yu binocular camera data fusion | |
| CN110084272A (en) | A kind of cluster map creating method and based on cluster map and the matched method for relocating of location expression | |
| CN106980657A (en) | A kind of track level electronic map construction method based on information fusion | |
| Wen et al. | Tm 3 loc: Tightly-coupled monocular map matching for high precision vehicle localization | |
| Jeong et al. | Hdmi-loc: Exploiting high definition map image for precise localization via bitwise particle filter | |
| CN113052903A (en) | Vision and radar fusion positioning method for mobile robot | |
| CN114509065B (en) | Map construction method, system, vehicle terminal, server and storage medium | |
| CN113358125A (en) | Navigation method and system based on environmental target detection and environmental target map | |
| KR102677731B1 (en) | System and Methods for improving the localization and mapping accuracy of mono-cameras using GNSS/INS and HD Map | |
| CN118365812A (en) | Map construction method and device | |
| CN116142172A (en) | A parking method and device based on a voxel coordinate system | |
| Hu et al. | Accurate global trajectory alignment using poles and road markings | |
| Wu et al. | A stepwise minimum spanning tree matching method for registering vehicle-borne and backpack LiDAR point clouds |
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2019-04-02 |