





Technical Field

The present invention relates to the technical field of computer vision and image processing, and in particular to a road detection method based on multi-scale fusion of image and lidar data.
Background

Road area detection is an important basis for intelligent vehicle driving, ensuring that the vehicle understands and grasps the road environment. A human driver identifies the drivable area ahead and judges whether the road in front of the vehicle is flat and whether there are obstacle areas such as pedestrians, vehicles, or potholes, so as to adjust the vehicle's speed and attitude and control the vehicle to act accordingly. Likewise, in an intelligent vehicle perception system, road area detection provides the most important basis for the vehicle's further control strategy. Once the vehicle has detected the extent of the road ahead, it can perform local path planning and decision-making control according to its relative position on the road; for example, when the vehicle drifts within the road area, the drift can be corrected according to the offset, and the vehicle pose can be corrected according to the yaw angle.

The patent with publication number CN108985247A introduces a multi-spectral image road recognition method that exploits road features in spectral images and performs road segmentation with superpixels and structure tensors. This method considers neither the three-dimensional characteristics of the road nor the spatial characteristics of road obstacles.

The patent with publication number CN107808140A introduces a monocular road recognition algorithm based on image fusion, which segments the drivable road by fusing the original image with an illumination-invariant image. Likewise, this method ignores the characteristics of roads in the three-dimensional scene and does not consider the geometric information of objects.

The patent with publication number CN108052933A introduces a road recognition method based on a convolutional neural network, which segments the road area of the image with a classic fully convolutional network. However, because it uses only raw image data, it cannot handle road segmentation in complex scenes and is sensitive to shadows and lighting.

Whether they are the existing image processing methods above or deep-learning-based semantic segmentation methods, approaches that rely solely on visual information are easily affected by outdoor lighting changes and road shadow occlusion, and they cannot effectively exploit the depth information of obstacles in the three-dimensional environment, leading to false detections and missed detections. For example, road edges, isolation belts, and sidewalks whose color and texture resemble the road may be included in the road detection result, causing false detections; and shadowed road areas may be misclassified because of building shadows and lighting changes, causing missed detections.
Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art and to provide a road detection method based on multi-scale fusion of image and lidar data that addresses the defects of existing methods.

The purpose of the present invention is achieved through the following technical solution. A road detection method based on multi-scale fusion of image and lidar data comprises the following steps:

projecting the lidar data onto the corresponding image plane through a coordinate system transformation matrix, with spatial synchronization between the coordinate systems;

selecting the three channels of the lidar projection point cloud and converting them into a dense lidar projection image;

feeding the two kinds of data simultaneously into two identical but independent encoders for multi-scale dense fusion;

feeding the multi-scale dense fusion result into the image-lidar fusion road detection network to obtain a pixel-level lane classification result.
Before the detection method is carried out, the image-lidar fusion road detection network must be trained.

Training the image-lidar fusion road detection network comprises the following steps:

projecting the lidar data onto the corresponding image plane with spatial synchronization between the coordinate systems, its three channels containing depth information in the X, Y, and Z directions respectively;

taking the three channels of the lidar projection point cloud, densely upsampling each of them by dilation filtering, normalizing them together with the image data to the interval [0, 1], and resizing both the lidar projection image and the original image to a resolution of 1248×384;

initializing the neural network parameters with a uniform distribution over [0, 1], feeding the two inputs into the network with a batch size of 1, applying a cross-entropy loss to the constructed network, and updating the model parameters with the Adam optimizer.
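For concreteness, the pixel-wise cross-entropy loss takes the standard form (here with $C = 2$ classes, road and non-road, as implied by the binary road segmentation task):

$$\mathcal{L}_{\mathrm{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

where $N$ is the number of pixels, $y_{i,c}$ is the one-hot ground-truth label, and $p_{i,c}$ is the predicted softmax probability that pixel $i$ belongs to class $c$.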
The three channels contain depth information in the X, Y, and Z directions respectively.

The three channels of the lidar projection point cloud are densely upsampled by dilation filtering and converted into a dense lidar projection image.

The two kinds of data comprise image features and lidar features. The lidar projection data constrain the image features at different scales through the LCR fusion module, which uses a 1×1 convolution to fuse the global feature representation of the lidar projection and to integrate the number of output channels.

The beneficial effects of the present invention are as follows: the road detection method based on multi-scale fusion of image and lidar data applies multi-scale feature constraints from the lidar features while retaining the image features, making the network more robust and reducing the probability of overfitting; the LCR fusion module makes better use of the lidar features, fusing the global lidar information and integrating the number of output channels; and the constructed image-lidar fusion road detection network is more robust than typical single-modality approaches and is less affected by shadows and lighting.
Description of Drawings

Figure 1 is a flow chart of the method of the present invention;

Figure 2 is a diagram of the multi-scale fusion structure;

Figure 3 is a diagram of the LCR fusion module;

Figure 4 is a diagram of the base network structure;

Figure 5 is a diagram of the image-lidar fusion road detection network;

Figure 6 shows experimental results of the present invention.
Detailed Description

To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations.

Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item has been defined in one figure, it need not be further defined or explained in subsequent figures.

In the description of the present invention, it should be noted that orientation or positional terms such as "upper", "inner", and "outer" indicate orientations or positional relationships based on those shown in the accompanying drawings, or those in which the product of the invention is customarily placed in use. They are used only for convenience and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore, they shall not be construed as limiting the present invention.

It should also be noted that, unless otherwise expressly specified and limited, the terms "arranged", "installed", and "connected" should be understood broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or an internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific circumstances.

The technical solution of the present invention is described in further detail below with reference to the accompanying drawings, but the protection scope of the present invention is not limited to the following.
As shown in Figure 1, a road detection method based on multi-scale fusion of image and lidar data comprises the following steps.

The lidar data is projected onto the corresponding image plane through a coordinate system transformation matrix, with spatial synchronization between the coordinate systems.

Further, the coordinate system transformation matrix is as follows:
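A standard form of this camera-lidar projection (a sketch assuming a pinhole camera model with intrinsic matrix $K$ and lidar-to-camera extrinsics $[R \mid t]$; the actual calibration values depend on the sensor setup and are not reproduced here) is

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

where $(X, Y, Z)$ is a point in the lidar frame, $(u, v)$ is its pixel position on the image plane, and $s$ is the projective scale factor.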
As shown in Figure 2, the three channels of the lidar projection point cloud are densely upsampled by dilation filtering and converted into a dense lidar projection image. They are normalized to [0, 1] together with the image data, and both the lidar projection image and the original image are resized to 1248×384.

Further, applied to the discrete, sparse lidar point cloud projection, dilation filtering increases the point density and produces a denser projection, so that object contours become clearer and boundaries smoother; this helps the neural network learn features and improves the mean IoU (MIoU) of road segmentation.
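A minimal preprocessing sketch of this projection-and-densification step (Python with NumPy and OpenCV; the 5×5 dilation kernel and the per-channel min-max normalization are illustrative assumptions, since they are not fixed above):

```python
import cv2
import numpy as np

def lidar_to_projection_image(points, K_Rt, out_hw=(384, 1248)):
    """Project lidar points (N, 3) into a 3-channel (X, Y, Z) image,
    densify each channel by dilation, and normalize to [0, 1].
    K_Rt is the 3x4 projection matrix K [R | t] from the previous step."""
    h, w = out_hw
    proj = np.zeros((h, w, 3), np.float32)
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    uvw = pts_h @ K_Rt.T                                     # (N, 3)
    z = uvw[:, 2]
    valid = z > 0                                            # in front of the camera
    uv = np.zeros((len(points), 2), int)
    uv[valid] = (uvw[valid, :2] / z[valid, None]).astype(int)
    valid &= (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    proj[uv[valid, 1], uv[valid, 0]] = points[valid]         # sparse X, Y, Z image
    kernel = np.ones((5, 5), np.uint8)                       # assumed kernel size
    for c in range(3):
        # Dilation filtering: fill the gaps between the sparse projected points
        proj[:, :, c] = cv2.dilate(proj[:, :, c], kernel)
        lo, hi = proj[:, :, c].min(), proj[:, :, c].max()    # min-max normalize
        if hi > lo:
            proj[:, :, c] = (proj[:, :, c] - lo) / (hi - lo)
    return proj
```

The RGB image would likewise be normalized to [0, 1] and resized to the same 1248×384 resolution before both inputs are fed to the network.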
The two kinds of data are fed simultaneously into two identical but independent encoders for multi-scale dense fusion.

The multi-scale dense fusion result is fed into the image-lidar fusion road detection network to obtain a pixel-level lane classification result.

Before the detection method is carried out, the image-lidar fusion road detection network must be trained.

Training the image-lidar fusion road detection network comprises the following steps.

Building on the UNet encoder, the model passes the visual image and the lidar projection image through two encoder branches, whose inputs are the three-channel (R, G, B) and (X, Y, Z) data respectively. In each downsampling layer, the lidar projection feature map is used to constrain the image features; finally, the fused data is restored to the original image size and channel count through a symmetric deconvolution structure.
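A minimal PyTorch sketch of this two-branch layout (the layer widths and the three encoder stages are illustrative assumptions, and the per-scale 1×1-conv fusion is a simplified stand-in for the LCR module described below):

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    """Two 3x3 conv + ReLU layers, one UNet-style encoder stage."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TwoStreamFusionNet(nn.Module):
    """Two identical, independent encoders (RGB and XYZ inputs); the lidar
    features constrain the image features at every scale, and a symmetric
    deconvolution decoder restores the input resolution."""
    def __init__(self, widths=(32, 64, 128), n_classes=2):
        super().__init__()
        cins = (3,) + widths[:-1]
        self.img_enc = nn.ModuleList(conv_block(i, o) for i, o in zip(cins, widths))
        self.lid_enc = nn.ModuleList(conv_block(i, o) for i, o in zip(cins, widths))
        # Per-scale fusion by 1x1 conv (simplified stand-in for the LCR module)
        self.fuse = nn.ModuleList(nn.Conv2d(2 * w, w, 1) for w in widths)
        self.pool = nn.MaxPool2d(2)
        douts = widths[::-1][1:] + (widths[0],)
        self.decoder = nn.ModuleList(
            nn.Sequential(nn.ConvTranspose2d(cin, cout, 2, stride=2),
                          nn.ReLU(inplace=True))
            for cin, cout in zip(widths[::-1], douts))
        self.head = nn.Conv2d(widths[0], n_classes, 1)

    def forward(self, rgb, xyz):
        fi, fl = rgb, xyz
        for img_stage, lid_stage, fuse in zip(self.img_enc, self.lid_enc, self.fuse):
            fi, fl = img_stage(fi), lid_stage(fl)
            fi = fuse(torch.cat([fi, fl], dim=1))   # lidar constrains image features
            fi, fl = self.pool(fi), self.pool(fl)
        x = fi
        for up in self.decoder:
            x = up(x)                               # symmetric deconvolution
        return self.head(x)                         # pixel-level class scores
```

With 1248×384 inputs, the three pooling stages reduce the feature maps to 156×48, and the three transposed convolutions restore the original resolution.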
Training steps:

1) Project the lidar data onto the corresponding image plane with spatial synchronization between the coordinate systems; its three channels contain depth information in the X, Y, and Z directions respectively.

2) Take the three channels of the lidar projection point cloud, densely upsample each of them by dilation filtering, normalize them together with the image data to the interval [0, 1], and resize both the lidar projection image and the original image to a resolution of 1248×384.

3) Initialize the neural network parameters with a uniform distribution over [0, 1], feed the two inputs into the network with a batch size of 1, apply a cross-entropy loss to the constructed network, and update the model parameters with the Adam optimizer.
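A training-loop sketch of step 3 in PyTorch (the dummy loader is a stand-in for the real image/lidar dataset, the learning rate is taken from the embodiment below, and `TwoStreamFusionNet` is the illustrative network sketched earlier):

```python
import torch
import torch.nn as nn

model = TwoStreamFusionNet()               # two-branch network sketched above

# Uniform initialization of the parameters over [0, 1], as stated in step 3
for p in model.parameters():
    nn.init.uniform_(p, 0.0, 1.0)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy one-sample loader standing in for the real image/lidar dataset
loader = [(torch.rand(1, 3, 384, 1248),            # RGB image, normalized
           torch.rand(1, 3, 384, 1248),            # dense XYZ lidar projection
           torch.randint(0, 2, (1, 384, 1248)))]   # per-pixel road labels

for rgb, xyz, label in loader:             # batch_size = 1
    optimizer.zero_grad()
    logits = model(rgb, xyz)               # (1, 2, 384, 1248)
    loss = criterion(logits, label)        # pixel-wise cross entropy
    loss.backward()
    optimizer.step()
```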
The three channels contain depth information in the X, Y, and Z directions respectively.

As shown in Figure 3, the two kinds of data comprise image features and lidar features; the lidar projection data constrain the image features at different scales through the LCR fusion module, which uses a 1×1 convolution to fuse the global feature representation of the lidar projection and to integrate the number of output channels.

Further, the LCR fusion module uses a 1×1 convolution to integrate the number of output feature channels and to fuse the global lidar information representation. Compared with a 3×3 convolution, this greatly reduces the number of parameters and lowers the risk of overfitting; the 1×1 convolution can be understood as a per-pixel fully connected layer over channels, used to express global information.
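A sketch of this fusion idea (the concatenate-then-1×1-conv layout is an assumption, as the exact internal wiring of the LCR module is specified only in Figure 3):

```python
import torch
import torch.nn as nn

class LCRFusion(nn.Module):
    """Sketch of LCR fusion: a 1x1 convolution fuses the lidar feature map
    into the image features and sets the output channel count."""
    def __init__(self, img_ch, lidar_ch, out_ch):
        super().__init__()
        # A 1x1 conv acts as a per-pixel fully connected layer over channels,
        # with far fewer parameters than a 3x3 conv of the same width
        self.fuse = nn.Conv2d(img_ch + lidar_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, img_feat, lidar_feat):
        return self.act(self.fuse(torch.cat([img_feat, lidar_feat], dim=1)))

# Example: constrain 64-channel image features with 64-channel lidar features
out = LCRFusion(64, 64, 64)(torch.rand(1, 64, 96, 312), torch.rand(1, 64, 96, 312))
```

To make the parameter saving concrete: for 128 input and 64 output channels, a 1×1 convolution has 128·64 + 64 = 8,256 parameters, versus 128·64·9 + 64 = 73,792 for a 3×3 convolution.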
As shown in Figures 4 and 5, a fully convolutional road segmentation model is built according to the multi-scale dense fusion scheme: the three downsampling layers of the FCN construct the fused information, and three deconvolution layers then restore the original image size, yielding a pixel-level lane classification. The neural network parameters are initialized with a uniform distribution over [0, 1], the two inputs are fed into the network with a batch size of 1, a cross-entropy loss is applied to the constructed network, and the model parameters are updated with the Adam optimizer.

Further, the LCR module fuses the two modalities in the different scale spaces of the network, exploiting their features more fully and coupling the two data sources more tightly. Moreover, the two modalities do not affect each other's parameters during encoder feature extraction; the two kinds of features are distinct yet mutually reinforcing, producing an effect similar to regularization that pushes the feature learning of both branches toward stronger generalization. The network input size is 1248×384, with three channels per input. Adam is used for optimization with an initial learning rate of 0.0001 and a batch size of 1; the learning-rate schedule reduces the learning rate by a factor of 0.5 whenever the validation loss fails to decrease for 4 consecutive validations. The network weights are randomly initialized.
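This schedule maps naturally onto PyTorch's ReduceLROnPlateau (an assumed stand-in, since only the rule itself is stated here; `model` is the network from the earlier sketch):

```python
import torch

# Adam with initial lr 1e-4; halve the lr when the validation loss has not
# decreased for 4 consecutive validations
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=4)

# ... then, after each validation pass:
# scheduler.step(val_loss)
```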
As shown in Figure 6, the results obtained after processing images with the present invention are less affected by shadows and lighting. Thanks to the three-dimensional object information contained in the lidar data, sidewalk areas with color and texture similar to the road, as well as road areas affected by shadows and lighting, are subject to additional constraints, and the segmentation is more robust than with image data alone. Compared with other fusion schemes, such as early fusion and late fusion, the multi-scale fusion presented here is more thorough, which makes the segmentation superior.

The above is only an embodiment of the present invention and does not thereby limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.