


Technical Field
The present application relates to the field of autonomous driving, and in particular to a method and apparatus for testing autonomous driving perception results.
Background
With the rapid development of the autonomous driving field, perception schemes are being updated and iterated quickly; they fall mainly into vision-based, lidar-based, and vision-lidar fusion schemes. Judging from current mass-produced vehicle models with driver-assistance or autonomous driving functions, vision-based perception remains the mainstream solution.
During image algorithm development, and when running automated image algorithm tests on the vehicle's embedded platform, quantitatively evaluating the perception results produced by the algorithm is a prerequisite for subsequent iterative optimization. At present this evaluation relies mainly on humans, for example scoring by one or more experts; this approach not only incurs high labor and material costs but also cannot guarantee accuracy.
No effective solution to the above problems has yet been proposed.
Summary of the Invention
Embodiments of the present application provide a method and apparatus for testing autonomous driving perception results, so as to at least solve the technical problem of low algorithm-testing efficiency in the related art.
According to one aspect of the embodiments of the present application, a method for testing autonomous driving perception results is provided, including: obtaining a test sample carrying a sample label, where the test sample is used to simulate a trip image collected by a smart car while driving, and the sample label identifies the objects in the test sample; using an image algorithm to recognize the objects in the test sample to obtain a recognition result, where the image algorithm is the algorithm used on the smart car; and determining a test result of the image algorithm according to the recognition result and the sample label.
Optionally, obtaining a test sample carrying a sample label includes: generating the test sample from a point cloud collected by a lidar and an original image collected by a camera; and recognizing the objects in the test sample to generate the sample label of the test sample.
Optionally, generating the test sample from the point cloud collected by the lidar and the original image collected by the camera includes: obtaining the point cloud collected by the lidar and the original image collected by the camera at the same moment; and projecting the points in the point cloud onto the pixels at the corresponding positions in the original image to obtain the test sample.
Optionally, obtaining the point cloud collected by the lidar and the original image collected by the camera at the same moment includes: when the lidar collects a frame of point cloud, triggering the camera in a pulse-triggered manner to collect a frame of original image, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are synchronized in the time domain.
Optionally, projecting the points in the point cloud onto the pixels at the corresponding positions in the original image to obtain the test sample includes: using an extrinsic calibration algorithm to unify the coordinate system of the lidar into the coordinate system of the camera, and then establishing a mapping between the points in the point cloud and the pixels in the original image according to the camera intrinsics, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are spatially synchronized; and using the mapping to project each point in the point cloud onto the pixel at the corresponding position in the original image to obtain the test sample.
Optionally, recognizing the objects in the test sample to generate the sample label of the test sample includes: after inputting the test sample into a 3D object detection network, feeding the perception result output by the 3D object detection network into a multi-object tracking algorithm to obtain a fused perception result, where the perception result represents the objects recognized by the 3D object detection network from the test sample and the fused perception result represents the objects recognized by the multi-object tracking algorithm from the test sample; and generating the sample label of the test sample according to the fused perception result.
Optionally, generating the sample label of the test sample according to the fused perception result includes: displaying the test sample and the fused perception result in a GUI tool; when a confirmation operation is detected, taking the fused perception result as the sample label of the test sample; and when an operation correcting the fused perception result is detected, taking the corrected content as the sample label of the test sample.
According to another aspect of the embodiments of the present application, an apparatus for testing autonomous driving perception results is also provided, including: an obtaining unit, configured to obtain a test sample carrying a sample label, where the test sample is used to simulate a trip image collected by a smart car while driving and the sample label identifies the objects in the test sample; a recognition unit, configured to use an image algorithm to recognize the objects in the test sample to obtain a recognition result, where the image algorithm is the algorithm used on the smart car; and a test unit, configured to determine a test result of the image algorithm according to the recognition result and the sample label.
Optionally, the obtaining unit is further configured to: generate the test sample from a point cloud collected by a lidar and an original image collected by a camera; and recognize the objects in the test sample to generate the sample label of the test sample.
Optionally, the obtaining unit is further configured to: obtain the point cloud collected by the lidar and the original image collected by the camera at the same moment; and project the points in the point cloud onto the pixels at the corresponding positions in the original image to obtain the test sample.
Optionally, the obtaining unit is further configured to: when the lidar collects a frame of point cloud, trigger the camera in a pulse-triggered manner to collect a frame of original image, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are synchronized in the time domain.
Optionally, the obtaining unit is further configured to: use an extrinsic calibration algorithm to unify the coordinate system of the lidar into the coordinate system of the camera, and then establish a mapping between the points in the point cloud and the pixels in the original image according to the camera intrinsics, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are spatially synchronized; and use the mapping to project each point in the point cloud onto the pixel at the corresponding position in the original image to obtain the test sample.
Optionally, the obtaining unit is further configured to: after inputting the test sample into a 3D object detection network, feed the perception result output by the 3D object detection network into a multi-object tracking algorithm to obtain a fused perception result, where the perception result represents the objects recognized by the 3D object detection network from the test sample and the fused perception result represents the objects recognized by the multi-object tracking algorithm from the test sample; and generate the sample label of the test sample according to the fused perception result.
Optionally, the obtaining unit is further configured to: display the test sample and the fused perception result in a GUI tool; when a confirmation operation is detected, take the fused perception result as the sample label of the test sample; and when an operation correcting the fused perception result is detected, take the corrected content as the sample label of the test sample.
According to another aspect of the embodiments of the present application, an electronic apparatus is also provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the above method through the computer program.
According to one aspect of the present application, a computer program product or computer program is provided, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps of any embodiment of the above method.
By applying the technical solution of the present invention, a test sample carrying a sample label is obtained, an image algorithm is used to recognize the objects in the test sample to obtain a recognition result, and a test result of the image algorithm is determined according to the recognition result and the sample label. By providing the test sample as a "standard answer", the testing of the image algorithm can be completed fully automatically without manual intervention, which solves the technical problem of low algorithm-testing efficiency in the related art.
In addition to the objects, features, and advantages described above, the present invention has other objects, features, and advantages. The present invention is described in further detail below with reference to the drawings.
Description of the Drawings
The accompanying drawings, which constitute a part of the present invention, are provided for a further understanding of the invention; the illustrative embodiments of the invention and their descriptions serve to explain the invention and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of an optional method for testing autonomous driving perception results according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an optional testing scheme for autonomous driving perception results according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an optional apparatus for testing autonomous driving perception results according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
During image algorithm development, and when running automated image algorithm tests on the vehicle's embedded platform, quantitatively evaluating the perception results produced by the algorithm is a prerequisite for subsequent iterative optimization. At present this evaluation relies mainly on humans, for example scoring by one or more experts; this approach not only incurs high labor and material costs but also cannot guarantee accuracy. Clearly, a "standard answer" is needed as a reference against which these perception results, and thus the performance of the image algorithm, can be evaluated.
Therefore a system is needed that automatically generates these "standard answers" at low cost. After review is complete, the final "standard answer" is determined and called the "ground truth" (that is, the test sample carrying a sample label in the present application). The detection result of each frame can then be evaluated automatically against this ground truth, enabling automated testing of the image algorithm. This system is called the "ground-truth system".
According to one aspect of the embodiments of the present application, an embodiment of a method for testing autonomous driving perception results is provided. The method of the embodiments of the present application may be executed by a server or a terminal; when executed by a terminal, it may also be executed by a client installed on the terminal. FIG. 1 is a flowchart of an optional method for testing autonomous driving perception results according to an embodiment of the present application. As shown in FIG. 1, the method may include the following steps:
Step S102: obtain a test sample carrying a sample label, where the test sample is used to simulate a trip image collected by a smart car while driving, and the sample label identifies the objects in the test sample, such as pedestrians, other vehicles, bicycles, electric scooters, and obstacles.
1) First, the point cloud collected by the lidar and the original image collected by the camera at the same moment may be obtained. For example, when the lidar collects a frame of point cloud, the camera is triggered in a pulse-triggered manner to collect a frame of original image, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are synchronized in the time domain.
2) Project the points in the point cloud onto the pixels at the corresponding positions in the original image to obtain the test sample. For example, use an extrinsic calibration algorithm to unify the coordinate system of the lidar into the coordinate system of the camera, and then establish a mapping between the points in the point cloud and the pixels in the original image according to the camera intrinsics, ensuring that the point cloud collected by the lidar and the image collected by the camera are spatially synchronized; using this mapping, project each point in the point cloud onto the pixel at the corresponding position in the original image to obtain the test sample.
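For reference, the mapping just described follows the standard pinhole-camera projection; the application itself does not write the formula out, and lens distortion is omitted here. A lidar point $X_l = (x, y, z)^\top$, with lidar-to-camera extrinsics $(R, t)$ and intrinsic matrix $K$, maps to pixel $(u, v)$ via

$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,(R\,X_l + t),
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},
$$

where $s$ is the point's depth in the camera frame; points with $s \le 0$ or projecting outside the image bounds are discarded.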
For example, first park the vehicle equipped with this ground-truth system at a fixed position, and place a checkerboard calibration board at an arbitrary position in front of the vehicle, as long as the board is fully visible in the fields of view of both the lidar and the camera. Then, using the synchronized acquisition method described above, record the point cloud data and camera image data at that moment; change the position of the checkerboard and repeat these steps to obtain several frames of such data. The data are then processed as follows: first, extract the three-dimensional coordinates (x, y, z) of the four corner points of the checkerboard from the lidar point cloud of each frame; then extract the pixel coordinates (u, v) of the same four corner points from the camera image of each frame. Since the camera intrinsics are provided by the manufacturer at the factory and are therefore known, several such groups of 3D coordinates and pixel coordinates, together with the camera intrinsics, are solved by PnP to obtain the extrinsic transformation matrix between the lidar coordinate system and the camera coordinate system. The mapping between point cloud points and pixels can thereby be established.
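A minimal sketch of this calibration step, assuming a Python/OpenCV implementation (which the application does not specify). The checkerboard corners are synthesized here from a made-up ground-truth pose so the example runs end to end; in practice they would be extracted from the recorded lidar and camera frames, and the intrinsics would be the factory values:

```python
import numpy as np
import cv2

# Assumed camera intrinsics (placeholder values standing in for factory data).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume the images are already undistorted

# Synthetic "true" lidar-to-camera extrinsics, used only to fabricate demo data.
R_true, _ = cv2.Rodrigues(np.array([0.01, -0.02, 0.005]))
t_true = np.array([[0.10], [-0.30], [0.05]])

# Checkerboard corners in the lidar frame: 4 corners per pose, several poses.
rng = np.random.default_rng(0)
corners_lidar = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 8.0], size=(16, 3))
cam = (R_true @ corners_lidar.T + t_true).T
uvw = (K @ cam.T).T
corners_pixel = uvw[:, :2] / uvw[:, 2:3]

# Solve PnP on the 3D-2D correspondences to recover the extrinsics, as in the text.
ok, rvec, tvec = cv2.solvePnP(corners_lidar, corners_pixel, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix

def project_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project N x 3 lidar points to pixel coordinates with the solved extrinsics."""
    pts_cam = (R @ points_lidar.T + tvec).T
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points in front of the camera
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide
```

With the extrinsics solved once, `project_to_image` can then be applied to every synchronized point cloud frame to obtain the point-to-pixel mapping.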
3) Recognize the objects in the test sample to generate the sample label of the test sample.
For example, the test sample may first be input into a 3D object detection network, and the perception result output by the network then fed into a multi-object tracking algorithm to obtain a fused perception result, where the perception result represents the objects recognized by the 3D object detection network from the test sample and the fused perception result represents the objects recognized by the multi-object tracking algorithm from the test sample. The sample label of the test sample is then generated from the fused perception result: the test sample and the fused perception result may be displayed in a GUI tool; when a confirmation operation is detected, the fused perception result is taken as the sample label of the test sample, and when an operation correcting the fused perception result is detected, the corrected content is taken as the sample label of the test sample.
The above perception result is the inference result of the 3D object detection network, comprising, for each detected object, its center position in the lidar coordinate system, its length, width, and height, and its heading angle, i.e. seven-dimensional data of the form (x, y, z, l, w, h, theta).
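As a sketch, this seven-dimensional result might be carried through the pipeline as a small structure like the following; the field names and the bird's-eye-view helper are illustrative, not taken from the application:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection3D:
    x: float       # box center in the lidar frame (m)
    y: float
    z: float
    l: float       # length (m)
    w: float       # width (m)
    h: float       # height (m)
    theta: float   # heading angle about the vertical axis (rad)

    def bev_corners(self):
        """Four (x, y) corners of the box footprint in bird's-eye view."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        half = [( self.l / 2,  self.w / 2), ( self.l / 2, -self.w / 2),
                (-self.l / 2, -self.w / 2), (-self.l / 2,  self.w / 2)]
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                for dx, dy in half]
```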
Owing to the Kalman filter and the Hungarian data-association mechanism in the 3D multi-object tracking algorithm, for any frame of point cloud data, as long as an object in the frame has not completely disappeared from the lidar's field of view it can still be tracked, even if it is mostly occluded or only a small part of it is visible. Without a multi-object tracking algorithm, relying on 3D object detection alone, such objects can hardly ever be detected, because the point cloud is sparse and occlusion leaves the object's point cloud incomplete. The multi-object tracking algorithm therefore reduces the probability of false and missed detections in object detection and improves the accuracy of the fused perception result.
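A minimal sketch of the association step inside such a tracker, assuming a SciPy-based implementation; the application names the Kalman filter and Hungarian algorithm but gives no code, and the Euclidean cost and 2 m gating distance here are illustrative choices:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers: np.ndarray,  # T x 3 Kalman-predicted centers (x, y, z)
              det_centers: np.ndarray,    # D x 3 detected centers (x, y, z)
              max_dist: float = 2.0):
    """Hungarian matching of predicted tracks to current detections."""
    cost = np.linalg.norm(
        track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    lost_tracks = [t for t in range(len(track_centers)) if t not in matched_t]
    new_dets = [d for d in range(len(det_centers)) if d not in matched_d]
    return matches, lost_tracks, new_dets
```

Tracks left unmatched in a frame are typically kept alive by propagating their Kalman prediction for a few frames, which is exactly how a briefly occluded object survives in the output even when the detector misses it.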
Besides guaranteeing high accuracy, the ground-truth system of the present application must also address the fact that image data inherently lacks a spatial distance dimension. Hardware selection therefore cannot rely on cameras alone; other sensors must be introduced as a complement for fusion. Lidar has strong three-dimensional sensing capability: the point cloud it produces is relatively dense, so it can be spatially aligned with image pixels for synchronized fusion perception, and its range measurements are accurate to the centimeter level. Lidar and camera are therefore the sensors of choice for this ground-truth system; the present application combines the three-dimensional sensing of lidar with the rich category information of camera images to build a ground-truth system for automated testing of image algorithms.
Step S104: use an image algorithm to recognize the objects in the test sample to obtain a recognition result, where the image algorithm is the algorithm used on the smart car.
Step S106: determine a test result of the image algorithm according to the recognition result and the sample label.
For example, one round of testing may cover m test samples. If the recognition results of the image algorithm match the sample labels on n of them, the test accuracy is n/m. If the image algorithm must reach 99% accuracy before it can be used in a smart car, one simply compares n/m with 99%.
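A sketch of this pass/fail rule, with exact per-sample label equality standing in for whatever matching criterion a concrete deployment would use:

```python
def passes_accuracy_gate(predictions: list, labels: list,
                         threshold: float = 0.99) -> bool:
    """Return True if the recognition results meet the required accuracy."""
    assert len(predictions) == len(labels) and labels, "m must be > 0"
    n = sum(p == t for p, t in zip(predictions, labels))  # correctly recognized
    return n / len(labels) >= threshold                   # compare n/m with 99%
```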
Through the above steps, a test sample carrying a sample label is obtained, an image algorithm is used to recognize the objects in the test sample to obtain a recognition result, and a test result of the image algorithm is determined according to the recognition result and the sample label. By providing the test sample as a "standard answer", the testing of the image algorithm can be completed fully automatically without manual intervention, which solves the technical problem of low algorithm-testing efficiency in the related art.
In addition, the present invention combines the respective strengths of lidar and camera: by processing their synchronously collected data offline, it obtains perception results of sufficiently high accuracy to generate the ground truth required for automated testing of image algorithms. It achieves spatio-temporal alignment of lidar point cloud data and image data: in the time domain it solves synchronized acquisition between lidar and camera, and in space it solves extrinsic calibration between the lidar and cameras mounted at different positions on the vehicle, at different angles, and with different fields of view. It improves the accuracy of fused point cloud and image perception results, minimizes the cost of manual intervention, and ensures the reliability of the ground truth.
As an optional embodiment, the technical solution of the present application is further detailed below with a specific implementation, with reference to FIG. 2:
In the time domain, pulse triggering guarantees the synchronization of lidar point cloud data and camera image data; in space, the lidar coordinate system is transformed into the camera coordinate system, and the point cloud is mapped onto image pixels according to the camera intrinsics. The spatio-temporally synchronized point cloud and image are input into a 3D object detection network, a multi-object tracking algorithm is introduced, and the fused perception result output by the algorithm is saved. A GUI tool for manual review then reads in the fused perception result of the above steps, corrects any missed or false detections, and generates the final ground truth. The specific implementation steps are as follows:
Step 1: lidar-camera front fusion.
First, perform semantic segmentation on the camera image. After the segmentation is complete, use the lidar-camera extrinsic matrix and the camera's own intrinsic matrix to compute the projection of every point in the point cloud. After filtering out the points that project outside the image, project the remaining points onto the pixels of the semantic-segmentation image to obtain the object class represented by each pixel, thereby adding a class dimension to the original point cloud.
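A sketch of this projection-and-painting step, reusing the extrinsics (R, t) and intrinsics K from the calibration sketch above; `seg_map` is assumed to be an H x W array of per-pixel class ids produced by the segmentation network:

```python
import numpy as np

def paint_points(points_lidar: np.ndarray, seg_map: np.ndarray,
                 R: np.ndarray, t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Append the semantic class of the pixel each lidar point falls on."""
    cam = (R @ points_lidar.T + t).T
    keep = cam[:, 2] > 0                       # drop points behind the camera
    pts, cam = points_lidar[keep], cam[keep]
    uvw = (K @ cam.T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = seg_map.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pts, uv = pts[inside], uv[inside]          # filter points outside the image
    classes = seg_map[uv[:, 1], uv[:, 0]]      # per-point class from its pixel
    return np.hstack([pts, classes[:, None].astype(pts.dtype)])  # (x, y, z, class)
```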
Step 2: 3D object detection.
Input the new point cloud from the previous step into the 3D object detection network to obtain the 3D detection results.
Step 3: multi-object tracking.
Input the 3D detection results from the previous step into the multi-object tracking algorithm to reduce the probability of false and missed detections.
Step 4: manual or machine review.
After the above steps, a preliminary ground truth has already been generated automatically by the algorithms. To make the result more reliable, a final manual review is required. The ground truth generated by the above steps is read through the GUI tool and reviewed against the original images. If a false or missed detection is found, the corresponding button in the GUI tool can be clicked to modify the ground truth. Once the review is complete, the ground truth of every frame is fixed, and all operations of the ground-truth system are complete.
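Schematically, the review decision in this step reduces to the rule below; the action encoding is hypothetical, since the application only describes the GUI behavior:

```python
def finalize_label(fused_result, action: str, corrected=None):
    """'confirm' keeps the fused result; 'correct' substitutes the edited one."""
    if action == "confirm":
        return fused_result
    if action == "correct":
        return corrected
    raise ValueError(f"unknown review action: {action!r}")
```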
In the technical solution of the present application:
(1) Pulse triggering is used: when the lidar collects a frame of point cloud, the camera is triggered to expose and collect a frame of image, ensuring synchronized acquisition of lidar point cloud data and camera image data in the time domain. An extrinsic calibration algorithm unifies the lidar coordinate system into the camera coordinate system, and the mapping between point cloud points and pixels is then established according to the camera intrinsics. Together, these guarantee the spatio-temporal alignment of the lidar and the camera.
(2) After the synchronized data generated by the above steps is input into the 3D object detection network, the resulting perception result is immediately passed to the multi-object tracking algorithm to further reduce the probability of false and missed detections in object detection and improve the accuracy of the fused perception result.
(3) The fused perception result generated by the above steps is read into the GUI tool for manual review, and the frame results of the very few missed or false detections are corrected, further improving the reliability of the ground truth previously produced by the algorithms.
With this solution: the problem of missing distance ground truth in current automated image-algorithm testing is solved, making testing of the image algorithm's perception function more comprehensive; and a complete ground-truth system workflow is established, in which the lidar-camera fusion algorithm together with multi-object tracking first produces a high-accuracy ground truth, and a small amount of subsequent manual intervention further improves its reliability.
It should be noted that, for the sake of brevity, the foregoing method embodiments are described as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, since according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
From the description of the above embodiments, those skilled in the art can clearly understand that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
According to another aspect of the embodiments of the present application, an apparatus for testing autonomous driving perception results, for implementing the above testing method, is also provided. FIG. 3 is a schematic diagram of an optional apparatus for testing autonomous driving perception results according to an embodiment of the present application. As shown in FIG. 3, the apparatus may include:
an obtaining unit 31, configured to obtain a test sample carrying a sample label, where the test sample is used to simulate a trip image collected by a smart car while driving and the sample label identifies the objects in the test sample; a recognition unit 33, configured to use an image algorithm to recognize the objects in the test sample to obtain a recognition result, where the image algorithm is the algorithm used on the smart car; and a test unit 35, configured to determine a test result of the image algorithm according to the recognition result and the sample label.
Optionally, the obtaining unit is further configured to: generate the test sample from a point cloud collected by a lidar and an original image collected by a camera; and recognize the objects in the test sample to generate the sample label of the test sample.
Optionally, the obtaining unit is further configured to: obtain the point cloud collected by the lidar and the original image collected by the camera at the same moment; and project the points in the point cloud onto the pixels at the corresponding positions in the original image to obtain the test sample.
Optionally, the obtaining unit is further configured to: when the lidar collects a frame of point cloud, trigger the camera in a pulse-triggered manner to collect a frame of original image, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are synchronized in the time domain.
Optionally, the obtaining unit is further configured to: use an extrinsic calibration algorithm to unify the coordinate system of the lidar into the coordinate system of the camera, and then establish a mapping between the points in the point cloud and the pixels in the original image according to the camera intrinsics, so as to ensure that the point cloud collected by the lidar and the image collected by the camera are spatially synchronized; and use the mapping to project each point in the point cloud onto the pixel at the corresponding position in the original image to obtain the test sample.
Optionally, the obtaining unit is further configured to: after inputting the test sample into a 3D object detection network, feed the perception result output by the 3D object detection network into a multi-object tracking algorithm to obtain a fused perception result, where the perception result represents the objects recognized by the 3D object detection network from the test sample and the fused perception result represents the objects recognized by the multi-object tracking algorithm from the test sample; and generate the sample label of the test sample according to the fused perception result.
Optionally, the obtaining unit is further configured to: display the test sample and the fused perception result in a GUI tool; when a confirmation operation is detected, take the fused perception result as the sample label of the test sample; and when an operation correcting the fused perception result is detected, take the corrected content as the sample label of the test sample.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments, which are not repeated here.
Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is merely a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated units may be implemented in the form of hardware or in the form of software functional units.
The above is only the preferred embodiment of the present application. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements shall also fall within the protection scope of the present application.