

















Technical Field
The present invention relates to the field of data processing, and more particularly to a method and system for automatically correcting and supplementing collected data.
Background Art
With the rapid development of the autonomous driving industry, data collection from real-world scenes has become particularly important. Such data are sensor data collected by acquisition vehicles using radar and other equipment in real road environments; road and traffic-flow information is then extracted from these CSV-format data to build scene libraries and simulation scenes. The quality of the final scenes, however, depends on the accuracy of the acquisition equipment, and in practice there is a certain error between the data collected by the acquisition vehicle and the real scene. If the precision and accuracy of the collected data can be improved, and information that the acquisition equipment cannot extract can also be obtained, the accuracy of the collected data will be greatly improved, so that the constructed scene library or simulation scenes are closer to the real scene.
Summary of the Invention
In view of the technical problems existing in the prior art, the present invention provides a method and system for automatically correcting and supplementing collected data.
According to a first aspect of the present invention, a method for automatically correcting and supplementing collected data is provided, including:
acquiring a real-scene dashcam video and CSV data collected by an acquisition device;
extracting, from the real-scene dashcam video, lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene in the video;
automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video.
On the basis of the above technical solution, the present invention may further be improved as follows.
Optionally, extracting, from the real-scene dashcam video, the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene in the video includes:
performing lane line detection on the real-scene dashcam video based on a lane line detection algorithm, and saving the time point of each frame and the corresponding detected lane line position coordinates;
detecting, frame by frame, the motion trajectory coordinates, speed, and id of each target object based on a multi-object tracking algorithm, and recording the time points;
detecting the category information of each target object frame by frame based on an object detection algorithm, and recording the position coordinates of each target object and the current time point;
detecting, frame by frame, the type of the static scene in the video based on an image classification algorithm, and recording the time period to which the current static scene belongs.
Optionally, the static scene types include ramps, gas stations, toll stations, tunnels, sunny weather, rainy weather, and foggy weather.
Optionally, automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video includes:
recording, according to the position coordinates of all lane lines on the road identified by the lane line detection algorithm, the position coordinates of three or more lane lines on the left and right sides of the current lane into the collected CSV data;
when the lane lines in the collected CSV data are missing over a small range, calculating the position coordinates of the currently missing lane line segment by averaging the position coordinates of the preceding and following lane line segments, and updating these lane line position coordinates into the CSV data.
Optionally, automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video includes:
when the type and number of target objects detected in the current frame by the object detection algorithm are inconsistent with the type and number of target objects recorded in the collected CSV data, automatically correcting the target object type, target object id, and target object count information in the collected CSV data according to the type and number of target objects detected in the current frame by the object detection algorithm.
Optionally, automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video includes:
determining lane-change behavior data of each target object according to the target object motion trajectory coordinate information detected by the multi-object tracking algorithm and the lane line coordinate information;
if the detected lane-change behavior data of a target object is inconsistent with the lane-change behavior data of the same target object in the collected CSV data, correcting the lane-change behavior data of that target object in the collected CSV data based on the detected lane-change behavior data.
Optionally, automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video includes:
calculating the lateral distance and longitudinal distance of a target object relative to the ego vehicle based on the position coordinates of the target object detected by the object tracking algorithm and the position coordinates of the ego vehicle;
updating the lateral distance and longitudinal distance of the target object relative to the ego vehicle into the collected CSV data.
Optionally, automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video includes:
when the same target object is occluded and then reappears, always recording it as the same target with the same target object id based on the multi-object tracking algorithm;
updating the target object id into the CSV data.
Optionally, automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video includes:
recording the type of the static scene in the video detected by the mobilenet image classification algorithm and the corresponding time period into the collected CSV data;
detecting the target speed and motion trajectory information in the video based on the object tracking algorithm, obtaining the movement direction information of each target object, and updating the movement direction information of each target object into the collected CSV data.
According to a second aspect of the present invention, a system for automatically correcting and supplementing collected data is provided, including:
an acquisition module, configured to acquire a real-scene dashcam video and CSV data collected by an acquisition device;
an extraction module, configured to extract, from the real-scene dashcam video, the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene in the video;
a correction and supplementation module, configured to automatically correct and supplement the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video.
According to a third aspect of the present invention, an electronic device is provided, including a memory and a processor, the processor being configured to implement the steps of the method for automatically correcting and supplementing collected data when executing a computer management program stored in the memory.
According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer management program is stored; when the computer management program is executed by a processor, the steps of the method for automatically correcting and supplementing collected data are implemented.
The present invention provides a method and system for automatically correcting and supplementing collected data. The method applies deep-learning algorithms such as object detection, multi-object tracking, lane line detection, and image classification to a real-scene dashcam video to extract traffic-flow and road information, automatically compares the extracted information with the collected CSV data, corrects erroneous information in the data, and supplements acquisition information missing from the data. This compensates for the errors of radar and other acquisition equipment, improves the overall accuracy of the collected data, and ultimately improves the accuracy of the constructed scene library or simulation scenes.
Brief Description of the Drawings
Fig. 1 is a flowchart of a method for automatically correcting and supplementing collected data provided by the present invention;
Fig. 2-1 is a schematic diagram of lane line information contained in the collected data, and Fig. 2-2 is a schematic diagram of supplementing the collected CSV data;
Fig. 3-1 is a schematic diagram of the target object types and quantities contained in the collected data, and Fig. 3-2 is a schematic diagram of correcting and supplementing the target object types and quantities in the collected data;
Fig. 4-1 is a schematic diagram of target lane-change information in the collected data, and Fig. 4-2 is a schematic diagram of correcting and supplementing the target lane-change information in the collected data;
Fig. 5-1 is a schematic diagram of the lateral and longitudinal distances of targets in the collected data, and Fig. 5-2 is a schematic diagram of correcting and supplementing the lateral and longitudinal distances of targets in the collected data;
Fig. 6-1 is a schematic diagram of target object id information in the collected data, and Fig. 6-2 is a schematic diagram of correcting and supplementing the target object ids in the collected data;
Fig. 7-1 is a schematic diagram showing static scenes that cannot be recognized in the collected data, and Fig. 7-2 is a schematic diagram of supplementing the static scenes in the collected data;
Fig. 8-1 is a schematic diagram showing targets without heading data in the collected data, and Fig. 8-2 is a schematic diagram of supplementing the heading data of targets in the collected data;
Fig. 9 is a schematic structural diagram of a system for automatically correcting and supplementing collected data provided by the present invention;
Fig. 10 is a schematic diagram of the hardware structure of a possible electronic device provided by the present invention;
Fig. 11 is a schematic diagram of the hardware structure of a possible computer-readable storage medium provided by the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention. In addition, the technical features of the various embodiments or of a single embodiment provided by the present invention may be combined with one another arbitrarily to form feasible technical solutions; such combinations are not restricted by the order of steps or by the structural composition, but they must be realizable by persons of ordinary skill in the art. When a combination of technical solutions is contradictory or cannot be realized, such a combination should be regarded as non-existent and outside the protection scope claimed by the present invention.
Fig. 1 is a flowchart of a method for automatically correcting and supplementing collected data provided by the present invention. As shown in Fig. 1, the method includes:
S1: acquiring a real-scene dashcam video and CSV data collected by an acquisition device.
It can be understood that the footage of the real scene, i.e. the dashcam video, is prepared first, together with the correspondingly collected data in CSV format. A python script is used to read the collected CSV data, reading and saving the timestamp and the sensor data values of each row line by line.
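A minimal Python sketch of this reading step could look as follows; the file name and the 'timestamp' column are illustrative assumptions, since the actual field names depend on the acquisition device.

```python
import csv

def load_collected_csv(path):
    """Read the collected CSV row by row, keeping each row's timestamp and sensor values.

    Assumes a 'timestamp' column; every other column is treated as a sensor field.
    """
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for record in reader:
            timestamp = float(record.pop("timestamp"))
            sensors = {name: _to_number(value) for name, value in record.items()}
            rows.append({"timestamp": timestamp, "sensors": sensors})
    return rows

def _to_number(value):
    """Convert a CSV cell to float where possible, otherwise keep the raw string."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return value

if __name__ == "__main__":
    data = load_collected_csv("collected_data.csv")   # hypothetical file name
    print(f"loaded {len(data)} rows")
```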
S2: extracting, from the real-scene dashcam video, the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene in the video.
It can be understood that, for the real-scene dashcam video, an open-source lane line detection algorithm such as lanenet is used to detect the lane line coordinates in each frame, saving the time point of each frame and the corresponding lane line position coordinates. An open-source multi-object tracking algorithm such as deepsort is used to detect, frame by frame, the motion trajectory coordinates, speed, and id of each target object, recording the time points. An open-source object detection algorithm such as yolov7 is used to detect the target object category information frame by frame, recording the position coordinates and the current time point. A mobilenet image classification model is trained on pictures of static scenes such as ramps, gas stations, toll stations, tunnels, sunny weather, rainy weather, and foggy weather, and is then used to detect, frame by frame, the type of the static scene in the video and record the time period to which the current static scene belongs.
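The sketch below shows one way such a per-frame extraction loop could be organized. The four detector arguments stand in for lanenet-style lane detection, deepsort-style tracking, yolov7-style detection, and a mobilenet scene classifier; their interfaces are placeholders, since the actual projects expose their own APIs.

```python
import cv2  # OpenCV, used here only for video decoding

def extract_video_information(video_path, lane_detector, tracker, detector, scene_classifier):
    """Per-frame extraction of step S2 from a dashcam video.

    The four callables are assumed wrappers around the detection/tracking/classification
    models mentioned above and return plain Python structures for each frame.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    results = []
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t = frame_index / fps                        # time point of this frame in seconds
        results.append({
            "time": t,
            "lane_lines": lane_detector(frame),      # lane line position coordinates
            "tracks": tracker(frame),                # per-target id, trajectory point, speed
            "detections": detector(frame),           # per-target category and position
            "static_scene": scene_classifier(frame), # e.g. "tunnel", "ramp", "rain"
        })
        frame_index += 1
    cap.release()
    return results
```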
S3: automatically correcting and supplementing the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video.
It can be understood that a python script is used to read the information saved in the preceding steps, and the collected CSV data is then corrected and supplemented, mainly covering the following kinds of corrections and supplements.
As shown in Figs. 2-1 and 2-2, Fig. 2-1 is a schematic diagram of how the lane lines appear in the CSV data, and Fig. 2-2 is a schematic diagram of the lane lines in the corrected CSV data. Current acquisition equipment can only capture the two lane lines on the left and the two lane lines on the right of the current lane, whereas the lanenet lane line detection algorithm can identify all lane lines on the road. The distance between each lane line and the ego vehicle can therefore be calculated from the coordinate information of three or more lane lines on the left and right sides of the current lane together with the ego-vehicle coordinates, and the value is recorded in the field of the collected CSV table corresponding to the same time point. When a lane line is missing over a small range, the position coordinates of the currently missing lane line segment are obtained by averaging the position coordinates of the preceding and following lane line segments; the distance between the lane line and the ego vehicle is then calculated from the ego-vehicle coordinates, and the value is updated into the field of the CSV table corresponding to the same time point.
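A small sketch of the gap-filling and distance rule follows, assuming each lane line is stored as a list of (x, y) points sampled at matching positions and the ego vehicle sits at a known coordinate; the matched-sampling assumption and the minimum-point-distance approximation are illustrative choices.

```python
def fill_lane_gap(prev_points, next_points):
    """Estimate a missing lane line segment by averaging corresponding points of the
    preceding and following segments (assumed to be sampled at matching positions)."""
    n = min(len(prev_points), len(next_points))
    return [((prev_points[i][0] + next_points[i][0]) / 2.0,
             (prev_points[i][1] + next_points[i][1]) / 2.0) for i in range(n)]

def lane_to_ego_distance(lane_points, ego_xy):
    """Approximate the lane line to ego-vehicle distance as the minimum point distance."""
    ex, ey = ego_xy
    return min(((x - ex) ** 2 + (y - ey) ** 2) ** 0.5 for x, y in lane_points)

# A short missing segment reconstructed from its neighbours, then measured against the ego vehicle.
prev_seg = [(0.0, 3.5), (5.0, 3.5), (10.0, 3.5)]
next_seg = [(0.0, 3.7), (5.0, 3.7), (10.0, 3.7)]
missing = fill_lane_gap(prev_seg, next_seg)
print(missing, lane_to_ego_distance(missing, (0.0, 0.0)))
```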
As shown in Figs. 3-1 and 3-2, Fig. 3-1 is a schematic diagram of the target object types and counts in the CSV data, and Fig. 3-2 is a schematic diagram of the target object types and counts in the CSV data after algorithmic correction. The yolov7 object detection algorithm is used to detect the type and number of target objects in the current frame; target object types include cars, trucks, non-motorized vehicles, pedestrians, obstacles, and so on. When the target object types and counts recorded in the collected CSV data are inconsistent with the detection results, for example when targets are missing or duplicated in the CSV or a target type is wrong, the target object type, id, and count information at the same time point in the CSV is automatically corrected according to the current object detection results.
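One possible form of this reconciliation step is sketched below: the per-frame type counts in the CSV are compared with the detection results, and the detection result replaces the CSV records when they disagree. The field names are illustrative.

```python
from collections import Counter

def reconcile_targets(csv_targets, detected_targets):
    """Overwrite a frame's CSV target records when detected types/counts disagree.

    Both arguments are lists of dicts with at least a 'type' key; detected_targets
    additionally carry the tracker id and position of each target.
    """
    csv_counts = Counter(t["type"] for t in csv_targets)
    det_counts = Counter(t["type"] for t in detected_targets)
    if csv_counts != det_counts:
        return list(detected_targets), True    # detection result treated as the reference
    return csv_targets, False

# The CSV duplicates one car and misses a truck; the detection result replaces it.
csv_frame = [{"type": "car", "id": 3}, {"type": "car", "id": 3}]
detected_frame = [{"type": "car", "id": 3, "x": 12.0, "y": 1.5},
                  {"type": "truck", "id": 7, "x": 30.0, "y": -2.0}]
corrected, changed = reconcile_targets(csv_frame, detected_frame)
print(changed, corrected)
```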
As shown in Figs. 4-1 and 4-2, Fig. 4-1 is a schematic diagram of target lane-change behavior data in the CSV data, and Fig. 4-2 is a schematic diagram of the target lane-change behavior data in the CSV data after algorithmic correction. The lane-change behavior of the ego vehicle and of target vehicles is a very important parameter in scene extraction, and the acquisition equipment's judgement of which lane the ego vehicle and the target vehicles occupy is not always accurate either. Using the target object motion trajectory coordinates detected by the deepsort multi-object tracking algorithm and the lane line coordinates detected by lanenet, a lane change, i.e. a cut-in or cut-out, is judged to have occurred when the two sets of coordinates overlap and the target trajectory coordinates cross the lane line within a few seconds before or after.
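A minimal geometric sketch of this judgement is given below, assuming the relevant lane line stretch can be approximated by a straight segment and the trajectory samples cover only a few seconds around the candidate time; both assumptions go beyond what the paragraph above specifies.

```python
def side_of_lane_line(point, line_p0, line_p1):
    """Signed side of a trajectory point relative to the lane line through line_p0 and line_p1
    (positive on one side, negative on the other, zero on the line)."""
    (x, y), (x0, y0), (x1, y1) = point, line_p0, line_p1
    return (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)

def detect_lane_change(trajectory, line_p0, line_p1):
    """Flag a lane change (cut-in/cut-out) when consecutive trajectory samples
    switch sides of the lane line."""
    sides = [side_of_lane_line(p, line_p0, line_p1) for p in trajectory]
    for i in range(len(sides) - 1):
        if sides[i] * sides[i + 1] < 0 or sides[i + 1] == 0:
            return True, i + 1             # crossing occurs around sample i + 1
    return False, None

# A target drifting across a lane line that lies along y = 3.5.
track = [(0.0, 2.8), (5.0, 3.1), (10.0, 3.6), (15.0, 4.0)]
print(detect_lane_change(track, (0.0, 3.5), (20.0, 3.5)))   # (True, 2)
```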
As shown in Figs. 5-1 and 5-2, Fig. 5-1 is a schematic diagram of the lateral and longitudinal distances of a target in the CSV data, and Fig. 5-2 is a schematic diagram of the lateral and longitudinal distances of the target in the CSV data after algorithmic correction. Real acquisition equipment sometimes produces abnormal lateral and longitudinal distance data of a target relative to the ego vehicle, for example the target is on the left of the ego vehicle while the collected data shows it on the right. Using the position coordinates of the target detected by the deepsort tracking algorithm together with the ego-vehicle position coordinates, the lateral and longitudinal distances of the target relative to the ego vehicle can be calculated. If a large error appears when comparing with the CSV data, the relative lateral and longitudinal coordinate data are updated into the fields of the CSV table corresponding to the same time point.
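The conversion itself can be a simple change of frame, as sketched below; the ego-heading convention, the tolerance used to decide whether a CSV value needs correction, and the metre units are assumptions added for illustration.

```python
import math

def relative_distances(target_xy, ego_xy, ego_heading_rad):
    """Lateral/longitudinal distance of a target relative to the ego vehicle.

    Longitudinal is measured along the ego travel direction, lateral perpendicular
    to it (positive to the left of the ego vehicle).
    """
    dx = target_xy[0] - ego_xy[0]
    dy = target_xy[1] - ego_xy[1]
    longitudinal = dx * math.cos(ego_heading_rad) + dy * math.sin(ego_heading_rad)
    lateral = -dx * math.sin(ego_heading_rad) + dy * math.cos(ego_heading_rad)
    return lateral, longitudinal

def needs_correction(csv_lat, csv_lon, lat, lon, tolerance=1.0):
    """Flag a CSV record whose relative distances deviate by more than `tolerance` metres."""
    return abs(csv_lat - lat) > tolerance or abs(csv_lon - lon) > tolerance

# The CSV places the target 4 m to the right, while detection places it 5 m to the left.
lat, lon = relative_distances((12.0, 5.0), (0.0, 0.0), 0.0)   # ego heading along +x
print(lat, lon, needs_correction(-4.0, 12.0, lat, lon))       # -> 5.0 12.0 True
```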
As shown in Figs. 6-1 and 6-2, Fig. 6-1 is a schematic diagram of the number of targets recorded in the CSV data when a target is occluded, and Fig. 6-2 is a schematic diagram after algorithmic correction. When real acquisition equipment collects data, a target that is occluded and then reappears is given a new target id, which effectively identifies one target as two. With the deepsort tracking algorithm, the target is always recorded as a single target with a unique id, and this target id information is then automatically updated into the field of the CSV data corresponding to the same time point.
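In the CSV this amounts to rewriting the duplicated ids, as in the sketch below; building the alias map (which device-assigned ids belong to the same continuous track) is the part provided by the tracking algorithm, and the field names here are illustrative.

```python
def unify_target_ids(csv_rows, id_alias):
    """Rewrite duplicated target ids created when a target was occluded and reappeared.

    id_alias maps every id assigned by the acquisition device to the canonical id kept
    by the tracker, e.g. {17: 4} when id 17 turned out to be the same vehicle as id 4.
    """
    for row in csv_rows:
        row["target_id"] = id_alias.get(row["target_id"], row["target_id"])
    return csv_rows

rows = [{"t": 0.0, "target_id": 4}, {"t": 1.0, "target_id": 17}, {"t": 2.0, "target_id": 17}]
print(unify_target_ids(rows, {17: 4}))   # all three rows now refer to target 4
```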
As shown in Figs. 7-1 and 7-2, Fig. 7-1 is a schematic diagram of the CSV data, and Fig. 7-2 is a schematic diagram of the static scenes in the CSV data after algorithmic correction. Current data acquisition equipment cannot obtain static scene information such as ramps, gas stations, toll stations, tunnels, sunny weather, rainy weather, and foggy weather. A mobilenet image classification model is trained on pictures of these static scenes and then used to detect and extract the static scene type in the video frame by frame, recording the current time period. The current static scene type and the corresponding time period are recorded into the fields of the CSV data corresponding to the same time points.
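Since the classifier produces one label per frame, the labels still need to be collapsed into time periods before they are written to the CSV; a small sketch of that aggregation is given below.

```python
def scene_periods(frame_labels, frame_times):
    """Collapse per-frame static scene labels into (label, start_time, end_time) periods."""
    periods = []
    for label, t in zip(frame_labels, frame_times):
        if periods and periods[-1][0] == label:
            periods[-1][2] = t                  # extend the currently open period
        else:
            periods.append([label, t, t])       # open a new period for a new label
    return [tuple(p) for p in periods]

labels = ["highway", "highway", "tunnel", "tunnel", "tunnel", "highway"]
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
print(scene_periods(labels, times))
# [('highway', 0.0, 0.5), ('tunnel', 1.0, 2.0), ('highway', 2.5, 2.5)]
```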
As shown in Figs. 8-1 and 8-2, Fig. 8-1 is a schematic diagram of the CSV data, and Fig. 8-2 is a schematic diagram after algorithmic correction. Current data acquisition equipment cannot obtain the movement direction of a target. Using the deepsort tracking algorithm, the target speed and motion trajectory coordinates (x2, y2) in the video are detected, and the ego-vehicle speed and coordinates (x1, y1) are obtained. After a time t, the target coordinates become (x2', y2') and the ego-vehicle coordinates become (x1', y1'), giving coordinate changes dx = x1' - x1 and dy = y1' - y1. Taking the lower-left corner of the image as the coordinate origin (0, 0) and taking the ego-vehicle position at the initial moment as the origin of a ground-fixed coordinate system, the target position in ground-fixed coordinates is x = x2' + dx, y = y2' + dy. If the lateral coordinate x of the trajectory increases while the longitudinal coordinate y remains essentially unchanged, the target is crossing to the right; otherwise it is crossing to the left. If the longitudinal coordinate y increases while the lateral coordinate x remains essentially unchanged, the target is moving forward; otherwise it is moving in reverse. The heading information is updated into the field of the CSV data corresponding to the same time point.
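The direction rule above translates directly into a few lines of code. In the sketch below the target coordinates are taken as ego-relative and the threshold eps for "essentially unchanged" is an illustrative choice; neither detail is fixed by the description.

```python
def heading_from_motion(ego_start, ego_end, target_rel_start, target_rel_end, eps=0.5):
    """Coarse movement direction of a target following the rule described above.

    Coordinates follow the image convention of the text: origin at the lower-left corner,
    x lateral, y longitudinal. Target positions are ego-relative, and the ego position at
    the initial moment is taken as the origin of the ground-fixed frame.
    """
    dx = ego_end[0] - ego_start[0]                                # ego displacement over time t
    dy = ego_end[1] - ego_start[1]
    gx0, gy0 = target_rel_start                                   # ground position at the start
    gx1, gy1 = target_rel_end[0] + dx, target_rel_end[1] + dy     # x = x2' + dx, y = y2' + dy
    move_x, move_y = gx1 - gx0, gy1 - gy0
    if abs(move_x) > eps and abs(move_y) <= eps:
        return "crossing right" if move_x > 0 else "crossing left"
    if abs(move_y) > eps and abs(move_x) <= eps:
        return "forward" if move_y > 0 else "reverse"
    return "undetermined"

# The ego vehicle advances 10 m; the target keeps the same ground longitudinal position
# while shifting 2 m to the right, i.e. it is crossing to the right.
print(heading_from_motion((0.0, 0.0), (0.0, 10.0), (3.0, 20.0), (5.0, 10.0)))
```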
After the CSV data collected by the sensors has been corrected and supplemented based on the data detected from the real-scene footage, the updated CSV data is used for subsequent operations such as scene extraction and scene simulation.
Fig. 9 is a structural diagram of a system for automatically correcting and supplementing collected data provided by an embodiment of the present invention, including an acquisition module 901, an extraction module 902, and a correction and supplementation module 903, wherein:
the acquisition module 901 is configured to acquire a real-scene dashcam video and CSV data collected by an acquisition device;
the extraction module 902 is configured to extract, from the real-scene dashcam video, the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene in the video;
the correction and supplementation module 903 is configured to automatically correct and supplement the collected CSV data according to the lane line position coordinates, the motion trajectory coordinates, speed, and id of each target object, the category information of each target object, the position coordinates of each target object, and the type of the static scene extracted from the real-scene dashcam video.
It can be understood that the system for automatically correcting and supplementing collected data provided by the present invention corresponds to the method for automatically correcting and supplementing collected data provided in the foregoing embodiments; for the relevant technical features of the system, reference may be made to those of the method, which are not repeated here.
Referring to Fig. 10, Fig. 10 is a schematic diagram of an embodiment of an electronic device provided by an embodiment of the present invention. As shown in Fig. 10, an embodiment of the present invention provides an electronic device, including a memory 1010, a processor 1020, and a computer program 1011 stored in the memory 1010 and executable on the processor 1020; when the processor 1020 executes the computer program 1011, the steps of the method for automatically correcting and supplementing collected data are implemented.
Referring to Fig. 11, Fig. 11 is a schematic diagram of an embodiment of a computer-readable storage medium provided by the present invention. As shown in Fig. 11, this embodiment provides a computer-readable storage medium 1100 on which a computer program 1111 is stored; when the computer program 1111 is executed by a processor, the steps of the method for automatically correcting and supplementing collected data are implemented.
The embodiments of the present invention provide a method and system for automatically correcting and supplementing collected data. The method applies deep-learning algorithms such as object detection, multi-object tracking, lane line detection, and image classification to a real-scene dashcam video to extract traffic-flow and road information, automatically compares the extracted information with the collected CSV data, corrects erroneous information in the data, and supplements acquisition information missing from the data, thereby compensating for the errors of radar and other acquisition equipment, improving the overall accuracy of the collected data, and ultimately improving the accuracy of the constructed scene library or simulation scenes.
It should be noted that, in the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the relevant descriptions of other embodiments.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded computer, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once they grasp the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from the spirit and scope of the present invention. Thus, provided that these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include them.