


Technical Field
The present application relates to the technical field of vehicle-mounted intelligent devices, and in particular to a vehicle visibility detection method and system.
Background
When the automated driving function is in use, severe weather such as fog, rain, snow, or sandstorms reduces the visibility of the vision camera, limiting its perception distance and degrading its perception quality, which can easily lead to safety accidents. It is therefore necessary to monitor the visibility around the ego vehicle in order to decide whether the system should exit its operational design domain and prompt the user to take over. In the field of autonomous driving, deep learning has also been used to extract environmental features from camera images and classify the weather, combining the result with weather forecasts and sunlight and rainfall sensor data for a comprehensive weather judgment. However, such deep-learning approaches require a separate network trained specifically to classify weather conditions. This has two drawbacks: first, classifying weather with a dedicated deep-learning network requires additional computing power; second, relying on the vision camera alone yields low accuracy.
Image-based visibility detection methods in other fields generally assume a fixed scene and are applicable only when the main camera is in a fixed position. The scenes within the working range of an autonomous vehicle's main camera are not fixed, so applying these methods directly gives poor results.
Summary of the Invention
To address at least one aspect of the above problems, the present invention provides a vehicle visibility detection method, comprising: using a radar device to identify obstacles in front of the vehicle and generate obstacle information; using a processor to receive the obstacle information and determine, based on the obstacle information, whether an obstacle is present in front of the vehicle; when an obstacle is present in front of the vehicle, the processor receiving environment images acquired by a binocular camera and determining a visibility value based on the obstacle information and the environment images acquired by the binocular camera; when no obstacle is present in front of the vehicle, the processor receiving an environment image acquired by the left camera or the right camera of the binocular camera and determining a visibility value based on that image; and setting visibility thresholds and determining, based on the visibility thresholds, the visibility level corresponding to the visibility value.
Preferably, the step of determining the visibility value based on the obstacle information and the environment images acquired by the binocular camera comprises: generating environment image information based on the environment images acquired by the binocular camera; determining, from the obstacle information and the environment image information, lost obstacles that were identified by the radar device but not captured by the binocular camera; and determining the visibility value from the obstacle information of the lost obstacles.
Preferably, the step of determining lost obstacles identified by the radar device but not captured by the binocular camera from the obstacle information and the environment image information comprises: associating the obstacle information with the environment image information using the Hungarian matching algorithm, and determining the lost obstacles by comparing the obstacle information with the environment image information.
Preferably, the obstacle information generated by the radar device includes an identifier, a distance, a direction, and a speed, and the visibility value is the distance value of a lost obstacle.
Preferably, the step of determining the visibility value based on the environment image acquired by the left camera or the right camera comprises: determining a lane-line point set from the environment image of the left camera or the right camera using lane-line segmentation; sorting the points of the lane-line point set in descending order of their distance from the vehicle; selecting a preset number of the top-ranked points of the lane-line point set to form a target point set; and computing the average distance from the points of the target point set to the vehicle, the visibility value being that average distance.
Preferably, the method further comprises continuously acquiring target point sets at multiple time instants and computing the visibility values corresponding to the target point sets at those instants; the root-mean-square error between the average point-to-vehicle distance of the current target point set and the visibility values of the target point sets at the multiple instants is computed, and when the root-mean-square error is smaller than a set error threshold, the average point-to-vehicle distance of the current target point set is taken as the visibility value.
In another aspect, the present invention provides a system for implementing any of the foregoing methods, comprising a radar device, a binocular camera, and a processor, the binocular camera and the radar device each being communicatively connected to the processor.
Preferably, the system further comprises a communication unit for connecting the processor to a vehicle control platform.
Preferably, the radar device is a millimeter-wave radar arranged at the front end of the vehicle, and the binocular camera is arranged at the front of the vehicle's cab.
Preferably, the maximum detection distance of the radar device is greater than or equal to that of the binocular camera, and the horizontal field of view of the radar device is smaller than or equal to that of the binocular camera.
The vehicle visibility detection method and system of the present invention have the following beneficial effects: the processor determines whether obstacles are present in front of the vehicle based on the obstacle information generated by the radar device, receives and processes the environment images acquired by the binocular camera according to that determination, and then determines the visibility value; by combining the detection results of the radar device and the binocular camera, the accuracy of the visibility value is improved.
Brief Description of the Drawings
For a better understanding of the above and other objects, features, advantages, and functions of the present invention, reference may be made to the embodiments shown in the accompanying drawings. Like reference numerals refer to like parts throughout the figures. Those skilled in the art should understand that the drawings are intended to schematically illustrate preferred embodiments of the present invention without limiting its scope in any way, and that the components in the drawings are not drawn to scale.
Fig. 1 shows a schematic flowchart of a vehicle visibility detection method according to an embodiment of the present invention;
Fig. 2 shows a structural block diagram of a vehicle visibility detection system according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of an application scenario of the radar device and the binocular camera of a vehicle visibility system according to an embodiment of the present invention.
Description of reference numerals:
10. radar device; 20. binocular camera; 30. processor; 31. computing unit; 32. Ethernet interface; 33. CAN interface; 40. communication unit.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; these details should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
As used herein, the term "comprise" and its variants denote open-ended inclusion, i.e., "including but not limited to". Unless otherwise stated, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one further embodiment". The terms "first", "second", and so on may refer to different or identical objects. Other definitions, both express and implied, may also appear below.
To at least partially solve one or more of the above problems and other potential problems, one embodiment of the present disclosure proposes a vehicle visibility detection method, comprising: using a radar device to identify obstacles in front of the vehicle and generate obstacle information; using a processor to receive the obstacle information and determine, based on it, whether an obstacle is present in front of the vehicle; when an obstacle is present, the processor receiving environment images acquired by a binocular camera and determining a visibility value based on the obstacle information and those images; when no obstacle is present, the processor receiving an environment image acquired by the left camera or the right camera of the binocular camera and determining a visibility value based on that image; and setting visibility thresholds and determining, based on them, the visibility level corresponding to the visibility value.
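The branch selection above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the two branch computations are supplied as callables, and all names are assumptions introduced here.

```python
# Illustrative top-level flow: choose the detection branch based on whether
# the radar reports any obstacles ahead of the ego vehicle.
def estimate_visibility(radar_obstacles, stereo_branch, mono_branch):
    """radar_obstacles: obstacle records from the radar device (may be empty).
    stereo_branch(obstacles): visibility from radar info + binocular images.
    mono_branch(): visibility from one camera's lane-line points."""
    if radar_obstacles:                 # obstacle ahead: fuse radar with stereo images
        return stereo_branch(radar_obstacles)
    return mono_branch()                # open road: fall back to lane-line distance
```

The callables stand in for the obstacle-based and lane-line-based computations detailed in the following paragraphs.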
Specifically, as shown in Fig. 1 and Fig. 3, in this embodiment the radar device is a millimeter-wave radar fixed at the front end of the vehicle to identify obstacles in front of the vehicle, and the binocular camera is arranged at the front windshield inside the vehicle's cab to acquire environment images of the area in front of the vehicle. The processor has a computing unit for data reception and processing, which is communicatively connected to the radar device and to the binocular camera, respectively, in order to receive the obstacle information generated by the radar device and the images captured by the binocular camera. Of course, in other embodiments the radar device may also be another radar, or a combination of radars, capable of obstacle identification and obstacle-information generation, such as a lidar.
Fixed at the front end of the vehicle, the radar device identifies obstacles in front of it and generates obstacle information, which includes an identifier, a distance, a direction, and a speed. The identifier is a label the radar device assigns to each obstacle within its detection range so that different obstacles can be distinguished; the distance is the distance between the obstacle and the vehicle; the direction is the obstacle's direction of movement; and the speed is the obstacle's speed of movement. Obstacles include target vehicles, pedestrians, and other objects that can serve as references for the vehicle's relative position.
The processor determines from the generated obstacle information whether an obstacle is present in front of the vehicle. For example, in this embodiment the processor determines from the obstacle information generated by the radar device whether another vehicle is present ahead. When a vehicle is present ahead, the processor receives and processes the environment images of the area in front of the vehicle acquired by the binocular camera and determines the visibility value based on the obstacle information and those images. Specifically, the computing unit of the processor generates environment image information from the environment images acquired by the binocular camera; determines, from the obstacle information and the environment image information, the lost obstacles that were identified by the radar device but not captured by the binocular camera; and determines the visibility value from the obstacle information of the lost obstacles. A lost obstacle is defined as an obstacle within the detection range of the binocular camera that is identified by the millimeter-wave radar but not captured in the environment images of the binocular camera.
In some embodiments, the obstacle information generated by the radar device includes an identifier, a distance, a direction, and a speed, and the visibility value is the distance value of a lost obstacle.
Specifically, the obstacles in front of the ego vehicle measured by the millimeter-wave radar are multiple obstacle vehicles, and the generated obstacle information includes an obstacle information list containing the identifiers of the obstacle vehicles, their respective distances from the ego vehicle, and their directions and speeds of movement.
In some embodiments, the step of determining the lost obstacles identified by the radar device but not captured by the binocular camera from the obstacle information and the environment image information comprises: associating the obstacle information with the environment image information using the Hungarian matching algorithm, and determining the lost obstacles by comparing the obstacle information with the environment image information.
Specifically, the computing unit generates environment image information from the environment images acquired by the binocular camera, the environment image information including an obstacle information list. The Hungarian matching algorithm is used to associate the obstacle information list obtained from the millimeter-wave radar with the obstacle information list generated from the images captured by the binocular camera, so that the same obstacle corresponds across the two lists. Taking the detection range of the binocular camera as the reference and using each obstacle vehicle's distance from the ego vehicle, the obstacle vehicles that appear in the radar-based obstacle information list but not in the camera-based obstacle information list are determined; these are the lost obstacles.
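The association step can be sketched as follows. The patent names the Hungarian matching algorithm; for a self-contained example this sketch finds the same optimal one-to-one assignment by brute force over permutations (adequate for small obstacle lists, where a production system would use an O(n³) Hungarian solver), using absolute distance difference as the matching cost. The 5 m gate and all names are assumptions introduced here.

```python
# Hypothetical radar/camera association: radar tracks left unmatched (or
# matched only with a large cost) are the "lost obstacles".
from itertools import permutations

def find_lost_obstacles(radar_dists, camera_dists, gate=5.0):
    """Return indices of radar tracks with no camera detection within `gate` metres."""
    n_r, n_c = len(radar_dists), len(camera_dists)
    if n_c == 0:
        return list(range(n_r))         # camera saw nothing: every radar track is lost
    # This sketch assumes the radar reports at least as many tracks as the camera.
    best_cost, best_assign = float("inf"), ()
    for perm in permutations(range(n_r), n_c):   # one distinct radar track per camera box
        cost = sum(abs(radar_dists[r] - camera_dists[c]) for c, r in enumerate(perm))
        if cost < best_cost:
            best_cost, best_assign = cost, perm
    matched = {r for c, r in enumerate(best_assign)
               if abs(radar_dists[r] - camera_dists[c]) <= gate}
    return sorted(set(range(n_r)) - matched)
```

For example, with radar distances [30, 60, 120] and camera distances [29, 61], the third radar track has no camera counterpart and is reported as lost.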
Based on a lost obstacle's distance in the radar-generated obstacle information, the distance between the lost obstacle and the ego vehicle is taken as the visibility value. In some embodiments, when the lost obstacles include multiple obstacle vehicles, the visibility value is the distance of the obstacle vehicle closest to the ego vehicle. In other embodiments, the visibility value may also be the average of the obstacle-vehicle distance values determined at multiple adjacent time instants. In still other embodiments, when the root-mean-square error between the currently determined distance of a lost obstacle vehicle and the visibility values at multiple adjacent instants is smaller than a set threshold, the distance of the lost obstacle is taken as the visibility value.
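One of the variants above can be sketched as follows, under stated assumptions: the visibility value is the radar distance of the nearest lost obstacle, accepted only if its root-mean-square error against recent visibility values is below a threshold. The function names, the 10 m threshold, and the choice to return None for an unstable reading are all illustrative, not from the patent text.

```python
# Hypothetical lost-obstacle visibility with a temporal RMSE sanity check.
import math

def visibility_from_lost(lost_distances, recent_values, rmse_threshold=10.0):
    """lost_distances: radar distances of obstacles the camera failed to see.
    recent_values: visibility values from recent frames (may be empty)."""
    candidate = min(lost_distances)     # nearest obstacle the camera cannot see
    if not recent_values:
        return candidate
    rmse = math.sqrt(sum((candidate - v) ** 2 for v in recent_values) / len(recent_values))
    return candidate if rmse < rmse_threshold else None   # None: reading unstable
```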
Visibility thresholds, for example H1 and H2, are set according to weather conditions: threshold H1 is determined by measuring visibility under clear weather, and threshold H2 by measuring visibility under fog, rain, or snow. When the visibility value is greater than H1, the visibility level is excellent; when it is greater than H2 and less than or equal to H1, the level is normal; and when it is less than or equal to H2, the level is poor.
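The two-threshold grading is straightforward to express in code. The metre values used for H1 and H2 below are illustrative stand-ins for the empirically calibrated thresholds.

```python
# Map a visibility value (metres) to a level via the two thresholds.
H1, H2 = 200.0, 100.0   # assumed example values, calibrated per weather in practice

def visibility_level(value):
    if value > H1:
        return "excellent"
    if value > H2:       # H2 < value <= H1
        return "normal"
    return "poor"        # value <= H2
```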
When the processor determines from the obstacle information generated by the radar device that no vehicle is present ahead, it receives the environment image acquired by the left or the right camera of the binocular camera, and its computing unit applies a deep-learning-based lane-line instance segmentation algorithm to the input image to determine whether lane lines are present in front of the vehicle.
In some embodiments, the step of determining the visibility value based on the environment image acquired by the left or the right camera of the binocular camera comprises: determining a lane-line point set from the environment image of the left or the right camera using lane-line segmentation; sorting the points of the lane-line point set in descending order of their distance from the vehicle; selecting a preset number of the top-ranked points of the lane-line point set to form a target point set; and computing the average distance from the points of the target point set to the vehicle, the visibility value being that average distance.
Specifically, the computing unit of the processor receives an RGB three-channel environment image acquired by the left camera of the binocular camera and processes it with a lane-line segmentation algorithm: a trained model classifies the pixels of the environment image to identify those belonging to lane lines and outputs the set of coordinate points of the lane lines in the lane-line plane, i.e., the lane-line point set.
With the ego vehicle as reference, the points of the lane-line point set are sorted in descending order of distance, and N points are selected in sequence starting from the first-ranked point to form the target point set; the average distance between the points of the target point set and the ego vehicle is taken as the visibility value. In other embodiments, the method further comprises continuously acquiring target point sets at multiple time instants and computing the visibility values corresponding to those target point sets; the root-mean-square error between the average point-to-vehicle distance of the current target point set and the visibility values of the target point sets at the multiple instants is computed, and when the root-mean-square error is smaller than a set error threshold, the average point-to-vehicle distance of the current target point set is taken as the visibility value.
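The lane-line branch can be sketched as follows, under stated assumptions: lane-line points are (x, y) coordinates in the vehicle frame in metres, and the values of N and the RMSE threshold are illustrative.

```python
# Hypothetical lane-line visibility: average distance of the N farthest
# visible lane-line points, with a stability check against recent frames.
import math

def lane_visibility(lane_points, n=10):
    """lane_points: (x, y) lane-line coordinates in the vehicle frame (metres)."""
    dists = sorted((math.hypot(x, y) for x, y in lane_points), reverse=True)
    target = dists[:n]                  # the farthest n visible lane-line points
    return sum(target) / len(target)

def is_stable(value, recent_values, rmse_threshold=8.0):
    """Accept the current value only if it agrees with recent visibility values."""
    rmse = math.sqrt(sum((value - v) ** 2 for v in recent_values) / len(recent_values))
    return rmse < rmse_threshold
```

For example, for points lying 10 m to 100 m ahead in 10 m steps, the three farthest points average to 90 m, which is accepted only while it stays close to the values of the preceding frames.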
Visibility thresholds, for example Y1 and Y2, are set according to weather conditions: threshold Y1 is determined by measuring visibility under clear weather, and threshold Y2 by measuring visibility under fog, rain, or snow. When the visibility value is greater than Y1, the visibility level is excellent; when it is greater than Y2 and less than or equal to Y1, the level is normal; and when it is less than or equal to Y2, the level is poor.
In another aspect, a vehicle visibility detection system is provided, comprising a radar device 10, a binocular camera 20, and a processor 30, the radar device 10 and the binocular camera 20 each being communicatively connected to the processor 30.
Specifically, as shown in Fig. 2 and Fig. 3, the processor 30 includes a computing unit 31, an Ethernet interface 32, and a CAN interface 33. The computing unit 31 receives and processes the data of the radar device 10 and the binocular camera 20, including determining whether a vehicle is present in the obstacle information generated by the radar device 10, receiving the environment images acquired by the binocular camera 20 or by its left or right camera, processing the received images to determine whether lane lines are present, computing the visibility value, and determining the visibility level according to the set thresholds. The Ethernet interface 32 connects the radar device 10 to the computing unit 31 of the processor 30, and the CAN interface 33 connects the binocular camera 20 to the computing unit 31.
In some embodiments, the system further includes a communication unit 40 for connecting the processor 30 to the vehicle control platform.
One end of the communication unit 40 is connected to the computing unit 31 of the processor 30 through the CAN interface 33, and the other end is connected to the vehicle control platform, enabling data transmission between the processor 30 and the vehicle control platform. Through this communication connection, the vehicle control platform receives the visibility value or the visibility level so as to further adjust the vehicle's travel speed and state.
In some embodiments, the radar device 10 is a millimeter-wave radar arranged at the front end of the vehicle, and the binocular camera 20 is arranged at the front of the vehicle's cab. Specifically, as shown in Fig. 3, the millimeter-wave radar is mounted at the front end of the vehicle and the binocular camera 20 at the front windshield inside the cab, so that the overlap between the detection range of the millimeter-wave radar and that of the binocular camera 20 is as large as possible.
In some embodiments, the maximum detection distance of the radar device 10 is greater than or equal to that of the binocular camera 20, and the horizontal field of view of the radar device 10 is smaller than or equal to that of the binocular camera 20, so as to ensure the accuracy of the visibility value.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over the market, or to enable others of ordinary skill in the art to understand this document.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110783114.8A | 2021-07-12 | 2021-07-12 | Vehicle visibility detection method and system |
| Publication Number | Publication Date |
|---|---|
| CN115616557A | 2023-01-17 |